Review

From Information to Knowledge: A Role for Knowledge Networks in Decision Making and Action Selection

by
Jagmeet S. Kanwal
Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
Information 2024, 15(8), 487; https://doi.org/10.3390/info15080487
Submission received: 7 June 2024 / Revised: 5 August 2024 / Accepted: 9 August 2024 / Published: 15 August 2024


Simple Summary

This perspective article examines the differences between memory, information, and knowledge. It is proposed that the creation of knowledge is not simply the extraction of information or sequencing and storage of memories, but its contextualization that offers a point of advantage for survival within a decision-making framework; outside these contexts there are no useful memories and therefore no relevant knowledge. A constellation of neural networks spread across multiple brain regions must work together to grow knowledge over time. Knowledge must be stored in such a way that it can be accessed by multiple tokens, or handles, at any time. The emergence of knowledge networks underlies the evolution of complex brains that can predict outcomes, induce imagination and expand knowledge. Attention and sleep play important roles in creating and protecting knowledge.

Abstract

The brain receives information via sensory inputs through the peripheral nervous system and stores a small subset as memories within the central nervous system. Short-term, working memory is present in the hippocampus whereas long-term memories are distributed within neural networks throughout the brain. Elegant studies on the mechanisms for memory storage and the neuroeconomic formulation of human decision making have been recognized with Nobel Prizes in Physiology or Medicine and in Economics, respectively. There is a wide gap, however, in our understanding of how memories of disparate bits of information translate into “knowledge”, and the neural mechanisms by which knowledge is used to make decisions. I propose that the conceptualization of a “knowledge network” for the creation, storage and recall of knowledge is critical to start bridging this gap. Knowledge creation involves value-driven contextualization of memories through cross-validation via certainty-seeking behaviors, including rumination or reflection. Knowledge recall, like memory, may occur via oscillatory activity that dynamically links multiple networks. These networks may show correlated activity and interactivity despite their presence within widely separated regions of the nervous system, including the brainstem, spinal cord and gut. The hippocampal–amygdala complex together with the entorhinal and prefrontal cortices are likely components of multiple knowledge networks since they participate in the contextual recall of memories and action selection. Sleep and reflection processes and attentional mechanisms mediated by the habenula are expected to play a key role in knowledge creation and consolidation. Unlike a straightforward test of memory, determining the loci and mechanisms for the storage and recall of knowledge requires the implementation of a naturalistic decision-making paradigm. By formalizing a neuroscientific concept of knowledge networks, we can experimentally test their functionality by recording large-scale neural activity during decision making in awake, naturally behaving animals. These types of studies are difficult but important also for advancing knowledge-driven as opposed to big data-driven models of artificial intelligence. A knowledge network-driven understanding of brain function may have practical implications in other spheres, such as education and the treatment of mental disorders.

1. Introduction

1.1. Motivation

Some humans are very good at memorizing facts and others have uncanny imaginative abilities that can be translated into beautiful works of art. Still others are very knowledgeable, i.e., they have a deep understanding of a particular information domain that can lead to insights for solving nontrivial problems. The term “knowledge” can be construed to represent a working model of a particular aspect of the real world. Acquiring this knowledge takes time and is built on processes involving cross-validation and justification in an attempt to arrive at the “truth”. It is not merely a memory gained through sensory experience or through repeated association of a stimulus with a reward or punishment. Though typically applied to human domains of information, knowledge is equally important for any organism seeking to gain insights from its social and environmental interactions to make informed decisions.
In this perspective article, I extend neuroscientific findings on memory acquisition, storage and recall to propose a neurological framework within which to define knowledge as a wide-area neural network. Such a network can be referred to as a knowledge network (KN). KNs participate in creating and storing knowledge and have several key properties. These properties include interconnectivity between multiple brain regions, such as fronto-cortical cognitive and limbic emotive structures. KNs are considered to be highly dynamic networks, portions of which link up transiently depending upon the context, an explicit query or a physiological state or drive. KNs also need to transiently link up with attentional brain circuits to extract current information either from an incoming sensory stream or from memory. KNs eventually direct their output to decision-making and action-selection networks. Catastrophic events, such as “9–11” for humans in the US, can transiently crash KNs, causing confusion and the need to create a new real-world model.
Speech perception, reading and language can be considered as types of developmentally entrained, sapient KNs that acquire cross-validation via explicit instruction and are designed to rapidly gain new knowledge [1,2,3,4,5,6]. Acquiring a navigational map equates to developing a KN of one’s surroundings within which to make quick decisions about escaping predators, foraging and finding mates. A navigational KN undergoes cross-validation and certainty or truth-seeking via repeated, direct interactions with the environment and has been partially elucidated for navigation in rodents [7,8]. KNs are important for decision making and survival. A navigational KN must be reliable for making split-second decisions for survival. This becomes obvious when observing a rabbit swerving in seemingly random directions to dodge a wild cat or a dog, yet still able to find its rabbit hole while running at top speed over rough terrain.
Below, I first briefly describe the philosophical idea and usage of the term “knowledge” and show how it relates to well-defined concepts of information and memory at the phenomenological level. I then elaborate on some of the neuroscientific attributes of knowledge and propose the building blocks and mechanisms for knowledge acquisition, storage and recall. Furthermore, I conceptualize the formation of KNs in an equation form and show where they fit in our current understanding of the evolution of the nervous system. Finally, building on recent studies on neural mechanisms for attention and sleep for memory formation and consolidation, I clarify their role in acquiring knowledge that gets embedded within KNs.

1.2. From Information to Knowledge

Ever since the formulation of information theory [9,10], much has been written on information processing from informational and neuroscientific perspectives [10,11,12,13,14]. Knowledge, however, remains less well defined from both a neurological and a computational perspective. Nor are there any mathematical definitions of knowledge or a formal knowledge theory. Lynn et al. [15,16] have recently provided a formulation of the perceived information (cross-entropy) within a communication system as being the sum of the large amount of information produced (having high entropy) and the efficiency of the observer’s representation of that information (low divergence from expectations). The brain’s ability to store and retrieve information, i.e., learning, memory and recall, is of fundamental importance for the elaboration of goal-directed behaviors that frequently involve decision making [17]. This requires querying the brain and extracting bits of relevant information simultaneously from various loci to direct and question decision-making and goal-setting tasks. We may term this contextualized information set as “knowledge”. Neural networks that are required to perform these functions have not been clearly identified in either the neurobiological or computational domains. In this brief review, I provide a nonformal overview of the differences between information, memory and knowledge from a neurocognitive perspective. I propose a conceptual framework for describing KNs as a dynamic grouping of realistic neural networks different from those needed to extract information and create a memory.
A theory for information processing was first formulated to quantify the information load that the human mind can encode, store and retrieve over the short term, emphasizing capacity limits without external aids or techniques [18]. Its formulation was based on experimental data generated by others from the presentation of auditory, taste and visual stimuli to human subjects. Further mathematical elaboration was focused on the coding and decoding of signals for transmission and reception [19]. Work on cognitive and neural mechanisms for long-term storage capacity and the contextual linkage with existing information required animal experimentation using sophisticated techniques that were not available at that time. One way to do so is to integrate relevant information bits as synaptic connections whose strength or “weight” within local and global networks can be continuously modified. The configuration and patterns of activity generated by this network, when queried by inputs (in the form of either real or virtual stimuli), function as a model of the world in which an organism exists. This model can be equated to what we term as “knowledge” and the neural networks that participate in creating, consolidating and modifying knowledge can be referred to as knowledge networks or KNs. KNs incorporate the core memory networks described by others [20,21], but also include cross-validation (for truth seeking), organizational (sequencing and timing), contextualizing and attentional networks [22]. Parts of these nonmemory networks remain less well-defined anatomically but must come online via their interconnections during query (recall) and reflection. In this context, it is noteworthy that even simple memories need to be reactivated before they can be modified or erased, emphasizing the inbuilt labile nature of a KN [23,24,25].

1.2.1. Feature Extraction

Behavioral and cognitive psychology can go only so far in providing mechanistic insights into the way brain networks accomplish the task of developing and implementing a knowledge-based model of the world for decision making and action selection. For that, we need to take a step back and first understand how information is created from the sensory environment. A necessary step in the process of information creation via sensory perception is feature extraction. Naturalistic signals or stimuli in the environment are complex and typically have multiple elements and parameters by which they can be specified. From these, the brain uses parallel–hierarchical processing to extract features of varying levels of complexity [26]. Feature extraction is perhaps one of the most universal and useful aspects of sensory processing regardless of stimulus modality. Some of these features contain information that contributes to stimulus discrimination and object recognition [26,27,28,29,30,31]. A defining feature of an object does not solely depend on the physical properties of an object but also is the outcome of neural processing governed by the receptive fields of higher-order neurons [32,33,34,35]. A large amount of research on sensory processing and cognition has focused on feature identity and extraction and this is also a central theme of deep learning in artificial neural networks [13,36,37]. It should be noted that, at a more abstract level, cognition also involves terms like thinking, knowing, understanding, reasoning, judging and problem-solving. This review does not attempt to tackle all of these concepts from a neuroscientific perspective, though in some cases they may be intimately connected to and rely on knowledge embedded within KNs. An understanding of how neural networks learn, recognize and problem-solve has led to major advances in artificial intelligence (AI) models for face, handwriting and voice recognition—problems that were seemingly impossible to solve by conventional programming and engineering approaches [38,39,40].
Restating the argument made above more succinctly, a physical entity is intrinsically recognized as a feature depending on its relevance for a particular function or goal, such as for food and mate recognition or selection for reproduction [41,42]. A feature may consist of simpler elements that can be easily synthesized and experimentally tested, such as orientation columns in the visual cortex [34,43]. From a behavioral perspective, simple elements within a feature are referred to as information-bearing elements or IBEs [44,45]. They contain information that is critical for computing a feature and its variants. Non-IBEs within a signal can usually vary without affecting the perception of a particular feature but may carry other types of information. Furthermore, an IBE can be quantitatively defined based on any number of physical parameters. Those parameters that are behaviorally meaningful or communicate information to an organism (low surprise), and by design are extracted and encoded by the relevant sensory system are referred to as information-bearing parameters or IBPs [44,45]. IBEs and IBPs were first defined to provide a framework for identifying and studying the acoustic elements in the pulse–echo combinations in the vocalizations produced by bats for echolocation [45].
In neuroethological studies of bats, empirical work yielded putative IBEs that could be meaningfully attributed to the substructures or components of the acoustic structures of the naturalistic pulse–echo signals used for echolocation. These readily identifiable components, such as constant-frequency (CF) tones with harmonics, and frequency-modulated (FM) sweeps, were shown to play a critical role in the computation of important acoustic features, such as relative target velocity and distance, and the wingbeat frequency and amplitude of insects. Relative target velocity, wingbeat frequency and amplitude are IBEs for a bat tracking a target, such as an insect [46]. In our example, the IBPs would be the values of relative target velocity as a function of the Doppler-shifted frequency (60.6 to 62.3 kHz in mustached bats), with the wingbeat frequency resulting in an amplitude modulation rate of 50 to 500 Hz in the range of 48.2 ± 10.7 dB SPL, corresponding to particular insects that are a good (nontoxic) food source [47,48,49]. Key elements or putative IBEs and IBPs are also present in the sounds produced for social communication in bats and other species [50,51,52]. For vision, IBEs may correspond to the combination of a particular shape, color and/or texture of an object, and for the olfactory system the combination of a mixture of odorants that can be used to identify a flower [53,54,55,56]. IBEs and IBPs can be determined by performing behavioral studies and estimated by determining the receptive fields of neurons.
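As a concrete illustration of how an IBP maps onto a behaviorally meaningful quantity, the short Python sketch below converts a Doppler-shifted echo frequency into a relative target velocity. The two-way sonar approximation and the numerical values are assumptions introduced here for illustration only; they are not taken from the cited bat studies.

# Minimal sketch: estimating an IBP (relative target velocity) from a
# Doppler-shifted echo frequency, using the standard two-way sonar
# approximation f_echo ~ f_emit * (1 + 2*v/c). Values are illustrative only.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def relative_velocity(f_emit_khz: float, f_echo_khz: float) -> float:
    """Relative (closing) target velocity in m/s from emitted and echo frequencies."""
    return SPEED_OF_SOUND * (f_echo_khz - f_emit_khz) / (2.0 * f_emit_khz)

# A hypothetical mustached bat emitting a 60.6 kHz CF component and
# receiving a 62.3 kHz echo would register a closing speed of about 4.8 m/s.
print(relative_velocity(60.6, 62.3))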
Neurophysiological and computational studies show that the IBEs are represented in the responses of neurons in many brain regions [57,58,59,60,61]. Cortical neurons and those present within hidden layers of multilayered, artificial neural networks exhibit high activation levels to either identifiable or abstract, low-dimensional representations corresponding to the principal components of the physical measures of an object [62,63]. They constitute IBE-like representations within real neurons and networks. Studies on the visual system of nonhuman primates identified a representation of IBE-like basis functions in the response of cortical neurons [64]. These basis functions do not necessarily represent easily identifiable elements within a naturalistic stimulus. Rather, they are statistical formulations of the real elements that are extracted as useful information. Features represent the building blocks of objects, and objects make up a scene [65,66] (Figure 1). Clearly, association between features is critical for computations that guide decision making and flight behavior, such as insect-tracking and capture, and the recognition of a food source. However, the memory of a feature that may help to identify a target or one that could be useful in identifying a social call does not in itself constitute knowledge.

1.2.2. What Is Knowledge?

To be meaningful, information must exist in the form of knowledge. The study of knowledge borders the domains of educational science, cognitive science and philosophy [68,69]. Knowledge is mainly acquired from sensory inputs to the brain, though even at the very first step of sensory transduction of various forms of energy into electric or neural energy, knowledge acts like a filter, biasing what reaches the neural networks deep within the brain. In other words, the creation of new knowledge and learning is itself influenced by existing knowledge. Thus, knowledge is much more individualized than information. Two people may be given identical information, but the knowledge it creates in each of their brains may be quite different.
It is useful to also consider the meaning of knowledge from a philosophical perspective, since the concept of knowledge existed long before any conception of mind and brain. Wikipedia defines human knowledge as “an awareness of facts, a familiarity with individuals and situations, or a practical skill” [70]. This type of definition of knowledge exists more within the domain of philosophy than within neuroscience or information science. From an epistemological perspective, propositional or declarative knowledge has been proposed to have three essential features or components. These features are individually necessary and jointly sufficient for achieving a state of declarative knowledge [71]. Knowledge of facts, also called propositional knowledge, has been characterized as true belief that is distinct from opinion or guesswork by virtue of justification. Belief involves various levels of certainty about an observation or thought [72]. A low level of certainty about an event or a relationship may be equated to imagination by others, whereas an elevated level of certainty indicates a strong belief in a perception. If a belief happens to accurately represent reality, then it is taken to be true. The process of testing is important for obtaining evidence to justify that a belief is true. The process of justification protects against a lucky guess becoming knowledge. This suggests that KNs must involve neuronal processes that test for the level of certainty, establish certainty or belief in a perception, go through a process of justifying that belief, and check for the presence of evidence in memory circuits or find a way to seek that evidence. This makes knowledge, unlike memories, more robust and less prone to forgetting. In neural terms, this means that knowledge about an issue may have a fractured, distributed representation so that knowledge can be evoked in many different situations. In other words, there are multiple entry points to activating KNs or their subsets, and their activation can impact decision making in many related circumstances. How exactly certainty is tested and justified at the neural level remains unclear (however, see [73]).
A KN’s representation of knowledge, i.e., a species-specific model of the world, may guide decision making in many circumstances, e.g., the strategy an animal adopts to find food or to interact with conspecifics to facilitate various aspects of social behavior. Neural networks that represent knowledge are more difficult to define and usually require simultaneous recordings of the activity of neurons in multiple brain regions in awake-behaving animals exhibiting naturalistic behaviors. For example, neurobehavioral studies of a laboratory rat, mouse or a primate species maintained or placed within a small cage or apparatus where the animal has limited degrees of freedom of movement and decision making are not well-suited to explore their KNs. A naturally enriched environment offers a much greater opportunity for an organism to create and use KNs for action selection. Thus, the use of neuroethological approaches to study animals under natural or semi-natural environments has a greater potential to identify KNs and the neural mechanisms underlying knowledge processing [74,75,76]. With a few exceptions, such as social vocalizations in the house mouse and bats, and spatial navigation studies in rats and bats, neuroethological and neuro-ecological studies are easier to perform in invertebrate species. Advanced invertebrate brains likely contain rudimentary KNs, supporting limited reflection and justification processes that are difficult to extract. The technology to examine neural network activity at a large scale in vertebrate species has recently become available, however, and its application to study KNs via appropriately designed experiments should be feasible [77,78].
Knowledge of the environment in which an organism functions is critical for accomplishing multiple activities that are essential for survival. These include territorial, social and foraging forays on a regular basis. Figure 2 captures the periodic, behavior-driven incorporation of epistemological aspects (e.g., reinforcement, trust and belief) and mechanisms (associative memories) that define knowledge in general, within a navigational framework. During navigation, a cognitive route map within networks (e.g., using place and grid cells) is automatically created by an animal’s behavior, and brain mechanisms (oscillations, excitation, inhibition, facilitation and spike-time-dependent plasticity) embed context-driven network properties within a navigational KN. Place cells fire maximally at a particular location based on multimodal sensory cues, whereas grid cells compute the vector from the starting to goal location. A cognitive map, typically created by strategic, free-exploration and probabilistic associations, is a neural model of the external spatial world which represents the distances and directions between locations.
Training animals on stereotypic actions, such as nose-pokes, within a constrained environment is important for examining the physiology and pharmacology of specific local circuits involved in either aversive or reward memories. Chronic neural recordings during free exploration behavior, such as for navigation, offer a more open-ended approach to finding key neuronal and network properties [79,80]. These types of studies led to the discovery of place neurons within the hippocampus and of grid cells within the entorhinal cortex (EC), important findings that were recognized with a Nobel prize [81]. In the last decade, research on networks and neurons underlying navigation continues to be a source of groundbreaking new findings, such as of ring and toroidal structures, border cells, stripe cells and head-direction cells in rats and free-flying bats [82,83] (Figure 3A). We still need to understand how navigational knowledge is generated, contextualized, and recalled within different contexts.
Within a KN concept, the understanding of navigation can be further advanced by adopting techniques of chronic recordings of positional information and neural activity from multiple brain regions while animals engage in navigation within different contexts (Figure 3B). Activity-dependent mapping, e.g., using c-FOS and ZENK as markers of early gene expression, has been used in songbirds to identify neurons for song discrimination and production [84,85,86,87]. The use of two-photon imaging and neural activity markers, such as CaMPARI, in freely behaving animals can also facilitate the identification of sensorimotor networks that are co-activated, and show how they engage a core KN to accomplish different phases of navigation [88]. These approaches can also explain how contextual queries are initiated and how they might reconfigure particular networks for multiple functions, e.g., shifting the excitatory–inhibitory balance in core networks and engaging new networks. This is important to know because the brain evolves as a whole, and while reductionist approaches are useful for identifying gene and receptor function and pharmacology, they also take us away from achieving a holistic understanding of the brain.
In common terms, knowledge is often understood as an awareness of facts (declarative knowledge) or as practical skills (procedural knowledge) and may also mean familiarity with objects or situations. The same terms have been used to categorize memories. Episodic memories require the activation of action-related networks that are associated with an activity or an event. For a long time, it was debated whether memories are stored within specific areas in the brain or distributed as information bits throughout the brain. Current data support the latter scenario [89,90]. If so, then, as in a computer, one also needs a directory of sorts to point to the address of each information bit. This means that to access a bit of information, one first needs to look up the registered address and then utilize the information in memory. This could mean first accessing another memory to locate the directory that in turn points to a bit of pertinent information. Theoretically, this sequencing of addresses and memories could lock up neural access within an infinite loop. While loops are important components of information processing, they do not represent an efficient way to store information. Also, it is energetically costly to not only “remember” a certain bit of information but then also go to a location to recall where that information is stored. This further complicates recall in proportion to the number of instances or bits of information that need to be retrieved. In real life, one can equate this to finding a user manual for an electronic device, such as a television set. If one wants to obtain some information on the location or function of a port or connector, then it is best if one can either look directly at that device or look up the relevant information in a user manual sitting next to the television set rather than first locating and retrieving it from a different location. Moreover, this process involves the need for additional memory storage that can greatly increase the cost of obtaining information by a factor proportional to the number of bits accessed, slowing down recall. It may also trigger an infinite loop, making the system crash. Alternatively, directory information may be stored at a central location and automatically channeled to the source of the query. How this is accomplished within the brain without the involvement of an outside agent, as in the case of an individual using their brain to access the memory and physically locomoting to retrieve it, is still unclear.

1.2.3. Memory vs. Knowledge

Memory remains a vital component of knowledge, and considerable progress has been made towards understanding memory formation, consolidation and recall. Short-term memories are first transiently stored within the electrophysiologic activity of neurons and networks [91]. Some memories are then consolidated within neural networks throughout the brain via the modification of dendritic spines and synapses that are only just beginning to be delineated [92,93,94,95]. Over the long term, the relevant networks somehow become activated to access a specific memory each time a decision is to be made. It is as if the brain has the capacity to “dial-in” to different networks within a knowledge domain to extract information.
As a real-world example, to communicate with others to share and/or obtain information, one may access a digital phonebook and click on a number to be dialed. These actions require energy, so that there is a cost–benefit ratio not only for movement but also for network usage within the brain. Many mental disorders, such as depression and schizophrenia, can be considered, respectively, as a high perceived cost of action and as decision making going awry. These disorders can be triggered by chronic anxiety and stressful states that release hormones into the bloodstream via the activation of the hypothalamus and other endocrine organs, such as the adrenal glands [96]. Chronic release of these hormones can result in changes in the wiring of the brain [97,98,99,100]. It can also disrupt metabolic processes and deteriorate telomeres, the endcaps of chromosomes [101,102,103]. Knowledge not only lowers the basal state of energetic cost by providing access to previously obtained information, i.e., via experience, but also allows an organism to make decisions and take action in the interest of lowering energy consumption over the long term and increasing the probability of survival. But what is knowledge in contrast to information and memory, and how is it created and stored in the brain? Semantic or conceptual knowledge has been defined and studied only loosely from a neurobiological perspective [69,104], and studied in a very limited way from a computational perspective [17]. Computationally, structured knowledge is thought to reside in long-term memory as a distributed activity pattern and accessed via partial retrieval cues. According to a proposed model, sequences of memories can potentially be stored as single attractors within recurrent neural networks [17,105]. To gain a more tangible sense of KNs, let us now consider brain structures that are well known to play a role in memory mechanisms.

1.3. Brain Structures Involved in Memory Formation and Recall

Memories, such as those used for face recognition, may be purely experiential [106], or they may be created via association with either reward or punishment [107]. Experimentally, memory formation can be studied in animals only via either reward or fear associations, i.e., within an emotive context. While an emotive context is a powerful mechanism of memory formation given the direct relationship of emotions with survival, not all memories are formed in this way. Associations can happen within any two temporally bound sensory inputs, provided those associations occur at a statistically significant level over other chance associations.
The hippocampus, a seahorse-shaped brain structure, stores information transiently until it is contextualized and stabilized, and then channeled to a neural network at a specific location for long term storage. In this scenario, memory may be accessed by reactivating the memory in a context-driven manner. In other words, context may activate a specific section of the “phonebook” to automatically “dial-in” to a particular network to access a specific memory and/or to transform it, given a new set of contextualized inputs. In the phonebook analogy, the same brain structure, the hippocampus, cannot however function both for long-term storage and as a pointer to the correct location of a phone number entry since the information represented by each phone number may change with time. We are, however, concerned less here with details of information storage and more about what that information represents and what it accomplishes for an organism’s survival.
The hippocampi can be divided into a dorsal and a ventral subregion [108]. Whereas the ventral hippocampus receives information from the basal nucleus of the amygdala (BA), the dorsal hippocampus sends information to the basolateral nucleus of the amygdala, which also receives sensory information from the cortex and thalamus [109,110,111]. The reciprocal connectivity of the ventral hippocampus with the amygdala plays a central role in creating contextual memories, as summarized in Figure 4. The hippocampus binds together item and context information related to a study event. It receives information from the amygdala, an almond-shaped brain structure consisting of multiple nuclei, as well as from the perirhinal and parahippocampal, including entorhinal, cortices. These structures transfer information to the hippocampus, respectively, about an event from the emotive value stream, and the “what” and “where” streams. The BA also projects to both the medial and lateral regions of the entorhinal cortex (EC). The BA receives projections back from the CA1 region and the subiculum as well as from the lateral EC (LEC). The lateral nucleus of the amygdala (LA) sends excitatory output to the intercalated cells (IC), which inhibit both the medial and lateral nuclei in the central amygdala (CeA). The LA also sends excitatory output to the BA, which in turn can excite both the medial and lateral nuclei in the CeA. The basolateral amygdala (BLA) has reciprocal connectivity with the prefrontal cortex [112]. Theta and gamma oscillatory activity within the hippocampus plays an important role in creating and retrieving navigational knowledge. Theta–gamma phase coupling encodes navigation-related functions, and spike timing within theta oscillations is important for memory consolidation and recall (Figure 3). With respect to navigation, e.g., during free exploration in a familiar environment (Figure 2), and during memory recall, CA1 pyramidal neurons respond most effectively to CA3 input [113]. Eventually, all information is transmitted to the central nucleus of the amygdala for action-related decisions.
Finally, the prefrontal cortex (PFC), particularly the dorsolateral PFC, is heavily involved in working memory functions. It is also clearly important for maintaining and updating information, as well as for emotional regulation and motor control. The PFC, in general, plays a central role in orchestrating complex cognitive processes, such as executive functions, which include planning, decision making, problem solving, and controlling attention [114,115]. Therefore, KNs are expected to be intimately connected with and continuously interact with the PFC, though knowledge itself is likely distributed in wide-area networks extending to higher-order sensory cortices as well as diencephalic structures, such as the habenula.
Episodic memory for navigational and other tasks involves the conversion of sensory inputs into working memory. A working memory activity pattern in the hippocampus must be transferred to long-term memory stores, such as in the neocortex, to process the next set of events or sensory inputs. By definition, navigation is a sequential event. Therefore, it is not surprising that the hippocampus plays an important role in both functions and has been studied as such. Building on the adaptive resonance theory (ART) family of neural models, Grossberg and colleagues have advanced hypotheses to explain both the working memory and navigational functions of the hippocampus [116,117]. Their model stresses the importance of mass action-induced theta rhythms [118] in both mapping and extracting the information of an organism’s spatial environment for navigation. As indicated earlier (see also Figure 3A), time- and distance-encoding cells, together with head-direction cells, project to grid cells in the entorhinal cortex (EC), which, in turn, project to place cells in the hippocampus [7,116]. In this scheme, the theta period represents a temporal metric for sequence learning [119] and also allows activity in widespread hippocampal and neocortical networks to be temporally coordinated [116]. The idea is that theta activity results in either gamma or beta oscillations, depending on match (resonance) and mismatch (reset) between expected vs. actual inputs. Altogether, the KN for navigation in mammals contains idio- and allothetic sensory, working memory, long-term memory, emotive–limbic, and action-selection modules. These reside in interconnected anatomical structures, namely, the thalamus, sensory and parietal cortex, hippocampus, subiculum and EC, amygdala, as well as the prefrontal cortex and subthalamic nucleus for action selection and the motor cortex/basal ganglia for motor output—for details see Figure 1 in Bermudez-Contreras et al. [120]. The retrosplenial cortex is a key structure that receives both head-direction and allocentric information. Most recently, Rolls and colleagues have expanded their quantitative theory of hippocampal function for short-term memory storage and recall without invoking mass action in the form of oscillations [121,122]. They include the orbitofrontal cortex as a source of reward-related input that is used to bind multimodal information within the hippocampal circuitry and invoke “concept” and “spatial view” cells within the hippocampus to explain goal-directed navigational behavior [121,123].
As with mirror neurons in Broca’s area [124], it is becoming clear that the initial terminology developed to identify neurons and networks that conduct a specific function can usually explain a general class of functions. Thus, a part of the navigational network can also contribute to other cognitive processes based on associative thought [121]. This stresses the multifunctional nature of cortical, and possibly subcortical, neurons and networks [125,126,127,128,129]. Recent findings that expand the interconnectivity of a working memory module to its application to a task-directed navigation network are a prime example of the need to establish a viable and reliable KN where cross-validation can occur on a daily basis as an organism roams its environment under different environmental conditions and physiological states. Similar cross-validation and certainty-seeking processes can also occur via reflection-driven fronto-cortical activity.

2. Proposition

2.1. Knowledge Equation

The brain is essentially a complex and highly plastic network of neurons. Therefore, we presume that knowledge is stored as a state of synaptic interconnectivity within neural networks, partially as what we term as a memory of some information. To be meaningful (for survival), new information must be contextualized, linked up with existing information and tested for consistency or coherence before being stored as knowledge. In philosophical terms, knowledge of “facts” is often defined as true belief that is distinct from opinion or guesswork by virtue of justification. The process of justification is what delineates knowledge from a piece of factual information or a memory of it. The generation of knowledge involves the summation of cross-validated information over time and potentially across multiple timescales.
In this section, I propose a mathematical formulation of knowledge and of KNs. The motivation to do so is two-fold. First, it should allow those working in the domain of AI to incorporate a concept of KNs and develop new processes that go beyond using brute force, big data approaches to train artificial neural networks. Once established, the idea of KNs will allow systems and robots to learn from their interactions with the environment as humans and other animals do. Second, a mathematical conceptualization of KNs together with enabling technologies, in turn, can stimulate a more rigorous and comprehensive understanding of human cognition as well as predict the effects of neurological disorders, aiding in the development of diagnostic tools and treatments. I hope this will enable the simulation of knowledge-driven brain networks, such as for a better understanding of navigation, language learning and education in general (e.g., incorporating the role of emotional, motivational and movement networks to generate trust and belief). This will allow researchers to test hypotheses and explore the effects of different variables included in the mathematical formulation. Additional details of the mathematical extensions for these applications are beyond the scope of this review, but I hope will be picked up by others working in their respective fields.
First, let us try to formulate, in rigorous terms, the concept of knowledge residing within KNs. For knowledge to remain relevant, forgetting is as important as incorporating additional information. This is in fact critical for planning and efficient decision making. The state of knowledge at any time therefore can be given by a knowledge equation, where the information is updated by integrating new with existing information, and outdated or irrelevant information is deleted via forgetting to keep knowledge viable. Later, we will discuss how and when the forgetting happens in neurobiological terms.
To mathematize the idea that knowledge is a result of integrating new incoming information while forgetting old information, we can model knowledge as a dynamic system that changes with time. Some variables relevant to such a model are as follows:
  • K(t): the cumulative knowledge at time “t”;
  • G(t): The rate of incoming new information at time “t”. G(t) is modeled as a function of time, depending on how information is received. For instance, it could be constant, exponentially growing or influenced by other factors like attention or exposure;
  • L(t): The rate of loss or forgetting of old information at time “t”. This loss can be transient, triggered by distraction, or permanent, via neuronal or synaptic degradation, as in Alzheimer’s disease;
  • α: the integration rate constant, which determines how efficiently new information is integrated into the current knowledge state;
  • β: the forgetting rate constant, which determines how quickly old information is forgotten.
α and β are constants that quantify the efficiency of integration and the rate of loss of information via forgetting, respectively. In terms of memory, these can be determined either empirically from learning curves or theoretically based on the context.
The change in the state of knowledge over time, dK(t)/dt, can be expressed as a first-order linear differential equation:
dK(t)/dt = αG(t) − βK(t)
where αG(t) represents the gain in information or contribution of new information to knowledge, and, within the context of memory, βK(t) represents the loss of information due to forgetting.
If we assume the same input G(t) for memory networks, we have to set up the initial conditions, where M(0) = M₀ is the initial memory and K(0) = K₀ the initial knowledge at time t = 0. Then, we arrive at a form of the equation for memory:
M(t) = e^(−βt) ( α ∫₀ᵗ G(s) e^(βs) ds + M₀ )     … (Memory equation)
where the integral term, ∫₀ᵗ G(s) e^(βs) ds, represents the cumulative effect of incoming new information over time, adjusted for the forgetting rate, and “s” is the integration variable running over past times. It should be noted, however, that knowledge is less prone to forgetting, and this equation applies strictly to the memory components of knowledge.
Assuming, for simplicity, that G(t) is a constant G₀, we can find the explicit form of K(t) by solving the differential equation
dK(t)/dt = αG₀ − βK(t)
The general solution, obtained using the integrating factor μ(t) = e^(βt), is
K(t) = αG₀/β + (K₀ − αG₀/β) e^(−βt)     … (Knowledge equation)
  • The term αG0/β represents the steady-state knowledge level when the rate of integrating new information and the rate of forgetting are balanced.
  • The term (K₀ − αG₀/β) e^(−βt) represents the transient behavior of knowledge over time, showing how it approaches the steady-state level. The rate at which K(t) approaches the steady state is governed by β.
Overall, the Knowledge Equation captures the dynamic nature of knowledge accumulation and decay over time. It provides a mathematical framework for understanding the dynamics of knowledge as a function of new information integration and forgetting. This model can be adapted or extended based on specific contexts or additional complexities in the real-world scenarios being modeled. The variables and constants used are intuitive and can be adjusted based on empirical data or specific scenarios to reflect different learning and forgetting processes.
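As a concrete illustration, the following minimal Python sketch integrates the knowledge equation numerically for a constant input G(t) = G₀ and checks the result against the closed-form solution given above. The parameter values (alpha, beta, G0, K0) are arbitrary placeholders chosen for illustration, not empirically derived constants.

import numpy as np

# Illustrative constants (placeholders, not fitted to data)
alpha, beta = 0.8, 0.1   # integration and forgetting rate constants
G0, K0 = 1.0, 0.0        # constant information inflow and initial knowledge

# Forward-Euler integration of dK/dt = alpha*G0 - beta*K
dt, T = 0.01, 100.0
t = np.arange(0.0, T, dt)
K = np.empty_like(t)
K[0] = K0
for i in range(1, len(t)):
    K[i] = K[i - 1] + dt * (alpha * G0 - beta * K[i - 1])

# Closed-form Knowledge equation: K(t) = alpha*G0/beta + (K0 - alpha*G0/beta)*exp(-beta*t)
K_exact = alpha * G0 / beta + (K0 - alpha * G0 / beta) * np.exp(-beta * t)

print("steady-state knowledge alpha*G0/beta =", alpha * G0 / beta)
print("max |numerical - analytical| =", np.abs(K - K_exact).max())

The numerical and analytical curves agree, and both saturate at the steady-state level αG₀/β, illustrating how knowledge accumulation is bounded by the balance between integration and forgetting.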

2.2. Knowledge Networks

Within a neural net framework, it is presumed that knowledge is embedded within a network that, when activated, will produce a set of outputs different from those it might produce in its nascent or initial state. A network carries information through its activity and the properties of the neurons within the network, but the knowledge it imparts depends on the context within which it is queried. The same network can produce different sets of outputs depending on how it is queried, that is, how other inputs activate this network. Thus, a generic information network can be a storehouse of multiple knowledge sets. This conception of a knowledge network is supported by the multifunctional nature of many neurons and networks [127,128,129,130].
Locally re-afferent and looping circuits are much more likely to be a component of KNs that store category-specific object knowledge [131,132]. An example of this type of network is shown in Figure 5A. That means that KNs will likely have neurons that show either tonic firing or bursting as the information loops between different networks as a representation of knowledge. This indeed appears to be the case when recording single-unit activity from neurons in the frontal cortex in response to complex, naturalistic sounds [60,62,133,134]. In contrast, neural responses to the same sounds from the primary auditory cortex are phasic and time-locked to stimulus onset. Also, because of their dynamic nature, the representation of knowledge within neural networks requires an activity component that is difficult to define a priori and is best studied using naturalistic stimuli [135]. These properties are not yet incorporated within artificially trained, deep learning networks, but in time hopefully AI will utilize an integration between multiple networks, each of which is trained for a specific task but relies on inputs or knowledge from other networks for justification. The knowledge could be related to ethics and other such constraints that can be independently updated. Presumably, the next generation of artificially intelligent agents will rely on knowledge-based models rather than generative language ones that are driven largely by statistical features. KNs have been alluded to previously from the viewpoint of social networks. Within this context, “KNs” are collections of individuals and teams who come together across organizational, spatial and disciplinary boundaries to invent and share a body of knowledge [136]. From an educational perspective, a network that provides knowledge to an organism is more representative of what we think of as “understanding”. Therefore, KNs need to be continuously updated, are multifunctional and are more highly distributed than an information-extracting or a specific-memory network.
KNs can be considered as both valence-driven and referential. A simple example of a contextualized KN involves circuits within both the hippocampus and the amygdala. For example, if one walks into a dark alley, the hippocampus provides spatial information, whereas the amygdala provides valence information from either past experiences or from “instructional” inputs. This “knowledge” then guides one’s decision about whether to enter the alley or, once there, to leave it as soon as possible. If this knowledge is based on experience, then the hippocampus plays a role in embedding it in memory within the entorhinal cortex, where valence appears to be represented within spatial coordinates. Figure 5B shows neural activity flowing through a sequence of generic information networks where each network holds a specific memory or bit of information. A KN would constitute a cluster of smaller networks that are typically accessible via a query or contextual input (Figure 5C). Intuition is a form of intrinsic knowledge that is spontaneously activated, and is less under the control of conscious inquiry and rational analysis. The components of intuitive KNs are expected to be present within the enteric nervous system and subcortical brain structures, such as the reticular formation.
Neural networks filter, process, code and decode signals as information either in the form of sensory, recurrent or feedback signals [137,138,139]. KNs, as formulated here, do not necessarily perform any of the low-level functions, but they can modulate, gate and facilitate activity within neural networks, including memory modules. KNs store the information as knowledge that can later prove useful to the organism. Hence, unless genetically encoded, KNs are created over time, taking into account the consequences of prior action and information processing. KNs tend to be distributed globally and need to be dynamically interconnected, especially when a decision needs to be made and/or an action taken. Whole brain networks have previously been constructed using graph theory [140,141]. Thus, KNs and the neurons within them can be thought of as being multifunctional. This is not a requirement for purely sensory neurons, though they may also process multiple types of signals. Information can reside within static networks. Knowledge, however, requires information to be stored in contextualized networks (Figure 2 and Figure 5C). Since the context can change with every instance, knowledge cannot be represented by a static network.

2.3. Mathematical Formulation of KNs

As with knowledge, the state of a KN must also evolve with time. One can consider a KN to be a sum of multiple transient networks that are updated by new external (allothetic) and internal (idiothetic) sensory inputs and states. This may be represented symbolically as
KN(t) = NN₁(t) + NN₂(t+1) + NN₃(t+2) + … + NNₓ(t+x)     … (Network equation)
where KN = knowledge network, NN refers to a neural network and t = the time at which the KN and NNs are accessed; NNs can be memory, cross-validation, context or organizational networks that are transiently linked during knowledge creation and access.
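The Network equation can be read as a time-staggered summation over transiently linked component networks. The short Python sketch below expresses this reading; the component networks and their activity functions are hypothetical stand-ins for illustration, not identified anatomical circuits.

import math
from typing import Callable, Sequence

def knowledge_network(nets: Sequence[Callable[[float], float]], t: float) -> float:
    # KN(t) = NN1(t) + NN2(t+1) + NN3(t+2) + ...: each component network is
    # accessed with an increasing time offset as the KN is assembled.
    return sum(nn(t + offset) for offset, nn in enumerate(nets))

# Hypothetical component networks (memory, context, cross-validation)
memory_net = lambda t: math.exp(-0.1 * t)    # slowly decaying memory trace
context_net = lambda t: 0.5 * math.sin(t)    # context-dependent modulation
check_net = lambda t: 0.2                    # constant cross-validation signal

print(knowledge_network([memory_net, context_net, check_net], t=1.0))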
To mathematize the concept of KNs rigorously, we need to model the dynamics of these networks, their interactions and how they contribute to the overall knowledge state [142]. First, we make the following assumptions:
  • The activity of neural networks, including memory modules, can be described by continuous functions of time.
  • The connection weights between neural networks and memory modules are constant over time.
  • The contribution weights of neural networks and memory modules to the knowledge network are dynamic and vary with time.
The following components and variables are an integral part of the model formulation:
  • Neural Networks and Modules:
    • Nᵢ(t): the activity level of the i-th NN at time t;
    • Mⱼ(t): the activity level of the j-th memory module at time t;
    • wᵢⱼ: the connection weight between the i-th NN and the j-th memory module.
  • Knowledge Networks (KNs):
    • K(t): total knowledge at time t;
    • αᵢ(t): the weight of the i-th NN’s contribution to the KN at time t;
    • βⱼ(t): the weight of the j-th memory module’s contribution to the KN at time t.
  • Dynamics and Interactions:
    • G(t): the rate of gain of new information at time t;
    • F(t): the rate of forgetting old information at time t.
Furthermore, a mathematical model of a KN has the following properties:
  • Neural Network Dynamics: The activity level of each NN, Nᵢ(t), represents the dynamic activity levels of different neural circuits that contribute to knowledge. This can be influenced by incoming information and by interaction with memory modules.
  • Memory Module Dynamics: The activity level of each memory module, Mⱼ(t), represents the activity levels of memory storage systems that interact with neural networks, and can be influenced by the activity of a neural network.
  • Connection Weights: the weights wᵢⱼ represent the strength of interaction between neural networks and memory modules.
  • Contribution Weight Dynamics: the weights αᵢ(t) and βⱼ(t) represent the dynamic importance of each neural network and memory module to the KN.
  • Total Knowledge Dynamics: The total knowledge at time t, after adjusting weights, is a function of the contributions from neural networks and memory modules. The final solution for total knowledge within a KN at time t can be modeled as
K(t) = Σᵢ αᵢ(t) Nᵢ(t) + Σⱼ βⱼ(t) Mⱼ(t)     … (KN knowledge equation)
A model incorporating the above features captures the dynamic nature of knowledge networks, integrating new information, and the forgetting process, with contributions from both neural networks and memory modules.
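To make the KN knowledge equation concrete, the following Python sketch evaluates K(t) for a small set of simulated activity traces and time-varying contribution weights. The network counts, activity signals and the normalization of the weights are arbitrary modeling choices introduced here for illustration, not values specified in the text.

import numpy as np

rng = np.random.default_rng(0)
n_nets, n_mem, n_steps = 4, 3, 200          # illustrative network counts and time steps
t = np.linspace(0.0, 10.0, n_steps)

# Simulated activity of neural networks N_i(t) and memory modules M_j(t)
N = np.abs(np.sin(np.outer(rng.uniform(0.5, 2.0, n_nets), t)))
M = np.abs(np.cos(np.outer(rng.uniform(0.5, 2.0, n_mem), t)))

# Dynamic contribution weights alpha_i(t) and beta_j(t), normalized at each
# time step so they sum to one (a modeling choice, not from the text)
alpha = rng.random((n_nets, n_steps))
beta = rng.random((n_mem, n_steps))
total = alpha.sum(axis=0) + beta.sum(axis=0)
alpha, beta = alpha / total, beta / total

# KN knowledge equation: K(t) = sum_i alpha_i(t)*N_i(t) + sum_j beta_j(t)*M_j(t)
K = (alpha * N).sum(axis=0) + (beta * M).sum(axis=0)
print("K(t) at first and last time step:", K[0], K[-1])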
Using partial differential equations (PDEs) can make the model more realistic by capturing the spatial and temporal dynamics of KNs, reflecting how knowledge is dynamically structured and represented in the brain [143]. Specifically, sparse connectivity contributes efficiency, robustness and flexibility to the KN model [144]. Efficiency reduces the computational and energetic load on the brain by minimizing unnecessary connections. Robustness enhances the brain’s ability to isolate damage or dysfunction, as fewer connections mean that problems in one area are less likely to propagate widely. Flexibility allows the brain to adapt and reorganize more easily in response to learning and new experiences, as specific pathways can be strengthened or weakened without affecting the entire network. The mathematization of a PDE model is beyond the scope of this review, but PDE equations allow one to model how knowledge, neural activity and memory activity evolve not only over time but also across different regions of the brain. Hence, as noted earlier, to test these types of models, it is necessary to record neural activity over time and across different regions of the brain in actively behaving animals.

2.4. Evolution of Knowledge Networks

The identification and quantification of naturalistic, unimodal stimuli and binary motor behaviors have played an important role in advancing neuroethological approaches to understanding neural organization and mechanisms. Although such approaches will continue to be of benefit, given present-day technologies and those that will be available in the near future, it is timely to consider neural organization within more complex, integrative multimodal frameworks. It is time to address the fundamental integrative unit of brain organization and potentially its evolution. This unit almost always has to be multimodal because organisms function and evolve within a multimodal environment, even though sometimes one sensory system may play a dominant role, e.g., the auditory system in bats and dolphins, and the olfactory system in rodents. Dogs have excellent olfactory and auditory capabilities. Even within a single sensory modality, neurons can be multifunctional, switching their role depending on the context, such as for echolocation vs. communication in bats [129,145]. Similarly, memory networks may be transformed with context via neuromodulators, such as dopamine and oxytocin, as happens for pair bonding vs. maternal bonding in prairie voles [146]. The conceptualization of KNs offers a more robust, naturalistic and adaptive principle that may govern not only the organization of brain networks but also their evolution. A KN-based model emphasizes the acquisition and processing of information to gain knowledge as a real-world model, the primary and most urgently needed goal for the survival of a newborn. A KN must consist of and engage with multiple and multimodal networks for decision making and action selection. As the brain matures, specific memories may be lost or become inaccessible, having served their function of creating knowledge, and this knowledge is both essential and sufficient for an organism’s survival in its adopted environment and ecological niche. Thus, one way to think about the evolution of the brain is to consider that it was gradually configured over millennia towards a dynamic, knowledge-directed system to enable information storage and processing over increasing timeframes.
KNs incorporate distant memories and the present context for building predictive models of the social and physical environment. In humans, knowledge can sustain motivation and goal-directed action over a timeframe of several years. A long-term goal stored within a separate network can periodically query KNs and act as a planning and “decision center” for action selection as needed. A query can be triggered by a sensory cue in the environment (a physical change or a conspecific interaction) or by an internally generated, imaginative signal. It may also be triggered by the physiological state of the body, such as sleep, arousal or hunger, driving the organism to retrieve information based on knowledge about the location of a food source and to activate a behavioral algorithm that brings it to that source [147,148,149]. Thus, a KN may connect with components of sensory, attention, goal-setting/planning and motor networks that work together for decision making and action selection (Figure 2 and Figure 6), as sketched in the toy example below.
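As a purely conceptual illustration of such a query, the toy sketch below represents a KN as a small labelled graph and lets an internal state (hunger) retrieve candidate action sequences. All node names, the graph structure and the breadth-first traversal are hypothetical choices made for illustration only; they are not claims about how queries are implemented neurally.

```python
# Toy sketch of a goal/"decision center" querying a knowledge network (KN).
# The graph structure, node names and trigger states are illustrative
# assumptions and do not correspond to any specific neural data.
from collections import deque

# A KN as a labelled graph: nodes are contextualized memories, edges are
# learned associations that can be traversed during a query.
kn = {
    "hunger":           ["food_source_A", "food_source_B"],
    "food_source_A":    ["route_via_stream", "daytime_only"],
    "food_source_B":    ["route_via_ridge"],
    "route_via_stream": ["action:walk_downhill", "action:follow_water"],
    "route_via_ridge":  ["action:climb", "action:scan_horizon"],
}

def query_kn(graph, trigger, max_depth=3):
    """Breadth-first query: collect action nodes reachable from a trigger."""
    actions, frontier, seen = [], deque([(trigger, 0)]), {trigger}
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for nxt in graph.get(node, []):
            if nxt in seen:
                continue
            seen.add(nxt)
            if nxt.startswith("action:"):
                actions.append(nxt)          # candidate motor programs
            else:
                frontier.append((nxt, depth + 1))
    return actions

# An internal physiological state (hunger) triggers the query and returns
# candidate action sequences for the motor system to select among.
print(query_kn(kn, "hunger"))
```

In a biological KN, the “query” would of course correspond to patterned activity propagating through interconnected networks rather than an explicit graph search.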
Foraging strategies and many other adaptive behaviors depend on the complexity and size of the information-processing and storage system that an organism is endowed with. Single-celled organisms and simple multicellular organisms, such as sponges and coelenterates, respond to stimuli with orienting and translational movements that may be either directed or random [150,151,152]. These organisms require only a sensor and a motor element to react reflexively in a preprogrammed manner, with a built-in, limited range of tolerance for environmental change (Figure 7). They do not require a knowledge network. In more complex organisms, particularly vertebrates as well as some invertebrate groups, such as cephalopods with a well-organized brain, a goal-driven process may instead involve querying many networks that can extract knowledge before triggering a sequential decision set that ultimately gets converted into a sequence of actions [153].
In vertebrates, action sequences are embedded within brain structures, such as the basal ganglia and motor (including pre-motor) cortex for bodily movement, and the amygdala for autonomic functions and the physiological activity of internal organs [154,155]. Thus, neural networks are proposed to have evolved gradually, from neuroids dedicated to specific functions in sponges and simple, non-overlapping reactive nerve nets in hydra-like organisms, to more complex, bilateral brains in worms, and to a multilobed, distributed central nervous system in insects and cephalopods. These advancements led to the emergence of primitive to advanced KNs present within proactive and imaginative brains, respectively (Figure 7). Some knowledge elements become embedded within neural networks through genetic encoding, especially in invertebrates and lower vertebrates [156,157]. Recent findings on the organization of the lamprey brain support this possibility [158,159]. Eventually, increases in complexity and connectivity led to the development of imaginative brains in which scenarios could simply be imagined, with or without follow-up action selection [160]. Here, KNs became critical for providing top-down information via predictive computations [16,161,162,163,164]. Predictive networks enable “fact/error-checking” and the justification of decision making and action selection without reference to incoming sensory information [165].

3. Discussion

I have proposed here that a paradigm shift in our thinking is required to approach neurobiological research from a more comprehensive perspective. Despite numerous studies on learning and memory, we have barely elucidated the networks in which knowledge resides. Navigational systems represent an important frontier in this regard (see Figure 2). Below I discuss what we know about mechanisms for knowledge acquisition and, capitalizing on studies of memory consolidation, the important role of attention and sleep in the conversion of memory stores and contextual information into knowledge. As discussed later, an interdisciplinary approach promises to enhance the effectiveness of educational strategies and the sophistication of AI technologies, leading to better outcomes in both fields.

3.1. Knowledge Acquisition and Action Selection

Let us first examine in a little more detail the steps of information acquisition and processing within putative KNs. Exteroceptors, such as those in the eyes and ears, feed information about the external environment to the brain, whereas interoceptors, such as proprioceptors and carbon dioxide sensors, monitor and transmit information about the internal state of the body. Both are part of the peripheral division of the nervous system. These receptors send inputs to various parts of the brain, where the inputs are consolidated and evaluated based upon experience and stored as “memory”. Because sensory systems are adaptive, sensors at the level of the peripheral nervous system, and even more so central sensory processing networks, are primarily designed to detect changes in the quantity and quality of the sensory profile of the external and internal environment over different time windows. For example, the vestibular system in humans is activated by changes in linear and angular acceleration over an approximately 7 s time window [166]. Hence, when constructing rollercoasters to stimulate the vestibular system, a turn needs to be introduced approximately every 7 s of travel to satisfy the stimulus expectations of the riders. Let us now tackle the question of how knowledge is gained from these sensations.
Some neurons themselves function as sensors by having receptors on specialized segments of their cell membrane, such as olfactory receptors present on olfactory cilia, which are specialized forms of dendrites. Similarly, intraspinal mechanosensory neurons within the spinal cord can detect axial bending of the body [167,168]. Neurons within the central nervous system can also be regarded as sensors that monitor the body’s physiological, physical and emotive states by monitoring the activity of other neurons via receptors for neurotransmitters as well as for the hormonal and neuromodulatory milieu within the extracellular space. Thus, changes in both externally driven and internally sensed neural activity ultimately determine the behavior of an organism. Neuronal plasticity occurs at the synaptic level via both active and silent synapses that can control learning during critical periods [169,170]. Synaptic modification is less likely to occur during the elicitation of reflexive responses, where a referential knowledge or prediction network is not invoked. During information gain, however, each new active synapse has the potential to significantly bias the output of a KN.
An ultimate goal of knowledge creation is to minimize perceived complexity and maximize predictability (minimizing entropy within an information-theoretic framework) without sacrificing accuracy. To achieve this, KNs may tolerate a certain amount of fuzziness, maximizing their applicability to real-world scenarios that are rarely identical [171,172]. Neuro-fuzzy systems appear to be good at pattern recognition for solving real-world problems [173]. Feature extraction, cross-validation, synchronization and consensus finding are important components of such systems that have been implemented for knowledge discovery via artificial neural nets [174,175]. Consensus maps have been proposed to encode naturalistic smells via the odotopic mapping of odorants in the olfactory bulb and forebrain [176,177,178,179], and social calls via combination-sensitivity within the auditory cortex [50,61,180]. How cross-validation and synchronization are automatically accomplished within real neural networks is less clear. Oscillations in neural activity within different brain regions, such as the amygdala and hippocampus, may provide a mechanism for cross-validation, typically involving cross-modal processing, via temporal coherence, phase-amplitude coupling and other such mechanisms [137,181,182,183,184,185,186] (see the sketch below). Oscillations in the form of traveling waves may also provide a mechanism for spreading information (memory encoding and recall) within the brain to strengthen multimodal consensus and the binding of common features within an information scene or landscape [187,188,189,190]. In this regard, the state of network dynamics and Hebbian plasticity appear to be essential for the optimization of network topology within real and artificial neural nets [191,192].
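For readers wishing to quantify such coupling in recorded data, the sketch below computes a simple phase-amplitude coupling index on a synthetic signal. The frequency bands, filter settings and the mean-vector-length index are conventional choices assumed here for illustration; they are not prescriptions taken from the studies cited above.

```python
# Minimal sketch of phase-amplitude coupling (PAC) on a synthetic signal.
# Bands, filter orders and the modulation-index formula are standard
# illustrative choices, not parameters from the article.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of "recording"
theta = np.sin(2 * np.pi * 6 * t)          # 6 Hz "theta" phase signal
# 40 Hz "gamma" whose amplitude is modulated by theta phase, plus noise.
gamma = (1 + 0.8 * theta) * np.sin(2 * np.pi * 40 * t)
signal = theta + 0.5 * gamma \
    + 0.2 * np.random.default_rng(1).standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8, fs)))    # theta phase
amp = np.abs(hilbert(bandpass(signal, 30, 50, fs)))      # gamma amplitude

# Mean-vector-length modulation index: values near 0 indicate no coupling.
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(f"modulation index: {mi:.3f}")
```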
Action selection can occur at multiple timescales and involve different amounts of knowledge. For fast reflexive activity, information is locally processed and triggers a quick (few milliseconds) reflexive action that can be monosynaptic. Other actions can take hundreds of milliseconds to minutes, and even days, to be planned and executed. Such action decisions involve querying the KNs and extracting bits of relevant information from various parts to direct behavior. Navigation involves both rapid and long-term decisions that must be sustained over time and during seasonal migration [193,194,195,196]. As indicated earlier, action selection involves the process of decision making, which is a rapidly growing field of study in itself [197,198]. Decision making is a term that has been applied to the behavioral economy within the contexts of morality and commerce, involving impulsiveness (risk-taking) vs. knowledge-based cognition [199,200,201]. The concept of a behavioral economy, however, is also relevant to the inner workings of the brain. We can think of decision making as a neural economy in which neurons “decide” whether or not to fire, given the energetic cost of generating action potentials to process information within brain networks and of the bodily activity that follows [202].
Neurophysiological and imaging studies show that during decision making and task performance, the brain loops neural activity through multiple regions [203] before channeling its output to regions that activate bodily patterns of movement. Within the basal ganglia, this is accomplished via disinhibition, so that action patterns can be quickly sequenced and released without having to overcome the inertia of building up activity within a network [204,205]. Disinhibition is thus a good strategy for quickly translating knowledge-driven decisions into activity patterns.
Knowledge recall and action may be triggered by a few, or even a single, command neuron that, when activated, can bring multiple networks online to seek the justification of a decision [206,207]. When making intuitive decisions, one frequently relies on a “gut feeling”, suggesting that these command neurons may literally reside within the enteric nervous system, freeing neural circuits within the brain to process new incoming information and to create new memories that expand or modify existing knowledge [208,209]. Neurons that fire together wire together via synaptic stabilization. Over the long term, non-neural mechanisms involving glial cells and perineuronal nets can protect developed and established networks from being modified [210,211,212]. Perineuronal nets create a matrix of proteins that stabilizes the network and prevents new synapses from forming easily [213]. They have mostly been observed encapsulating parvalbumin-expressing inhibitory neurons. In this way, stored knowledge can be protected from being changed by random inputs [214]. Next, we consider the role of attention, a perceptual attribute that is critical for forming associative memories, and the role of sleep, which is important for the stabilization of memories and, as postulated here, for converting memories into knowledge.

3.2. Establishing and Retrieving Knowledge: Attention, Sleep and Oscillations

Vertebrate brains typically receive vast amounts of information at any moment through multiple sensory channels. A major function of the brain, therefore, is to select the most relevant information so that it can be processed quickly, at the required level of resolution, and stored effectively to enable informed decisions that maximize the probability of survival over the short and long term. This is achieved via attention (Figure 8). Attention is important both for learning and for modifying behavior via synaptic modification that can occur across brief timescales of <200 ms [215,216]. Mechanisms for selectively attending to and “making sense” of the available information are critical for creating and modifying KNs.
Attentional mechanisms remain largely unknown and difficult to study, partly because of the largely covert nature of attention in species with a well-developed forebrain, and because it is difficult, if not impossible, to interrogate animals about their attentional focus or about the nature and duration of their attention. A number of studies have examined visual attention using eye tracking, but attentional control by other sensory modalities is more difficult to study [217,218]. The habenula, a highly conserved mid-diencephalic structure, is considered an integrative switchboard for directing attention either to the self or to others [219]. Its circuit-level connectivity is ideally suited for channeling knowledge residing in the forebrain to influence decision making and action selection via circuits within the brainstem, e.g., projections to the dorsal raphe.
KNs not only determine the elaboration of adaptive behavior but also direct attention towards sources of additional relevant information [218]. This occurs via alternation between sustained attention and attentional shifts. Attentional shifts are a likely outcome of intrinsic oscillations in neural activity at different timescales, depending on the mechanism at play, from cellular up/down states to circuit-level excitatory/inhibitory balance to hormonal rhythms [220,221]. These fluctuations in neural activity can facilitate attentional shifts that enhance knowledge by providing a temporal framework within which bits of information, e.g., those that provide a context, can be coupled to create new knowledge [222]. Within this framework, a knowledge network can sustain attention on one attentional set or periodically shift it across multiple attentional sets [223,224]. The neural mechanisms for, and the brain regions involved in, attentional shifts are even less studied than those involved in sustaining attention. Periodically shifting attention appears to be a conserved mechanism that remains little appreciated [225,226,227]. Shifting attention may turn out to be a key mechanism for fact checking, and hence for the creation of knowledge, as opposed to the formation of simple memories of reward or fear associations.
All brains basically exist in one of two states, awake or asleep, with daily transitions between the two. A sleep state is required to gate most sensory activity and to allow the selective suppression of ongoing activity and the pruning of irrelevant synaptic connections, maintaining and updating KNs and preparing them to acquire new information through interactions with the environment during the awake state [228,229]. For most species, these interactions occur via movement; therefore, the restoration of muscle (both skeletal and cardiac) tissue is also an essential function of sleep. Sleep also plays a protective role for established KNs by insulating them from random inputs during day-to-day activities. A number of experimental studies over the last two decades have firmly established that sleep plays an important role in the maintenance of memories and in the ability of organisms to acquire and store new memories [230].
During the awake state, the brain allows bodily interactions with the environment, gaining information and testing its validity for conversion into memories and, potentially, knowledge. During sleep, memories are consolidated by identifying and strengthening knowledge-building connections, while weakening or disabling connections that are inconsistent with the acquisition of new knowledge. This may happen by repeatedly activating a KN, presumably via increased slow oscillatory activity (0.85 to 2.0 Hz) in the brain, so that synaptic noise, represented by those synapses that are inconsistently activated during each iteration of the activation cycle, is eliminated [231]. The inconsistent activation of a particular synaptic connection may result from a build-up of inhibitory activity within a network and contribute to a lowering of synaptic strength (the probability of neurotransmitter release) at that locus. A large part of this processing happens during sleep. Multichannel electroencephalographic (EEG) recordings in zebra finches reveal an increase in functional connectivity between brain regions during development, likely correlated with learning and knowledge development [232]. Even in fruit flies, sleep duration and sleep–wake switch parameters influence decision making critical for reproductive output [233].
Both attention and sleep are strongly associated with oscillations in the total population activity of neurons. Oscillations are therefore likely to play a key role in knowledge creation and retrieval, although the detailed mechanisms are not yet understood. A match in amplitude or phase between the oscillatory activity of distant brain regions leads to coherence [234]. Coherent networks underlie both top-down and bottom-up information flow [235,236]. According to one study in rats, an increase in slow oscillatory (0.85 to 2.0 Hz) activity during slow-wave (SW) sleep within a short-term memory retention interval (80 min in duration) was associated with significantly stronger recall of episodic-like memory [231]. This effect was particularly pronounced for spatial memory, but not for object or declarative memory.
The consolidation of object or declarative memory appears to be correlated with sleep spindle activity. Sleep spindles are bursts of neural oscillations (~11 to 16 Hz) generated during non-rapid eye movement (NREM) sleep. They are surface electrical correlates, observed in the EEG, of thalamocortical oscillations lasting 0.5 to 1.5 s [237,238]. The thalamic reticular nucleus plays a significant role in the generation of these oscillations [239]. Spindles have been proposed to enable large-scale functional connectivity and plasticity involving the rerouting of wake-instated neuronal traces between brain areas, such as the hippocampus and cortex [238]. The presence of spindles in the upstate during the theta band of SW oscillatory activity enhances memory consolidation, whereas their presence in the upstate during delta-wave activity facilitates the suppression of consolidation [240,241]. Thus, the timing of spindle activity is thought to play a key role in the processes of memory consolidation and forgetting.
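As an illustration of how such events are typically identified in practice, the sketch below detects spindle-like bursts in a synthetic EEG trace using the frequency (11–16 Hz) and duration (0.5–1.5 s) criteria mentioned above. The amplitude threshold and the synthetic data are assumptions introduced for demonstration only.

```python
# Illustrative sketch of detecting spindle-like events (~11-16 Hz bursts
# lasting 0.5-1.5 s) in a synthetic EEG trace. The threshold and the
# simulated data are illustrative assumptions, not prescriptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
t = np.arange(0, 60, 1 / fs)                      # 60 s synthetic "NREM EEG"
rng = np.random.default_rng(2)
eeg = 0.5 * rng.standard_normal(t.size)           # background activity
# Insert three 1 s, 13 Hz spindle-like bursts.
for start in (10.0, 25.0, 40.0):
    idx = (t >= start) & (t < start + 1.0)
    eeg[idx] += 2.0 * np.sin(2 * np.pi * 13 * t[idx])

b, a = butter(4, [11 / (fs / 2), 16 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, eeg)))   # spindle-band envelope

above = envelope > (envelope.mean() + 2 * envelope.std())   # amplitude criterion
edges = np.flatnonzero(np.diff(above.astype(int)))          # onset/offset indices
events = [(s / fs, e / fs) for s, e in zip(edges[::2], edges[1::2])
          if 0.5 <= (e - s) / fs <= 1.5]          # duration criterion
print("detected spindle-like events (s):", events)
```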
The fragmented activation of neural networks during memory consolidation and forgetting may play out as a narrative, or what we term “dreams”, some of which reach conscious awareness depending on the sleep state in which they occur [242]. Some dreams are triggered or contextualized by incoming sensory inputs during a light sleep state; other dreams are driven by intense emotive experiences during the awake state and lead to offline performance gains [243]. Dreams may also lead to mental/knowledge clarity during natural or induced recall, as during hypnosis or psychoanalytic regression [244,245].
The question remains whether memory consolidation mechanisms also contribute to the process of building a knowledge network. For this to happen, sleep must also facilitate the processes of belief, justification and truth testing. Unfortunately, these concepts remain undefined in neural terms and therefore largely untested at the neurological level [246]. Oscillatory activity resulting in synaptic modification during sleep [247,248] could provide a mechanism for justification, i.e., for obtaining evidence and testing certainty, by perturbing neural networks, e.g., in the cingulate cortex, so that they bounce out of local minima until a new global state of stability (excitatory–inhibitory balance) is achieved that increases the level of certainty and hence strengthens belief [249]. Through this process, a new model of the real world, or knowledge, can be established within the networks (Figure 9).
Reaching a computational global minimum of neural activity while creating knowledge, especially during sleep, is consistent with, and can be framed as an outcome of, the Bayesian free energy principle [16]. Free energy is an information-theoretic quantity that is a function of sensory inputs (data) and brain states; it measures the mismatch between a probabilistic representation encoded by the brain and the true conditional distribution of the causes of sensory input [251,252]. Within the context of knowledge, the brain minimizes variational free energy, as predicted by its model (a knowledge network), by sampling and re-sampling information acquired during the awake state. This is best carried out during sleep, when new sensory inputs are minimized, allowing re-entrant signaling and circuit manipulation. The state of achieving a global minimum each morning, assuming a sound sleep of adequate duration, may also explain the mental clarity experienced after waking from a restful sleep. In short, sleep quiets the accumulated excitatory activity, or information overload, via a test of certainty, or justification, to arrive at a truth (elevated certainty) and eventually a belief over multiple awake- and sleep-state cycles. These processes are central to knowledge building, as already discussed, and must occur in all species, though the exact mechanisms may differ, to improve decision making and the chances of survival in the real world.
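Although a full formalism is beyond the scope of this review, the standard variational expression can serve as a reference point. Under the usual assumptions of Friston’s framework, with o denoting sensory observations, s their hidden causes, p the brain’s generative model and q the recognition density encoded by neural states, variational free energy can be written as

\[
F(o, q) \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o).
\]

Because the Kullback-Leibler term is non-negative, minimizing F simultaneously bounds surprise (−ln p(o)) and drives q toward the true conditional distribution of the causes of sensory input, which is consistent with the knowledge-refinement role attributed to sleep above.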
Attention, together with attentional shifts, plays a crucial role in the efficiency and accuracy of knowledge creation, whereas sleep consolidates memories, discarding noise that may manifest itself as dreams. Together with reflection, which in some ways is functionally like a sleep state, sleep may be essential not just for its restorative capacity and memory consolidation, but also for minimizing network traffic so that a viable and reliable generative model of the world can exist within KNs. Such models are essential for minimizing prediction errors [253,254]. Without a stable knowledge-based model, humans can become confused and suffer from anxiety or other mental illness, leading to poor choices during decision making and action selection.

3.3. Future Directions

Having established the significance of KNs from a perspective covering philosophy, cognitive psychology, neuroscience and phylogeny, it is important to explore what this might mean for the future. Rather than using drugs that interact nonspecifically at multiple sites within the brain, a KN approach to understanding brain disorders and their treatment may usher in the era of electroceuticals, a trend that is already gaining momentum with the use of deep brain stimulation and transcranial magnetic stimulation [255,256,257]. These interventions can reset and re-organize neural networks in an individual to either restore or bypass network deficits, e.g., in depression [258,259]. An in-depth understanding of how knowledge is embedded within brain networks can positively impact such approaches and lead to interventions that alleviate several brain disorders, such as dyslexia, ADHD and autism.
By integrating neuroscience perspectives, both education and AI can move from rote memorization and big-data input, respectively, to systems that are more adaptive, efficient and aligned with the natural learning processes of the human brain. The educational implications of a KN-based approach include a greater acceptance of personalized learning, an impact on cognitive skill development, the adoption of neuroscientific approaches to early childhood education, and the tackling of emotional and social learning disabilities. A knowledge-driven approach can be used to tailor educational experiences to individual learning styles and needs, in line with how the brain routes and processes information in different individuals. Cognitive skills could be improved by incorporating insights into executive functions such as working memory, attention and cognitive flexibility. These insights could inform the development of curricula and teaching methods for strengthening learning skills and information retention.
With respect to AI, we need to create systems that mimic human learning processes more accurately. This is possible by applying new insights into how the brain learns, e.g., by replicating synaptic plasticity, reinforcement learning and other neural processes embedded within a knowledge-based framework, including KN-inspired algorithms based on spike-timing-dependent plasticity and hierarchical processing [260,261,262,263] (see the sketch below). This will lead to the development of more robust and adaptive AI. In general, AI can be made more intuitive, and its actions and decisions more reliable and meaningful, especially in the context of human–AI interactions. In the context of navigation (see Figure 2), robots and autonomous systems that apply principles of sensorimotor integration and adaptive learning can engage in naturalistic interactions with the environment more effectively for various tasks. Many of the details still need to be worked out, but this will not happen without incorporating a broader, knowledge-based point of view in contrast to a purely memory-driven one. In the end, one is left wondering whether there are any limits to the amount of knowledge that can be stored within KNs. Theoretically, the brain’s capacity may be practically unbounded, limited only by biological lifetime and brain size. If so, artificial KNs could overcome even these limitations, providing practically boundless knowledge.
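As one concrete example of the kind of neural process such algorithms might replicate, the sketch below implements the classic pair-based spike-timing-dependent plasticity rule. The time constants, amplitudes and all-to-all spike pairing are conventional illustrative assumptions, not a description of any specific KN-inspired algorithm in the cited works.

```python
# Minimal sketch of a classic pair-based spike-timing-dependent plasticity
# (STDP) rule. Time constants and learning rates are conventional
# illustrative values, not parameters from the article.
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair.
    delta_t = t_post - t_pre (ms); positive -> pre before post -> potentiation."""
    if delta_t > 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

# Apply the rule to a synapse given observed spike trains (times in ms),
# using all-to-all pairing for simplicity.
pre_spikes = np.array([10.0, 55.0, 120.0])
post_spikes = np.array([15.0, 50.0, 140.0])

w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
w = float(np.clip(w, 0.0, 1.0))    # keep the weight in a bounded range
print(f"updated synaptic weight: {w:.3f}")
```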

4. Conclusions

In conclusion, the creation and storage of knowledge in neural networks plays a crucial role in the functioning of the brain and is likely a central theme guiding its evolution. Memories are stored over the short term in the hippocampus and distributed throughout the brain for long-term storage in the form of connection strengths within either local or distributed networks of neurons. Once memories are consolidated and contextualized, they, together with parts of other relevant networks, constitute what is referred to here as a ‘knowledge network’. This knowledge can then be accessed via a query for decision making that directs behavior at relevant times and over multiple timescales. Attentional and attention-shifting networks play a key role in the creation, selection and modification of knowledge, and sleep is necessary for establishing knowledge, clearing noisy activity, and readying the brain for the acquisition of new information and/or knowledge. Together, these processes allow organisms to adapt their behavior in response to changes in the environment and to survive.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Acknowledgments

I thank J. K. Kanwal at Caltech, Pasadena, CA, for her thoughtful feedback from a careful read of the manuscript, and M. S. Kanwal at Stanford University, Palo Alto, CA, for discussions and assistance with the mathematical formulation of knowledge equations. I also wish to thank two anonymous reviewers whose feedback helped to greatly improve the manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ge, J.; Peng, G.; Lyu, B.; Wang, Y.; Zhuo, Y.; Niu, Z.; Tan, L.H.; Leff, A.P.; Gao, J.-H. Cross-language differences in the brain network subserving intelligible speech. Proc. Natl. Acad. Sci. USA 2015, 112, 2972–2977. [Google Scholar] [CrossRef] [PubMed]
  2. Poeppel, D. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cogn. Neuropsychol. 2012, 29, 34–55. [Google Scholar] [CrossRef] [PubMed]
  3. Si, X.; Zhou, W.; Hong, B. Cooperative cortical network for categorical processing of Chinese lexical tone. Proc. Natl. Acad. Sci. USA 2017, 114, 12303–12308. [Google Scholar] [CrossRef] [PubMed]
  4. Hickok, G.; Poeppel, D. Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition 2004, 92, 67–99. [Google Scholar] [CrossRef] [PubMed]
  5. Wahl, M.; Marzinzik, F.; Friederici, A.D.; Hahne, A.; Kupsch, A.; Schneider, G.-H.; Saddy, D.; Curio, G.; Klostermann, F. The human thalamus processes syntactic and semantic language violations. Neuron 2008, 59, 695–707. [Google Scholar] [CrossRef] [PubMed]
  6. Cohen, L.; Billard, A. Social babbling: The emergence of symbolic gestures and words. Neural Netw. 2018, 106, 194–204. [Google Scholar] [CrossRef] [PubMed]
  7. Moser, E.I.; Kropff, E.; Moser, M.-B. Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci. 2008, 31, 69–89. [Google Scholar] [CrossRef] [PubMed]
  8. Solstad, T.; Boccara, C.N.; Kropff, E.; Moser, M.-B.; Moser, E.I. Representation of geometric borders in the entorhinal cortex. Science 2008, 322, 1865–1868. [Google Scholar] [CrossRef] [PubMed]
  9. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–656. [Google Scholar] [CrossRef]
  10. Nakamura, K.; Komatsu, M. Information seeking mechanism of neural populations in the lateral prefrontal cortex. Brain Res. 2019, 1707, 79–89. [Google Scholar] [CrossRef]
  11. Nelken, I.; Chechik, G.; Mrsic-Flogel, T.D.; King, A.J.; Schnupp, J.W.H. Encoding stimulus information by spike numbers and mean response time in primary auditory cortex. J. Comput. Neurosci. 2005, 19, 199–221. [Google Scholar] [CrossRef] [PubMed]
  12. Kayser, C.; Montemurro, M.A.; Logothetis, N.K.; Panzeri, S. Spike-phase coding boosts and stabilizes information carried by spatial and temporal spike patterns. Neuron 2009, 61, 597–608. [Google Scholar] [CrossRef] [PubMed]
  13. Furukawa, S.; Middlebrooks, J.C. Cortical representation of auditory space: Information-bearing features of spike patterns. J. Neurophysiol. 2002, 87, 1749–1762. [Google Scholar] [CrossRef] [PubMed]
  14. Averbeck, B.B.; Lee, D. Coding and transmission of information by neural ensembles. Trends Neurosci. 2004, 27, 225–230. [Google Scholar] [CrossRef] [PubMed]
  15. Lynn, C.W.; Papadopoulos, L.; Kahn, A.E.; Bassett, D.S. Human information processing in complex networks. Nat. Phys. 2020, 16, 965–973. [Google Scholar] [CrossRef]
  16. Lynn, C.W.; Kahn, A.E.; Nyema, N.; Bassett, D.S. Abstract representations of events arise from mental errors in learning and memory. Nat. Commun. 2020, 11, 2313. [Google Scholar] [CrossRef] [PubMed]
  17. Steinberg, J.; Sompolinsky, H. Associative memory of structured knowledge. Sci. Rep. 2022, 12, 21808. [Google Scholar] [CrossRef] [PubMed]
  18. Miller, G.A. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 1956, 63, 81–97. [Google Scholar] [CrossRef] [PubMed]
  19. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; The University of Illinois Press: Champaign, IL, USA, 1963. [Google Scholar]
  20. Doeller, C.F.; Barry, C.; Burgess, N. Evidence for grid cells in a human memory network. Nature 2010, 463, 657–661. [Google Scholar] [CrossRef]
  21. Ferreira, T.L.; Shammah-Lagnado, S.J.; Bueno, O.F.A.; Moreira, K.M.; Fornari, R.V.; Oliveira, M.G.M. The indirect amygdala-dorsal striatum pathway mediates conditioned freezing: Insights on emotional memory networks. Neuroscience 2008, 153, 84–94. [Google Scholar] [CrossRef]
  22. Krauzlis, R.J.; Bogadhi, A.R.; Herman, J.P.; Bollimunta, A. Selective attention without a neocortex. Cortex 2018, 102, 161–175. [Google Scholar] [CrossRef] [PubMed]
  23. Lai, C.S.W.; Franke, T.F.; Gan, W.-B. Opposite effects of fear conditioning and extinction on dendritic spine remodelling. Nature 2012, 483, 87–91. [Google Scholar] [CrossRef] [PubMed]
  24. Nader, K. Memory traces unbound. Trends Neurosci. 2003, 26, 65–72. [Google Scholar] [CrossRef] [PubMed]
  25. Gottfried, J.A.; Dolan, R.J. Human orbitofrontal cortex mediates extinction learning while accessing conditioned representations of value. Nat. Neurosci. 2004, 7, 1144–1152. [Google Scholar] [CrossRef] [PubMed]
  26. Suga, N. Principles of auditory information-processing derived from neuroethology. J. Exp. Biol. 1989, 146, 277–286. [Google Scholar] [CrossRef] [PubMed]
  27. Fujita, I.; Tanaka, K.; Ito, M.; Cheng, K. Columns for visual features of objects in monkey inferotemporal cortex. Nature 1992, 360, 343–346. [Google Scholar] [CrossRef] [PubMed]
  28. von der Emde, G.; Fetz, S. Distance, shape and more: Recognition of object features during active electrolocation in a weakly electric fish. J. Exp. Biol. 2007, 210, 3082–3095. [Google Scholar] [CrossRef] [PubMed]
  29. Ehret, G.; Haack, B. Ultrasound recognition in house mice: Key-Stimulus configuration and recognition mechanism. J. Comp. Physiol. 1982, 148, 245–251. [Google Scholar] [CrossRef]
  30. Kanwal, J.S.; Fitzpatrick, D.C.; Suga, N. Facilitatory and inhibitory frequency tuning of combination-sensitive neurons in the primary auditory cortex of mustached bats. J. Neurophysiol. 1999, 82, 2327–2345. [Google Scholar] [CrossRef]
  31. Esser, K.H.; Condon, C.J.; Suga, N.; Kanwal, J.S. Syntax processing by auditory cortical neurons in the FM-FM area of the mustached bat Pteronotus parnellii. Proc. Natl. Acad. Sci. USA 1997, 94, 14019–14024. [Google Scholar] [CrossRef] [PubMed]
  32. Xiao, Z.; Suga, N. Reorganization of the auditory cortex specialized for echo-delay processing in the mustached bat. Proc. Natl. Acad. Sci. USA 2004, 101, 1769–1774. [Google Scholar] [CrossRef] [PubMed]
  33. Fujita, K.; Kashimori, Y. Neural mechanism of corticofugal modulation of tuning property in frequency domain of bat’s auditory system. Neural Process. Lett. 2016, 43, 537–551. [Google Scholar] [CrossRef]
  34. Grossberg, S. On the development of feature detectors in the visual cortex with applications to learning and reaction-diffusion systems. Biol. Cybern. 1976, 21, 145–159. [Google Scholar] [CrossRef]
  35. Nelken, I.; Fishbach, A.; Las, L.; Ulanovsky, N.; Farkas, D. Primary auditory cortex of cats: Feature detection or something else? Biol. Cybern. 2003, 89, 397–406. [Google Scholar] [CrossRef]
  36. Chang, T.R.; Chiu, T.W.; Sun, X.; Poon, P.W.F. Modeling frequency modulated responses of midbrain auditory neurons based on trigger features and artificial neural networks. Brain Res. 2012, 1434, 90–101. [Google Scholar] [CrossRef] [PubMed]
  37. Goldshtein, A.; Akrish, S.; Giryes, R.; Yovel, Y. An artificial neural network explains how bats might use vision for navigation. Commun. Biol. 2022, 5, 1325. [Google Scholar] [CrossRef]
  38. Yang, L.; Zhan, X.; Chen, D.; Yan, J.; Loy, C.C.; Lin, D. Learning to cluster faces on an affinity graph. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2293–2301. [Google Scholar]
  39. Mahadevkar, S.; Patil, S.; Kotecha, K. Enhancement of handwritten text recognition using AI-based hybrid approach. MethodsX 2024, 12, 102654. [Google Scholar] [CrossRef]
  40. Diep, Q.B.; Phan, H.Y.; Truong, T.-C. Crossmixed convolutional neural network for digital speech recognition. PLoS ONE 2024, 19, e0302394. [Google Scholar] [CrossRef]
  41. Suga, N.; O’Neill, W.E. Neural axis representing target range in the auditory cortex of the mustache bat. Science 1979, 206, 351–353. [Google Scholar] [CrossRef]
  42. Ehret, G.; Bernecker, C. Low-frequency sound communication by mouse pups (Mus musculus): Wriggling calls release maternal behaviour. Anim. Behav. 1986, 34, 821–830. [Google Scholar] [CrossRef]
  43. Hubel, D.H.; Wiesel, T.N. Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 1968, 195, 215–243. [Google Scholar] [CrossRef]
  44. Suga, N. Philosophy and stimulus design for neuroethology of complex-sound processing. Philos. Trans. R. Soc. Lond. B Biol. Sci. 1992, 336, 423–428. [Google Scholar] [CrossRef]
  45. Suga, N. Analysis of information-bearing elements in complex sounds by auditory neurons of bats. Audiology 1972, 11, 58–72. [Google Scholar] [CrossRef] [PubMed]
  46. Suga, N.; Niwa, H.; Taniguchi, I.; Margoliash, D. The personalized auditory cortex of the mustached bat: Adaptation for echolocation. J. Neurophysiol. 1987, 58, 643–654. [Google Scholar] [CrossRef]
  47. Mendoza Nava, H.; Holderied, M.W.; Pirrera, A.; Groh, R.M.J. Buckling-induced sound production in the aeroelastic tymbals of Yponomeuta. Proc. Natl. Acad. Sci. USA 2024, 121, e2313549121. [Google Scholar] [CrossRef] [PubMed]
  48. Baier, A.L.; Stelzer, K.-J.; Wiegrebe, L. Flutter sensitivity in FM bats. Part II: Amplitude modulation. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 2018, 204, 941–951. [Google Scholar] [CrossRef] [PubMed]
  49. Kuwabara, N.; Suga, N. Delay lines and amplitude selectivity are created in subthalamic auditory nuclei: The brachium of the inferior colliculus of the mustached bat. J. Neurophysiol. 1993, 69, 1713–1724. [Google Scholar] [CrossRef]
  50. Washington, S.D.; Kanwal, J.S. DSCF neurons within the primary auditory cortex of the mustached bat process frequency modulations present within social calls. J. Neurophysiol. 2008, 100, 3285–3304. [Google Scholar] [CrossRef]
  51. Ma, J.; Naumann, R.T.; Kanwal, J.S. Fear conditioned discrimination of frequency modulated sweeps within species-specific calls of mustached bats. PLoS ONE 2010, 5, e10579. [Google Scholar] [CrossRef]
  52. Andoni, S.; Pollak, G.D. Selectivity for spectral motion as a neural computation for encoding natural communication signals in bat inferior colliculus. J. Neurosci. 2011, 31, 16529–16540. [Google Scholar] [CrossRef]
  53. Giraudet, P.; Berthommier, F.; Chaput, M. Mitral cell temporal response patterns evoked by odor mixtures in the rat olfactory bulb. J. Neurophysiol. 2002, 88, 829–838. [Google Scholar] [CrossRef] [PubMed]
  54. Lindsay, S.M.; Vogt, R.G. Behavioral responses of newly hatched zebrafish (Danio rerio) to amino acid chemostimulants. Chem. Senses 2004, 29, 93–100. [Google Scholar] [CrossRef] [PubMed]
  55. Sigala, N.; Logothetis, N.K. Visual categorization shapes feature selectivity in the primate temporal cortex. Nature 2002, 415, 318–320. [Google Scholar] [CrossRef] [PubMed]
  56. Ramkumar, P.; Jas, M.; Pannasch, S.; Hari, R.; Parkkonen, L. Feature-specific information processing precedes concerted activation in human visual cortex. J. Neurosci. 2013, 33, 7691–7699. [Google Scholar] [CrossRef]
  57. Romanski, L.M.; Averbeck, B.B.; Diltz, M. Neural representation of vocalizations in the primate ventrolateral prefrontal cortex. J. Neurophysiol. 2005, 93, 734–747. [Google Scholar] [CrossRef] [PubMed]
  58. Romanski, L.M.; Averbeck, B.B. The primate cortical auditory system and neural representation of conspecific vocalizations. Annu. Rev. Neurosci. 2009, 32, 315–346. [Google Scholar] [CrossRef] [PubMed]
  59. Washington, S.D.; Kanwal, J.S. Excitatory tuning to upward and downward directions of frequency-modulated sweeps in the primary auditory cortex. In Proceedings of the Society for Neuroscience, Washington, DC, USA, 12–16 November 2005; Volume 35. [Google Scholar]
  60. Kanwal, J.S.; Gordon, M.; Peng, J.P.; Heinz-Esser, K. Auditory responses from the frontal cortex in the mustached bat, Pteronotus parnellii. NeuroReport 2000, 11, 367–372. [Google Scholar] [CrossRef] [PubMed]
  61. Fitzpatrick, D.C.; Kanwal, J.S.; Butman, J.A.; Suga, N. Combination-sensitive neurons in the primary auditory cortex of the mustached bat. J. Neurosci. 1993, 13, 931–940. [Google Scholar] [CrossRef]
  62. Averbeck, B.B.; Romanski, L.M. Probabilistic encoding of vocalizations in macaque ventral lateral prefrontal cortex. J. Neurosci. 2006, 26, 11023–11033. [Google Scholar] [CrossRef]
  63. Wagatsuma, N.; Hidaka, A.; Tamura, H. Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images. eNeuro 2021, 8, 1–19. [Google Scholar] [CrossRef]
  64. Gallant, J.L.; Braun, J.; Van Essen, D.C. Selectivity for polar, hyperbolic, and Cartesian gratings in macaque visual cortex. Science 1993, 259, 100–103. [Google Scholar] [CrossRef]
  65. Oliva, A.; Torralba, A. The role of context in object recognition. Trends Cogn. Sci. 2007, 11, 520–527. [Google Scholar] [CrossRef]
  66. Stoll, J.; Thrun, M.; Nuthmann, A.; Einhäuser, W. Overt attention in natural scenes: Objects dominate features. Vision. Res. 2015, 107, 36–48. [Google Scholar] [CrossRef]
  67. Suga, N.; Gao, E.; Zhang, Y.; Ma, X.; Olsen, J.F. The corticofugal system for hearing: Recent progress. Proc. Natl. Acad. Sci. USA 2000, 97, 11807–11814. [Google Scholar] [CrossRef]
  68. Messinger, A.; Squire, L.R.; Zola, S.M.; Albright, T.D. Neural correlates of knowledge: Stable representation of stimulus associations across variations in behavioral performance. Neuron 2005, 48, 359–371. [Google Scholar] [CrossRef]
  69. Patterson, K.; Nestor, P.J.; Rogers, T.T. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 2007, 8, 976–987. [Google Scholar] [CrossRef] [PubMed]
  70. Knowledge. Available online: https://en.wikipedia.org/wiki/Knowledge (accessed on 15 January 2024).
  71. Kump, B.; Moskaliuk, J.; Cress, U.; Kimmerle, J. Cognitive foundations of organizational learning: Re-introducing the distinction between declarative and non-declarative knowledge. Front. Psychol. 2015, 6, 1489. [Google Scholar] [CrossRef]
  72. Hansson, I.; Buratti, S.; Allwood, C.M. Experts’ and novices’ perception of ignorance and knowledge in different research disciplines and its relation to belief in certainty of knowledge. Front. Psychol. 2017, 8, 377. [Google Scholar] [CrossRef] [PubMed]
  73. Howlett, J.R.; Paulus, M.P. The neural basis of testable and non-testable beliefs. PLoS ONE 2015, 10, e0124596. [Google Scholar] [CrossRef] [PubMed]
  74. Sainburg, T.; Gentner, T.Q. Toward a computational neuroethology of vocal communication: From bioacoustics to neurophysiology, emerging tools and future directions. Front. Behav. Neurosci. 2021, 15, 811737. [Google Scholar] [CrossRef]
  75. Wagner, H.; Egelhaaf, M.; Carr, C. Model organisms and systems in neuroethology: One hundred years of history and a look into the future. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 2024, 210, 227–242. [Google Scholar] [CrossRef]
  76. Lambert, K. Wild brains: The value of neuroethological approaches in preclinical behavioral neuroscience animal models. Neurosci. Biobehav. Rev. 2023, 146, 105044. [Google Scholar] [CrossRef] [PubMed]
  77. Roth, R.H.; Ding, J.B. From neurons to cognition: Technologies for precise recording of neural activity underlying behavior. BME Front. 2020, 2020, 7190517. [Google Scholar] [CrossRef] [PubMed]
  78. Du, J.; Riedel-Kruse, I.H.; Nawroth, J.C.; Roukes, M.L.; Laurent, G.; Masmanidis, S.C. High-resolution three-dimensional extracellular recording of neuronal activity with microfabricated electrode arrays. J. Neurophysiol. 2009, 101, 1671–1678. [Google Scholar] [CrossRef]
  79. O’Keefe, J. A computational theory of the hippocampal cognitive map. Prog. Brain Res. 1990, 83, 301–312. [Google Scholar] [PubMed]
  80. Lever, C.; Wills, T.; Cacucci, F.; Burgess, N.; O’Keefe, J. Long-term plasticity in hippocampal place-cell representation of environmental geometry. Nature 2002, 416, 90–94. [Google Scholar] [CrossRef]
  81. Moser, E.I.; Moser, M.-B.; McNaughton, B.L. Spatial representation in the hippocampal formation: A history. Nat. Neurosci. 2017, 20, 1448–1464. [Google Scholar] [CrossRef]
  82. Finkelstein, A.; Derdikman, D.; Rubin, A.; Foerster, J.N.; Las, L.; Ulanovsky, N. Three-dimensional head-direction coding in the bat brain. Nature 2015, 517, 159–164. [Google Scholar] [CrossRef]
  83. Geva-Sagiv, M.; Las, L.; Yovel, Y.; Ulanovsky, N. Spatial cognition in bats and rats: From sensory acquisition to multiscale maps and navigation. Nat. Rev. Neurosci. 2015, 16, 94–108. [Google Scholar] [CrossRef]
  84. Mello, C.V.; Ribeiro, S. ZENK protein regulation by song in the brain of songbirds. J. Comp. Neurol. 1998, 393, 426–438. [Google Scholar] [CrossRef]
  85. Jarvis, E.D.; Mello, C.V. Molecular mapping of brain areas involved in parrot vocal communication. J. Comp. Neurol. 2000, 419, 1–31. [Google Scholar] [CrossRef]
  86. Chatterjee, D.; Tran, S.; Shams, S.; Gerlai, R. A Simple Method for Immunohistochemical Staining of Zebrafish Brain Sections for c-fos Protein Expression. Zebrafish 2015, 12, 414–420. [Google Scholar] [CrossRef]
  87. Guthrie, K.M.; Anderson, A.J.; Leon, M.; Gall, C. Odor-induced increases in c-fos mRNA expression reveal an anatomical “unit” for odor processing in olfactory bulb. Proc. Natl. Acad. Sci. USA 1993, 90, 3329–3333. [Google Scholar] [CrossRef] [PubMed]
  88. Fosque, B.F.; Sun, Y.; Dana, H.; Yang, C.-T.; Ohyama, T.; Tadross, M.R.; Patel, R.; Zlatic, M.; Kim, D.S.; Ahrens, M.B.; et al. Neural circuits. Labeling of active neural circuits in vivo with designed calcium integrators. Science 2015, 347, 755–760. [Google Scholar] [CrossRef]
  89. Christophel, T.B. Distributed Visual Working Memory Stores Revealed by Multivariate Pattern Analyses. J. Vis. 2015, 15, 1407. [Google Scholar] [CrossRef]
  90. Linden, D.E.J. The working memory networks of the human brain. Neuroscientist 2007, 13, 257–267. [Google Scholar] [CrossRef]
  91. Sauseng, P.; Klimesch, W.; Heise, K.F.; Gruber, W.R.; Holz, E.; Karim, A.A.; Glennon, M.; Gerloff, C.; Birbaumer, N.; Hummel, F.C. Brain oscillatory substrates of visual short-term memory capacity. Curr. Biol. 2009, 19, 1846–1852. [Google Scholar] [CrossRef]
  92. Fiebig, F.; Lansner, A. Memory consolidation from seconds to weeks: A three-stage neural network model with autonomous reinstatement dynamics. Front. Comput. Neurosci. 2014, 8, 64. [Google Scholar] [CrossRef] [PubMed]
  93. Schafe, G.E.; LeDoux, J.E. Memory consolidation of auditory pavlovian fear conditioning requires protein synthesis and protein kinase A in the amygdala. J. Neurosci. 2000, 20, RC96. [Google Scholar] [CrossRef]
  94. Gal-Ben-Ari, S.; Rosenblum, K. Molecular mechanisms underlying memory consolidation of taste information in the cortex. Front. Behav. Neurosci. 2011, 5, 87. [Google Scholar] [CrossRef]
  95. Izquierdo, I.; Medina, J.H. Role of the amygdala, hippocampus and entorhinal cortex in memory consolidation and expression. Braz. J. Med. Biol. Res. 1993, 26, 573–589. [Google Scholar]
  96. McEwen, B.S. Mood disorders and allostatic load. Biol. Psychiatry 2003, 54, 200–207. [Google Scholar] [CrossRef]
  97. Toledo-Rodriguez, M.; Sandi, C. Stress during Adolescence Increases Novelty Seeking and Risk-Taking Behavior in Male and Female Rats. Front. Behav. Neurosci. 2011, 5, 17. [Google Scholar] [CrossRef]
  98. Shekhar, A.; Truitt, W.; Rainnie, D.; Sajdyk, T. Role of stress, corticotrophin releasing factor (CRF) and amygdala plasticity in chronic anxiety. Stress 2005, 8, 209–219. [Google Scholar] [CrossRef]
  99. Andersen, S.L.; Teicher, M.H. Stress, sensitive periods and maturational events in adolescent depression. Trends Neurosci. 2008, 31, 183–191. [Google Scholar] [CrossRef]
  100. Krugers, H.J.; Lucassen, P.J.; Karst, H.; Joëls, M. Chronic stress effects on hippocampal structure and synaptic function: Relevance for depression and normalization by anti-glucocorticoid treatment. Front. Synaptic Neurosci. 2010, 2, 24. [Google Scholar] [CrossRef]
  101. McEwen, B.S. Early life influences on life-long patterns of behavior and health. Ment. Retard. Dev. Disabil. Res. Rev. 2003, 9, 149–154. [Google Scholar] [CrossRef]
  102. Evans, J.R.; Torres-Pérez, J.V.; Miletto Petrazzini, M.E.; Riley, R.; Brennan, C.H. Stress reactivity elicits a tissue-specific reduction in telomere length in aging zebrafish (Danio rerio). Sci. Rep. 2021, 11, 339. [Google Scholar] [CrossRef]
  103. Cleber Gama de Barcellos Filho, P.; Campos Zanelatto, L.; Amélia Aparecida Santana, B.; Calado, R.T.; Rodrigues Franci, C. Effects chronic administration of corticosterone and estrogen on HPA axis activity and telomere length in brain areas of female rats. Brain Res. 2021, 1750, 147152. [Google Scholar] [CrossRef]
  104. Maguire, E.A.; Frith, C.D. The brain network associated with acquiring semantic knowledge. Neuroimage 2004, 22, 171–178. [Google Scholar] [CrossRef]
  105. Kotkat, A.H.; Katzner, S.; Busse, L. Neural networks: Explaining animal behavior with prior knowledge of the world. Curr. Biol. 2023, 33, R138–R140. [Google Scholar] [CrossRef]
  106. Livingstone, M.; Hubel, D. Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science 1988, 240, 740–749. [Google Scholar] [CrossRef]
  107. Baumgärtel, K.; Genoux, D.; Welzl, H.; Tweedie-Cullen, R.Y.; Koshibu, K.; Livingstone-Zatchej, M.; Mamie, C.; Mansuy, I.M. Control of the establishment of aversive memory by calcineurin and Zif268. Nat. Neurosci. 2008, 11, 572–578. [Google Scholar] [CrossRef]
  108. Moser, M.B.; Moser, E.I. Functional differentiation in the hippocampus. Hippocampus 1998, 8, 608–619. [Google Scholar] [CrossRef]
  109. Fanselow, M.S.; Dong, H.-W. Are the dorsal and ventral hippocampus functionally distinct structures? Neuron 2010, 65, 7–19. [Google Scholar] [CrossRef]
  110. White, N.M.; McDonald, R.J. Acquisition of a spatial conditioned place preference is impaired by amygdala lesions and improved by fornix lesions. Behav. Brain Res. 1993, 55, 269–281. [Google Scholar] [CrossRef]
  111. Pikkarainen, M.; Rönkkö, S.; Savander, V.; Insausti, R.; Pitkänen, A. Projections from the lateral, basal, and accessory basal nuclei of the amygdala to the hippocampal formation in rat. J. Comp. Neurol. 1999, 403, 229–260. [Google Scholar] [CrossRef]
  112. Ghashghaei, H.T.; Hilgetag, C.C.; Barbas, H. Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala. Neuroimage 2007, 34, 905–923. [Google Scholar] [CrossRef]
  113. Fernández-Ruiz, A.; Oliva, A.; Nagy, G.A.; Maurer, A.P.; Berényi, A.; Buzsáki, G. Entorhinal-CA3 Dual-Input Control of Spike Timing in the Hippocampus by Theta-Gamma Coupling. Neuron 2017, 93, 1213–1226. [Google Scholar] [CrossRef]
  114. Aoi, M.C.; Mante, V.; Pillow, J.W. Prefrontal cortex exhibits multidimensional dynamic encoding during decision-making. Nat. Neurosci. 2020, 23, 1410–1420. [Google Scholar] [CrossRef]
  115. Knight, R.T.; Stuss, D.T. Prefrontal cortex: The present and the future. In Principles of Frontal Lobe Function; Stuss, D.T., Knight, R.T., Eds.; Oxford University Press: New York, NY, USA, 2002; pp. 573–598. ISBN 9780195134971. [Google Scholar]
  116. Grossberg, S. A neural model of intrinsic and extrinsic hippocampal theta rhythms: Anatomy, neurophysiology, and function. Front. Syst. Neurosci. 2021, 15, 665052. [Google Scholar] [CrossRef] [PubMed]
  117. Carpenter, G.A.; Grossberg, S.; Mehanian, C. Invariant recognition of cluttered scenes by a self-organizing ART architecture: CORT-X boundary segmentation. Neural Netw. 1989, 2, 169–181. [Google Scholar] [CrossRef]
  118. Freeman, W.J. Mass Action in the Nervous System; Academic Press: New York, NY, USA, 1975; ISBN 9780122671500. [Google Scholar]
  119. Buzsáki, G.; Moser, E.I. Memory, navigation and theta rhythm in the hippocampal-entorhinal system. Nat. Neurosci. 2013, 16, 130–138. [Google Scholar] [CrossRef] [PubMed]
  120. Bermudez-Contreras, E.; Clark, B.J.; Wilber, A. The neuroscience of spatial navigation and the relationship to artificial intelligence. Front. Comput. Neurosci. 2020, 14, 63. [Google Scholar] [CrossRef] [PubMed]
  121. Rolls, E.T.; Treves, A. A theory of hippocampal function: New developments. Prog. Neurobiol. 2024, 238, 102636. [Google Scholar] [CrossRef] [PubMed]
  122. Treves, A.; Rolls, E.T. Computational analysis of the role of the hippocampus in memory. Hippocampus 1994, 4, 374–391. [Google Scholar] [CrossRef]
  123. Rolls, E.T. Neurons including hippocampal spatial view cells, and navigation in primates including humans. Hippocampus 2021, 31, 593–611. [Google Scholar] [CrossRef] [PubMed]
  124. Kohler, E.; Keysers, C.; Umiltà, M.A.; Fogassi, L.; Gallese, V.; Rizzolatti, G. Hearing sounds, understanding actions: Action representation in mirror neurons. Science 2002, 297, 846–848. [Google Scholar] [CrossRef]
  125. Heyes, C. Where do mirror neurons come from? Neurosci. Biobehav. Rev. 2010, 34, 575–583. [Google Scholar] [CrossRef]
  126. Keysers, C.; Gazzola, V. Hebbian learning and predictive mirror neurons for actions, sensations and emotions. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2014, 369, 20130175. [Google Scholar] [CrossRef] [PubMed]
  127. Briggman, K.L.; Kristan, W.B. Multifunctional pattern-generating circuits. Annu. Rev. Neurosci. 2008, 31, 271–294. [Google Scholar] [CrossRef] [PubMed]
  128. Queenan, B.N.; Zhang, Z.; Ma, J.; Naumann, R.T.; Mazhar, S.; Kanwal, J.S. Multifunctional cortical neurons exhibit response enhancement during rapid switching from echolocation to communication sound processing. In Proceedings of the Society for Neuroscience, Abstract #275.21. San Diego, CA, USA, 13–17 November 2010. [Google Scholar]
  129. Suga, N. Multi-function theory for cortical processing of auditory information: Implications of single-unit and lesion data for future research. J. Comp. Physiol. A 1994, 175, 135–144. [Google Scholar] [CrossRef] [PubMed]
  130. Parker, J.; Khwaja, R.; Cymbalyuk, G. Asymmetric control of coexisting slow and fast rhythms in a multifunctional central pattern generator: A model study. Neurophysiology 2019, 51, 390–399. [Google Scholar] [CrossRef]
  131. Mahon, B.Z.; Caramazza, A. What drives the organization of object knowledge in the brain? Trends Cogn. Sci. 2011, 15, 97–103. [Google Scholar] [CrossRef] [PubMed]
  132. Martin, A.; Wiggs, C.L.; Ungerleider, L.G.; Haxby, J.V. Neural correlates of category-specific knowledge. Nature 1996, 379, 649–652. [Google Scholar] [CrossRef] [PubMed]
  133. Eiermann, A.; Esser, K.H. Auditory responses from the frontal cortex in the short-tailed fruit bat Carollia perspicillata. NeuroReport 2000, 11, 421–425. [Google Scholar] [CrossRef] [PubMed]
  134. Hage, S.R. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex. Hear. Res. 2018, 366, 82–89. [Google Scholar] [CrossRef] [PubMed]
  135. Nicolelis, M.A.L. Computing with thalamocortical ensembles during different behavioural states. J. Physiol. 2005, 566, 37–47. [Google Scholar] [CrossRef]
  136. Pugh, K.; Prusak, L. Designing Effective Knowledge Networks; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  137. García-Rosales, F.; López-Jury, L.; González-Palomares, E.; Wetekam, J.; Cabral-Calderín, Y.; Kiai, A.; Kössl, M.; Hechavarría, J.C. Echolocation-related reversal of information flow in a cortical vocalization network. Nat. Commun. 2022, 13, 3642. [Google Scholar] [CrossRef] [PubMed]
  138. Hackett, T.A. Information flow in the auditory cortical network. Hear. Res. 2011, 271, 133–146. [Google Scholar] [CrossRef]
  139. Bowers, J.S.; Vankov, I.I.; Damian, M.F.; Davis, C.J. Why do some neurons in cortex respond to information in a selective manner? Insights from artificial neural networks. Cognition 2016, 148, 47–63. [Google Scholar] [CrossRef] [PubMed]
  140. Bullmore, E.; Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009, 10, 186–198. [Google Scholar] [CrossRef] [PubMed]
  141. Feldt, S.; Bonifazi, P.; Cossart, R. Dissecting functional connectivity of neuronal microcircuits: Experimental and theoretical insights. Trends Neurosci. 2011, 34, 225–236. [Google Scholar] [CrossRef] [PubMed]
  142. Izhikevich, E.M. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting; The MIT Press: Cambridge, MA, USA, 2006; ISBN 9780262276078. [Google Scholar]
  143. Evans, L.C. Partial Differential Equations (The Graduate Studies in Mathematics, 19), 2nd ed.; American Mathematical Society: Providence, RL, USA, 2022; p. 662. ISBN 978-1-4704-6942-9. [Google Scholar]
  144. Sacramento, J.; Wichert, A.; van Rossum, M.C.W. Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules. PLoS Comput. Biol. 2015, 11, e1004265. [Google Scholar] [CrossRef] [PubMed]
  145. Kanwal, J.S.; Peng, J.P.; Esser, K.H. Auditory communication and echolocation in the mustached bat: Computing for dual functions within single neurons. In Echolocation in Bats and Dolphins; Thomas, J.A., Moss, C.J., Vater, M., Eds.; University of Chicago Press: Chicago, IL, USA, 2004; pp. 201–208. [Google Scholar]
  146. Santiago, A.F. Plasticity in the Prairie Vole: Contextual Factors and Molecular Mechanisms Modulating Bond Plasticity in the Prairie Vole (Microtus ochrogaster). Doctoral Dissertation, Cornell University, Ithaca, NY, USA, 2024. [Google Scholar]
  147. Cho, J.Y.; Sternberg, P.W. Multilevel modulation of a sensory motor circuit during C. elegans sleep and arousal. Cell 2014, 156, 249–260. [Google Scholar] [CrossRef] [PubMed]
  148. Takeishi, A.; Yeon, J.; Harris, N.; Yang, W.; Sengupta, P. Feeding state functionally reconfigures a sensory circuit to drive thermosensory behavioral plasticity. eLife 2020, 9, e61167. [Google Scholar] [CrossRef] [PubMed]
  149. Voigt, K.; Razi, A.; Harding, I.H.; Andrews, Z.B.; Verdejo-Garcia, A. Neural network modelling reveals changes in directional connectivity between cortical and hypothalamic regions with increased BMI. Int. J. Obes. 2021, 45, 2447–2454. [Google Scholar] [CrossRef] [PubMed]
  150. Dupre, C.; Yuste, R. Non-overlapping Neural Networks in Hydra vulgaris. Curr. Biol. 2017, 27, 1085–1097. [Google Scholar] [CrossRef] [PubMed]
  151. Keramidioti, A.; Schneid, S.; Busse, C.; von Laue, C.C.; Bertulat, B.; Salvenmoser, W.; Heß, M.; Alexandrova, O.; Glauber, K.M.; Steele, R.E.; et al. A new look at the architecture and dynamics of the Hydra nerve net. eLife 2024, 12, RP87330. [Google Scholar] [CrossRef]
  152. Musser, J.M.; Schippers, K.J.; Nickel, M.; Mizzon, G.; Kohn, A.B.; Pape, C.; Ronchi, P.; Papadopoulos, N.; Tarashansky, A.J.; Hammel, J.U.; et al. Profiling cellular diversity in sponges informs animal cell type and nervous system evolution. Science 2021, 374, 717–723. [Google Scholar] [CrossRef]
  153. Schnell, A.K.; Amodio, P.; Boeckle, M.; Clayton, N.S. How intelligent is a cephalopod? Lessons from comparative cognition. Biol. Rev. Camb. Philos. Soc. 2021, 96, 162–178. [Google Scholar] [CrossRef] [PubMed]
  154. Parent, A.; Hazrati, L.N. Functional anatomy of the basal ganglia. I. The cortico-basal ganglia-thalamo-cortical loop. Brain Res. Brain Res. Rev. 1995, 20, 91–127. [Google Scholar] [CrossRef] [PubMed]
  155. Braine, A.; Georges, F. Emotion in action: When emotions meet motor circuits. Neurosci. Biobehav. Rev. 2023, 155, 105475. [Google Scholar] [CrossRef] [PubMed]
  156. Lettvin, J.; Maturana, H.; McCulloch, W.; Pitts, W. What the Frog’s Eye Tells the Frog’s Brain. Proc. IRE 1959, 47, 1940–1951. [Google Scholar] [CrossRef]
  157. Maisak, M.S.; Haag, J.; Ammer, G.; Serbe, E.; Meier, M.; Leonhardt, A.; Schilling, T.; Bahl, A.; Rubin, G.M.; Nern, A.; et al. A directional tuning map of Drosophila elementary motion detectors. Nature 2013, 500, 212–216. [Google Scholar] [CrossRef] [PubMed]
  158. Edens, B.M.; Stundl, J.; Urrutia, H.A.; Bronner, M.E. Neural crest origin of sympathetic neurons at the dawn of vertebrates. Nature 2024, 629, 121–126. [Google Scholar] [CrossRef] [PubMed]
  159. Bedois, A.M.H.; Parker, H.J.; Price, A.J.; Morrison, J.A.; Bronner, M.E.; Krumlauf, R. Sea lamprey enlightens the origin of the coupling of retinoic acid signaling to vertebrate hindbrain segmentation. Nat. Commun. 2024, 15, 1538. [Google Scholar] [CrossRef] [PubMed]
  160. Corominas-Murtra, B.; Goñi, J.; Solé, R.V.; Rodríguez-Caso, C. On the origins of hierarchy in complex networks. Proc. Natl. Acad. Sci. USA 2013, 110, 13316–13321. [Google Scholar] [CrossRef] [PubMed]
  161. Watabe-Uchida, M.; Eshel, N.; Uchida, N. Neural circuitry of reward prediction error. Annu. Rev. Neurosci. 2017, 40, 373–394. [Google Scholar] [CrossRef]
  162. Riceberg, J.S.; Shapiro, M.L. Orbitofrontal Cortex Signals Expected Outcomes with Predictive Codes When Stable Contingencies Promote the Integration of Reward History. J. Neurosci. 2017, 37, 2010–2021. [Google Scholar] [CrossRef]
  163. Jordan, R. The locus coeruleus as a global model failure system. Trends Neurosci. 2024, 47, 92–105. [Google Scholar] [CrossRef]
  164. Korzyukov, O.; Lee, Y.; Bronder, A.; Wagner, M.; Gumenyuk, V.; Larson, C.R.; Hammer, M.J. Auditory-vocal control system is object for predictive processing within seconds time range. Brain Res. 2020, 1732, 146703. [Google Scholar] [CrossRef]
  165. Mikulasch, F.A.; Rudelt, L.; Wibral, M.; Priesemann, V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci. 2023, 46, 45–59. [Google Scholar] [CrossRef]
  166. Goldberg, J.M.; Fernandez, C. Physiology of peripheral neurons innervating semicircular canals of the squirrel monkey. I. Resting discharge and response to constant angular accelerations. J. Neurophysiol. 1971, 34, 635–660. [Google Scholar] [CrossRef]
  167. Knafo, S.; Wyart, C. Active mechanosensory feedback during locomotion in the zebrafish spinal cord. Curr. Opin. Neurobiol. 2018, 52, 48–53. [Google Scholar] [CrossRef]
  168. Henderson, K.W.; Menelaou, E.; Hale, M.E. Sensory neurons in the spinal cord of zebrafish and their local connectivity. Curr. Opin. Physiol. 2019, 8, 136–140. [Google Scholar] [CrossRef]
  169. Bottjer, S.W. Silent synapses in a thalamo-cortical circuit necessary for song learning in zebra finches. J. Neurophysiol. 2005, 94, 3698–3707. [Google Scholar] [CrossRef]
  170. Xu, W.; Löwel, S.; Schlüter, O.M. Silent Synapse-Based Mechanisms of Critical Period Plasticity. Front. Cell. Neurosci. 2020, 14, 213. [Google Scholar] [CrossRef] [PubMed]
  171. Buhusi, C.V. The across-fiber pattern theory and fuzzy logic: A matter of taste. Physiol. Behav. 2000, 69, 97–106. [Google Scholar] [CrossRef] [PubMed]
  172. Kanwal, J.S.; Ehret, G. Behavior and Neurodynamics for Auditory Communication; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  173. Vlamou, E.; Papadopoulos, B. Fuzzy logic systems and medical applications. AIMS Neurosci. 2019, 6, 266–272. [Google Scholar] [CrossRef] [PubMed]
  174. Cacciatore, S.; Luchinat, C.; Tenori, L. Knowledge discovery by accuracy maximization. Proc. Natl. Acad. Sci. USA 2014, 111, 5117–5122. [Google Scholar] [CrossRef] [PubMed]
  175. Brede, M.; Stella, M.; Kalloniatis, A.C. Competitive influence maximization and enhancement of synchronization in populations of non-identical Kuramoto oscillators. Sci. Rep. 2018, 8, 702. [Google Scholar] [CrossRef] [PubMed]
  176. Nikonov, A.A.; Finger, T.E.; Caprio, J. Beyond the olfactory bulb: An odotopic map in the forebrain. Proc. Natl. Acad. Sci. USA 2005, 102, 18688–18693. [Google Scholar] [CrossRef] [PubMed]
  177. Fuss, S.H.; Korsching, S.I. Odorant feature detection: Activity mapping of structure response relationships in the zebrafish olfactory bulb. J. Neurosci. 2001, 21, 8396–8407. [Google Scholar] [CrossRef]
  178. Stettler, D.D.; Axel, R. Representations of odor in the piriform cortex. Neuron 2009, 63, 854–864. [Google Scholar] [CrossRef] [PubMed]
  179. Wang, F.; Nemes, A.; Mendelsohn, M.; Axel, R. Odorant receptors govern the formation of a precise topographic map. Cell 1998, 93, 47–60. [Google Scholar] [CrossRef] [PubMed]
  180. Ohlemiller, K.K.; Kanwal, J.S.; Suga, N. Facilitative responses to species-specific calls in cortical FM-FM neurons of the mustached bat. NeuroReport 1996, 7, 1749–1755. [Google Scholar] [CrossRef] [PubMed]
  181. García-Rosales, F.; López-Jury, L.; González-Palomares, E.; Cabral-Calderín, Y.; Hechavarría, J.C. Fronto-Temporal Coupling Dynamics During Spontaneous Activity and Auditory Processing in the Bat Carollia perspicillata. Front. Syst. Neurosci. 2020, 14, 14. [Google Scholar] [CrossRef] [PubMed]
  182. Martin, L.M.; García-Rosales, F.; Beetz, M.J.; Hechavarría, J.C. Processing of temporally patterned sounds in the auditory cortex of Seba’s short-tailed bat, Carollia perspicillata. Eur. J. Neurosci. 2017, 46, 2365–2379. [Google Scholar] [CrossRef] [PubMed]
  183. Tseng, Y.-L.; Liu, H.-H.; Liou, M.; Tsai, A.C.; Chien, V.S.C.; Shyu, S.-T.; Yang, Z.-S. Lingering Sound: Event-Related Phase-Amplitude Coupling and Phase-Locking in Fronto-Temporo-Parietal Functional Networks During Memory Retrieval of Music Melodies. Front. Hum. Neurosci. 2019, 13, 150. [Google Scholar] [CrossRef]
  184. Yang, L.; Chen, X.; Yang, L.; Li, M.; Shang, Z. Phase-Amplitude Coupling between Theta Rhythm and High-Frequency Oscillations in the Hippocampus of Pigeons during Navigation. Animals 2024, 14, 439. [Google Scholar] [CrossRef] [PubMed]
  185. Vivekananda, U.; Bush, D.; Bisby, J.A.; Baxendale, S.; Rodionov, R.; Diehl, B.; Chowdhury, F.A.; McEvoy, A.W.; Miserocchi, A.; Walker, M.C.; et al. Theta power and theta-gamma coupling support long-term spatial memory retrieval. Hippocampus 2021, 31, 213–220. [Google Scholar] [CrossRef]
  186. Daume, J.; Kamiński, J.; Schjetnan, A.G.P.; Salimpour, Y.; Khan, U.; Kyzar, M.; Reed, C.M.; Anderson, W.S.; Valiante, T.A.; Mamelak, A.N.; et al. Control of working memory by phase-amplitude coupling of human hippocampal neurons. Nature 2024, 629, 393–401. [Google Scholar] [CrossRef]
  187. Mohan, U.R.; Zhang, H.; Ermentrout, B.; Jacobs, J. The direction of theta and alpha travelling waves modulates human memory processing. Nat. Hum. Behav. 2024, 8, 1124–1135. [Google Scholar] [CrossRef] [PubMed]
  188. Aggarwal, A.; Brennan, C.; Luo, J.; Chung, H.; Contreras, D.; Kelz, M.B.; Proekt, A. Visual evoked feedforward-feedback traveling waves organize neural activity across the cortical hierarchy in mice. Nat. Commun. 2022, 13, 4754. [Google Scholar] [CrossRef] [PubMed]
  189. Wu, Y.; Chen, Z.S. Computational models for state-dependent traveling waves in hippocampal formation. BioRxiv 2023. [Google Scholar] [CrossRef]
  190. Wu, J.Y.; Guan, L.; Bai, L.; Yang, Q. Spatiotemporal properties of an evoked population activity in rat sensory cortical slices. J. Neurophysiol. 2001, 86, 2461–2474. [Google Scholar] [CrossRef]
  191. Erkol, Ş.; Mazzilli, D.; Radicchi, F. Influence maximization on temporal networks. Phys. Rev. E 2020, 102, 042307. [Google Scholar] [CrossRef]
  192. Medvedev, A.V.; Chiao, F.; Kanwal, J.S. Modeling complex tone perception: Grouping harmonics with combination-sensitive neurons. Biol. Cybern. 2002, 86, 497–505. [Google Scholar] [CrossRef] [PubMed]
  193. Aharon, G.; Sadot, M.; Yovel, Y. Bats Use Path Integration Rather Than Acoustic Flow to Assess Flight Distance along Flyways. Curr. Biol. 2017, 27, 3650–3657.e3. [Google Scholar] [CrossRef]
  194. Merlin, C.; Gegear, R.J.; Reppert, S.M. Antennal circadian clocks coordinate sun compass orientation in migratory monarch butterflies. Science 2009, 325, 1700–1704. [Google Scholar] [CrossRef] [PubMed]
  195. Shukla, V.; Rani, S.; Malik, S.; Kumar, V.; Sadananda, M. Neuromorphometric changes associated with photostimulated migratory phenotype in the Palaearctic-Indian male redheaded bunting. Exp. Brain Res. 2020, 238, 2245–2256. [Google Scholar] [CrossRef] [PubMed]
  196. Irachi, S.; Hall, D.J.; Fleming, M.S.; Maugars, G.; Björnsson, B.T.; Dufour, S.; Uchida, K.; McCormick, S.D. Photoperiodic regulation of pituitary thyroid-stimulating hormone and brain deiodinase in Atlantic salmon. Mol. Cell. Endocrinol. 2021, 519, 111056. [Google Scholar] [CrossRef] [PubMed]
  197. Glimcher, P.W.; Rustichini, A. Neuroeconomics: The consilience of brain and decision. Science 2004, 306, 447–452. [Google Scholar] [CrossRef] [PubMed]
  198. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed]
  199. Luo, J.; Yu, R. Follow the heart or the head? The interactive influence model of emotion and cognition. Front. Psychol. 2015, 6, 573. [Google Scholar] [CrossRef] [PubMed]
  200. Fellows, L.K. The cognitive neuroscience of human decision making: A review and conceptual framework. Behav. Cogn. Neurosci. Rev. 2004, 3, 159–172. [Google Scholar] [CrossRef] [PubMed]
  201. Pearson, J.M.; Watson, K.K.; Platt, M.L. Decision making: The neuroethological turn. Neuron 2014, 82, 950–965. [Google Scholar] [CrossRef] [PubMed]
  202. Basten, U.; Biele, G.; Heekeren, H.R.; Fiebach, C.J. How the brain integrates costs and benefits during decision making. Proc. Natl. Acad. Sci. USA 2010, 107, 21767–21772. [Google Scholar] [CrossRef]
  203. Floresco, S.B.; Ghods-Sharifi, S. Amygdala-prefrontal cortical circuitry regulates effort-based decision making. Cereb. Cortex 2007, 17, 251–260. [Google Scholar] [CrossRef] [PubMed]
  204. Hikosaka, O.; Takikawa, Y.; Kawagoe, R. Role of the basal ganglia in the control of purposive saccadic eye movements. Physiol. Rev. 2000, 80, 953–978. [Google Scholar] [CrossRef] [PubMed]
  205. Grillner, S.; Robertson, B. The basal ganglia over 500 million years. Curr. Biol. 2016, 26, R1088–R1100. [Google Scholar] [CrossRef] [PubMed]
  206. Cregg, J.M.; Leiras, R.; Montalant, A.; Wanken, P.; Wickersham, I.R.; Kiehn, O. Brainstem neurons that command mammalian locomotor asymmetries. Nat. Neurosci. 2020, 23, 730–740. [Google Scholar] [CrossRef] [PubMed]
  207. DiDomenico, R.; Nissanov, J.; Eaton, R.C. Lateralization and adaptation of a continuously variable behavior following lesions of a reticulospinal command neuron. Brain Res. 1988, 473, 15–28. [Google Scholar] [CrossRef] [PubMed]
  208. Schemann, M.; Grundy, D. Electrophysiological identification of vagally innervated enteric neurons in guinea pig stomach. Am. J. Physiol. 1992, 263, G709–G718. [Google Scholar] [CrossRef] [PubMed]
  209. Jing, J.; Vilim, F.S.; Horn, C.C.; Alexeeva, V.; Hatcher, N.G.; Sasaki, K.; Yashina, I.; Zhurov, Y.; Kupfermann, I.; Sweedler, J.V.; et al. From hunger to satiety: Reconfiguration of a feeding network by Aplysia neuropeptide Y. J. Neurosci. 2007, 27, 3490–3502. [Google Scholar] [CrossRef] [PubMed]
  210. Nugent, M.; St Pierre, M.; Brown, A.; Nassar, S.; Parmar, P.; Kitase, Y.; Duck, S.A.; Pinto, C.; Jantzie, L.; Fung, C.; et al. Sexual Dimorphism in the Closure of the Hippocampal Postnatal Critical Period of Synaptic Plasticity after Intrauterine Growth Restriction: Link to Oligodendrocyte and Glial Dysregulation. Dev. Neurosci. 2023, 45, 234–254. [Google Scholar] [CrossRef] [PubMed]
  211. Schreurs, B.G.; O’Dell, D.E.; Wang, D. The role of cerebellar intrinsic neuronal excitability, synaptic plasticity, and perineuronal nets in eyeblink conditioning. Biology 2024, 13, 200. [Google Scholar] [CrossRef] [PubMed]
  212. Christensen, A.C.; Lensjø, K.K.; Lepperød, M.E.; Dragly, S.-A.; Sutterud, H.; Blackstad, J.S.; Fyhn, M.; Hafting, T. Perineuronal nets stabilize the grid cell network. Nat. Commun. 2021, 12, 253. [Google Scholar] [CrossRef]
  213. Karetko, M.; Skangiel-Kramska, J. Diverse functions of perineuronal nets. Acta Neurobiol. Exp. 2009, 69, 564–577. [Google Scholar] [CrossRef]
  214. Sorvari, H.; Miettinen, R.; Soininen, H.; Pitkänen, A. Parvalbumin-immunoreactive neurons make inhibitory synapses on pyramidal cells in the human amygdala: A light and electron microscopic study. Neurosci. Lett. 1996, 217, 93–96. [Google Scholar] [CrossRef] [PubMed]
  215. Deco, G.; Rolls, E.T. Attention, short-term memory, and action selection: A unifying theory. Prog. Neurobiol. 2005, 76, 236–256. [Google Scholar] [CrossRef] [PubMed]
  216. Jensen, O.; Kaiser, J.; Lachaux, J.-P. Human gamma-frequency oscillations associated with attention and memory. Trends Neurosci. 2007, 30, 317–324. [Google Scholar] [CrossRef] [PubMed]
  217. Flechsenhar, A.; Larson, O.; End, A.; Gamer, M. Investigating overt and covert shifts of attention within social naturalistic scenes. J. Vis. 2018, 18, 11. [Google Scholar] [CrossRef] [PubMed]
  218. Belardinelli, A.; Herbort, O.; Butz, M.V. Goal-oriented gaze strategies afforded by object interaction. Vision. Res. 2015, 106, 47–57. [Google Scholar] [CrossRef] [PubMed]
  219. Okamoto, H.; Cherng, B.-W.; Nakajo, H.; Chou, M.-Y.; Kinoshita, M. Habenula as the experience-dependent controlling switchboard of behavior and attention in social conflict and learning. Curr. Opin. Neurobiol. 2021, 68, 36–43. [Google Scholar] [CrossRef] [PubMed]
  220. Mohanty, A.; Gitelman, D.R.; Small, D.M.; Mesulam, M.M. The spatial attention network interacts with limbic and monoaminergic systems to modulate motivation-induced attention shifts. Cereb. Cortex 2008, 18, 2604–2613. [Google Scholar] [CrossRef] [PubMed]
  221. Salmi, J.; Rinne, T.; Koistinen, S.; Salonen, O.; Alho, K. Brain networks of bottom-up triggered and top-down controlled shifting of auditory attention. Brain Res. 2009, 1286, 155–164. [Google Scholar] [CrossRef] [PubMed]
  222. Tamber-Rosenau, B.J.; Esterman, M.; Chiu, Y.-C.; Yantis, S. Cortical mechanisms of cognitive control for shifting attention in vision and working memory. J. Cogn. Neurosci. 2011, 23, 2905–2919. [Google Scholar] [CrossRef]
  223. Parker, M.O.; Gaviria, J.; Haigh, A.; Millington, M.E.; Brown, V.J.; Combe, F.J.; Brennan, C.H. Discrimination reversal and attentional sets in zebrafish (Danio rerio). Behav. Brain Res. 2012, 232, 264–268. [Google Scholar] [CrossRef]
  224. Fodoulian, L.; Gschwend, O.; Huber, C.; Mutel, S.; Salazar, R.; Leone, R.; Renfer, J.-R.; Ekundayo, K.; Rodriguez, I.; Carleton, A. The claustrum-medial prefrontal cortex network controls attentional set-shifting. BioRxiv 2020. [Google Scholar] [CrossRef]
  225. Buschman, T.J.; Miller, E.K. Shifting the spotlight of attention: Evidence for discrete computations in cognition. Front. Hum. Neurosci. 2010, 4, 194. [Google Scholar] [CrossRef]
  226. Goldberg, M.E.; Bisley, J.; Powell, K.D.; Gottlieb, J.; Kusunoki, M. The role of the lateral intraparietal area of the monkey in the generation of saccades and visuospatial attention. Ann. N. Y. Acad. Sci. 2002, 956, 205–215. [Google Scholar] [CrossRef]
  227. Amo, R.; Aizawa, H.; Takahashi, R.; Kobayashi, M.; Takahoko, M.; Aoki, T.; Okamoto, H. Identification of the zebrafish ventral habenula as a homologue of the mammalian lateral habenula. Neurosci. Res. 2009, 65, S227. [Google Scholar] [CrossRef]
  228. Puentes-Mestril, C.; Roach, J.; Niethard, N.; Zochowski, M.; Aton, S.J. How rhythms of the sleeping brain tune memory and synaptic plasticity. Sleep 2019, 42, 1–14. [Google Scholar] [CrossRef] [PubMed]
  229. Geva-Sagiv, M.; Mankin, E.A.; Eliashiv, D.; Epstein, S.; Cherry, N.; Kalender, G.; Tchemodanov, N.; Nir, Y.; Fried, I. Augmenting hippocampal-prefrontal neuronal synchrony during sleep enhances memory consolidation in humans. Nat. Neurosci. 2023, 26, 1100–1110. [Google Scholar] [CrossRef]
  230. Capellini, I.; McNamara, P.; Preston, B.T.; Nunn, C.L.; Barton, R.A. Does sleep play a role in memory consolidation? A comparative test. PLoS ONE 2009, 4, e4609. [Google Scholar] [CrossRef]
  231. Oyanedel, C.N.; Binder, S.; Kelemen, E.; Petersen, K.; Born, J.; Inostroza, M. Role of slow oscillatory activity and slow wave sleep in consolidation of episodic-like memory in rats. Behav. Brain Res. 2014, 275, 126–130. [Google Scholar] [CrossRef] [PubMed]
  232. Yeganegi, H.; Ondracek, J.M. Multi-channel EEG recordings reveal age-related differences in the sleep of juvenile and adult zebra finches. Sci. Rep. 2023. [Google Scholar] [CrossRef]
  233. Buchert, S.N.; Murakami, P.; Kalavadia, A.H.; Reyes, M.T.; Sitaraman, D. Sleep correlates with behavioral decision making critical for reproductive output in Drosophila melanogaster. Comp. Biochem. Physiol. Part A Mol. Integr. Physiol. 2022, 264, 111114. [Google Scholar] [CrossRef]
  234. Sauseng, P.; Klimesch, W.; Gruber, W.R.; Birbaumer, N. Cross-frequency phase synchronization: A brain mechanism of memory matching and attention. Neuroimage 2008, 40, 308–317. [Google Scholar] [CrossRef]
  235. Engel, A.K.; Fries, P.; Singer, W. Dynamic predictions: Oscillations and synchrony in top-down processing. Nat. Rev. Neurosci. 2001, 2, 704–716. [Google Scholar] [CrossRef] [PubMed]
  236. Drebitz, E.; Haag, M.; Grothe, I.; Mandon, S.; Kreiter, A.K. Attention configures synchronization within local neuronal networks for processing of the behaviorally relevant stimulus. Front. Neural Circuits 2018, 12, 71. [Google Scholar] [CrossRef]
  237. Klimesch, W. EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Res. Rev. 1999, 29, 169–195. [Google Scholar] [CrossRef] [PubMed]
  238. Fernandez, L.M.J.; Lüthi, A. Sleep spindles: Mechanisms and functions. Physiol. Rev. 2020, 100, 805–868. [Google Scholar] [CrossRef]
  239. Macdonald, K.D.; Fifkova, E.; Jones, M.S.; Barth, D.S. Focal stimulation of the thalamic reticular nucleus induces focal gamma waves in cortex. J. Neurophysiol. 1998, 79, 474–477. [Google Scholar] [CrossRef] [PubMed]
  240. Ngo, H.-V.V.; Born, J. Sleep and the Balance between Memory and Forgetting. Cell 2019, 179, 289–291. [Google Scholar] [CrossRef]
  241. Kim, J.; Gulati, T.; Ganguly, K. Competing Roles of Slow Oscillations and Delta Waves in Memory Consolidation versus Forgetting. Cell 2019, 179, 514–526.e13. [Google Scholar] [CrossRef]
  242. Nir, Y.; Tononi, G. Dreaming and the brain: From phenomenology to neurophysiology. Trends Cogn. Sci. 2010, 14, 88–100. [Google Scholar] [CrossRef]
  243. Tamaki, M.; Berard, A.V.; Barnes-Diana, T.; Siegel, J.; Watanabe, T.; Sasaki, Y. Reward does not facilitate visual perceptual learning until sleep occurs. Proc. Natl. Acad. Sci. USA 2020, 117, 959–968. [Google Scholar] [CrossRef]
  244. Schredl, M.; Doll, E. Emotions in diary dreams. Conscious. Cogn. 1998, 7, 634–646. [Google Scholar] [CrossRef]
  245. Marzano, C.; Ferrara, M.; Mauro, F.; Moroni, F.; Gorgoni, M.; Tempesta, D.; Cipolli, C.; De Gennaro, L. Recalling and forgetting dreams: Theta and alpha oscillations during sleep predict subsequent dream recall. J. Neurosci. 2011, 31, 6674–6683. [Google Scholar] [CrossRef] [PubMed]
  246. Wiswede, D.; Koranyi, N.; Müller, F.; Langner, O.; Rothermund, K. Validating the truth of propositions: Behavioral and ERP indicators of truth evaluation processes. Soc. Cogn. Affect. Neurosci. 2013, 8, 647–653. [Google Scholar] [CrossRef]
  247. Joo, H.R.; Frank, L.M. The hippocampal sharp wave-ripple in memory retrieval for immediate use and consolidation. Nat. Rev. Neurosci. 2018, 19, 744–757. [Google Scholar] [CrossRef]
  248. Roumis, D.K.; Frank, L.M. Hippocampal sharp-wave ripples in waking and sleeping states. Curr. Opin. Neurobiol. 2015, 35, 6–12. [Google Scholar] [CrossRef]
  249. Remondes, M.; Wilson, M.A. Slow-γ Rhythms Coordinate Cingulate Cortical Responses to Hippocampal Sharp-Wave Ripples during Wakefulness. Cell Rep. 2015, 13, 1327–1335. [Google Scholar] [CrossRef]
  250. Moser, M.-B.; Rowland, D.C.; Moser, E.I. Place cells, grid cells, and memory. Cold Spring Harb. Perspect. Biol. 2015, 7, a021808. [Google Scholar] [CrossRef]
  251. Friston, K.; Kilner, J.; Harrison, L. A free energy principle for the brain. J. Physiol. Paris. 2006, 100, 70–87. [Google Scholar] [CrossRef] [PubMed]
  252. Friston, K. The free-energy principle: A rough guide to the brain? Trends Cogn. Sci. 2009, 13, 293–301. [Google Scholar] [CrossRef] [PubMed]
  253. Krupnik, V. I like therefore I can, and I can therefore I like: The role of self-efficacy and affect in active inference of allostasis. Front. Neural Circuits 2024, 18, 1283372. [Google Scholar] [CrossRef]
  254. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
  255. Kammer, T.; Spitzer, M. Brain stimulation in psychiatry: Methods and magnets, patients and parameters. Curr. Opin. Psychiatry 2012, 25, 535–541. [Google Scholar] [CrossRef] [PubMed]
  256. Wagle Shukla, A.; Vaillancourt, D.E. Treatment and physiology in Parkinson’s disease and dystonia: Using transcranial magnetic stimulation to uncover the mechanisms of action. Curr. Neurol. Neurosci. Rep. 2014, 14, 449. [Google Scholar] [CrossRef]
  257. Magsood, H.; Syeda, F.; Holloway, K.; Carmona, I.C.; Hadimani, R.L. Safety study of combination treatment: Deep brain stimulation and transcranial magnetic stimulation. Front. Hum. Neurosci. 2020, 14, 123. [Google Scholar] [CrossRef] [PubMed]
  258. Holtzheimer, P.E.; Mayberg, H.S. Neuromodulation for treatment-resistant depression. F1000 Med. Rep. 2012, 4, 22. [Google Scholar] [CrossRef] [PubMed]
  259. Bluhm, R.; Castillo, E.; Achtyes, E.D.; McCright, A.M.; Cabrera, L.Y. They affect the person, but for better or worse? perceptions of electroceutical interventions for depression among psychiatrists, patients, and the public. Qual. Health Res. 2021, 31, 2542–2553. [Google Scholar] [CrossRef]
  260. Farries, M.A.; Fairhall, A.L. Reinforcement learning with modulated spike timing dependent synaptic plasticity. J. Neurophysiol. 2007, 98, 3648–3665. [Google Scholar] [CrossRef]
  261. Detorakis, G.; Sheik, S.; Augustine, C.; Paul, S.; Pedroni, B.U.; Dutt, N.; Krichmar, J.; Cauwenberghs, G.; Neftci, E. Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. Front. Neurosci. 2018, 12, 583. [Google Scholar] [CrossRef]
  262. Florian, R.V. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput. 2007, 19, 1468–1502. [Google Scholar] [CrossRef]
  263. Teng, T.-H.; Tan, A.-H.; Zurada, J.M. Self-organizing neural networks integrating domain knowledge and reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 889–902. [Google Scholar] [CrossRef]
Figure 1. The parallel–hierarchical processing of sensory inputs leads to the extraction of information from features and objects defined by the information-bearing elements or IBEs. The physical proximity and/or temporal coherence of extracted features create objects, and that of objects within a scene creates perceptual associations or memories. Most of this processing is accomplished via ascending lemniscal pathways in the brain. Thalamocortical loops facilitate egocentric selection by neurons tuned to the parameters of an incoming stimulus, and signal amplification occurs via descending projections (see arrows) [67].
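The binding of features into objects by physical proximity or temporal coherence, as described in the Figure 1 caption, can be illustrated with a minimal sketch: feature traces that co-vary in time are grouped into one "object", while an independently varying trace is not. The signal frequencies, noise level, correlation threshold and grouping rule below are illustrative assumptions chosen only to make the idea concrete; they are not part of the model in this article.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 2, 0.01)

# Hypothetical feature traces: features 0 and 1 co-vary (same "object"),
# feature 2 follows an independent time course (a different object).
drive_a = np.sin(2 * np.pi * 3 * t)
drive_b = np.sin(2 * np.pi * 5 * t + 1.0)
features = np.vstack([
    drive_a + 0.1 * rng.standard_normal(t.size),
    drive_a + 0.1 * rng.standard_normal(t.size),
    drive_b + 0.1 * rng.standard_normal(t.size),
])

# Temporal coherence as a binding cue: correlate each pair of feature traces.
coherence = np.corrcoef(features)

# Group features whose pairwise correlation exceeds an illustrative threshold.
threshold = 0.8
objects = []
unassigned = set(range(features.shape[0]))
while unassigned:
    seed = unassigned.pop()
    group = {seed} | {j for j in unassigned if coherence[seed, j] > threshold}
    unassigned -= group
    objects.append(sorted(group))

print(objects)  # expected grouping: [[0, 1], [2]]
```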
Figure 2. A schematic showing the behavior-driven flow of information that extracts object- and scene-specific cues for creating navigational knowledge. Associative memories are typically created via valence-driven idiothetic cues, or via allothetic cues that show a statistically significant coincidence of occurrence. KNs are expected to play an important role in top-down modulation for sensory selection by sustaining attention at various levels of sensory processing, and they may be modified by reward- and aversion-driven associative memory mechanisms.
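To make the top-down modulation described in the Figure 2 caption concrete, the toy sketch below couples a fixed attentional gain vector (standing in for a KN's top-down signal) with a valence-driven update of associative weights. The function names, gain values, learning rate and reward rule are hypothetical and chosen only for illustration; this is a sketch of the general idea, not a model proposed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_cues(sensory_input, attention_gain):
    """Bottom-up feature extraction scaled by a top-down attentional gain."""
    return attention_gain * sensory_input

def update_association(weights, cues, valence, lr=0.1):
    """Valence-driven (reward/aversion) Hebbian-style update of associative weights."""
    return weights + lr * valence * cues

# Hypothetical toy example: three allothetic cues (e.g., landmark features).
cues = np.array([0.2, 0.9, 0.5])   # bottom-up salience of each cue
gain = np.array([1.0, 1.5, 0.8])   # top-down gain from a putative KN
weights = np.zeros(3)              # associative memory weights

for trial in range(20):
    noisy_input = cues + 0.05 * rng.standard_normal(3)
    attended = extract_cues(noisy_input, gain)
    valence = 1.0 if attended[1] > 1.0 else -0.2   # reward when the attended landmark is salient
    weights = update_association(weights, attended, valence)

print(np.round(weights, 2))  # the cue amplified by attention dominates the learned association
```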
Figure 3. (A) Networks and cell types discovered for navigation in rodents, bats and primates, including humans. Lines connect each network or cell type to the brain structure in which it is located, and arrows indicate the direction in which one network or cell type shapes the receptive field of another. (B) Examples of research questions that can be addressed using a KN-driven approach to provide a foundational understanding of the neuroscience of knowledge. These approaches can help define the physiological properties, configuration, extent and plasticity of KNs. MEC: medial entorhinal cortex.
Figure 4. Schematic showing the localization of oscillatory activity (top-left) in the ventral hippocampus within a navigational context (see Figure 2) and for memory functions, and its complex interconnectivity with the amygdala for context-dependent associative learning. The lateral amygdala (LA) receives processed sensory inputs from the cortex and thalamus. These inputs contain information about features and objects in the sensory landscape. The subiculum, presubiculum and parasubiculum are extensions of the CA1, CA2 and CA3 regions of the hippocampus, and they all receive outputs from the basal amygdala (BA). The basolateral amygdala (BLA) has reciprocal connections with the prefrontal cortex. Projections to the hypothalamus trigger hormonal changes, and those to the periaqueductal gray in the brainstem control respiration, heart rate and vocalization.
Figure 5. Knowledge network construction and distinction. (A) A generic multilayered network for signal extraction or associative learning, including recurrence (looping-back arrows on green circles) that can be either excitatory or inhibitory, and feedforward (arrows) and feedback (filled circles) connections. The green, blue and orange circles belong to hidden layers of the network. (B) A series of modular networks showing the bottom-up flow of information to extract IBEs, features and objects from the sensory landscape (see Figure 1). (C) A putative knowledge network consisting of multiple distributed memory modules together with predictive, attentional and multimodal cross-validating networks as its basic components. This type of network contains multimodal representations and has long-distance connections with inputs that can modify as well as query the network via sensory and cephalic triggers. Information is expected to flow bidirectionally (gray arrows) between these networks for decision making and action selection. The entire network can be triggered by either query or contextual signals. Transient coupling between networks (dashed arrows) can occur via oscillatory activity (see text for details). Physical dissociation via synaptic degradation or temporal unbinding over time is also possible via the de-correlation of neural activity. Prediction mismatch triggers large mismatch negativity-evoked potentials, leading to error signals and potential re-learning that may result in a reconfiguration of the knowledge network.
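As a minimal sketch of panel A of Figure 5, the snippet below wires a single module with feedforward, recurrent and feedback weights and computes a simple prediction-mismatch value of the kind panel C associates with re-learning. The layer sizes, weight scales, activation function and mismatch rule are illustrative assumptions, not a specification of the proposed KN architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes for a toy three-layer module (hidden units analogous to the
# green/blue/orange circles in panel A); all values here are illustrative.
n_in, n_hid, n_out = 4, 6, 3

W_ff1 = 0.5 * rng.standard_normal((n_hid, n_in))    # feedforward: input -> hidden
W_rec = 0.1 * rng.standard_normal((n_hid, n_hid))   # recurrent (excitatory or inhibitory)
W_ff2 = 0.5 * rng.standard_normal((n_out, n_hid))   # feedforward: hidden -> output
W_fb  = 0.5 * rng.standard_normal((n_hid, n_out))   # feedback: output -> hidden

def step(x, h):
    """One update of the module: feedforward drive, recurrence, then feedback."""
    h_new = np.tanh(W_ff1 @ x + W_rec @ h)               # bottom-up input plus recurrent dynamics
    y = np.tanh(W_ff2 @ h_new)                           # output (e.g., an object-level code)
    h_new = np.tanh(W_ff1 @ x + W_rec @ h + W_fb @ y)    # top-down feedback reshapes the hidden state
    return h_new, y

x = rng.standard_normal(n_in)   # a sensory "feature" vector
h = np.zeros(n_hid)
for _ in range(10):             # let the loop settle
    h, y = step(x, h)

expected = np.zeros(n_out)                 # a stored prediction (placeholder)
mismatch = np.linalg.norm(y - expected)    # a large mismatch would trigger error signals and re-learning
print(round(mismatch, 3))
```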
Figure 6. A line diagram depicting the relationship between sensory inputs, motor control and behavior within a sense of self (idiothetic) in both physical (time and space) and physiological terms. The reciprocal interconnectivity of the various functions emphasizes the importance and role of the various neural states and mechanisms that constitute components of KNs. The behavior or motor output is typically triggered in response to a query generated by signals from the internal or the external environment. Physiological states, such as hunger, drive an organism to attend to contextual or allothetic cues in its environment and activate central motor programs in the brain to trigger behavioral actions. These in turn become the source of sensory inputs that allow an organism to monitor its movement and location within its relevant region of space. Some of this feedback leads to contextual learning through motivational cues related to reward and fear. Direct idiothetic feedback (diagonal arrow) can modulate coordinated and directed motor activity.
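The role of idiothetic feedback and occasional allothetic correction sketched in the Figure 6 caption can be illustrated with a toy path-integration loop: actions are selected from an internal self-motion estimate, which drifts with motor noise until contextual cues re-anchor it. The goal location, noise level and correction schedule are arbitrary assumptions used only to show the loop; they do not correspond to any specific system described in this article.

```python
import numpy as np

def motor_command(goal, position):
    """Central motor program: head toward the goal (an allothetic cue)."""
    direction = goal - position
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else np.zeros(2)

rng = np.random.default_rng(2)
goal = np.array([5.0, 3.0])   # location of a hypothetical food source
true_pos = np.zeros(2)        # actual position in the environment
est_pos = np.zeros(2)         # idiothetic estimate maintained by path integration

for step_n in range(40):
    v = 0.3 * motor_command(goal, est_pos)           # action selected from the internal estimate
    true_pos += v + 0.02 * rng.standard_normal(2)    # motor noise accumulates in the world
    est_pos += v                                      # idiothetic (self-motion) update, which drifts
    if step_n % 10 == 9:                              # occasional allothetic fix (e.g., a landmark sighting)
        est_pos = 0.5 * est_pos + 0.5 * true_pos      # contextual feedback corrects the drifting estimate

print(np.round(true_pos, 2), np.round(est_pos, 2))
```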
Figure 7. Flow-chart showing the evolution of organisms with or without “brains” (shaded boxes) within the animal kingdom. Early forms of life (over 540 million years ago) did not require knowledge to adapt to their environment. Their action selection was directly determined by sensors and effectors operating at either molecular, cellular or simple reflex levels. Later, as organisms and their brains became more complex, just-in-time information availability was supplanted by KNs that allowed a brain to be proactive in terms of evaluating its environment, goal-setting and action selection. Some molluscan species with large brains, such as the octopus, and protochordates are expected to have rudimentary KNs, whereas neurally advanced species, such as cetaceans and primates, have the capacity for knowledge abstraction and manipulation. Humans are assumed to have the most advanced KNs, capable of symbolic representation through cultural evolution and education.
Figure 8. The flow of information used to extract object- and scene-specific cues and to create memories as well as knowledge. KNs are expected to play an important role in top-down modulation for sensory selection by sustaining attention at various levels of sensory processing.
Figure 9. A diagrammatic representation of a “pyramid-inversion model” for the transfer of information over time from the sensory environment to the brain via a series of filters (left). The conversion to knowledge (right) occurs via oscillatory mechanisms (middle) involving the transfer of short-term to long-term memories. Slow oscillations (0.2 to 0.8 Hz), during the up-state in nonREM sleep, consolidate memories and integrate them with contextual and cross-validated information as knowledge. Delta waves attenuate sleep spindle activity during the down-state of sleep, leading to forgetting and allowing the incorporation of new memories [240]. Together with sharp wave ripples (140 to 200 Hz) during sleep and sustained reflection, this activity can modify memories stored within KNs [247,250].
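The frequency bands named in the Figure 9 caption can be separated from a recorded signal with standard band-pass filters. The sketch below builds a synthetic local field potential and estimates power in the slow-oscillation, delta, spindle and ripple ranges using SciPy; the signal composition, filter order and exact band edges are illustrative assumptions, and real sleep recordings would of course require artifact handling and stage scoring.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000                       # Hz, sampling rate of a synthetic "LFP"
t = np.arange(0, 10, 1 / fs)

# Synthetic signal mixing the rhythms named in the caption (amplitudes arbitrary).
lfp = (1.5 * np.sin(2 * np.pi * 0.5 * t)      # slow oscillation (~0.2-0.8 Hz)
       + 1.0 * np.sin(2 * np.pi * 2.0 * t)    # delta
       + 0.5 * np.sin(2 * np.pi * 13.0 * t)   # sleep spindle range (~11-16 Hz)
       + 0.2 * np.sin(2 * np.pi * 170.0 * t)  # sharp-wave-ripple range (140-200 Hz)
       + 0.3 * np.random.default_rng(3).standard_normal(t.size))

def bandpass(sig, lo, hi, fs, order=2):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

bands = {"slow": (0.2, 0.8), "delta": (1.0, 4.0),
         "spindle": (11.0, 16.0), "ripple": (140.0, 200.0)}

# Mean power in each band; relative changes across sleep stages, not absolute
# values, would be the quantity of interest in practice.
power = {name: np.mean(bandpass(lfp, lo, hi, fs) ** 2) for name, (lo, hi) in bands.items()}
print({k: round(float(v), 3) for k, v in power.items()})
```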
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
