Article

Complexity in Systemic Cognition: Theoretical Explorations with Agent-Based Modeling

Research Group for Computational & Organizational Cognition, Department of Culture and Language, University of Southern Denmark, Campusvej 55, 5230 Odense, Denmark
* Author to whom correspondence should be addressed.
Systems 2024, 12(8), 287; https://doi.org/10.3390/systems12080287
Submission received: 30 June 2024 / Revised: 31 July 2024 / Accepted: 2 August 2024 / Published: 6 August 2024
(This article belongs to the Special Issue Theoretical Issues on Systems Science)

Abstract

This paper presents a systemic view of human cognition that suggests complexity is an essential feature of such a system. It draws on the embodied, distributed, and extended cognition paradigms to outline the elements and the mechanisms that define cognition. In doing so, it uses an agent-based computational model (the TS 1.0.5 Model) with a focus on learning mechanisms as they reflect on individual competence to gain insights into how cognition works. Results indicate that cognitive dynamics do not depend solely on macro structural elements, nor do they depend uniquely on individual characteristics. Instead, more insights and understanding are available through the consideration of all elements together as they co-evolve and interact over time. This perspective illustrates the essential role of how we define the meso domain and constitutes a clear indication that cognitive systems are indeed complex.

1. Introduction

This article is concerned with the seemingly odd concept that human cognition is systemic. This idea is not new; in fact, it was first introduced by [1] and then explored more recently in [2,3,4] (Chapter 12).
The main tenet of this perspective is that cognition is distributed [5,6] across enabling resources [7] and that their interactions constitute the basic foundation of a system. In a cognitive system, resources are all those elements that allow performative actions. For example, the act of speaking with someone involves the activation of specific neural patterns, an engagement with the body—utterances require a very precise exercise of the mouth, tongue, and related muscles—anchoring to the words that are spoken as a form of feedback onto what comes next and, not least, the explicit/implicit reactions of those who listen. To this, we can add several more elements. In fact, it is not irrelevant to think of the context in which this dialogue happens (e.g., a cafeteria, a bar, a public square, a classroom), nor is it secondary to think of other bodily engagements, such as hands or other movements (of the speaker and the listeners) that may elicit and support what the sentences refer to.
There are many assumptions underlying the idea that cognition is systemic. Two are considered constitutive in this article and should be made explicit. On the one hand, the application of a systems framework to cognition highlights the intertwined and co-evolutionary dynamic patterns that can be observed among elements. This means that it becomes very hard to split the system and consider each element separately from the others. This seemingly simple (and rather obvious) assumption is not at all simple (nor is it obvious). It is not simple because it sets the ground for an understanding of cognition that is based on dynamics rather than on elements and static properties. Furthermore, the heterogeneity, (in)stability, variance, occasional ambiguity, and uncertainty of the interactions among elements make it very difficult to predict how cognition will unfold. On the other hand, the claim is not obvious if one reflects on the fact that, for many years, cognition has been studied as a function of the human brain, where any performance (e.g., language) is dissected and treated as information processed computationally. This tradition has roots in the very beginning of the cognitive science field [8,9] and has influenced some areas (e.g., linguistics) for decades [10,11].
One implication of the above is that it is the study of the combined effects of cognitive resources that allows for a deeper and better understanding of cognitive life. However, how this is to be achieved is yet to be defined. From the considerations presented thus far, it is apparent that the study of cognition would benefit from applications of analytical tools from complex systems science. Given how blurry the boundaries of the elements involved are, and how much the system is capable of re-configuring itself, giving rise to unpredictable patterns, variability of end states, and emergent properties, it is fair to consider a complexity assumption. This assumption complements (and sometimes disrupts) previous traditions and is the focus of this article. Specifically, the article uses an agent-based computational simulation model (already presented elsewhere [12]) to explore and define the elements that make cognition a complex system. In doing so, it makes particular reference to the meso domain of interactions (see below for further details).
The following section is dedicated to providing a theoretical background to the claims above, while Section 3 presents a computational exemplification of the concepts outlined in Section 2. We then draw implications and conclusions in the last Section 4.

2. Background

A systemic take on cognition places emphasis on the fact that cognition revolves around the flexible, adaptive activity of agents. Agents possess more or less stable traits, such as competences, skill-sets, or other kinds of dispositions. For instance, Goodwin [13] shows how so-called ‘professional vision’ can develop in highly specialized and professional contexts, whereby cognitive agents become predisposed to see certain things that others would not. For example, utility company workers use professional vision to spot a leakage underneath a pavement in the winter. This is possible because the snow melts in the area due to the hot water leaking underneath it (see, [14]). From a sensorimotor enactivist perspective, Noë [15] makes a similar point. He appeals to so-called ‘perceptual understanding,’ which effectively interrelates with an agent’s practical understanding whereby one skillfully deploys concepts in plain perceptual encounters. In terms of the importance of such an understanding, Noë provides the following example:
“It is difficult to tell, looking at the entrance to the Taj Mahal, which bits of squiggle are mere ornament, and which are writing in Classical Arabic. You can have this experience, it is available to you, only if you are not fluent in Classical Arabic, or in this style of Arabic script.”
(p. 3, [15])
From this angle, a disposition like one’s practical understanding effectively enables the skillful agent to competently pick up on certain relevant environmental cues (e.g., affordances, symbols, etc.) and make effective use of them in a manner that the unskilled agent cannot. In fact, expertise is not merely a matter of being able to use a tool or an instrument correctly; it can also include the perception of practice-relevant aspects of one’s surroundings. This is precisely why human systemic cognition unfolds in the context of socio-material practices, which function in the realm of what Hutchins [16] terms the human cultural-cognitive ecosystem. Importantly, as Hutchins notes, specific skills develop by means of engaging with practices and by the fact that different practices interrelate. For example, the ability to project a trajectory onto a spatial arrangement allows for diverse activities, such as queuing or navigating an airplane, across practical contexts. As Hutchins puts it, the “relationships among practices in the cognitive ecosystem can create possibilities for generalization of skill across activity systems” (p. 42, [16]).
Thus, skills are not strictly local or situational, but rather flexible in the sense that they can be used to engage with cognitive resources across situations, practical arrangements, and even practices. These skills and their enabling practical understandings are just some of the elements that make cognition work as a system. Since a systemic take on cognition is fundamentally anti-Cartesian, it follows that cognition is not viewed as unfolding strictly through intra-cranial processes that are somehow tightly affiliated with the workings of a mind or an intellect. The brain is also dismissed as the primary locus of cognitive behavior. Instead, cognition spreads across agents, the environment, tools, etc.; as argued in Gahrn-Andersen et al. [2], this means that different ontologies are at play. Indeed, human systemic cognition is precisely ‘systemic’ because it enmeshes phenomena that traditionally would have been attributed to particular stand-alone ontologies (e.g., the cognitive, the linguistic, the biological, the social, etc.) (p. 90). Precisely because no explanatory weight is placed on representational mental content, it follows that cognition unfolds in actual agent-environmental relations. This means that cognition is relationally constituted and effectively cuts across inner-outer dichotomies related to the mental:
“While ideas can emerge in silence under a furrowed brow, thinking more typically arises as people battle with an invoice, look for the foreman or choose to trust a web site. Though people can think alone, they also do so when looking at X-rays, drawing geometrical shapes or, indeed, talking with others. In all cases, brain-side activity is inseparable from world-side events.”
(p. 256, [1])
However, it is not enough to recognize the importance of factors pertaining to the micro-level of agent–environment relations; we must also recognize that systemic cognition is socio-practical by nature. This means that it unfolds in a practice-constitutive fashion amongst interacting agents and, hence, constitutes a meso domain. This is not a new concept; other domains have effectively utilized the concept of the meso when discussing systems. For example, Dopfer’s interpretation of Schumpeter’s attempt to reformulate the traditional micro–macro dichotomy into a micro-meso-macro framework redefines evolutionary economics [17]. This in-between domain is connected structurally to the macro and procedurally to the micro, similar to the role we claim it plays in the cognitive sphere. Another prominent example comes from the change management literature as it meets complexity [18,19,20]. Here, the meso is understood as the explanatory “rules that arise from some dominating assembly that facilitates macro structures and processes (that result in behaviour) through their actualisations” (p. 1339, [20]).
From the perspective taken in this article, the meso domain involves the interplay of various socio-practical elements, such as tools, individual and collective skills, and understanding, where cognitive processes are distributed across different aspects of the socio-material environment. Placing emphasis on the meso underscores the role of community, collaboration, and shared practices in shaping and sustaining cognitive activities, further demonstrating that cognition cannot be fully understood without considering the broader socio-practical context in which it occurs:
“The domain comprises any action that connects up how individuals perform daily duties such as, e.g., working a machine, arguing a point in a meeting, writing an email, or simply chatting with a colleague in front of the coffee machine. All such activities require some part of the biological organism (e.g., brain, body), an interpretation of or sensitivity toward (awareness of) organizational super-structures (e.g., norms, cultural expectations), and the actual resources through which cognition is enabled (e.g., a colleague, a computer).”
(p. 4, [2])
In meso-domain interactions, cognition typically revolves around some sort of task orientation. Indeed, as Brentano envisaged, cognition involves an intentional directedness towards the surroundings, and while Brentano was a mentalist and prone to emphasizing the solitary ‘thinker’ as the cognizing agent, a systemic approach pushes the basic Latourian and Heideggerian counterpoint: the agent and their directedness cannot be seen apart from the socio-practically saturated world they exist in and engage with. In this sense, it is the local situational context that matters. This allows us to recognize the importance of a task orientation that is distributed across the system. It also makes more apparent the strong collaborative element that characterizes the task orientation of agents.
Perry [21] places emphasis on agents’ abilities to transform a particular practical problem or task domain by means of their skills and competencies, the availability of resources at hand, the dispositions of other agents, etc. Specifically, he points to the fact that a distributed cognitive system need not be characterized by fixed rules and procedures and, hence, need not amount to a tightly coupled system (although this was how such systems were originally envisaged by [5]). Rather, it unfolds in a more loosely coupled manner, whereby the system itself is characterized by a high degree of flexibility, adaptation, innovation and, last but not least, plasticity as to its structure [22,23]. Effectively, this means that important constitutive elements such as agents’ roles, rules and procedures, goals, the functionality of tools, etc., are not strictly preconditioned but rather emerge from how agents organize at the meso level.
Indeed, loosely coupled systems may still very much be characterized by the presence of so-called macro-domain phenomena, such as culture-defining norms and Standard Operating Procedures (SOPs). Yet, the crucial difference here is that such phenomena are not heavily or consistently enforced (p. 158, cf. [21]). This entails that the system in question exhibits a high degree of self-organization, precisely because (a) it has the freedom to do so and (b) it is organized around sets of tasks (or problems) which are complex and thus do not lend themselves to straightforward solutions based on the successful resolution of tasks in the past, fixed training regimes, or SOPs.
Since the resulting organization has no center, there is a strong focus on how control evolves. Indeed, as Hutchins argues, “centers and boundaries are features that are determined by the relative density of information flow across a system” (p. 37, [16]). In other words, it is the observer who decides on the extent of the boundaries of a system, and given that the observer him/herself cannot be fully taken out of the equation, we necessarily come to accept that systems exhibit incompleteness, both theoretically and in practice (pace, [24]). Nevertheless, in spite of this, we find that it is a foundational characteristic of self-organizing systems that some of their elements impose a degree of control; Cowley and Vallée-Tourangeau put it thus:
“In a species where groups, dyads and individuals exploit self-organizing aggregates much depends on cognitive control. At times, this is more demand led; at others, it is looser and individual-focused. Control thus depends on management of the body, various second-order constructs, and lived experience. Not only is systemic output separable from actions but, just as strikingly, we offload information onto extended systems.”
(p. 269, [1])
While recognizing that cognition is systemic and, hence, that it unfolds across aspects of agents and their environments, Cowley and Vallée-Tourangeau also highlight the importance of individual agents who, in Batesonian-inspired terms [25], should be recognized as those differences that effectively allow for the unfolding of the cognitive system in question. This is because individual agents exhibit partial control through their skillful engagements with their surroundings. Here we must keep Hutchins’ point in mind that, even though individual agents might be the locus of control, the control remains partly determined collectively. For as he reminds us, “In some cultural contexts, people seeking service arrange themselves in a queue as a way to control the sequence of access to services” (p. 39, [16]). So, going back to Hutchins’ point concerning the trans-practical (or eco-systemic) skill of ‘projecting a trajectory onto a spatial arrangement’, we find that skills such as this are also constrained, in part at least, by macro-level constraints that are reified or brought into relevance on the meso level, through the manner in which agents coordinate amongst one another. This testifies to Secchi’s point concerning the fact that organizations seem to “create the condition for distributed cognitive mechanisms” which give rise to certain kinds of behavior, thus allowing the organization to be “a cohesive unity of individuals despite being also a complex [, unpredictable] and adaptive social system” (p. 176, [26]).

3. A Computational Example

In order to explore the theoretical propositions presented in the first part of this paper, we employ a computational simulation methodology. We use an agent-based model that represents teams working to perform specific tasks using the cognitive resources available, especially their own and other people’s expertise/competence. In the following pages, the model is briefly introduced, and then the method of analysis is presented together with a selection of the most relevant findings.
Before engaging with the model and its features, it is relevant to indicate why agent-based modeling is an appropriate method in this case. The technique has been developed [27] and used to study complex systems [28,29], and it has been argued that it could and should be applied to the study of cognition, especially under embodied, distributed, and extended perspectives [4]. These models make it relatively easy to study interactions among elements, what affects them, and how they behave under a set of evolving circumstances (e.g., stresses, shocks, or unchanged conditions). Hence, through the study of such models, it is possible to unearth the mechanisms leading to emergent patterns [30] and, in general, to enquire into the causes of systemic complexity, especially in social systems [31].

3.1. The TS 1.0.5 Model

The TS 1.0.5 Model was developed after an ethnographic study concerning a large Danish utility company operating unmanned aerial technology (i.e., drones) to detect leakages in the hot water pipe system. The focus of the model is the maintenance department and the structure of its three teams. They are organized around a flexible structure that allows employees to work across teams, depending on the type of leakage repair and on their personal preference (sometimes grounded in competency attributes). Hence, performance is defined in relation to the way in which the employee-agents are able to deal with leakages and to provide a “fixing” of the problem.
The model is programmed and implemented in NetLogo 6.3.0.

3.1.1. Agents and Environment

The full description, aim, and capabilities of the TS 1.0.5 Model are presented in [12]. Since the purpose of this paper is limited to understanding the elements that make cognition work as a system, the description of the model is restricted to the features actually used in the current study.
The model has three types of agents: managers, employees, and leakages/cases. Agent-managers and agent-employees share three characteristics: (a) a disposition to listen ($L_i$) to information coming from other agents, (b) a disposition to share ($S_i$) information with them, and (c) a degree of competence ($c_i$), that is, the professional ability to deal with the tasks at hand. These values are attributed to the agents at the beginning of the simulation by drawing random numbers from a normal distribution, $\approx N(0, 0.5)$ for managers and $\approx N(1, 1)$ for employees, for both the disposition to share ($S_i$) and the disposition to listen ($L_i$). Competence is, instead, attributed using a random-normal distribution $\approx N(1, 0.5)$, constrained such that $0 \leq c_i \leq 2$. Agent-leakages have the following characteristics: (a) size, indicating the extension of water dispersion on the ground; (b) importance, that is, the extent to which a leakage needs to be dealt with; (c) complexity, that is, the range of expert knowledge necessary to deal with it; and (d) type, that is, whether the information on the leak comes from a drone scan, a call from a customer, an alarm, or something else. Furthermore, these agents become cases when their importance is higher than a threshold, controlled by the modeler on the interface.
All agents are randomly assigned a place in the environment, and agent-persons establish relationships with other agents in their surroundings, according to the relationship range parameter ($\phi$).
The simulation is set to run for 260 time units. One unit represents a week and, assuming there are 52 working weeks in a year, this number corresponds to 5 years.
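To make this setup concrete, the initialization above can be sketched in a few lines of code. The following Python fragment is a minimal illustrative re-implementation, not the authors’ NetLogo source; the helper names are ours, and the population sizes (3 managers, 30 employees, matching the 33 agents of Table 1) are read off the reported results:

```python
import random

def clamp(x, lo, hi):
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def init_person(kind):
    """Draw dispositions and competence as described in Section 3.1.1."""
    mu, sigma = (0, 0.5) if kind == "manager" else (1, 1)
    return {
        "kind": kind,
        "listen": random.gauss(mu, sigma),                # L_i
        "share": random.gauss(mu, sigma),                 # S_i
        "competence": clamp(random.gauss(1, 0.5), 0, 2),  # c_i, 0 <= c_i <= 2
    }

agents = ([init_person("manager") for _ in range(3)]
          + [init_person("employee") for _ in range(30)])
```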

3.1.2. Mechanisms

Once the agent-employees and the agent-managers are placed in the environment, they start to connect to each other depending on spatial proximity. This means that two agents may cooperate to perform a task if they establish a connection. The agent-leaks appear in the environment gradually, as the simulation time goes by. Some of these agent-leaks pass the threshold and become “cases”, meaning that they can be dealt with by any agent-employee that is close enough and interested (Note 1). Once the connection between an agent-employee and an agent-leak is established, the procedure works as follows:
1. If competence is such that $c_i \geq 2$, then
2. the agent checks that the materials necessary to fix the leakage are available, and
3. if the material is available, the case is solved;
4. otherwise, there is some lag between the time of the connection and the time the case is solved.
If point 1 of the procedure does not hold, the agent-employee “asks” the other agents to which it is connected, in an attempt to gain the additional competence needed to perform the task. This social aid mechanism materializes as follows:
1. If the competence of the agent-employee is such that $c_i < 2$, then
2. the agent-employee asks its network for help;
3. it receives help in the form of a competence increase $\delta c$ if
   (a) its own disposition to listen is more than average, and
   (b) the helper’s disposition to share is more than average.
From the above, it is apparent that a case may never be solved and may remain hanging in the system for the entire duration of the simulation. Furthermore, it is possible that a case is distant from employees and managers, which may leave it undetected and/or impossible to deal with.
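The two procedures can be condensed into a single routine. The following Python sketch is our schematic reading of the mechanism, with the competence increase $\delta c = 0.02$ taken from Note 2; the material check and the reading of “more than average” as a comparison with the population means are assumptions on our part:

```python
import random

def material_available(p=0.8):
    """Stand-in for the material check; the availability probability is assumed."""
    return random.random() < p

def work_on_case(agent, network, avg_listen, avg_share, delta_c=0.02):
    """Schematic version of the case-solving procedure of Section 3.1.2.
    Returns True if the case is solved at this time step."""
    if agent["competence"] >= 2:
        # Point 1 holds: solve now if materials are in stock;
        # otherwise the case lags until a later step.
        return material_available()
    # Social aid mechanism: ask the connected agents for help.
    for helper in network:
        if agent["listen"] > avg_listen and helper["share"] > avg_share:
            agent["competence"] += delta_c  # competence increase delta_c
    # The case may remain unsolved, possibly for the whole simulation.
    return agent["competence"] >= 2 and material_available()
```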

3.2. Analytical Approach

The original simulation had four organizational structures and compared them to understand which one would lead to the highest proportion of cases fixed, on which terms, and in what time. Its main computational experiment covered a parameter space of 4608 configurations and needed more than 100,000 simulation runs to be analyzed appropriately. The analysis indicated that, under most conditions, the “hybrid” structure designed after the ethnographic data was almost always the slowest to reach an average number of cases fixed but, once it did that, it also progressed to become the most successful of the four structures. This is only observable when the entire lifespan of the simulation is considered. The previous study focused on macro patterns as they are informed by the initial and ongoing settings of the simulation. However, it did not tell much about the internal processes that shape, and are shaped by, the way in which the system is configured. This is what this article sets out to do.
Starting from a configuration of parameters that leads the “hybrid” structure to become the most successful (Note 2), this article zooms in on one simulation run by downloading all possible data related to the agents, their characteristics, and the links, both those with a “social” scope (i.e., with other agent-employees or agent-managers) and those with an “operational” scope (i.e., with leaks/cases).
Once this was done, we downloaded 260 datasets, one per week, and created a large dataset with all the information necessary to define teams and their dynamics over time. We used several packages from R Version 4.3.1 and worked in RStudio Version 2023.06.1+524.
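To illustrate this step, the weekly exports could be assembled into a single longitudinal dataset along the following lines (a pandas sketch; the actual analysis was carried out in R, and the file names and columns here are hypothetical):

```python
import pandas as pd

# One export per simulated week (260 in total); paths are hypothetical.
weekly = []
for week in range(1, 261):
    snapshot = pd.read_csv(f"ts105_run/week_{week:03d}.csv")
    snapshot["week"] = week  # tag each snapshot with its time step
    weekly.append(snapshot)

# Long-format panel: one row per agent/link per week.
panel = pd.concat(weekly, ignore_index=True)
```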

3.3. Findings

The first aspect to look at is team composition. The team number is associated with the agent from which the informal network originates (each agent has a unique numerical denomination, starting from 0). For example, Team 0 is the team around agent-manager 0, and Team 27 is the team of agent-employee 27. When the team network originates from an employee, it is constituted by informal work relations that connect various other agent-employees. The management teams are instead more formal networks, and are also made of agent-employees. Figure 1 shows the various networks, established for the sole purpose of dealing with cases that cannot be solved otherwise. The first aspect to notice is the wide variation in team size: a few teams peak, with the 21 members of Team 3, and a few have very low numbers, such as Team 30 with only 2 members. The average number of team members is 9.77, with a standard deviation of 5.35.

3.3.1. Social Network Analysis

A more appropriate visualization is perhaps offered by social network analytical tools. Figure 2 shows the intricate informal network connecting all the agents in the simulation. The three managers stay at the margins of the networks, as if they serve as facilitators for others to connect. Hence, and even without calculations, it is already apparent from Figure 2 that managers are not central to the operations. They may still have to make decisions, but they are not at the core of daily operations.
The three measures presented in Table 1 certify the considerations above by reporting the values of three of the most relevant centrality indices for each agent. Eigenvector centrality is a measure of influence: a node’s connections are weighted by the centrality of the nodes it connects to, relative to every other node in the network. Betweenness centrality counts the number of times a node lies on the shortest paths connecting two other nodes. Finally, closeness centrality measures how short the paths that connect the node to any other node in the graph are, on average; it is calculated from the geodesic distance.
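For readers who wish to reproduce these measures, all three indices are available in standard network libraries. Below is a minimal networkx sketch, run on a built-in toy graph rather than on the simulation data:

```python
import networkx as nx

G = nx.karate_club_graph()  # toy stand-in for the informal work network

ev = nx.eigenvector_centrality(G)    # influence, weighted by neighbors' centrality
btw = nx.betweenness_centrality(G)   # how often a node sits on shortest paths
clo = nx.closeness_centrality(G)     # inverse of the average geodesic distance

# Rank nodes by influence, as Table 1 does for the agents.
ranking = sorted(G.nodes, key=ev.get, reverse=True)
print(ranking[:5])
```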
Table 1 ranks the agents according to influence (i.e., eigenvector); however, as is apparent from the other two indices, agents ranking high in one are likely to score high in the others as well. The information confirms what is visible in the visualized network (Figure 2): agent-managers 0, 1, and 2 are ranked 33, 32, and 25. Given their location at the edges of the network, it is rather obvious that information does not pass through them; hence, their betweenness is low to very low—it ranges from 0 to 1.5 (the maximum value for this index is 15.62, from agent-employee 22). This limits their influence, with eigenvector values ranging from 0.17 to 0.62 (max = 1). The third index, closeness, also scores particularly low, with the longest path of 0.5 associated with agent-manager 0. The agents that show better indices are not necessarily those with larger informal teams. For example, the two agents with the highest eigenvector values are agent-employee 13 and agent-employee 24, with, respectively, 14 and 5 team members. In fact, agent-employee 3 has the highest number of team members with 21; it ranks high in the list, but it is noticeable that other nodes/agents have more influence and overall centrality. Another interesting consideration concerns those agents that score the highest in betweenness but are neither influential nor close. For example, agent-employee 18 has one of the highest betweenness values (i.e., 11.74), due to its position in the network, but scores very low in the other two indices. The highest betweenness value comes from agent-employee 22 that, with 15.62, is very much away from the average of 6.96. One might think that the highest values depend on how many networks (i.e., informal teams) an agent-employee is part of. That is, however, not the case. In fact, agent-employee 22 is part of 15 teams, including the one that originates from it, while the highest number of memberships pertains to agent-employee 29, part of 21 teams. Agent-employee 18 is part of only 10 teams, similar to the 11 of agent-employee 13—the first in the overall ranking—and unlike the agent in the second position of the ranking table, agent-employee 24, part of 20 teams. In other words, there is no correlation between the number of teams an agent is part of and betweenness centrality values.

3.3.2. Cognitive Dynamics

The question to ask in relation to the results above is whether those agents with higher centrality are also those that benefit the most in cognitive terms. Put differently, the position in the network can be an indication of the way in which agents develop their competence—their aptitude towards the problem or task they are dealing with—and of how this is reflected in their performance (leaks/cases fixed).

Competence

The idea is to compare competence change for agents at the top with that for agents at the bottom of the ranking in Table 1. The trends for (log) competence of each team leader (or originator) are presented in Figure 3, where linear regressions summarize the general patterns. Competence can only increase when an agent fixes a case, and this can happen either because the agent has enough competence or as a result of cooperation with other agents. The two most central agents are 13 and 24; both experience a relative increase in competence, especially when compared to the one experienced by the agent-managers (i.e., 0, 1, and 2). Some agent-employees—25, 27, 28, 19, and 11—experience minimal competence growth. When compared with the information in Table 1, it is quite surprising that agent-employee 29, ranked 4th for influence, experiences, on average, a decrease in competence (Figure 3).
From the above, it is apparent that a simple analysis of the structure of informal networks as well as of the most influential nodes does not explain cognitive dynamics, in the sense that competence—i.e., learning from a combination of cognitive resources—is not completely dependent on these aspects.

Dispositions

The following step in the analysis is that of trying to understand competence through cognitive dispositions. As mentioned above, each agent has a disposition to listen ($L_i$) and a disposition to share ($S_i$) information. Thus, it becomes essential to understand whether these two are able to predict a modification in the learning mechanism leading to competence variability among agents.
Probably one of the simplest ways to isolate this effect is to use a linear mixed-effects regression model [32] that takes care of the longitudinal nature of the data (its evolution over time) while, at the same time, taking the groups (i.e., teams) into account. Thus, we created a model that regresses the agent’s competence on time and on the dispositions to share and to listen of both the agent and the others in the team.
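Such a specification could be written, for instance, with the MixedLM routine in statsmodels; this is a sketch under the assumption of a long-format dataset like the hypothetical `panel` above, with column names standing in for the variables just described:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format panel assembled as in the earlier sketch (hypothetical file):
# competence           : c_i of the agent originating the link (dependent variable)
# time                 : week index, 1..260
# share_e1, share_e2   : disposition to share at the origin / other end of the link
# listen_e1, listen_e2 : disposition to listen at the origin / other end of the link
# team                 : grouping factor (random intercept per team)
panel = pd.read_csv("ts105_panel.csv")

model = smf.mixedlm(
    "competence ~ time + share_e1 + share_e2 + listen_e1 + listen_e2",
    data=panel,
    groups=panel["team"],
)
result = model.fit()
print(result.summary())  # fixed effects and variance components
```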
The first result is that almost all variance (≈99%) in the dependent variable (i.e., competence) is explained by differences within teams rather than by those between them; the grouping is meaningful. Concurrently, the effect of time seems to be irrelevant, with an estimate $\beta_t = 0.0000175$ (s.e. $= 0.00007$), $p = 0.8091$. On the contrary, the disposition to share of the other agent shows a strong effect, with $\beta_{S_{e2}} = 0.071$ (s.e. $= 0.003$), $p < 0.0001$, where $S_{e2}$ is the disposition to share of the other end of the connection in a given team. The disposition to listen of the agent also has an effect on competence increase, with $\beta_{L_{e1}} = 0.021$ (s.e. $= 0.004$), $p < 0.0001$, where $L_{e1}$ is the disposition to listen of the node from where the link originates. The coefficients are to be interpreted as the average increase that the selected independent variable produces in the dependent variable at each time step. Observed from this perspective, in spite of the apparently low value of the estimate, the effect is actually extremely powerful, given that we have 260 data points in time.
The other two variables considered are the disposition to share of the agent and the disposition to listen of the other agent. From a purely rational perspective, these seem to be irrelevant, in that competence should be a function of the information gathered by the agent, not of that transferred to other agents. Results support this assumption only partially. On the one hand, a positive attitude of an agent towards sharing information has a positive effect on its own competence, with the largest estimate in the regression, $\beta_{S_{e1}} = 0.175$ (s.e. $= 0.003$), $p < 0.0001$. This is surprising, and it probably indicates that the exchange needs to be bidirectional to actually work. In other words, cognition needs to be considered a system in order to work (more on this below).
On the other hand, it is surprising to see that the disposition to listen of the agent at the other end of a connection affects the development of competence of the agent at the origination point negatively, with an estimate of $\beta_{L_{e2}} = -0.023$ (s.e. $= 0.004$), $p < 0.0001$. Again, this is probably related to a systemic effect that is discussed in Section 4.
Figure 4 is a visual representation of these regression results, crossed with the findings of the structural network analyses above. In order not to overestimate the role of any one of the centrality indices presented in Table 1, we created a Composite Centrality Index (CCI), a simple weighted average of eigenvector (EV), betweenness (B), and closeness (C) centrality:
$$ CCI = \frac{1}{3}\, EV + \frac{1}{3}\, C + \frac{1}{3}\, \frac{B}{\max(B)}. $$
We then categorized the data points into three influence groups, split at the 1st and 3rd quartiles. Data are organized around these three groups in the panes shown in Figure 4. The y axis is the logarithm of competence for the team leader, the x axis is the disposition to listen ($L_{e1}$), and colors represent the disposition to share ($S_{e1}$) of the team leader. The plot shows that competence is higher when the CCI is high (right pane vs. left and middle panes), indicating that structural elements of the network do have an effect. At the same time, however, these effects can only be seen in combination with the two cognitive dispositions, which need to be relatively high in order for competence to occupy the higher positions in Figure 4.
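To make the construction concrete, the CCI and the three influence groups can be derived directly from the centrality columns. The following pandas sketch uses a handful of rows from Table 1; note that on this subset max(B) differs from the 15.626 of the full table:

```python
import pandas as pd

# A few rows from Table 1 (agent, EV, B, C), for illustration only.
cci_df = pd.DataFrame({
    "agent": [13, 24, 20, 29, 18, 0],
    "EV":    [1.000, 1.000, 0.964, 0.925, 0.574, 0.174],
    "B":     [10.077, 10.077, 9.224, 7.575, 11.740, 0.000],
    "C":     [0.800, 0.800, 0.780, 0.762, 0.667, 0.508],
})

# CCI = (EV + C + B/max(B)) / 3, as in the formula above.
cci_df["CCI"] = (cci_df["EV"] + cci_df["C"]
                 + cci_df["B"] / cci_df["B"].max()) / 3

# Three influence groups split at the 1st and 3rd quartiles of CCI.
q1, q3 = cci_df["CCI"].quantile([0.25, 0.75])
cci_df["influence"] = pd.cut(
    cci_df["CCI"],
    bins=[-float("inf"), q1, q3, float("inf")],
    labels=["low", "mid", "high"],
)
```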

4. Implications and Conclusions

This article started by presenting the assumption that cognition is systemic; it provided the theoretical arguments to support this claim and then used a computational simulation as an example to observe and reflect on cognitive dynamics.
The analysis of the agent-based TS 1.0.5 Model is oriented towards unearthing cognitive dynamics related to agents’ learning mechanisms, which ultimately affect their competence in dealing with the task at hand. The analytical approach has been incremental: it started from the structural elements and then assessed the dynamics that emerged within teams. Implications are organized into two areas: (a) the workings of the meso domain (i.e., the elements), and (b) the importance of the distributive processes.

4.1. Systemic Elements

The analysis of the data from the simulation shows that competence is nested within teams. In part, this derives only indirectly from how the simulation was developed. In fact, the simulation model makes it such that competence increases from cases that are successfully solved. This allows agents to acquire new knowledge and develop their competence further. In principle, agents with high levels of competence are more likely to successfully deal with the task at hand, and they may learn from this activity. When competence is not developed enough, the informal network of other agents comes into play; in this case, the solution of cases depends only on successful exchanges with other connected agents. The simulation does not give “preference” to these social interactions, in the sense that parameter values are distributed normally around means and standard deviations, and the procedures split the agent population in two by using the mean as a threshold. Nevertheless, the vast majority of agents in a team do develop their competence as a result of being embedded in such teams.
This means that the social elements of cognitive dynamics lie closer to the core than claimed by both traditional cognition [8,11] and distributed cognition scholars [5]. It is the formation of social relationships around each agent that constitutes the basis for “learning” to happen, as reflected in competence development. In light of this feature, we have called this phenomenon social organizing elsewhere [3,33]. This also means that a “center” of such a system is very difficult to isolate, and standard measures of centrality (see Table 1 above) do not fully capture cognitive dynamics. They do provide information on positions and roles in a (nested) structure of informal networks, but the knowledge that can be gathered from them is about the potential of each node. In this article, social network analysis has been pivotal in understanding that more information was necessary in order to analyze the dynamics of the cognitive systems. Furthermore, there are as many cognitive systems as there are teams, although these are, as anticipated, nested—the same agent is part of multiple teams.
The fluidity with which agents develop competence is another element that emerges from the analysis. This means that, in spite of their centrality and initial parameter values, it is very difficult to predict how the cognitive dynamic of each agent is going to evolve, or where it is going to end, when observed from the start of the simulation. Some agents may develop competence as a result of an emergent process, leading to unexpected outcomes (i.e., the final competence level) [28].

4.2. Systemic Processes

Structural aspects—the teams in our simulation example—are very relevant to the understanding of cognitive dynamics. At the same time, they are incapable of providing a full explanation. A structure needs to be understood through the dynamics it generates. The elements—social and other resources—of a structure interact to give rise to configurations that cannot be understood by analyzing any of them in isolation. This is why the elements of a system must be understood in connection with their actual and potential capabilities. This, in turn, may affect the structure itself, among other aspects (Note 3). For this reason, the mechanisms with which agents interacted in the simulation were not fully in line with the results of the social network analysis.
Cognitive dispositions play a significant role in any cognitive system [4,34]. However, what emerges from the simulation results is that they have to be considered from a systemic perspective. This means that cognition has no actual “center”; its distributive features make it very difficult to understand how, when, and why an element is central. The rather surprising result that the competence of agent x depends on the disposition to listen of another agent y points to this proposition. Furthermore, the other result, indicating that the disposition to share of agent x affects its own competence development negatively, also supports this proposition. If we reinterpret the results by thinking systemically about them, then a candidate (apposite) explanation is that it is the combination of all the elements together, and their interactions, that affects cognition. In light of this perspective, sharing may redirect resources toward others, and it can be thought of as an investment (a cost) paid to build trust and other relationships. In other words, it is a negative effect that creates other, more powerful, positive effects. Again, it is the system that matters, not the role of the individual element.

4.3. Concluding Remarks

This article started with the proposition that cognition is systemic. Building on it, the claim is that the type of system that should be considered to understand, define, and analyze human cognition shows features typical of complexity. As a way to define more precise boundaries for this proposition—i.e., that human cognition is better described as a complex system—we have used an agent-based computational simulation model. We zoomed in on a particular configuration of parameters of the TS 1.0.5 Model to study the cognitive dynamics behind competence development. In the model, as well as in most observed cognitive systems, competence constitutes a body of knowledge that is applied to specific domains to perform tasks, solve problems, or, more generally, to perform one’s job. Traditionally, just like cognition, competence has been considered as something pertaining to the individual. We have come across something that departs quite significantly from this perspective.
There are at least two major remarks that can be drawn from the theoretical background and the example presented in this paper. One is that the type of complexity we have outlined is very much in line with systems that adapt, self-organize, reconfigure, show resilience, weaken dependence on initial conditions, and are extremely difficult to predict. The assumptions made in the TS 1.0.5 Model are fairly simple. Agents are not characterized by the standard features of human beings; they only have characteristics that are strictly functional to the task to be performed. These characteristics are also in line with the social environment in which the agents operate. In spite of this abstraction, the results discussed in this article indicate that cognition clearly shows dynamics that can only be characterized in the realm of complexity. Furthermore, the point here is that, if such a simple abstract representation already produces a fair level of complexity, there are good reasons to believe that complexity can also be found when the model increases the complexity of its agents’ characteristics (human beings, in the observed system).
The other consideration concerns the elements that make up this complexity. The theoretical point presented in the first part of the article, namely the importance of social mechanisms in understanding and analyzing cognitive dynamics, is probably the kernel of what we have shown in this article. The mechanisms of cognitive distribution that allow agents to learn, and thus increase their competence, are, in essence, social interactions. These happen in a given frame and with given characteristics of the agents, yet their dynamics are such that they can shape (or re-shape) the system. Neither the frames (e.g., the network structure) alone nor the individual characteristics (e.g., competence, dispositions) alone can explain cognitive dynamics. This supports the original proposition that the meso domain is central to understanding complex systemic cognition.
The portion of the TS 1.0.5 Model presented in this paper is a relatively simple snapshot of a broader version that has been used to analyze the effect of team structures on performance. Future research may look at sensible extensions of this model to move the inquiry further. Among the many possibilities, a modified version of this model that develops the characterization of agents by including historical information, expertise, skill levels, and a more nuanced account of informal relations among agents would benefit our understanding of the making of the meso domain, as well as define the system in more detail. Another version of the model could compare different perspectives on competence, in an attempt to understand when and if a more traditional (isolated) account matches the explanatory power of the distributed account presented here. Finally, the model could be extended to include more dynamic perspectives, with agents leaving a team and joining another, or a hiring/firing process. In this case, adaptation would play a role, and it could be interesting to understand how big that role could be.
Perhaps we can end the article with a call to cognition scholars, on the one side, and to complex systems scholars, on the other. The former should consider an a-centric complex system perspective when studying human cognition; such a perspective makes it possible to increase the explanatory power of the analyses while, at the same time, providing a more accurate representation. We invite the latter to consider the domains across and between the micro and the macro—what we called the meso here and elsewhere—when studying complex social systems.

Author Contributions

Conceptualization, D.S., R.G.-A. and M.N.; methodology, D.S. and M.N.; modeling, D.S. and M.N.; formal analysis, D.S.; resources, D.S., R.G.-A. and M.N.; data curation, D.S.; writing—original draft preparation, D.S. and R.G.-A.; writing—review and editing, M.N.; visualization, D.S.; project administration, D.S. and R.G.-A.; funding acquisition, D.S. and R.G.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Velux Foundation, grant number 38917.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the ethnographic study (not reported in this article) used to develop the original agent-based model.

Data Availability Statement

The TS 1.0.5 Model is available on the OpenABM database. Data for this article can also be found there. https://www.comses.net/codebases/3a340ef1-1992-4a5a-8e09-5dac9f541720/releases/1.0.5/ (accessed on 29 June 2024).

Acknowledgments

All authors are extremely grateful to Maria S. Festila, member of our team in the project Determinants of Resilience in Organizational Networks (DRONe), for her ethnographic work that served as root and inspiration for the TS 1.0.5 Model.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Notes

1. This simply means that the type of agent-leak is consistent with the area of competence of the agent-employee.
2. Apart from the parameter values indicated in the text, the simulation also fixes the case importance threshold at 0, the competence increase at $\delta c = 0.02$, and the relationship range at $\phi = 15$ (its maximum).
3. This was not possible to observe through the simulation, because we have selected a relatively static version of it to analyze in this article. However, this could be a good topic for future research studies.

References

  1. Cowley, S.J.; Vallée-Tourangeau, F. (Eds.) Cognition beyond the Brain. Computation, Interactivity and Human Artifice, 2nd ed.; Springer: London, UK, 2017. [Google Scholar]
  2. Gahrn-Andersen, R.; Festila, M.S.; Secchi, D. Systemic Cognition: Sketching a Functional Nexus of Intersecting Ontologies. In Proceedings of the Multiple Systems: Complexity and Coherence in Ecosystems, Collective Behavior, and Social Systems, Cagliari, Italy, 4–5 May 2023; Minati, G., Penna, M.P., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 89–99. [Google Scholar]
  3. Secchi, D.; Gahrn-Andersen, R.; Cowley, S.J. (Eds.) Routledge Studies in Organizational Change & Development. In Organizational Cognition: The Theory of Social Organizing; Routledge: New York, NY, USA; Abingdon, UK, 2022; Volume 28. [Google Scholar]
  4. Secchi, D. Computational Organizational Cognition. A Study on Thinking and Action in Organizations; Emerald: Bingley, UK, 2021. [Google Scholar]
  5. Hutchins, E. Cognition in the Wild; MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  6. Hutchins, E. Distributed cognition. Int. Encycl. Soc. Behav. Sci. Elsevier Sci. 2000, 138, 2068–2072. [Google Scholar]
  7. Magnani, L. Morality in a Technological World. Knowledge as a Duty; Cambridge University Press: New York, NY, USA, 2007. [Google Scholar]
  8. Newell, A.; Simon, H.A. Computer simulation of human thinking. Science 1961, 134, 2011–2017. [Google Scholar] [CrossRef] [PubMed]
  9. Newell, A.; Simon, H.A. Human Problem Solving; Prentice-Hall: Englewood Cliffs, NJ, USA, 1972. [Google Scholar]
  10. Chomsky, N. Rules and Representations; Columbia University Press: New York, NY, USA, 1980. [Google Scholar]
  11. Fodor, J.A. Psychosemantics. The Problem of Meaning in the Philosophy of Mind; MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
  12. Secchi, D.; Neumann, M.; Gahrn-Andersen, R.; Festila, M.S. Distributed Competence: How Team Structure Affects Task Performance. In Proceedings of the European Academy of Management Annual Conference, Bath, UK, 25–28 June 2024. [Google Scholar]
  13. Goodwin, C. Professional vision. Am. Anthropol. 1994, 96, 606–633. [Google Scholar] [CrossRef]
  14. Gahrn-Andersen, R. Transcending the situation: On the context-dependence of practice-based cognition. In Context Dependence in Language, Action, and Cognition; Ciecierski, T., Grabarczyk, P., Eds.; Epistemic Studies; De Gruyter: Berlin, Germany, 2021; pp. 209–228. [Google Scholar]
  15. Noë, A. Concept pluralism, direct perception, and the fragility of presence. In Open Mind; MIND Group: Frankfurt am Main, Germany, 2014. [Google Scholar]
  16. Hutchins, E. The cultural ecosystem of human cognition. Philos. Psychol. 2014, 27, 34–49. [Google Scholar] [CrossRef]
  17. Dopfer, K. The origins of meso economics: Schumpeter’s legacy and beyond. J. Evol. Econ. 2012, 22, 133–160. [Google Scholar] [CrossRef]
  18. Guo, K.; Yolles, M.; Fink, G.; Iles, P. The Changing Organization: Agency Theory in a Cross-Cultural Context; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  19. Yolles, M. Organizations as Complex Systems: An Introduction to Knowledge Cybernetics; IAP: Greenwich, CT, USA, 2006. [Google Scholar]
  20. Yolles, M. The complexity continuum, Part 1: Hard and soft theories. Kybernetes 2018, 48, 1330–1354. [Google Scholar] [CrossRef]
  21. Perry, M. Socially distributed cognition in loosely coupled systems. In Cognition beyond the Brain: Computation, Interactivity and Human Artifice; Cowley, S.J., Vallée-Tourangeau, F., Eds.; Springer: Dordrecht, The Netherlands, 2013; pp. 147–169. [Google Scholar]
  22. Herath, D.B. Business Plasticity through Disorganization; Emerald: Bingley, UK, 2019. [Google Scholar]
  23. Herath, D.B. The Primacy of “Disorganization” in Social Organizing. In Organizational Cognition: The Theory of Social Organizing; Secchi, D., Gahrn-Andersen, R., Cowley, S.J., Eds.; Taylor & Francis: Abingdon, UK, 2022; pp. 198–212. [Google Scholar]
  24. Minati, G. Multiple systems. In Multiple Systems: Complexity and Coherence in Ecosystems, Collective Behavior, and Social Systems; Minati, G., Penna, M.P., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 3–15. [Google Scholar]
  25. Bateson, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology; University of Chicago Press: Chicago, IL, USA, 2000. [Google Scholar]
  26. Secchi, D. Boundary Conditions for the Emergence of ‘Docility’: An Agent-Based Model and Simulation. In Agent-Based Simulation of Organizational Behavior. New Frontiers of Social Science Research; Secchi, D., Neumann, M., Eds.; Springer: New York, NY, USA, 2016; pp. 175–200. [Google Scholar]
  27. Troitzsch, K.G. Historical introduction. In Simulating Social Complexity. A Handbook, 2nd ed.; Edmonds, B., Meyer, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; Chapter 2; pp. 13–22. [Google Scholar]
  28. Miller, J.H.; Page, S.E. Complex Adaptive Systems. An Introduction to Computational Models of Social Life; Princeton University Press: Princeton, NJ, USA, 2007. [Google Scholar]
  29. North, M.J.; Macal, C.M. Managing Business Complexity: Discovering Strategic Solutions with Agent-Based Modeling and Simulation; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
  30. Grimm, V.; Revilla, E.; Berger, U.; Jeltsch, F.; Mooij, W.M.; Railsback, S.F.; Thulke, H.H.; Weiner, J.; Wiegand, T.; DeAngelis, D.L. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology. Science 2005, 310, 987–991. [Google Scholar] [CrossRef] [PubMed]
  31. Edmonds, B.; Meyer, R. (Eds.) Simulating Social Complexity. A Handbook, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  32. Lindstrom, M.J.; Bates, D.M. Newton–Raphson and EM algorithms for linear mixed-effects models for repeated-measures data. J. Am. Stat. Assoc. 1988, 83, 1014–1022. [Google Scholar]
  33. Secchi, D.; Cowley, S.J. Organisational cognition: What it is and how it works. Eur. Manag. Rev. 2021, 18, 79–92. [Google Scholar] [CrossRef]
  34. Secchi, D.; Bardone, E. Super-docility in organizations. An evolutionary model. Int. J. Organ. Theory Behav. 2009, 12, 339–379. [Google Scholar] [CrossRef]
Figure 1. Number of members in the informal work teams originating from employees and managers.
Figure 2. Social network of the work teams (blue nodes: managers; red nodes: employees).
Figure 3. Competence (log) for each informal team leader (linear regressions and confidence intervals).
Figure 4. Competence (log) for each informal team leader explained by the (log) disposition to listen of the agent. The data points are colored using the disposition to share of the team leader. Each pane is split over a centrality index that combines the three measures in Table 1.
Table 1. Eigenvector, betweenness, and closeness centrality of the agents.
#    Agent   EV      B       C
1    13      1.000   10.077  0.800
2    24      1.000   10.077  0.800
3    20      0.964   9.224   0.780
4    29      0.925   7.575   0.762
5    3       0.924   8.294   0.762
6    21      0.894   6.414   0.744
7    7       0.868   7.508   0.744
8    6       0.840   9.658   0.744
9    12      0.816   6.872   0.727
10   4       0.816   13.230  0.727
11   11      0.812   5.077   0.711
12   26      0.811   4.312   0.711
13   10      0.811   4.312   0.711
14   16      0.811   7.475   0.711
15   22      0.811   15.626  0.744
16   19      0.810   7.184   0.711
17   25      0.800   8.518   0.727
18   9       0.787   5.021   0.711
19   15      0.739   5.128   0.696
20   5       0.709   6.496   0.696
21   32      0.706   7.763   0.696
22   23      0.703   6.445   0.696
23   27      0.700   4.220   0.681
24   31      0.655   5.727   0.681
25   1       0.622   1.507   0.615
26   28      0.602   9.920   0.681
27   17      0.596   3.745   0.653
28   8       0.579   6.516   0.667
29   30      0.578   7.106   0.667
30   18      0.574   11.740  0.667
31   14      0.545   5.679   0.653
32   2       0.411   1.554   0.582
33   0       0.174   0.000   0.508
Note. EV: eigenvector centrality; B: betweenness centrality; C: closeness centrality.