1. Introduction
Our world is highly interconnected, and decision-making by managers is made more difficult by unprecedented complexity arising from several sources, including the increasing role of stakeholder capitalism [
1]; increasing diversity and complexity of social structures [
2]; rapid technological advancement, including AI [
3]; changing population dynamics, including a growing global population [
4]; increasing urbanisation that is limiting natural resources [
5]; and emergent phenomena, such as unpredictable market behaviour, increasing natural disasters, and often large-scale social movements [
6,
7,
A foundational force behind the complexity of our world is the interdependence of societies, economies, and technologies, which has reshaped human interactions [9]. Global networks of transportation, communication, and trade now connect people through digitisation; financial systems and economies are therefore interdependent, generating emergent and unpredictable phenomena and adding layers of complexity to modern life [10].
One grand challenge we face is that of sustainable development that balances the needs of the economy, the environment, and social well-being. Sustainability is often framed in terms of grand challenges of unprecedented scale, interconnectedness, and complexity, requiring global efforts in understanding and response [
11]. De Ruyter et al. [
12] refer to the Sustainable Development Goals adopted by the UN in 2015 and the need to back up academic conversations with targeted societal and political action and with industry collaboration. The grand challenges of climate change, food security, and inequality are deeply interconnected within our globalized world. These highly interdependent systems are difficult to understand and control, and their instability may pose serious threats to society, requiring global cooperation [
13]. In addition, technological disruption raises issues around privacy and data protection, and rapidly escalating digital developments, including AI, create both benefits and risks. Some are concerned about the impact of automation on employability [
14]. Others raise concerns about misinformation and the capability of agents to manipulate people’s opinions [
15]. Both automation and misinformation are significant difficulties that demand a multidisciplinary response involving technological innovation, ethical guidelines, and policy interventions, as well as enhanced management skills to govern under the weight of these potential problems. Clearly, making wise decisions that take account of all of the relevant information under such complex conditions is difficult and carries risk.
The interaction between people and machines itself represents a significant grand challenge within contemporary society. While enormous progress has been made in developing powerful computational systems, the creation of truly intuitive and meaningful interactions between people and machines is still some way off. Human communication is sophisticated and nuanced, and this can lead to frustration when people have to adapt their behaviour to fit the machines’ limitations, rather than vice versa. While the focus is often on how people could adapt to machines, perhaps it would be wiser to consider how machines can be designed to better serve human ways of thinking. Interactions that complement human thinking rather than competing with it would be valuable. The goal would be to create collaborative systems that enhance human capabilities while respecting human involvement and human judgement. As machines become more and more integrated into our lives, questions of privacy inevitably emerge. We also want to ensure that human-to-human interaction does not diminish as human–machine interaction (HMI) grows. How do people learn to trust machines? Machine decision-making processes need to be transparent and understandable to people. This matter encompasses a range of issues, including ethical concerns, social implications, technological impacts on human behaviour, management decision-making, and societal structures [
16,
17]. This paper examines the interaction between humans and machines and critically analyses how this interaction can be leveraged to face these difficulties effectively, proposing a conceptual model for thriving in conditions of complexity and high levels of uncertainty. Traditional decision-making systems have often been hierarchical, but this style of decision-making is ill-suited to the complexity of the current era. Cooperative human–machine decision-making is necessary [
18].
The major contributions of this paper are as follows:
This paper advances the understanding of Cybernetics 3.0 as a framework for addressing human–machine interaction challenges, emphasising how this approach enables the integration of human agency, ethics, and values into complex systems thinking and decision-making.
Through causal loop diagrams, this article provides a novel visualisation of the dynamic feedback relationships between human and machine decision-making capabilities, showing how these can be optimised via carefully designed collaborative systems.
This paper develops a practical model for human–AI collaboration in healthcare settings, illustrating how cybernetic principles can inform the design of decision systems, leveraging both human judgement and machine capabilities whilst maintaining appropriate oversight and ethical considerations.
This paper discusses the potential of the field of cybernetics as a way of combining and integrating the potentials of AI and humans, to give our planet the best chance of flourishing into the future. Harari [
19] reflects on the future of human–machine interactions, supporting the potential for cybernetic systems to reshape society through collaboration. Cybernetics is an interdisciplinary field that explores control and communication in complex systems including both living organisms and machines. Cybernetics views humans and machines as information processing systems which, through cooperation, can lead to better design and greater collaborative capabilities [
20]. The field investigates how systems use information feedback and regulation to adapt, adjust their behaviour, and maintain stability and equilibrium as the environment changes. Information flow is obviously critical. How humans interact with and are influenced by technology is a critical feature of contemporary life, and cybernetics presents a promising way of optimising the interaction between humans and machines for the benefit of all and of assisting managers in making wise and sustainable decisions. Cybernetics in its more recent form represents an evolution beyond earlier versions by emphasising human agency and intentionality in complex systems. This approach views machines as extensions of human action rather than independent entities, focusing on how human actions shape systems while still acknowledging machine capabilities. This is particularly relevant for addressing current complex decision-making environments involving both humans and AI. The approach helps to bridge the technical and social aspects of human–machine systems and provides a framework for ethical oversight and human wisdom in technological systems.
HMI refers to the interface and relationship between humans and technological systems. This has evolved from simple user interfaces to complex collaborative decision-making. HMI now encompasses cognitive, emotional, and ethical dimensions, going far beyond just technical aspects. This is critical for addressing the current grand challenges requiring both human and machine capabilities. It includes the consideration of human agency, trust, and ethical oversight. This perspective is particularly important as AI systems become more sophisticated and autonomous.
The remainder of this paper is structured as follows:
Section 2 examines the evolution of cybernetics and introduces Cybernetics 3.0 as a framework for HMI.
Section 3 explores systems thinking in complex decision-making scenarios.
Section 4 analyses HMI and decision-making capabilities.
Section 5 discusses the implications of Cybernetics 3.0 for human–machine decision-making (HMD).
Section 6 presents a cybernetic model of human–AI collaboration using causal loop diagrams, illustrated through a practical healthcare application.
Section 7 concludes with implications for practice, and Section 8 discusses limitations and directions for future research.
2. Cybernetics 3.0 and Human–Machine Interactions
The overarching theoretical framework of this paper is Cybernetics 3.0. The field of cybernetics, its name derived from the Greek word “kybernetikos”, meaning “good at steering”, was established by Norbert Wiener in the 1940s and is widely acknowledged as a distinct scientific field. It remains highly relevant today as a discipline that examines systems, emphasizing the interactions and dynamics among humans, technology, and the environment. Systems thinking is a foundational perspective for understanding complex interconnected issues, giving a holistic approach to viewing problems and emphasising the relationships and interactions between various components. Systems thinking is powerful as a way of understanding complex relationships involving linkages and feedback loops. To take a simple example, two nations may hate each other because they do not know each other, and they will not get to know each other because they hate each other. Actions have consequences. We are all familiar with the concept of the self-fulfilling prophecy as a feedback loop: a self-reinforcing system in which the initial belief, whether true or false, leads to its own confirmation. Cybernetics as a discipline builds upon and extends systems thinking in its recognition that systems are complex, consisting of many feedback loops that influence their operation, variability, and evolution over time towards desired outcomes. Cybernetics, therefore, studies how systems regulate themselves and maintain stability, and this can give key insights into how complex systems can be guided and managed. The primary objective of the field is to examine how best to control complex systems, be this in transportation, services, health, education, or many other domains [
21]. The evolution of cybernetics provides valuable insights into the changing understanding of HMI. Over time, there has been a move away from understanding machines as being separate from human life towards viewing machines as being extensions of people and reflections of human action.
Cybernetics 1.0, developed by Arturo Rosenblueth and Norbert Wiener in the 1940s [
22], was concerned with observed systems and feedback. Cybernetics 1.0 focused on analogies between living beings and machines, particularly the application of biological feedback mechanisms to technology. Wiener realized that feedback could be replicated in machines if they met certain criteria, including the presence of an input and an output, a sensor to detect the effector’s values, and an instrument to compare this state to a desired final state. Cybernetics 1.0 viewed machines as imitations of biological systems, replicating feedback mechanisms.
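To make Wiener’s criteria concrete, the following minimal Python sketch (a hypothetical thermostat with illustrative parameter values, not drawn from the sources cited here) shows an input and output, a sensor reading the effector’s value, and a comparator measuring the gap to a desired final state:

```python
# A minimal negative-feedback loop in the spirit of Cybernetics 1.0:
# input/output, a sensor observing the effector, and a comparator
# measuring the distance to a desired final state (values illustrative).

def run_feedback_loop(setpoint: float, initial: float,
                      gain: float = 0.3, steps: int = 20) -> list:
    """Drive a system variable towards a setpoint by acting on the sensed error."""
    state = initial
    history = [state]
    for _ in range(steps):
        sensed = state                  # sensor: detect the effector's current value
        error = setpoint - sensed       # comparator: gap to the desired final state
        state += gain * error           # corrective action proportional to the error
        history.append(state)
    return history

# A room at 14 degrees regulated towards 21 converges as the error shrinks.
print([round(t, 2) for t in run_feedback_loop(setpoint=21.0, initial=14.0)])
```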
Cybernetics 2.0 emerged in the 1970s and shifted the focus from observed systems to observing systems. Heinz von Foerster, Margaret Mead, Gregory Bateson, Humberto Maturana, and Francisco Varela were key figures in this development [
23], introducing the “observer subject” and rejecting the belief in objectivism of the first wave. Maturana and Varela’s concept of autopoiesis, which describes a system where the observer and the observed are interlinked in a continuous feedback loop, became a foundational concept in Cybernetics 2.0. However, emphasising the self-referential nature of consciousness, where the ego primarily interacts with itself, leads to an overly subjective perspective that reduces cybernetics to a theory of how we acquire knowledge. While Cybernetics 2.0 attempted to explore the observer’s role more deeply and challenged the traditional view that the observer and the observed are separated, it was not able to fully overcome the tendency towards subjectivism.
Cybernetics 3.0, developed by Luis de Marcos Ortega, Carmen Flores Bjurström, and Fernando Flores Morador, seeks to overcome the limitations of both Cybernetics 1.0 and 2.0 by understanding self-organization not as an inherent property of systems but as a product of human action, emphasizing the role of people in the system [
24]. This approach sees machines as human surrogates rather than as independent entities and considers feedback, commonly associated with both living beings and machines in earlier cybernetic thought, to be a specific type of “human doing”. Cybernetics 3.0 thus reframes the relationship between people and machines, recognizing the human origin of feedback and focusing on the specific role of human action in shaping systems. By emphasizing human action as the basis for organization, Cybernetics 3.0 introduces a new way of thinking about the observer and the observed. The observer is no longer simply passively observing a system: instead, the observer is actively engaged in creating and shaping the observed system through their actions.
This perspective has profound implications for how we understand the grand challenge of HMI: if we see machines as extensions of human action, then humans bear a greater responsibility for the actions of machines and their consequences. There is co-evolution, in which human action shapes the machines’ development and machines in turn influence human actions. Cybernetics as a science aims to ensure that both people and the environment can benefit from the evolution of technology, and gives us a framework to understand the systems in which we are embedded (including ourselves) as complex adaptive systems that are evolving continuously and adapting to our environment [
25]. Cybernetics 3.0 highlights the need for ethical considerations in the design and development of machines, recognizing that these choices are not neutral but instead reflect human values and intentions. We must consider both the components of a system and their interactions, a concept that is necessary for understanding complex systems and their behaviour. By recognising the human origin of feedback and emphasising human action, Cybernetics 3.0 provides a more sophisticated and human-centred perspective on the interaction between humans and machines, and the way in which mutual shaping occurs between people and technology. This has implications for both individual and collective well-being. This perspective is significant in addressing the complex challenges and opportunities presented by technological advancements, particularly in the area of Web 3.0 and the increasing integration of AI into various aspects of life. The implications for human–machine decision-making (HMD) are many and varied.
Feedback mechanisms are a core concept in complex systems management and need to be specifically designed so that critical human knowledge is integrated. Feedback mechanisms allow self-regulation and system stability, and they also allow systems to adapt and evolve. Continuous learning loops between human and machine components lead to the co-evolution of human and machine capabilities and help to detect and correct errors. In terms of integrating human knowledge, adaptive decision systems incorporate human feedback, and trust oversight loops are needed to maintain human control. Cybernetics 3.0 views feedback as human-directed action that shapes systems. System components will include self-regulation and the maintenance of stability, learning and adaptation processes, HMI loops, detection and correction of errors, and performance monitoring and improvement. To integrate such feedback mechanisms, there will be core learning loops between human decisions and AI capabilities, with trust oversight mechanisms to ensure appropriate control. Recommendation systems should incorporate human judgement. Feedback relationships can be clearly visualised through causal loop diagrams, and there needs to be a balance between automation and human agency. Overall, then, feedback mechanisms are essential: they need to support human agency while enabling system adaptation and learning, with humans actively shaping rather than simply observing system behaviour.
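As a hedged illustration of such a mechanism, the sketch below (all class and method names are hypothetical, not an established API) wires a human-correction loop into a machine recommender: each human override is recorded as feedback and tightens oversight, while sustained agreement slowly relaxes it:

```python
# Hypothetical human-in-the-loop feedback wiring: the machine proposes,
# the human may override, and every override is recorded as feedback that
# also tightens oversight (a trust oversight loop in miniature).

class HumanOversightLoop:
    def __init__(self, oversight: float = 0.5):
        self.oversight = oversight   # 0..1: share of decisions a human must review
        self.corrections = []        # accumulated human feedback for later retraining

    def machine_recommendation(self, score: float) -> str:
        return "approve" if score > 0.5 else "refer"

    def record_human_decision(self, machine_choice: str, human_choice: str) -> None:
        if human_choice != machine_choice:
            # Disagreement: store the correction and raise oversight (trust falls).
            self.corrections.append((machine_choice, human_choice))
            self.oversight = min(1.0, self.oversight + 0.1)
        else:
            # Agreement: oversight relaxes slowly (trust is rebuilt gradually).
            self.oversight = max(0.1, self.oversight - 0.02)
```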
Having established the theoretical foundations of Cybernetics 3.0, the next section examines how systems thinking can be applied to complex decision-making contexts and the challenges of siloed approaches.
3. Systems Thinking in Complex Decision-Making Contexts
The deep interdependence we now see has developed quite quickly. The impacts are only now becoming clearer. Emergence, or the arising of novel properties or behaviours at the macro level that are not present within the individual components, is a fundamental concept in complex systems [
26]. Some forms of unexpected emergence from these linkages will be good, but some will be bad. We have seen many examples of the way catastrophic failures can occur due to this interdependence. The 2008 Global Financial Crisis (GFC), as an example, demonstrates cascading failures that can arise from tightly coupled systems. We can see that problems in one sector can spread through financial networks to affect areas of life around the world: local problems can quickly become global, affecting people far removed from the original issue. Goldin and Vogel [
27] identify the failure of sophisticated global institutions to manage underlying systemic risk. The pandemic is another example of the interconnectedness of systems producing damaging emergence. More integrated, systems-level thinking would have highlighted these interconnections. A multidisciplinary approach that draws on information from various perspectives could enable superior engagement with people, especially vulnerable people [
28,
29]. As noted by Sturmberg et al. [
30], awareness of the many interdependencies among the various aspects of the health system would have enabled a much better response and the building of resilient and healthy communities. The pandemic response exposed a critical flaw in modern systems: the lack of integration and collaboration between various fields of knowledge. Identifying the connections between the various aspects of health and well-being would have led to a wiser and more inclusive response [
31]. In fact, the pandemic is a powerful illustration of the dangers of compartmentalisation and of failing to pick up on causal connections. Recognising the connections between human health and planetary health, and fostering collaboration across networks, enables the building of more resilient and equitable systems that will be more capable of navigating future crises as they arise.
There are unique difficulties in making decisions in complex systems due to the interconnected nature of the components, nonlinear relationships, and the dynamic interactions in these environments [
32]. Complex systems such as ecosystems, cities, or financial markets exhibit emergence, and this cannot be predicted by analysing the individual components in isolation. Compartmentalisation and a silo mentality create an “us” versus “them” mindset. Sometimes people are not aware of the existence of a silo mentality. As noted by Cilliers and Greyvenstein [
33], people who are inside a silo may believe that they are actually seeing the problem in a holistic way. Silos may encourage groupthink and a sense of superiority. A scoping review by Bento et al. [
34] noted that siloed organisational behaviour does not sit well with complexity, which requires instead webs of interaction and communicating networks. A silo mentality is in effect the absence of systems thinking and of a clear vision of the whole organisation, reducing efficiency and morale as well as negatively impacting the organisational culture. Silos create a reluctance to share information and to cooperate across departments and units. People with power also may wish to preserve the status quo [
35]. Current siloed decision-making is a major barrier to addressing such complex matters: siloed solutions cannot resolve interconnected problems, and what is needed instead is holistic understanding and wisdom. Systems thinking practices could assist people in anticipating unintended consequences and better comprehending the role of people in shaping systems.
Decision-making against the background of this intense interconnectedness requires not reductionist, siloed thinking but wisdom and holistic thinking. Networks exacerbate existing problems such as inequality, scarcity, and the disempowerment of many groups of people, so decision-making now needs to be different, and understanding feedback loops is critical. Systems thinking enables people to see second- and third-order effects, which often occur after a time delay and often counterintuitively, to identify the feedback loops, and to recognise that linear cause-and-effect thinking will not work. As asserted by Senge [
36], “Systems thinking is a conceptual framework, a body of knowledge and tools that has been developed over the past 70 years to make full patterns clearer, and to help us see how to change them effectively”. While there are many perspectives on systems thinking, the main themes refer to the complex nature of systems and the value of holistic, multidimensional, big-picture thinking about the system as a whole in order to obtain more effective solutions. This is a fast-growing area of the literature with applications in various domains, including management, engineering, education, and healthcare. Risk assessment needs to take account of systemic risk and look for contagion pathways. Stakeholder management must change as well: a broader range of stakeholders, moving well beyond local stakeholders, should be involved, and there will be many competing interests. The decision-making process needs to be flexible and adaptable, using scenario planning to test multiple future possibilities. Cooperation frameworks should be at the forefront, and clear communication with clear chains of responsibility will be necessary. Responses need to be rapid under these circumstances.
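A small simulation can make the point about delayed, counterintuitive effects concrete. In the sketch below (with arbitrary, illustrative parameters), a corrective action based on information that is three steps old overshoots the target and oscillates before settling, which linear cause-and-effect reasoning would not predict:

```python
# Toy illustration of second- and third-order effects: feedback acting on
# stale information overshoots and oscillates instead of settling smoothly.

def delayed_correction(target=100.0, start=60.0, gain=0.3, delay=3, steps=30):
    history = [start] * (delay + 1)
    for _ in range(steps):
        observed = history[-1 - delay]            # decision uses delayed information
        history.append(history[-1] + gain * (target - observed))
    return history

trace = delayed_correction()
print(" -> ".join(f"{x:.0f}" for x in trace))  # rises past 100, dips below, then settles
```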
Prescott [
28] used the parable of the blind men and the elephant to illustrate the impact of holding limited perspectives when we try to understand complex systems. The blind men examine different parts of the elephant and come to completely different understandings of its nature; in the same way, people situated in specialised medical fields, for example, can operate in isolation and not comprehend the interconnected nature of the various aspects of people’s health and planetary health. This prevents us from seeing the whole picture of interconnected systems. A lack of communication between people in various roles who focus on narrow specialised areas produces fragmented solutions that fail to address the root causes of the problem at hand and that do not appreciate that the biological, psychological, social, and cultural aspects of health are all connected. The degradation of ecosystems is also connected with human disease. Encouraging cross-discipline and cross-specialisation communication flow can break down the silos. Embracing diverse perspectives regarding different ways of viewing and living in the world can provide a cultural shift in a positive direction, encouraging ethical and sustainable decision-making [
37]. A competitive and territorial attitude is not helpful in encouraging collaboration: what is really needed is greater openness and a willingness to learn, as well as a connected-up view of the scenario at hand.
When we consider HMI, and especially the HMD that arises, systems thinking and Web 3.0 are foundational. Web 3.0 represents an evolution towards a far more decentralised and autonomous Internet system which aligns well with cybernetic concepts [
38]. For example, decentralised autonomous organisations exemplify cybernetic self-organization, and peer-to-peer networks can demonstrate emergent behaviour. We can see feedback loops, for example, within blockchain consensus mechanisms, which provide system stability via feedback [
39]. Regarding system control, distributed control replaces centralised authority, and this has major implications: control arises from network participants rather than from a traditional hierarchy [
40]. Governance is via algorithms which replace human administrative control, conferring transparency. System parameters are enforced in an automated fashion. This reduces the potential for biased control decisions [
38]. There is greater stakeholder participation, and verification becomes possible without full information disclosure [
41]. Regular information exchange is possible now, and interoperability protocols will allow much more fluid data movement [
42]. Protocols can be upgraded through community governance. System rules can evolve based upon participant behaviour. Web 3.0 empowers consumers by restoring ownership and authority over their digital assets [
43]. Overall, the cybernetic perspective indicates that Web 3.0 could enable more resilience and adaptive Internet infrastructure with a reduced need for centralised control [
44]. Coordination of complex systems can emerge, and information flow and processing can be enhanced. The combination of Web 3.0 and cybernetics creates a powerful framework for next-generation digital systems that are both intelligent and self-regulating. This combination allows adaptive learning systems whose networks evolve based upon participant behaviour; enhanced resilience through distributed control and automated error correction; autonomous coordination via smart contracts; and optimised information flow through semantic web capabilities, with feedback loops improving system efficiency.
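As a purely illustrative sketch of “system rules evolving through community governance” (the function and parameter names below are hypothetical and do not describe any specific protocol’s mechanism), a stake-weighted vote can feed participant preferences back into a protocol parameter:

```python
# Hypothetical stake-weighted governance step: participants back candidate
# values of a protocol parameter, and the winning value becomes the new rule.

def weighted_choice(proposals: dict) -> float:
    """proposals maps a candidate parameter value -> total stake behind it."""
    return max(proposals, key=proposals.get)

block_reward = 2.0
stakes = {2.0: 450.0, 1.5: 820.0, 3.0: 130.0}   # stake backing each candidate
block_reward = weighted_choice(stakes)           # the rule evolves via participant feedback
print(block_reward)                              # 1.5
```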
Based on this understanding of systems thinking, the following section explores specific aspects of HMI and decision-making, including the capabilities and limitations of both human and artificial intelligence.
4. Human–Machine Interaction and Decision-Making
HMI is now a critical grand challenge and presents a specific example of decision-making under complexity. The interaction is increasingly adapting to human habits and needs, but there are still open questions in balancing human control with machine capabilities, especially in sustainability and complex decision-making environments. The relationships between humans and technology are complex and rapidly evolving, and understanding these relationships will be critical for developing effective solutions to complex problems. We need to understand how we can develop better interfaces and better partnerships between people and machines. As we develop technological solutions to address the various grand challenges now being faced, this interaction becomes increasingly important. There are many issues: ensuring that technological solutions respect human rights and promote fairness is one of them, and we also need to develop technologies that are accessible and beneficial to all users. Through HMI, we can leverage big data and analytics to inform policy and actions that will benefit people.
The introduction of Web 3.0 is characterized by the integration of advanced technologies that enable more intelligent, interconnected, and user-centric experiences. Web 3.0 aims to enhance the functionality of the web by allowing machines to understand and interpret data in a more human-like way. AI and semantic technologies in Web 3.0 enable more personalized and contextually relevant content delivery. For example, streaming services can offer tailored recommendations based on a deeper understanding of user preferences, utilizing machine learning algorithms and vast amounts of user data (Netflix Technology Blog, 2017). Currently there are a range of applications of AI within decision-making, with increasing integration and collaboration mainly through AI systems augmenting human decision-making capabilities within a range of industries and sectors. AI is increasingly used for decision support, although this tends to be focused upon lower-level routine decisions rather than intricate, high-stakes decisions necessitating human knowledge [
45]. However, the rapid pace of AI development suggests that this scenario is likely to change [
46]. Already, financial services are applying autonomous AI decisions in services such as loan approvals, but the incorporation of human judgement in AI-facilitated decision-making is increasingly recognised as a critical domain [
47].
Certainly, we see many applications of AI now and the rollout is very rapid. A report by Meissner and Narita [
46] shows that 35% of Amazon’s revenue arises from AI recommendations. Davenport and Ronanki [
48] suggest that there are benefits in incorporating AI into commercial decision-making, aiming to enhance decision-making precision, velocity, and efficacy while simultaneously decreasing expenses and optimizing processes across several sectors. For example, AI can rapidly analyse big data and identify patterns that humans may not be able to detect [
49]. AI is also useful in handling data-intensive, repetitive tasks, enabling faster, consistent, and accurate decisions [
50]. AI can make good predictions based upon a large training data set, which can be of assistance within commercial decision-making; human labour and input can therefore be reduced for tedious processes [
51]. AI has decision-making strengths around rapidly processing and analysing large volumes of data, identifying patterns that may be overlooked by people. This is a very valuable capability in domains such as financial trading, where very rapid decisions are needed. The recognition of subtle patterns provided by AI can give people valuable foresight, helping them to anticipate potential outcomes and prepare for various scenarios. Modelling capacities are now available that allow for sophisticated simulations, which people can use to understand the consequences of their decisions and refine their strategies.
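To illustrate the kind of pattern detection described above, a minimal sketch follows (a simple rolling z-score rule, standing in for the far more sophisticated methods used in production systems) that flags observations deviating sharply from the recent norm of a long series:

```python
# Illustrative only: a machine scanning a long series for subtle deviations
# from the local norm, of the kind a human eye would likely miss.

from statistics import mean, stdev

def flag_anomalies(series: list, window: int = 30, threshold: float = 3.0) -> list:
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)   # a deviation worth surfacing to a human
    return flagged
```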
HMD is an evolving field with significant implications. It moves beyond basic interaction to joint decision processes, combining human judgement with machine processing capabilities, thereby bridging the gap between basic interaction and complex decision-making; this is a crucial step in developing more sophisticated approaches to challenging problems. The field considers, among other things, how to allocate decisions between humans and machines. The integration of AI and human capabilities in decision-making processes can enhance efficiency, accuracy, and scale, and viewing the two styles as complementary can lead to added value. Humans and AI working together can leverage the strengths of each to produce collaborative intelligence [
52]. HMD is characterized by intuition, creativity, experience, and ethical awareness on the human side, in addition to machine analysis. For example, healthcare professionals rely on both their training and their empathy to make beneficial patient-care decisions, considering factors that may not be easily quantifiable, such as a patient’s emotional state or quality of life concerns. Peter and Reimer [
48] devised a framework describing seven kinds of capability that AI can provide, existing in a tiered stack, with each level building upon the one below. Pattern recognition is a basic capability provided by AI, and we can see this kind of capability in everyday life, for instance, through facial recognition. Classification, in which AI can organise items into classes, is another useful attribute. Prediction of the future based upon the past; recommendations of suitable options, such as those we see popping up based upon our previous buying patterns; automation with a reasonable degree of autonomy; generation of new content that mimics the training data pool; and interactions of the kind that we are now familiar with through chatbots are all very useful capabilities provided by AI, but they fall short of integrating human judgement into the decision-making scenario.
There are various hybrid decision-making models and frameworks. HMD occurs along a spectrum from machines being in complete control right through to adaptive decisions in which both systems learn and evolve from feedback [
53]. At the basic level, machines operate independently to make decisions; self-driving cars are an example. At the next level, we see decision support systems in which decision-making is enhanced through data-driven insights, such as in medical diagnostics, where AI tools assist doctors by suggesting possible diagnoses [
49]. The next level is collaborative decision-making, in which people and machines interact dynamically and both parties contribute to the decision-making process. In military operations, decision support systems provide soldiers with real-time data and strategic options that are then evaluated by a person. Augmented decision-making occurs when machines provide enhanced information and visualisation tools, such as dashboards and predictive analytics. Customer data can be used to provide salespeople with detailed insights into market trends. Human-in-the-loop (HITL) systems ensure human oversight over automated systems, and this is particularly important when the environment is high-stakes. In this kind of system, human decision-makers can intervene to override algorithmic decisions, such as those we see in financial trading systems. Adaptive decision systems are a more dynamic and evolving approach to human–machine decision-making, where a system’s performance improves from incorporating feedback from both machine analysis and human input. In personalised medicine, treatment recommendation systems can adapt based upon feedback from the patient and the clinician and continuously improve their accuracy.
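A hedged sketch of the HITL pattern described above follows; the names and thresholds are illustrative assumptions, not a reference design. Routine, high-confidence machine decisions pass through, while high-stakes or low-confidence ones are escalated to a human who may approve or override:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str
    confidence: float   # the model's self-reported confidence, 0..1
    high_stakes: bool   # e.g., a large trade or a critical diagnosis

def hitl_gate(decision: Decision, ask_human: Callable[[Decision], str],
              confidence_floor: float = 0.9) -> str:
    """Let the machine act autonomously only for routine, confident decisions;
    otherwise defer to the human, who may approve or override."""
    if decision.high_stakes or decision.confidence < confidence_floor:
        return ask_human(decision)   # escalation path: the human retains control
    return decision.action           # autonomous path: routine decision

# Usage: a trading-halt request is high-stakes, so it reaches the human.
print(hitl_gate(Decision("halt_trading", 0.97, True), lambda d: "approve_with_limits"))
```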
With this understanding of HMI and HMD established, the next section examines how Cybernetics 3.0 principles can be applied to optimise these interactions.
5. Implications of Cybernetics 3.0 for Human–Machine Decision-Making
Cybernetics 3.0 represents an evolution in systems thinking that is very relevant to today’s interconnected dilemmas, incorporating consciousness, human values, and ethics. This can contribute to solutions in various ways, including the integration of human consciousness into understanding HMI: recognising that observers and participants are part of the system, incorporating human values and ethics into system design, and bridging the gap between the technical and the human. The recognition of human acting systems and the inherent role of human action in shaping systems helps us to consider the dynamic interplay between people and machines, with both entities influencing each other’s behaviour. Cybernetics 3.0 helps us to understand that human systems can learn how to learn through second-order learning; people can come up with adaptive responses as difficulties emerge and can build resilience through continuous evolution. Cybernetics 3.0 encourages us to think about tools for better group decision-making, ways of sharing knowledge across silos, and ways of integrating differing perspectives. It is clear from this discussion that integrating Cybernetics 3.0 principles into the design of human–machine operations enables the consideration of multiple factors that can elevate ethical considerations in the decision-making process, including personal moral philosophy [
54].
Application of Cybernetics 3.0 in real-world situations would include, for example, healthcare, where improved diagnosis and treatment could arise through a collaborative approach between medical professionals and AI-powered diagnostic tools. Personalised medicine could be enhanced through adaptive decisions as the AI systems learn continuously from patient and clinician feedback and a dynamic series of iterations leads to improvement. A proactive approach to mitigating bias is suggested within this approach, in which medical practitioners and AI developers collaborate to curate the training data, implement bias detection, and ensure human oversight. Other domains in which Cybernetics 3.0 principles could bring beneficial outcomes include financial trading, in which AI algorithms can detect underlying patterns and trends that could be missed by humans, while human traders can bring their experience and intuition into the arena. A collaborative approach in which human traders work closely with AI systems leverages the strengths of both, as continuous feedback from human traders could refine the algorithms and lead to a co-evolutionary improvement process. We can also think of the high-stakes and complex nature of military operations, where decision support systems can provide real-time data and situational awareness and can suggest strategic options; however, human judgement and ethical considerations must be engaged in evaluating the AI-generated recommendations.
This provides a helpful lens for considering future grand challenges that arise from HMI. Through acknowledging the complexity of human systems via the lens of Cybernetics 3.0, we can anticipate unintended consequences that might arise from what seem to be simple interactions between people and machines. Complexity can arise when new technologies are introduced or existing systems are modified, and this can create a ripple effect throughout the interconnected web of human and technological systems, which could lead to unforeseen emergence. The emphasis on human agency highlights the need for responsibility in the design and deployment of new technologies. If humans are active participants in shaping systems, they are also accountable for their actions and their consequences. The ethical implications of technology will be both positive and negative, and economic, ecological, and other systems remind us to consider the broader implications of HMI as they unfold across many domains. Understanding these complex relationships will be crucial for navigating the dilemmas posed by technologies like AI, automation, and data-driven systems.
Cybernetics 3.0 recognises the agency of both people and machines. Machines are no longer viewed as passive tools controlled by humans: machines, especially those with advanced capabilities such as AI, can influence humans and can shape the dynamics of systems in ways that we may not immediately observe. This kind of mutual influence has implications for human flourishing into the future. As machines become more sophisticated, humans need to adapt and co-evolve alongside them. This will require ongoing learning, critical thinking, and the ability to navigate complex systems as the line between human and machine agency increasingly blurs. We can already see HMI augmenting human capabilities. It will be necessary to ensure that this augmentation serves the direction and purpose of human flourishing, rather than undermining human autonomy or well-being. The core principles of cybernetics include feedback loops that make systems more responsive, the design of more intuitive interfaces, and the recognition that systems can learn from user behaviour and adjust, as we see in the provision of personalised recommendations based upon previous behaviour patterns. Understanding how people process information can also minimise errors, for example through warning messages that anticipate common user errors.
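As a minimal illustration of a system adjusting to observed user behaviour (a toy frequency-based recommender, far simpler than production systems):

```python
from collections import Counter

def recommend(history: list, catalogue: list, k: int = 3) -> list:
    """Rank catalogue items by how often the user chose them before."""
    prefs = Counter(history)                      # learn from past behaviour
    return sorted(catalogue, key=lambda item: -prefs[item])[:k]

print(recommend(["jazz", "jazz", "folk"], ["rock", "jazz", "folk", "pop"]))
# ['jazz', 'folk', 'rock'] -- the system adjusts as the history grows
```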
Reversing the grand challenges notion, there are some grand opportunities here. Grand opportunities is a phrase now appearing in the literature: for example, Jyothi et al. [
55] use the phrase to describe creating wealth from separating rare-earth elements from secondary waste, and Bhattacharya et al. [
56] discuss five grand opportunities arising from computational breakthroughs in the field of single-cell research. In the best-case scenario of people acting together with machines, there are grand opportunities that could virtually eliminate poverty and provide better education and healthcare globally. As machines become more and more capable, human autonomy needs to be highlighted and maintained. There is a need for wisdom in technological development: such development should not be driven solely by technological advancement or economic gain; instead, a consideration of the broader social and ecological impact of new technologies must also be in the mix. Seemingly isolated innovations can have a ripple effect throughout interconnected systems. Therefore, a holistic evaluation beyond the narrow focus on efficiency and productivity is necessary. People will need to think long-term and consider the well-being of future generations. There is a need to ensure that the benefits of technology are distributed equitably, and to consider potential biases and discrimination that can arise when algorithms make decisions.
This paper suggests that the emergence of Cybernetics 3.0, building upon the foundations of Web 3.0 but with its focus upon human inputs, might provide the kind of impetus needed to examine the responsibility of people to bring themselves formally into the equation of HMI and HMD. Human wisdom of a holistic sort is needed in appreciating what machines can do and what people must do. People have unique characteristics, including the ability for holistic thinking. Machines lack emotion and ethical understanding. Critical thinking, wisdom, stepping back, and applying metacognition are all needed as we face the grand challenge of HMI. Cybernetics offers new paradigms to understand this growing field and, through it, the arrival of collective intelligence that can enable us to make more informed decisions. Integrating advanced machine learning with human oversight can provide constructive ways forward. The approach recognizes that while machines excel at processing vast amounts of data and identifying patterns, humans possess invaluable qualities such as intuition, creativity, and ethical judgment that are crucial for complex decision-making. Integrating these complementary capabilities could potentially lead to outcomes that are superior to what either humans or machines could achieve independently. When machines and humans work in tandem, each contributes their strengths to the decision-making process. Collaborative cybernetics takes a systems perspective, considering the interplay between the decision-makers, the design processes, and the environment. There is a focus on teams regulating and adapting their activities to changes in the design process. Information flows between team members and how this influences decision-making is specifically considered. Understanding how teams work towards common high-level goals while at the same time managing their subgoals and constraints is part of the process.
Interconnectedness is a fundamental principle for addressing complex challenges. Planetary health questions are interwoven, underscoring the need for transdisciplinary collaboration. Individual well-being, community empowerment, and environmental sustainability are all interconnected. We need a paradigm shift in our approach to development and progress: moving away from externally imposed solutions towards inner transformation and a deeper understanding of well-being and flourishing represents such a shift. Cybernetics 3.0 offers a compelling framework for reimagining the relationship between people and technology in the context of empowerment, sustainable development, and planetary health. Social, ethical, and philosophical dimensions need to be considered to bring about the responsible and transformative implementation of human–machine collaboration. Collective intelligence that is enabled by Web 3.0 and integrated into Cybernetics 3.0 systems gives us a promising approach for addressing grand challenges arising in this interaction. Ethan Mollick, in his 2024 book “Co-Intelligence”, discusses the incredible opportunities arising from positive partnering and collaboration between humans and AI. At the opposite end of the spectrum, some have suggested that we are rapidly losing the ability to apply critical thinking because of this development [
57]. Clearly this is an area of concern.
Practical conclusions that can be derived from the above literature review include the following: Regarding human–machine integration, the literature indicates the importance of maintaining human agency and ethical oversight while recognising the value of the complementary strengths of humans and machines. In terms of system design principles, the literature indicates the need to integrate feedback loops for continuous learning and to design intuitive human–machine interfaces with an emphasis on adaptability and self-regulation. In terms of decision-making, the literature indicates recognition of human wisdom in complex decisions and the importance of ethical considerations. In terms of implementation, transdisciplinary collaboration can encourage long-term consideration of social and ecological impacts and the equitable distribution of benefits. Risk considerations include anticipating unintended consequences, managing algorithmic bias, and protecting human agency.
To demonstrate how these theoretical principles can be applied in practice, the next section presents a cybernetic model of human–AI collaboration using causal loop diagrams.
6. A Cybernetic Model of Human–AI Collaboration
Causal loop diagrams (CLDs) provide an elegant approach for visualising and analysing the complex dynamics between humans and AI in decision-making systems. As visual tools that represent interdependent variables and their relationships, CLDs help us move beyond linear thinking to understand the emergent behaviours arising from feedback loops in complex systems. The diagrams consist of nodes representing key variables connected by arrows showing causal links, with loops that can be either reinforcing (positive) or balancing (negative). Reinforcing loops depicted by positive signs (+) are self-amplifying, while balancing loops depicted by negative signs (−) are self-correcting. CLDs are helpful in visualising complex systems and identifying nonobvious virtuous or vicious cycles created via feedback loops. CLDs therefore can be used to describe the basic causal mechanisms which generate system behaviour over time [
58]. They also help people to break away from linear thinking towards systems thinking. The earlier example of two nations who hate each other because they do not get to know each other, and will not get to know each other because they hate each other, is a clear illustration of a reinforcing feedback loop or vicious cycle, in which the initial lack of knowledge leads to hate and that hate then prevents actions that would lead to increased knowledge, such as communication and cultural exchange. The continued lack of knowledge further reinforces the hatred and perpetuates the cycle.
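CLDs reduce naturally to signed digraphs, and loop polarity can be computed mechanically: a loop is reinforcing when the product of its link signs is positive and balancing when it is negative. The sketch below (with variable names chosen for this example) encodes the two-nations cycle and confirms that it is reinforcing:

```python
from math import prod

# Each arrow carries a polarity: +1 (same direction) or -1 (opposite direction).
links = {
    ("mutual_hate", "contact"): -1,           # hate suppresses communication and exchange
    ("contact", "mutual_knowledge"): +1,      # contact builds knowledge of the other
    ("mutual_knowledge", "mutual_hate"): -1,  # knowledge reduces hate
}

def loop_polarity(cycle: list) -> str:
    signs = [links[(cycle[i], cycle[(i + 1) % len(cycle)])] for i in range(len(cycle))]
    return "reinforcing" if prod(signs) > 0 else "balancing"

# Two negative links and one positive multiply to +1: a self-perpetuating cycle.
print(loop_polarity(["mutual_hate", "contact", "mutual_knowledge"]))  # reinforcing
```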
Figure 1 presents a CLD that captures three critical feedback loops in human–AI collaboration: the core learning loop, the trust oversight loop, and the recommendation feedback loop. The core AI–human learning loop demonstrates how improved human decisions enhance AI capabilities, which in turn augment human knowledge, creating a virtuous cycle of continuous improvement. However, this cycle requires careful governance to prevent unintended consequences. The trust oversight loop introduces an essential balancing mechanism whereby increased oversight improves AI performance through monitoring: better performance reduces errors while worse performance increases them. When errors increase, trust decreases, leading to heightened oversight and subsequent performance improvements. This self-regulating loop maintains system stability by counteracting the deviations from the desired performance levels. The recommendation feedback loop illustrates the dynamic interaction where AI provides suggestions to support human decision-making. The quality of human decisions influences AI performance, which shapes future recommendations. This loop highlights the co-evolutionary nature of human–AI decision systems.
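The qualitative behaviour of these loops can be explored with a stylised simulation. The update rules and coefficients below are assumptions made for illustration, not a calibration of Figure 1; the point is that the oversight loop counteracts deviations rather than amplifying them:

```python
# Stylised dynamics of the Figure 1 trust oversight loop (coefficients assumed).

def simulate_trust_loop(steps: int = 40):
    trust, oversight, performance = 0.8, 0.3, 0.6
    for _ in range(steps):
        errors = max(0.0, 1.0 - performance)                             # worse performance -> more errors
        trust = min(max(trust + 0.2 * (0.5 - errors), 0.0), 1.0)         # errors erode trust
        oversight = min(max(oversight + 0.3 * (0.8 - trust), 0.0), 1.0)  # low trust -> more oversight
        performance = min(max(performance + 0.15 * oversight * (1.0 - performance), 0.0), 1.0)
    return round(trust, 2), round(oversight, 2), round(performance, 2)

print(simulate_trust_loop())  # deviations are counteracted: a balancing loop
```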
To illustrate the practical application of this model, let us consider the healthcare domain (
Figure 2). Here, the core learning loop manifests as medical professionals making treatment decisions and recording outcomes, which improves AI diagnostic capabilities. Their interactions with AI tools provide valuable feedback for algorithm refinement. The trust oversight loop becomes particularly critical: if an AI system misidentifies a tumour, decreased trust leads to enhanced review processes that ultimately improve accuracy. The recommendation loop shows how AI can analyse patient data to suggest treatment options that doctors evaluate using their expertise, leading to AI learning and improvements in future recommendations. The negative loop indicates that an increase in AI capabilities/performance leads to a decrease in AI errors.
This cybernetic model reveals that while AI provides powerful analytical capabilities, human judgement remains essential for oversight, contextual understanding, and ethical considerations. The framework suggests that optimal outcomes emerge from carefully designed feedback mechanisms that leverage both human and machine intelligence while maintaining appropriate checks and balances. Cybernetic principles can inform the design of human–AI collaborative systems. This framework provides a theoretical foundation for understanding the delicate balance required between automation and human agency in complex decision environments. The work by Tekin et al. [
59] discusses the importance of timely diagnosis of unusual infections and presents a successful preliminary multivariate diagnostic model for the early identification of unusual infections, including tuberculosis, in hospitalised patients. There are other applications within healthcare; for example, the use of blockchain to empower healthcare with secure and decentralised data-sharing schemes, including searchable encryption [
60]. Alhakami et al. [
61] described the use of the cybernetics concept and its potential in providing effective solutions within the management of healthcare services. The work by Khayal and Farid [
62] discussed applications within personalised healthcare delivery and managed health outcomes, using acute and chronic care examples to illustrate the versatility of this approach. Faggini et al. [
63] proposed a model to maintain sustainability in the healthcare environment, adding real-time examples as validation for the work. They comment that healthcare system sustainability requires ongoing value co-creation. Moving away from the area of healthcare, a paper by Ahmadirad [
64] examined the implications of the integration of AI into business ecosystems for investment strategies, separating real growth from speculative hype.
7. Conclusions
Using the lens of Cybernetics 3.0, this paper has demonstrated that HMD represents both a grand challenge and a grand opportunity for contemporary society. The integration of human judgement with machine capabilities offers unprecedented potential for addressing complex problems, but success depends critically on maintaining an appropriate balance between machine autonomy and human oversight. Human judgement remains indispensable for contextual understanding, including the understanding of ethical questions, and for ensuring that decisions align with societal values and human flourishing. Web 3.0 technologies, while transformative in their impact on human–machine interaction, must be developed thoughtfully and purposefully. Cybernetics 3.0, with its emphasis on human agency and systems interconnectedness, provides a compelling framework for navigating this evolving landscape. The causal modelling presented in this paper demonstrates how feedback mechanisms can be designed to optimise the complementary strengths of human and machine intelligence while maintaining essential safeguards.
Critical to success will be the prioritisation of human capabilities such as creativity, critical thinking, and complex problem-solving, alongside technological advancement. The future of human–machine collaboration must be shaped by a commitment to enhancing, rather than diminishing, human agency. The healthcare example provided illustrates how carefully designed feedback loops can enable continuous improvement while preserving essential human oversight. Looking ahead, several imperatives emerge. First, technological development must be balanced with considerations about human well-being. Second, we must maintain human influence over critical decisions while leveraging AI’s analytical capabilities. Third, organisations need to address potential biases in algorithms and ensure equitable access to technological benefits. Finally, a culture of continuous learning will be essential as these technologies evolve rapidly. Tools and techniques that would be useful here in prioritising human capabilities alongside technological advancement could include AI systems that are designed to augment creative processes, assisting humans in generating new ideas and solutions. Such systems could provide diverse perspectives or suggest new and unconventional approaches that could help people to overcome their mental blocks or biases. AI can analyse large datasets to identify patterns and connections that humans might miss, thereby allowing new trains of thought to emerge [
49]. AI tools could then be used to rapidly prototype and test such creative concepts, giving more efficiency in the exploration of possibilities.
Certainly, humans need to maintain their critical thinking now more than ever [
50], and there could be potential here for training platforms to be developed to help people with their analytical and problem-solving skills. Such platforms might offer interactive training, simulations, and real-world scenarios that challenge users to evaluate information critically, identify biases, and formulate well-reasoned arguments. Platforms would need to emphasise the importance of understanding interconnectedness and feedback loops, as well as considering a variety of perspectives. In this way, people could move away from linear thinking towards a more holistic approach to problem-solving, in line with the principles of systems thinking [
36]. Collaborative problem-solving frameworks would be helpful in facilitating effective teamwork and the sharing of knowledge. Such frameworks could incorporate systems thinking in which problems are approached from a holistic perspective and solutions are developed in a collaborative fashion. Collaborative platforms could be designed to integrate diverse perspectives and encourage sharing, breaking down the silo mentality sometimes seen [
35]. Such platforms could include tools for brainstorming, visualising complex relationships, and managing competing interests. Through the implementation of such tools and technologies, it becomes possible to bolster human capabilities in the face of the rapid advance of technology.
Methodologies for validating CLDs might include the following: deploying an action research approach involving iterative cycles of planning, action, observation, and reflection that integrates research directly into a real-world context. Thus, a CLD-based model could be introduced into a team or organization and the effects observed, with iterative adjustments to both the model and its implementation. The team could use a CLD to guide their decision-making, and researchers could collect data via interviews, observations, and performance metrics. Additionally, a quasi-experimental design could be used in which the CLD-based model is introduced to one team, comparing the results with those of teams where no CLD is available. Data on key performance indicators, team dynamics, and user satisfaction could determine the effectiveness of the CLD-based model. Modelling and simulation could be used to test the system as represented by CLDs, with different scenarios tested through the alteration of initial conditions or feedback strength. This would help in understanding the sensitivity of the model to various inputs and would test its robustness. Researchers could identify leverage points or potential unintended consequences before implementing the model in the real world.
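For the modelling-and-simulation route, a brief sketch of a sensitivity sweep is given below; the model and its coefficients are illustrative assumptions in the spirit of the Section 6 loops, not a validated calibration. Varying feedback strengths and observing the residual error surfaces leverage points and fragile configurations before any real-world deployment:

```python
import itertools

def simulate(gain_oversight: float, gain_learning: float, steps: int = 60) -> float:
    """Tiny CLD-derived model; returns the residual error for one scenario."""
    trust, oversight, performance = 0.8, 0.3, 0.6
    for _ in range(steps):
        errors = max(0.0, 1.0 - performance)
        trust = min(max(trust + 0.2 * (0.5 - errors), 0.0), 1.0)
        oversight = min(max(oversight + gain_oversight * (0.8 - trust), 0.0), 1.0)
        performance = min(max(performance + gain_learning * oversight * (1.0 - performance), 0.0), 1.0)
    return round(max(0.0, 1.0 - performance), 3)

# Sweep feedback strengths; large residual errors mark fragile configurations.
for g_o, g_l in itertools.product([0.1, 0.3, 0.6], [0.05, 0.15, 0.3]):
    print(f"oversight gain {g_o:.2f}, learning gain {g_l:.2f} -> residual error {simulate(g_o, g_l)}")
```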
The framework presented in this paper suggests that by embracing cybernetic principles in the design of human–machine systems, we can work towards outcomes that will enhance human flourishing while addressing increasingly complex societal challenges. The future of human–machine collaboration holds immense promise, but realising this potential will require careful attention to the dynamics of feedback, learning, and system adaptation. Only through such thoughtful integration can we ensure that technological advancement serves the broader goals of human and planetary well-being.
The final section addresses limitations of the current study and suggests promising directions for future research in this rapidly evolving field.
8. Limitations and Future Research
While this paper presents a theoretical framework grounded in cybernetic principles, several limitations warrant acknowledgement. First, as a conceptual analysis, this paper would benefit from empirical validation across different organisational contexts and domains. The causal loop diagrams, while theoretically sound, would benefit from systematic testing in real-world settings to verify their practical utility and identify potential refinements. A significant limitation lies in the rapidly evolving nature of AI technology itself. The frameworks and models presented in this paper may need continuous adaptation as new capabilities emerge. This analysis primarily focuses on current technological capabilities; future developments may introduce new dynamics not captured in the current models. The healthcare example, while illustrative, represents just one domain of application. Different sectors may present unique challenges and requirements for human–machine collaboration not addressed in the current framework. For example, the areas of finance and education may present different challenges. Financial trading is a domain that involves rapid decision-making and complex data analysis, wherein AI algorithms can detect underlying patterns and trends and human traders can then apply their intuition and experience. Other complex domains, such as military operations or large-scale project management, should also be considered in the application of CLDs. In terms of measuring effectiveness, quantitative metrics might include efficiency, accuracy, speed, and overall effectiveness in the specific domain. Within healthcare, this might involve accuracy rates for diagnosis and success rates for treatment. System stability could also be assessed by considering how well errors are detected and corrected. Measures quantifying the degree to which human knowledge is incorporated into AI systems, and how well systems adapt to human feedback, could also be developed. Regarding qualitative metrics, user experience and satisfaction could be assessed via questionnaires, interviews, and focus groups. Observation of team dynamics and, in particular, how people work and communicate together could indicate the effectiveness of introducing CLD models in encouraging collaboration and information sharing.
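To make the quantitative side concrete, a small sketch follows; the record fields are assumptions about what a deployment would log, not a prescribed schema. It computes several of the effectiveness metrics named above from logged human–AI decisions:

```python
# The record fields below are assumed for illustration: what a deployment might log.

def collaboration_metrics(records: list) -> dict:
    """Each record: {'ai_correct': bool, 'final_correct': bool,
                     'human_overrode': bool, 'seconds': float}."""
    n = len(records)
    return {
        "ai_accuracy": sum(r["ai_correct"] for r in records) / n,
        "team_accuracy": sum(r["final_correct"] for r in records) / n,
        "override_rate": sum(r["human_overrode"] for r in records) / n,
        "mean_decision_seconds": sum(r["seconds"] for r in records) / n,
        # overrides that fixed an AI mistake: human knowledge genuinely integrated
        "useful_override_rate": sum(r["human_overrode"] and r["final_correct"]
                                    and not r["ai_correct"] for r in records) / n,
    }
```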
Practical steps for validation might include the following: Identify a specific context where there are clear operational goals and metrics for success. Collect baseline data on current practices before the introduction of the CLD-based model. Implement the model in the chosen setting alongside training for participants. Collect both quantitative and qualitative data using the methodologies suggested above. Analyse these data to assess whether the CLD-based model is having the intended effect. Refine the model and its implementation based upon the data analysis, perhaps using an action research approach. Longitudinal studies can be used to see how the system changes over time and to assess the long-term impacts of the CLD-based model. Undertaking these steps would enable the validation of the CLDs and their effectiveness in improving human–AI collaboration. The goal would be to see how the use of the CLDs impacts human agency, ethical oversight, and the co-evolution of human and machine capabilities.
Several promising directions for future research emerge from these limitations. The development of specific metrics and evaluation frameworks for assessing the effectiveness of human–AI collaborative systems would be useful. This would include examining not just operational efficiency but also the broader impacts on human well-being, job satisfaction, and organisational culture. Future work should also explore the psychological dimensions of human–AI interaction, in particular, how different personality types and cognitive styles influence the effectiveness of collaborative decision-making. Understanding these individual differences could inform more nuanced approaches to system design and implementation. Research into educational and training approaches would also be valuable. As these systems become more prevalent, understanding how to develop the necessary skills for effective human–AI collaboration will be increasingly important. This will include both technical competencies and the development of critical thinking and ethical judgement capabilities. These additional research directions could help to develop practical guidelines for the implementation of these systems in ways that enhance human capability and agency.
At the societal level, implications include the development of human–AI systems that enhance rather than diminish human capability and agency, and the focus upon human wisdom and ethical considerations to serve broader societal goals. The models presented could inform policy development around AI governance and regulation. Understanding feedback dynamics could help to anticipate and prevent negative unintended consequences of AI implementation. This work also provides a foundation for ensuring equitable access to the benefits of human–AI collaboration across society. Long-term social and ecological impacts of AI systems need to be considered. Future research could involve longitudinal studies investigating the effect of Cybernetics 3.0 frameworks in various industries.