1. Introduction
We are in the prolegomena of the Fourth Industrial Revolution. It is practically impossible to anticipate its potential implications for our world, given the lack of precedents. Essentially, we are developing technologies that make it possible to manipulate, create, re-create or modify matter, both the inert and the living, of which the human species is one. Racing against the clock, we have set ourselves the task of restructuring everything that has been left to us, whether cultural or natural, and to do so, we make hasty decisions with little debate or collective reflection.
In order to provide a degree of security in the face of possible scenarios, a set of legal norms is being created. These norms are referred to as the “fourth generation of human rights”. The most advanced of these are being developed in the context of the European Union, with the handicap that their standards arise from the homogenising logic of the single market. However, since the counterpoint is the anomie of the rest of the world, they are an unavoidable point of reference.
Specifically, we have to make decisions on the following questions as soon as possible: whether or not we are in favour of human germ-line modification, and where we draw the line, if at all; whether or not we share genetic material with animals; whether or not we replace non-human living beings with others that have been designed in the laboratory and adapted to our needs; and how to regulate artificial intelligence or neurotechnologies.
First, this paper reviews the state of the art from a scientific and ethical point of view. For this purpose, we will take into account the interrelationship between the disciplines involved, the calls from the scientific community for moratoria on research and the potential social impact of such research. Second, we will briefly analyse the position of the academic world on the issues that a hypothetical fourth generation of human rights should address; scholars are divided between those who focus on the regulation of artificial intelligence and those who focus on the “bio” disciplines. Third, we will propose, as reference values for the fourth generation of human rights, the principles of human identity, in its various manifestations, and of precaution, as currently understood by the European Union. We believe that these principles can fulfil the functions that freedom, equality and solidarity have played in the first three generations of human rights. To conclude, we will take stock of the overall situation.
2. State of Play
We are at the beginning of the Fourth Industrial Revolution [1,2]. In order to understand the extent to which this affects our rights as human beings, we will distinguish three closely related levels. These levels can lead us to justify the need for a new generation of human rights. Specifically, we will analyse the following: (a) what is happening; (b) what is causing it; and (c) how the scientific community is responding to the ethical and social implications of its research.
(a) Firstly, the current industrial revolution is blurring the boundaries between perspectives, entities or limits that our ancestors could never have discussed.
(a.1.) The line between the cure and enhancement of human beings is being blurred [3]. In fact, with today’s genome editing techniques, genes can be transferred from one species to another, or silenced or activated in a targeted, non-random way. Characteristics, skills or traits from other species could be introduced into the human gene pool and even passed on to offspring, warns one of CRISPR’s co-discoverers [4]. Even a split of the human species (a species is a group of individuals that can reproduce and produce fertile offspring) would be biologically feasible in the medium term. Should we ban all forms of human germline intervention, as laid down in the European Convention on Bioethics [5,6], or only those that are not therapeutic, as the European Union’s Charter of Fundamental Rights seems to suggest [7]? Do we use this technique for prevention [8], as a kind of genetic vaccine? Last but not least, why not transfer to a human embryo the natural mutations already present in our species, which are the guarantee of above-average health [9]?
(a.2.) The lines between animal and human are being blurred. The above-mentioned techniques of genome editing are making it possible to humanise certain animals [10]. Although the purpose may be legitimate (developing human organs in animals that can be transplanted without risk of rejection or virus transmission [11], humanising a mammalian brain for research into Parkinson’s, Alzheimer’s, dementia, etc.), the risks are unprecedented [12].
We are not talking about genetically modifying a non-human being to make it more like us or, if necessary, to make it extinct (objectives that also raise questions, no less disturbing, as is the case with gene drives [13]), but rather about the introduction of human biological material into an animal embryo that could become [14], by accident or intentionally, human neurons (or glial cells), increasing its cognitive capacity [15], a scenario only surpassed by the possibility of its being passed on to offspring [16]. What is the legal and ontological status of an animal with more consciousness or cognitive capacity than it already had? At what point does an ape abandon its animal nature to become confused with ours [17,18,19]?
(a.3.) The differences between inert matter and living matter are becoming increasingly blurred [20]. Genes are indeed the building blocks of life, but they are not living entities in themselves. To what extent can they be re-used, re-engineered or re-formulated by synthetic biology to explore new possibilities (for example, George Church’s levorotatory life forms; Craig Venter’s artificial cell [21,22]), given that the end product will in fact be a living being? In how many ways can a human cell with the capacity to become a human being be developed, differentiated or synthesised, if it is then implanted into a woman’s womb? On the other hand, how do we regulate the industrial ownership of entities such as genes or synthetic biology products [23]?
Indeed, embryonic stem cells and iPS cells may, in the medium term, make sperm and eggs a thing of the past. What biological entity will a human quasi-embryo be for legal purposes [24]? Should a synthetic human genome (with the capacity to create a human being) be equated with embryos for the purposes of the European Convention on Bioethics, which prohibits their creation for research (only embryos left over from in vitro fertilisation may be used for research)? And if an iPS cell can be fully reprogrammed to develop into a human embryo, what will be the status of skin cells, hair cells, etc.? When will an iPS cell be able to dedifferentiate up to the embryonic stage, as is already the case in mice [25]? Is every single cell in the human body going to be an embryo for the purposes of the law [26]? Moreover, should human brain organoids be considered human, or only if they suffer and/or are conscious? Looking deeper into this question prompts the following: can a human brain organoid really become conscious and even feel [27]? If so, what is the ontological and legal status of such a biological entity? Is it in legal limbo, as is currently the case with pseudo-embryos [28]?
To paraphrase Pérez Luño, if the holder of the third generation of human rights is the “interconnected” individual [29], the holder of the fourth generation would be, in Rodotà’s felicitous expression, the “disseminated” human being [30], or interconnected, but with the planet [31]. And even then we fall short, as we shall see later, because for the first time, we have to consider the legal status of entities, not necessarily biological, with human attributes.
(a.4.) The boundaries between the real, the imagined and/or the digital are becoming blurred [32]. Can an AI create [33], invent [34] (the Dabus case [35]), reflect and, in short, surpass us? What, in particular, is it supposed to cover [36]? What is, ontologically and legally, a set of instructions (algorithms) that is capable of manipulating us, deceiving us, distorting our will and even endangering our societies [37]? How do we regulate algorithms that generate new algorithms, which in turn generate new algorithms, and so on? Is the black box of an artificial intelligence comparable to our consciousness, in the sense of being inscrutable or incomprehensible? In short, if we are dealing only with algorithms, even if they have fabulous predictive capabilities, then what are we afraid of? But if they are something else, what exactly are they?
On the other hand, will neuro-rights be enough of a barrier to protect our brains from being invaded by political, economic and religious powers when they try to interfere with our minds through external or internal interfaces? Or is it still too early to regulate this matter without descending into the realms of science fiction [38]?
And finally, if these two disciplines, AI and neurotechnologies, are intertwined, will AI algorithms be able to understand us better than we understand ourselves, to the point of anticipating our desires, imagining or facilitating our actions and channelling them appropriately, long before we are conscious of them (e.g., the Libet case [39])?
(a.5.) As we have seen, we are witnessing an interaction between different disciplines that is blurring spheres, levels or subjects that we have always kept separate, but this process of convergence also affects the disciplines themselves, so that it is no longer easy to identify which specialist is dealing with what. In this way, the algorithms of artificial intelligence can be used, in a non-exhaustive list, to do the following: (a) accelerate discoveries in other areas of knowledge (e.g., the discovery of the three-dimensional structure of proteins); (b) design a living entity with the ability not only to survive but also to replicate itself (e.g., biobots) [40]; (c) use biological material to support computing (e.g., biological computing); (d) create an entity with intelligence equal to or greater than ours (e.g., strong artificial intelligence); and so on. In all these cases, it is difficult to find an expression that captures the complexity of the interaction between the “bio” and “digital” disciplines, since we are witnessing a fusion of specialities, with very different objectives but an undeniable interaction, and therefore a potential “emergence” of unprecedented events.
(b) Secondly, we ask ourselves what makes possible this progressive process of blurring, which confronts us with scenarios halfway between the hope of improving our quality of life and the most unimaginable dystopia [41].
The technologies responsible for these questions are biotechnology, synthetic biology, nanotechnology, neurotechnologies and artificial intelligence. Together, they are leading the Fourth Industrial Revolution, but what matters most is not what each can do on its own but how they overlap, interrelate and feed into each other, so that advances in one have an impact on the others, something that had already been noticed in the USA in 2003 and was replicated in the European context the following year [42,43].
For this reason, these technologies are described as exponential (for their ability to increase our capacity on an exponential basis), emergent (playing on the double meaning of the word “emergence”, where the whole is more than the sum of its parts, as is the case with life or consciousness), disruptive (because of their potential to substantially alter our societies) and, finally, to paraphrase Ricoeur, technologies of suspicion (because of the intuitive reflection that everything we have taken for granted until now, such as life, human beings or reality, can be irreversibly replaced) [44].
(c) Thirdly and lastly, we consider how the scientific community reacts to the possible collateral effects of what we have described so far, a yardstick by which we can gauge the true extent of the present technoscientific revolution.
Indeed, it is through two closely related events that we can best understand the uniqueness of our times. We refer to the scientific moratorium called for by biotechnologists in 1973 (Paul Berg et al., Asilomar [45]) and the more recent one, of March 2023, proposed by those researching artificial intelligence [46,47]. Meanwhile, another moratorium was called for in 2019, this time proposing the suspension of experiments with the CRISPR genome editing technique. To complete the circle of the interconnection between living matter, inert matter and algorithms, suffice it to recall that the Asilomar Principles on Artificial Intelligence (2017) are named precisely in honour of their biotechnological counterparts and the moratorium invoked in such a symbolic place in the 1970s [48].
The common link in this period, almost half a century, is that scientists have become aware of the uniqueness of their research, i.e., the fact that we have no precedent for the experiments, objectives or projects they are developing, hence their perplexity and even fear. It should be noted that the scientific community is not proposing to ban research but to postpone it until the risks to our societies have been properly assessed and the moral and legal limits have been established. It is for this reason that the possibility of such moratoria being legally binding has been under consideration.
In short, the issue we are analysing is a consequence of this progressive process of blurring of boundaries, which did not need to be addressed by previous generations of human rights; of the process of convergence of the aforementioned technologies, which did not need to be addressed by our ancestors; and, finally, of the warnings and fears expressed time and again by the various scientific communities about the imminent risks facing our societies.
Fourth generation human rights seek to provide a legal/philosophical response to this scenario.
3. Theoretical Justification
There is some consensus in the academic world about the need to create a fourth generation of human rights, which indirectly implies the acceptance of the traditional tripartite classification. However, this starting axiom does not entail a consensus on which facts in particular justify its existence, a question that directly affects its content. Therefore, we must distinguish between the following positions:
The position of those who are in favour of the need for a fourth generation of human rights [49,50,51,52] but either disregard the traditional academic classification or separate it from technoscience [53].
The position of those who argue that current technoscientific advances justify the need for a fourth generation of human rights [54,55] but do not agree on which disciplines, in particular, justify this requirement. Thus, we must distinguish between the following:
The first sector of the doctrine focuses on the needs of the digital world [56,57,58,59,60,61,62], including artificial intelligence [63].
A second sector focuses on genetic and biomedical issues [64,65], which in practice represents a juridification of bioethics mediated by the term “bio-law” [66,67,68,69,70,71] and has led to a degree of sectoralisation (e.g., neurolaw/neurorights).
A third sector refers to both fields (digital and biotech) but not to the interaction between the two. This nuance is important because the problems continue to be treated in watertight compartments.
Finally, we would like to stake out our own position at the following point: the justification for arguing for the emergence of a fourth generation of human rights lies not in a particular technology or technologies, let alone a political issue, but in the interaction and feedback of exponential or disruptive technologies; that is, in the way they feed back into one another in unison, blurring the academic boundaries between disciplines [72], raising the troubling questions we discussed in the previous section (can an AI create viruses completely autonomously, even if it lacks self-awareness, communicating with laboratories via the internet and without us noticing [73]?) and forcing us to carry out a holistic analysis for which we have no precedents [74,75].
The challenge is to find values that can operate transversally across these technologies (just as the technologies present themselves to us as an interwoven whole), allowing for a subsequent normative concretisation.
However, if we take a closer look at the recent legislation on data protection, digital services or AI, or at the long-outdated legislation on biotechnology (there is practically nothing on synthetic biology or neurotechnologies), we see that it has been drafted not from the perspective of human rights but from the point of view of the needs of the internal market, where the citizen is a customer and his or her demands are those of a consumer. Hence the need to create a new generation of human rights: to draw red lines that cannot be crossed by a technoscientific development essentially driven by the capitalist system.
4. Intersection of the Fourth Generation with All Generations of Human Rights
With the need for a fourth generation of human rights justified by the technological revolution, we must ask ourselves what values inspire this generation and what relationship we can find between the previous three and the fourth generation.
Until now, the generations of human rights have hinged on a value that served as an axiom, which then allowed them to be broken down into different values of a secondary or subordinate nature. The challenge in the case of the fourth is to find a starting point that avoids falling into extreme casuism on the one hand and being overtaken by the rapid advances of technoscience on the other. That is, we must find a value that fulfils the functions that freedom, equality and solidarity, respectively, lent to the first three generations of human rights and then examine how they interrelate with the fourth.
However, we must not forget that the three most characteristic values of the first three generations of human rights are far from clear or uniform in their meaning, so it would not be reasonable to impose on the fourth generation the systematic requirements which, for various reasons, were not imposed on their counterparts.
To analyse these questions, we will compare the fourth generation with the first two and then with the third generation.
4.1. Confrontation of the Fourth Generation of Rights with the First Two Generations of Rights
I believe that the “freedom/equality” pair that characterises each of the first two generations of rights has been replaced or complemented in the case of the fourth generation by the idea of “human identity”.
This right to identity can be broken down into at least four levels: (a) Subjective identity, which would imply the right that technoscience does not make us doubt or even lose our sense of self (e.g., neurotechnologies [76]). (b) Objective identity, which would imply the right that technoscience does not make us doubt or even lose our sense of reality (e.g., deep fakes or ultra-counterfeits in the EU AI Act). (c) Species identity [77], which would imply the right not to be left in doubt as to whether certain biological entities (e.g., chimaeras, brain organoids, etc.), non-biological entities (e.g., artificial intelligences) or hybrids (e.g., products of synthetic biology, such as Boeke et al.’s synthetic human genome, or entities designed by AI, such as the emerging biobots) are comparable to human beings. (d) Identity in the sense of historical continuity, which would imply the right for our world to remain recognisable as such and not to be transformed or completely replaced (e.g., by nanotechnologies and other disruptive technologies, gene drives in ecosystems, biological diversification of the human species, etc.), i.e., the right to avoid the objectives of transhumanism or any dystopian singularity that implies an absolute rupture with our past [78].
In this sense, we can see how this value implicitly or explicitly inspires both the Oviedo Convention and the four Protocols that supplement it (e.g., the prohibition of modifying the germline, not even for medical reasons, to prevent the introduction into the human gene pool of variants that are foreign to us; the prohibition of human reproductive cloning, preventing the biological identity of the subject from being artificially multiplied through the creation of twins).
The Charter of Fundamental Rights of the European Union reinforces this value by prohibiting the human body or parts of it as such from being turned into an object of profit, which would blur or extend the identity of the subject to the extent that their DNA could be incorporated into other biological entities or used for spurious purposes (e.g., think of iPS cells and their use for animal/human chimaeras). The Explanatory Memorandum to the Charter links the principle of autonomy (Art. 3.2.1) to moral integrity (physical and psychological, as it appears in the first paragraph), citing the jurisprudence of the EU Court of Justice [7]. The Court’s ban on the patenting of totipotent human cells (cells capable of giving rise to a human being), whatever name (embryo, pseudo-embryo, etc.) may be given to them, is also based on this idea of respecting “human identity”. What matters is whether it is identifiable as human or not, hence its withdrawal from the market (Sandel’s famous book, “What Money Can’t Buy”, is part of this attempt to put limits on the market), reinforcing this idea of “subjective identity”. (A person’s biological material is not to be equated with an object that can be appropriated and/or transferred.)
When the first article of the Universal Declaration on the Human Genome and Human Rights (UNESCO, 1997) states that, “in a symbolic sense, the human genome is the heritage of humanity”, it strengthens our identity as a collective and focuses our definition as a species not on metaphysical abilities (consciousness, language, transcendence, symbolism, etc.) but on something tangible and material: our DNA. A human being is someone who shares a genetic identity with the other members of our species; thus our species is bounded and defined, and all other living or inert entities are not human. This is why it is so important to avoid blurring our genetic identity by sharing it with other entities, especially animals that are evolutionarily close to us (e.g., the chimaera problem).
The idea underlying all these rules is that our genome transcends the individual (it belongs not just to one person but to the human species as a whole), hence the need to set direct and indirect limits on how it can be isolated, modified, recreated or mixed with that of other species. The final result, whatever it may be, must not violate our “identity”, an undoubtedly ambiguous term (in any case, no more so than “freedom” or “equality”, which inspired the other two generations) but the only intellectual handle for dealing with certain experiments. This is why the first article of the Oviedo Convention links “dignity” to “identity”, i.e., that whatever happens, a biological entity should be unmistakably human, or not, but without ambiguities or disturbing tertium genus.
But as we have discussed, the Fourth Industrial Revolution affects not only living matter but also inert matter, both in itself and when it interacts with living matter. As far as this issue is concerned, human beings are caught between two technologies that can seriously condition our nature as morally autonomous individuals: neurotechnologies (external and internal interfaces) and artificial intelligence. As a result, and for the first time in our history, we take action not against the actions of other human beings but against living or inert entities that we could never before have imagined as an affront to our dignity or as capable of blurring our sense of reality and/or subjectivity (identity).
The expression that best encapsulates the translation of the principle of identity to our interaction with these new technologies is that of “meaningful human contact”, with content that expands as the possibilities for conditioning us grow.
Thus, it is not enough for the interaction to be voluntary (equivalent to the principle of autonomy in bioethics), but it is essential that the subject is aware that he or she is facing an artificial intelligence, i.e., that he or she identifies the interlocutor as “non-human”. As the ability to blur reality through images or sound is increasing by leaps and bounds, the European Union has created the concept of “deep fake” (ultra-falsification), so that it is mandatory to inform the citizen not only that he or she is dealing with an AI but also that what he or she is seeing, however real it may seem, is artificially created.
The line between confusion and manipulation is a fine one, so fine that the EU AI Act prohibits any form of altering human behaviour that could endanger a person’s physical or psychological safety or that of others. In this sense, we must emphasise how the possibilities of neurotechnologies border on the implausible, from the blurring of personality to the possible emergence of new properties in interconnected brains (and who knows whether also in brains connected to the internet). For this reason, neuro-rights seek to complement the incipient regulation of AI in order to protect our most intimate sphere, the human mind, as much as possible, hence the special precautions for the most vulnerable groups, such as minors, the elderly or the disabled, in both neurotechnologies and AI.
The term “meaningful human contact” also implies the reservation of some human facets that cannot be delegated to AIs, especially those related to life and death. It is not about the red button (there must always be a human to take responsibility for actions) but about avoiding a total and absolute transfer of our lives as human beings, including our most intimate ones, to AIs, however safe they may be.
In short, the Fourth Industrial Revolution takes us to the vertiginous heights of situations where ultimately our identity as human beings is at stake, from our genomic integrity to our moral integrity, that is, our ability to make free, conscious and responsible choices without external interference that manipulates, conditions or limits us and without being able to confuse the reality or nature of our interlocutors. This is why we believe that “identity” should be the intersection of disruptive or exponential technologies and, therefore, the key axiom of this fourth generation of human rights.
Finally, unlike the value of “dignity”, which can also be used and is in fact constantly invoked in relation to technoscientific issues, “identity” allows us to establish more objective parameters, albeit not without difficulty, for assessing when it is violated and when it is not. For all these reasons, we could base our argument on the following reflection of the Academy of Medical Sciences:
“Whether or not a blended embryo is predominantly ‘human’ is an expert judgement, including an assessment of the likely phenotype, but neither the precise final composition of an individual embryo nor the phenotypic effect of blending will be readily predictable at the present state of knowledge [10].”
4.2. Confrontation of the Fourth Generation of Rights with the Third Generation
If the third generation of rights is concentrated around the value of “solidarity”, I believe that the fourth generation of rights revolves around the precautionary principle [79], the legal translation of Jonas’ principle of responsibility [80].
The EU Charter of Fundamental Rights did not enshrine the precautionary principle (it only established a “high level of protection” for the environment). Neither did the International Covenants on Civil and Political Rights or on Economic, Social and Cultural Rights. Moreover, other iconic documents, such as the Universal Declaration of Human Rights or the European Convention on Human Rights, make no reference to this principle. It could be said that these are earlier legal documents, but we could counter this argument by showing how the European Convention on Bioethics (1997) and the Universal Declaration on Bioethics and Human Rights (2005) continue this line of silence on a principle that is particularly inconvenient for reasons that are not strictly legal but rather political and economic.
In fact, the precautionary principle obliges us to take action against risks that may themselves be highly unlikely, bordering on the impossible. It is what Ravetz calls “ignorance squared” [81] (we do not know what we do not know), which forces us to refine predictions in situations of maximum uncertainty. That is exactly what is happening with the exponential, or disruptive, technologies leading the Fourth Industrial Revolution.
It is about taking action on issues such as the following: (a) If gene drives are applied to mosquitoes in Brazil or mammals in Australia, could they have an impact on ecosystems across the planet? (b) If human germline modification becomes widespread, could it have a structural impact on our species? (c) If we introduce human biological material into animal embryos, are we taking an existential risk? (d) If we continue to increase the potential of AI, could it surpass the human mind? (e) If we continue to increase the potential of AI, are we taking an existential risk? (f) Can brains really be connected to each other or to a computer network?
In fact, the precautionary principle was not originally conceived to answer these questions; its context is that of environmental protection in the 1960s. However, the emergence of these and many other unanswered questions led to the extension of this principle to any scientific activity, technology or research that might pose a risk, however unlikely, to the planet. It is, therefore, perhaps fair to recognise that the precautionary principle, as we know it today, is a product of the European Union, not because it had not been legally enshrined before but because the EU has given it a broader meaning, in line with the technologies discussed above.
Indeed, a European Commission Communication on the use of the precautionary principle explains how this principle has been extended beyond the environmental field, to be invoked when “scientific information is incomplete or inconclusive and the risk is considered too high to be imposed on society” [82].
In order to understand the meaning of certain terms, it is necessary to clarify that “risk” is measured not only by the probability of an event occurring but also by the impact it would have if it did occur (e.g., a nuclear power plant is unlikely to explode, but the consequences would be intolerable if it did) and that this impact may affect people, the environment or even the planet.
Its most representative practical application was the 2018 ruling of the Court of Justice of the European Union, which invoked this very principle in relation to the CRISPR genome-editing technique [83]. According to this ruling, if an organism is genetically modified using this technique, the 2001 Directive on GMOs will apply [84]. This means that it will be subject to the same controls as transgenesis, even if, technically, no transgenesis has taken place and even if the end result is indistinguishable from a natural organism or one modified by non-ionising radiation or chemical products (critics of the decision make precisely the following argument: why apply the above-mentioned directive to a modified biological entity if it cannot be distinguished from the rest?). Regardless of how right the ruling is, we consider it the most representative example of the times we are living in. Such a ruling would be unthinkable in the United States.
Of course, the application of the precautionary principle comes at a high price. If we compare the policies of the European Union with those of the United States, not to mention China, we can see how this principle prevents the marketing of products or services that are widely used outside the European continent. As a result, foreign patents have to be paid for, leading European research companies leave, and the pace of growth slows. This explains the difference in the treatment of GMOs in the US and Europe. Recent US legislation has effectively put genetic engineering on an equal footing with traditional agriculture, so that there is almost no filter on the marketing of GMOs [85], while the EU is governed by a clearly outdated 2001 directive and by the aforementioned ruling of the European Court of Justice, which restricts GMOs as much as possible. It is, after all, a principle that can be mistakenly invoked to justify protectionist economic practices or to defend the irrationality of technophobia.
In light of the above, I believe that the precautionary principle, in the sense in which it is currently recognised by the EU, implies at least three rights: (a) the right that technoscience not substantially alter our model of civilisation; (b) the right that technoscience not pose a risk to ecosystems; and (c) the right that no experiments be carried out, and no technologies developed, that pose an existential risk to the human species or even to life itself on the planet (extinction).
In short, if the value of solidarity represents the third generation of human rights, I believe that the precautionary principle, as interpreted in the above-mentioned European Commission Communication, i.e., with an application that goes beyond the environmental context, characterises the fourth generation of rights under consideration. This is a consequence of the potentially disruptive nature of the technologies at the forefront of the Fourth Industrial Revolution.
4.3. Extension of the Rights of the First Three Generations to the Fourth
To summarise what has been discussed in the last two subsections, the first generation of rights would be represented by liberty, the second by equality, the third by solidarity and the fourth by identity and precaution.
This does not mean, of course, that they are watertight compartments. One need only recall what has already been discussed about how difficult it is to assign a particular right to a particular generation, because it is sometimes spread over three generations.
Notwithstanding the fact that the fourth generation of rights is fundamentally based on the values mentioned above, this generation has refined the rights acquired in the previous three generations.
In this way, the liberal/bourgeois “freedom” of the first generation has given way to the “autonomy” of bioethics, with repercussions in many areas: the voluntary nature of human experimentation, the acceptance or even refusal of medical treatment (active euthanasia), transplants, including inter vivos, and consent to predictive genetic diagnosis, among others. The right of children to know who provided the biological material that allowed them to be born (in vitro fertilisation with an anonymous donor) is part of the right to the free development of the personality; although it is not yet recognised in Spain, other European countries, such as Portugal, have not hesitated to put an end to anonymity (cf. article 7 of the Convention on the Rights of the Child). Discussions about whether AI can create and/or patent an invention seek to extend freedom of thought, creation or research to a format for which these freedoms were not originally conceived but in which they can undoubtedly be included.
Finally, respect for free will as a neuroright is about adapting individual freedom to the context of neurotechnological interfaces and their infinite possibilities for conditioning our will.
The worker/socialist equality of the second generation and its flip side, the prohibition of discrimination, have evolved in the fourth generation into efforts to prevent algorithmic bias and the objectification of women by artificial intelligence, to ensure the non-discrimination of those subject to biomedical intervention (e.g., children born in vitro who are denied civil registration) and to avoid the stigmatisation of ethnic minorities in research involving genetic material.
Finally, echoes of solidarity can also be found in the fourth generation in policies aimed at socialising scientific advances (art. 2.f of the UNESCO Universal Declaration on Bioethics and Human Rights), including possible genetic (e.g., Singer [86]) or neurological (Yuste et al. [76]) improvements of the human species. That is, alongside a certain awareness of our obligations towards future generations, so that they do not inherit a worse world than ours, the objective for the moment is that no one be left behind in what appears to be a qualitative leap in our structural constitution as living beings. All or nothing seems to be the offer carried over from the third generation to the fourth.
5. Conclusions
Looking at the interaction between the five disruptive or exponential technologies and the three generations of human rights examined, we can draw the following conclusions:
1. Unlike the other three generations, the fourth generation of human rights cannot be linked to a social or political revolution or to a specific declaration of rights. If the third generation was founded in 1979, it seems obvious that the fourth was born later. It can also be observed that the use of the expression we are analysing has increased since 2000, perhaps due to the symbolism of the turn of the century. However, the advances that would, in our view, justify the need for a fourth generation of human rights are somewhat later and more gradual (e.g., neuroethics appeared in 2002, the leap in artificial intelligence took place in 2005, the CRISPR genome-editing technique appeared in 2015, etc.). In other words, the term “fourth generation of human rights” is materially justified by the progress made since the term was first used, i.e., roughly over the last twenty years.
On the other hand, the need for these rights is justified by the Fourth Industrial Revolution, and more specifically by the way in which the technologies at the forefront of that revolution intersect. They appeared unconnected throughout the 20th century (AI in the 1950s, biotechnology in the 1970s, etc.), but what is relevant is not the specific date on which each appeared separately, but the moment when they began to interact and feed back on one another in unison, something that was noted at the beginning of this millennium on both sides of the Atlantic in the two documents already cited.
2. We need to find a way of attributing subjective rights to entities, biological or otherwise, to which the law has never paid attention. In fact, the first generations of human rights argued bitterly about whether the rights holders were only individuals or whether, on the contrary, they also included human groups. Over time, this debate evolved into the no less problematic question of whether we have duties to animals or whether they have rights in themselves as sentient beings.
With the Fourth Industrial Revolution, these debates have taken on a new dimension. The question arises as to the point at which an entity, natural or otherwise, can be a holder of rights, not because we grant them to it, as we do to animals or groups of humans, but because its cognitive structure allows it to demand them. Thus, the possibility of reviving a Neanderthal, of a mammal with human neurons or of an advanced artificial intelligence may confront us with entities that are either fully comparable to us (e.g., hominins) or sufficiently close to animals (or distant from them, depending on the perspective adopted, as with an ape carrying human biological material) to force us to fundamentally rethink the question of the subjective ownership of rights. In between, we will have to regulate the status of entities as strange to our legal tradition as biobots (cells designed by an AI), brain organoids, the creations of synthetic biology, virtual or augmented reality, etc. We will have to deal with surreal debates, such as whether a group of interconnected human brains will give rise not only to emergent properties, as neurotechnologists predict or rather describe, but also to a group subjective right. We will also have to refine the concept of the individual, since a human being can not only reproduce after death by in vitro fertilisation but also be cloned or, more worryingly, share cells, organs or genetic material no longer with a single animal but with a lineage of animals (which raises the question of how to deal legally with the relationship between the owner of genetic material and a cohort of animal descendants with whom he or she shares it). And what about computing with biological material?
Finally, to the extent that a simple skin cell can be reprogrammed into a pluripotent cell (iPS cell), we will have to address the legal status of biological material to which we have so far paid little attention (e.g., hair, skin).
On the other hand, we will have to face the regulation of the intellectual and/or commercial property of entities with diffuse individuality (nucleotides, chromosomes, genes, sequences and algorithms), which may in turn contain the rules or instructions for creating living beings capable of reproducing themselves (biotechnology, synthetic biology), at a time when artificial intelligence may come to equal or even surpass us, if only for commercial purposes.
In short, the number of entities that are subject to legal regulation, biological or otherwise, is growing exponentially, so that from the fourth generation of human rights onwards, both the ontological nature of these entities (what they are, what they can be equated with) and their legal status will have to be addressed.
3. The Fourth Industrial Revolution can be seen either as, in effect, a fourth revolution or as a hiatus in human history. In any case, I believe that this dichotomy, rupture or simple evolution, allows us to advocate the need for a fourth generation of human rights, in line with the interesting canon proposed by González Álvarez to justify the birth of a new generation of human rights [87], without falling into a mere modulation of the previous generations or an adaptation to new threats that never leaves their framework.
We need to ask how we can regulate the implicit or explicit, conscious or unconscious, voluntary or involuntary claim to create something superior to ourselves.
The “something” we seek may be an enhanced human being, an artificial intelligence, a hybrid, a chimaera or some kind of entity currently unimaginable to us. For practical purposes, the distinction is irrelevant. The axiom that drives us is that if we can, we will. Many things can happen along the way, from enormous benefits for all humanity (an AI unravels all possible combinations of proteins) to dystopias (we achieve, in effect, the first goal but not the second). The only things that can stop this dynamic are the laws of nature, i.e., insurmountable limits (e.g., the speed of light, Pauli’s exclusion principle, etc.); everything that is technologically feasible, even if not morally acceptable, will become a reality once we have made sufficient progress. The weakness of the law that we have examined, a reformulation of the classic laissez faire, laissez passer, is the legal response to this lack of reflection on where we are going.
Our connection with future generations is, therefore, different from the one our ancestors had with us: we can shape their world in structural and irreversible ways. And that is the best case, since there is also room for other scenarios, unlikely but no longer merely imaginary, in which a collective hecatomb (Bostrom’s “hiatus”) occurs. For these reasons, the fourth generation of human rights must address how to give legal value to scientific moratoria, so that they are not merely well-intentioned declarations; how to restrict or ban certain lines of research (e.g., gain of function in viruses, gene drives, synthetic biology, artificial intelligence); how to regulate the democratisation of technology (e.g., biohackers experimenting with CRISPR); and a long etcetera of activities that could endanger everything we know. Treaties on the non-proliferation of nuclear weapons, the banning of biological weapons, etc. are inadequate because of the scale and unpredictability of the scenarios that can arise. In short, we need to give legal status to Cecchetto’s “ethics for the absent” [88].
To this end, we take refuge in “identity” and “precaution”, a mixture of crypto-iusnaturalism and extreme utilitarian pragmatism. Nevertheless, once these axioms have been established, we must be as swift as we are cautious in setting clear, precise and binding global limits, the morally insurmountable red lines of the current technological revolution, lest it lead, at the very least, to legal/moral involution.
Last but not least, neither atoms and molecules nor non-human living beings are aware of our concerns and limitations, so it seems obvious that we cannot regulate at a regional, let alone national, level a technoscientific revolution that is taking place on a planetary scale. For the first time, we can manipulate living and inert matter in ways unprecedented both in magnitude and in the symbolism of the objects involved. We are, therefore, faced with philosophical/legal problems of the first order, ranging from the gradual granting of rights to non-human entities close to our nature to the preservation of the planet for future generations; hence the need to develop, protect and deepen the fourth generation of human rights.
4. Three institutions are currently the focus of international attention on AI. The most protective standard from a human rights perspective has been created by UNESCO [89], which also attributes the most risks to AI; however, it is only a non-legal recommendation, i.e., a symbolic, non-binding statement. The Council of Europe’s draft Convention on AI is the second relevant standard [90]. However, it will only bind those countries that voluntarily sign it. Its immediate predecessor, the Convention on Bioethics, was not signed by most of the major European countries for one reason or another, and it has also been rendered outdated by rapid advances in biotechnology and biomedicine; it seems reasonable to expect the same to happen with the AI Convention. Thirdly and finally, the EU regulation on AI has the advantage of being binding on all 27 EU countries, but it is less ambitious than the other two standards, not least because its ultimate aim is the unity of the single market rather than the protection of human rights. Moreover, the fact that it does not fully apply until 2026 suggests that it will be outdated in the face of rapid technological progress. Finally, although the Bletchley Declaration, following the line of argument of a doctrinal sector concerned about these links, suggests such a connection, none of the three standards links AI to biotechnology [91].
Any hypothetical international human rights norm on disruptive technologies will face the same problems: how to reconcile the protection of citizens with the sovereignty of states that are reluctant to accept any regulation that might impede technoscientific progress; how to adapt the rules to the dizzying advance of technoscience; and, finally, how to interrelate such disparate disciplines.
Despite these limitations, it is clear that the only laws that can make sense in the context analysed in this paper are binding international standards. We need to achieve universal common minimum standards that transcend the framework of the nation-state, that are flexible enough to adapt to new realities and that manage to regulate the progress of the various disruptive disciplines in a cross-cutting way.
In summary, we believe the following: (a) it is undeniable that we are at the beginning of the Fourth Industrial Revolution; (b) this revolution is being led by disruptive technologies, i.e., biotechnology, synthetic biology, nanotechnology, neurotechnology and artificial intelligence; (c) the challenges we face in the short term are not met by the human rights established in the three previous generations of human rights; (d) the right to human identity can serve as a basis for addressing both the challenges of AI and the challenges of “bio” disciplines, including neurotechnology; (e) the precautionary principle, understood not only as an environmental principle but also as it is understood by the Court of Justice of the European Union and by the EU itself, can serve as a point of reference when taking decisions that go beyond the temporal and/or spatial framework of technological progress, allowing them to be concretised as the situation requires.