Article

Towards a Human Rights-Based Approach to Ethical AI Governance in Europe

1 School of Religion, Theology and Peace Studies, Trinity College Dublin, D02 T283 Dublin, Ireland
2 ADAPT Centre, Trinity College Dublin, D02 T283 Dublin, Ireland
* Author to whom correspondence should be addressed.
Philosophies 2024, 9(6), 181; https://doi.org/10.3390/philosophies9060181
Submission received: 8 October 2024 / Revised: 23 November 2024 / Accepted: 28 November 2024 / Published: 30 November 2024
(This article belongs to the Special Issue The Ethics of Modern and Emerging Technology)

Abstract

As AI-driven solutions continue to revolutionise the tech industry, scholars have rightly cautioned about the risks of ‘ethics washing’. In this paper, we make a case for adopting a human rights-based ethical framework for regulating AI. We argue that human rights frameworks can be regarded as the common denominator between law and ethics and have a crucial role to play in the ethics-based legal governance of AI. This article examines the extent to which human rights-based regulation has been achieved in the primary example of legislation regulating AI governance, i.e., the EU AI Act 2024/1689. While the AI Act has a firm commitment to protect human rights, which in the EU legal order have been given expression in the Charter of Fundamental Rights, we argue that the Act alone does not provide adequate guarantees for enforcing some of these rights. This is because issues such as EU competence and the principle of subsidiarity make the idea of protection of fundamental rights by the EU, rather than by national constitutions, controversial. However, we argue that human rights-based, ethical regulation of AI in the EU could be achieved through contextualisation within a values-based framing. In this context, we explore what are termed ‘European values’, the values on which the EU was founded, enshrined notably in Article 2 TEU, and consider the extent to which these could provide an interpretative framework to support effective regulation of AI and avoid ‘ethics washing’.

1. Introduction

In May 2024, the EU Council approved the much-awaited Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence for the 27 Member States, known as the EU AI Act, which entered into force in August 2024. The legislative process leading to the adoption of the EU AI Act, which was preceded by ethical guidelines issued by a High-Level Expert Group set up by the European Commission in 2019, reflects a deliberate integration of ethics into the legal framework. This link has also previously been highlighted in UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in 2021.
The purpose of this paper is twofold. Firstly, it aims to contribute to the future of AI governance by making a case for a human rights-based, ethical framework for AI regulation. Secondly, it will assess the potential of the AI Act to mitigate the corrosive effects of ‘ethics washing’ and protect fundamental rights in the context of the values of human dignity, freedom, democracy, equality and the rule of law on which the European Union was founded. It will be argued that the EU AI Act, as a measure, has the capacity to catalyse a governance framework grounded in a substantive conceptualisation of human flourishing, which inter alia will support the meaningful regulation of AI that avoids ‘ethics washing’. This could be achieved through effective enforcement of fundamental rights, underpinned by and interpreted through the lens of the values enshrined in Article 2 of the Treaty on the European Union (TEU).
Section 2 of this article contextualises the pre-regulatory dynamics between law and ethics as perceived by the industry, which has historically favoured soft law, e.g., in the form of codes of conduct, over binding legal norms. This preference has been critiqued in academic literature as ‘ethics washing’, wherein ethical commitments are employed to sidestep regulatory scrutiny. Section 3 argues that ‘ethics washing’ can be more effectively avoided by adopting a human rights-based, ethical framework for AI regulation. The language of human rights functions both as a moral and a legal language and, therefore, has the capacity to handle both the ethical and legal dimensions of the ethical AI project. Section 4 delves deeper into the positioning of the fundamental rights enshrined in the EU Charter of Fundamental Rights within the AI Act. It is argued that while the EU framework has the potential to deliver on the human rights-based approach to AI governance, the AI Act alone does not guarantee this due to legal issues pertaining to the Act’s legal basis and purpose. This reflects the EU’s complex and diplomatically sensitive role in integrating fundamental rights. Section 5 explores what are termed ‘European values’, that is, the values on which the EU was founded, enshrined notably in Article 2 TEU. The extent to which these could provide an additional standard of review, or indeed an interpretative framework, will be assessed, with the aim of delivering on the promise of a human rights-based ethical framework for AI regulation.

2. Against ‘Ethics Washing’ of AI Governance

Despite decades of advancements in AI technologies, including generative AI, formal regulatory efforts have emerged only recently [1,2]. The release of ChatGPT in 2022, a widely accessible generative AI tool, acted as a catalyst, intensifying the push for comprehensive AI governance. This momentum is reflected in key regulatory measures issued in 2023, such as President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110, 88 FR 75191) in the United States and the updated draft of the EU AI Act. Elsewhere, Canada is anticipated to enact federal-level AI legislation in the form of the Artificial Intelligence and Data Act (AIDA)1. AIDA is set to establish specific obligations concerning ‘high-impact AI systems’, albeit, unlike the EU’s categorisation of ‘high-risk AI systems’, the Canadian legislative framework does not provide a precise definition of this term [3]. Another example is South Korea, where the National Assembly has proposed a series of draft bills that intend to introduce compliance measures for high-risk AI systems, defined as those having a significant impact on lives, safety and fundamental rights. Contrary to its European counterpart, however, the Korean approach appears to embrace a ‘preferential permission and ex-post regulation’ logic as opposed to ex ante prohibitions [4]. Finally, in Australia, the government introduced a ‘Voluntary AI Safety Standard’ while it considers enacting mandatory rules for AI ‘in high-risk settings’2.
Meanwhile, the legislative void had been filled with numerous codes of ethical conduct developed by the tech industry aiming to gain legitimacy for its technological advances, with ethics becoming a buzzword in the corporate world of technology. Metcalf and Moss observe the paradox that ‘Ethics is arguably the hottest product in Silicon Valley’s hype cycle today, even as headlines decrying a lack of ethics in technology companies accumulate’ [5]. The prevalence of voluntary codes of ethical conduct is notable; Mittelstadt has counted at least 84 such initiatives [6], and the list continues to grow. Self-regulation by industry is advocated via these codes of ethics, with the assumption that this is sufficient to guard against potential harms or abuses. In this context, advocates of industry self-regulation via codes of ethical conduct submit that stringent legal regulation is redundant, with proposals for ethical AI frameworks or codes presented as substitutes for government regulation [7]. Indeed, the prevalence of such frameworks risks creating the misconception that AI technologies are being developed and deployed according to strict, transparent and enforceable ethical principles. De Laat highlights that not all AI companies opposed regulatory intervention, as ten entities based in Europe and the United States formally expressed support for legislative measures targeting high-risk AI systems [8]. However, the subsequent 2023 amendments to the EU’s Proposal for the AI Act elicited criticism from industry stakeholders such as the Computer & Communications Industry Association (CCIA). The CCIA contended that the revised regulatory framework could hinder innovation and issued a cautionary statement regarding the ‘potentially disastrous consequences for the European economy’3.
While this focus on codes of ethics may be regarded as a welcome development that shows awareness of the potential dangers of AI, the sincerity of the commitment to ethics and the true impact of these ethical codes of conduct in practice are difficult to measure. Many industry-based ethical frameworks have the character of high-level, general principles devised and implemented by those with a commercial interest in the roll-out of AI, overseen by industry-led governance and usually accompanied by compliance training in these industry-developed codes of ethics. This primarily rhetorical adoption of ethics is often the extent of ethical AI and is the catalyst for concerns that ‘ethics washing’ is replacing ethics. Scholars have also expressed concern about the lack of an agreed set of standards for AI governance [9]. The principles of clinical ethics [10], that is, (1) beneficence, (2) nonmaleficence, (3) autonomy, and (4) justice, are frequently referenced in ethical AI, yet Mittelstadt argues that these are not entirely transferable from bioethics to technology [6]. In response, Floridi and Cowls’ ‘Unified Framework of Five Principles for AI in Society’ proposes that the principles of clinical ethics should be augmented with a fifth, namely explicability [11,12]. Meanwhile, Mittelstadt observes that consensus as to what constitutes the common good in the context of technology is more difficult to achieve than in the healthcare context [6]. Furthermore, although a democratic tool embedded in our daily lives, technology is not guarded by the same level of professional standards that apply to the medical professions, nor do its professional standards have the force of law, as clinical professional standards do in many jurisdictions. Additionally, Jobin et al. note that while the codes of conduct for AI tend to agree on the overarching ethical principles, there is no clarity on ‘(1) how ethical principles are interpreted, (2) why they are deemed important, (3) what issue, domain or actors they pertain to, and (4) how they should be implemented’ [13]. Thus, questions remain about the adequacy of the principles, the rationale for their prioritisation and the spheres of their applicability when assessing the use of ethics in the technology context.
The preoccupation only with ethical principles in industry-based discussions of ethical AI also creates an environment in which misconceptions about the nature of ethics abound. While formal, high-level principles are integral to ethical frameworks, they represent just one aspect of an ethical framework. This is the case whether the framework is deontological, consequentialist or aretaic in orientation [14]. Ethical frameworks include a substantive concept of ‘the good’, or ‘the right’, which is often expressed in the language of values. This is the basis from which ethical principles are then developed. From these foundational concepts of ‘the good’ or ‘the right’, ethical frameworks can then give an account of the normative criteria through which the conception of ‘the good’ or ‘the right’ is advanced [14]. It is here that the high-level principles, which are often the sole focus of engaging ethics in the field of AI, are located within the architecture of ethics. In addition, ethical frameworks usually include an account of how the skill of epikeia, that is, the capacity for practical moral judgement, is developed. It is this practical moral judgement that enables individuals to determine how ‘the good’ can be identified in each specific context [15]4. Ethical frameworks also often include an account of the moral qualities or virtues that need to be cultivated in order for the moral character of the person to develop and become capable of making good moral decisions [16]. When the comprehensive nature of ethics, the depth of its requirements and the extent of its reach are properly understood, the inadequacy of the truncated versions of ethics prevalent in the tech industry is clear. Moreover, an appropriately comprehensive understanding of ethics suggests that ethical frameworks do not necessarily occasion industry-friendly or lenient standards of practice.
Concerns about the adequacy and application of the principles are amplified by concerns about the lack of appropriate enforcement mechanisms for ethical regulation and governance of AI. As a result, ethics has tended to be viewed as an inferior standard to the law due to the former’s lack of ‘teeth’ [17,18,19]. This arguably explains the tech industry’s eagerness to stave off regulation with self-regulatory codes of ethical conduct containing token references to human rights. This practice, termed ‘ethics washing’ [20,21,22], signals a superficial advocacy of moral values without their being embedded in decisions about products or practices, thereby creating the illusion that the serious work of ethical assessment is being done in these contexts. Paul Nemitz, a prominent EU official advising the Commission on digital transition, is one of the most vocal critics of voluntary ethical codes of conduct in AI. Nemitz’s voice appears to be echoed in the strong enforcement mechanisms laid down in the AI Act. Coming from a background in EU consumer law, which is characterised by strict liability mechanisms, Nemitz addresses the weakness of the existing ethical codes of conduct, arguing, correctly in our view, that ‘the numerous conflicts of interests which exist between the corporations and the general public as to the development and deployment of AI cannot be solved by unenforceable ethics codes or self-regulation’ [7]. In this context, he notes that a crucial question for democracy should be ‘which of the challenges of AI can be safely and with good conscience left to ethics, and which challenges of AI need to be addressed by rules which are enforceable and encompass the legitimacy of democratic process, thus laws?’ [7].
This logic seems to have guided the EU AI Act, with its risk-based approach to regulating different uses of AI. Nemitz’s question of which aspects of AI require specific regulation via legislation is important. However, his characterisation that certain issues can be ‘left to ethics’ risks reinforcing the false alternatives of ethics or law that have accompanied debates about the ethical governance of AI to date. Moreover, it risks relegating ethics to a marginal role that is relevant only when the challenges are trivial and the risks are low. Bietti’s criticism is stronger; she observes that Nemitz and Alston each formulate the requirements of AI governance on reasoning that results in law and ethics being viewed ‘as if these were incompatible alternatives’ [21].
The consequences of treating law and ethics as if they were incompatible alternatives are twofold. Firstly, framing ethics as a weaker standard that is inferior to the law deprives it of its significance in the formulation of legislation and risks turning all references to ethics and morality into empty slogans. Secondly, this dichotomous framing of ethics and the law is equally problematic for the law because if ethics and morality are merely token words, then what is the grounding of law and justice? More fundamentally, it represents a missed opportunity to acknowledge and build upon the essential complementarities between ethics and law, conceiving of them instead as operating in unconnected spheres, with codes of ethics regarded as relevant only where little is at stake.
However, although ethics and law do indeed represent distinct spheres of operation, they should not be conceptualised as ‘incompatible alternatives’; rather, each should be recognised as having an essential role to play in the regulation of AI. Indeed, their distinct roles in AI regulation reflect the fundamental relationship between ethics and law in which ethics is foundational for positive law [23]. While conceptualisation of the relationship varies, as evident in Stanton-Ife’s comprehensive overview of the state of the debate, this paper stands within a critical natural law tradition in which ethics is understood to provide a conception of the good that underpins positive law and an account of the role that law plays in securing the good in the social context [24,25]. Thus, law’s legitimacy is, in part, a reflection of its ethical basis. This symbiotic relationship between ethics and law is particularly evident in the human rights-based approach to AI regulation for which we advocate, which has clear ethical underpinnings. In Section 3, we make the case for a human rights-based approach to AI governance, arguing that it has multiple strengths that warrant its recommendation. Following from this, in Section 4 we evaluate the merits of the EU AI Act as a human rights-based instrument. We discuss the ambiguities of the AI Act from this perspective but argue that any limitations are mitigated when its moral and legal basis in the European liberal democratic tradition is recognised. This is the heritage from which ‘the universal values of the inviolable and inalienable rights of the human person, freedom, democracy, equality and the rule of law’ have developed.

3. A Human Rights-Based Ethical Framework for AI Regulation

The ethical framework of human rights represents the single most effective approach to addressing the ethical challenges associated with AI and its regulation. In this, we concur with Yeung et al., who argue that the ‘international human rights framework provides the most promising set of standards for ensuring that AI systems are ethical in their design, development and deployment’ [9]. Of particular significance in this context is that human rights language functions both as a moral and legal language and, therefore, has the capacity to handle both the ethical and legal dimensions of the ethical AI project. A human rights framework thus has the added virtue of highlighting the complementarity between ethics and law. This is also noted by Sartor, who describes the language of human rights as a common denominator between the two disciplines [26].
As a moral language, human rights have been shaped by different philosophical, cultural and religious traditions and have become a ‘short-hand’ to express the fundamental goods in which people have interests as aspects of their well-being and flourishing [27]. Adapting Amartya Sen [28] and drawing on Henry Shue’s ground-breaking account of human rights first articulated in Basic Rights [29], we conceptualise the moral language of human rights in terms of ethical assertions about the critical importance of basic needs, core freedoms and essential relationships based on the essential and equal dignity of persons, and the corresponding recognition of certain obligations to promote or safeguard those basic needs, core freedoms and essential relationships. From Sen, we note that:
(…) ethical proclamations of human rights are comparable to pronouncements in, say, utilitarian ethics—even though the substantive contents of the articulation of human rights are altogether different from utilitarian claims. Utilitarians want utilities to be taken as the only things that ultimately matter and demand that policies be based on maximising the sum-total of utilities, whereas human rights advocates want the recognition of the importance of certain freedoms and the acceptance of some social obligations to safeguard them. But even as they differ on what exactly is demanded by ethics, their battle is on the same—and shared—general territory of ethical beliefs and pronouncements.
[28]
We recognise that there are many ways of accounting for human rights claims, of which Sen’s is just one. Indeed, as has been noted already, a strength of human rights frameworks is that they can accommodate diverse accounts of their foundations. While Sen’s framing of the nature of human rights claims is important for our argument, of greater significance in this context is how he highlights the fundamental and radical differences between human rights and utilitarian frameworks, albeit recognising that they function in the same general territory of ethical beliefs. Human rights frameworks differ fundamentally from utilitarian ones. This recognition is vital but frequently overlooked in discussions about ethical AI.
In addition to the dual moral and legal orientation of human rights frameworks, there are other features that make them uniquely valuable for the regulation and governance of ethical AI. Amongst the most important, and asserted with equal force by both moral and legal human rights frameworks, is the claim that human rights belong equally to each person, with the concept of a person being a holistic one. This holistic understanding of the person means that there is a recognition that human beings have material, psychological, social, political and existential needs and that these are expressed in the different categories of human rights [29]. From this claim that human beings share an essential and equal dignity comes the understanding that human rights are universal and inalienable, and this vital feature is the basis on which human rights ‘provide the necessary protection to all human beings, in all essential spheres,’ including in contexts where AI is deployed [30]. In addition, human rights are understood to be interdependent and indivisible. The history of human rights in the 20th century is, in part, a story of rival emphases on either civil and political rights or on economic, social and cultural rights. Nonetheless, in recent decades, the ‘doctrine’ of the interdependence and indivisibility of human rights has held sway. This comprehensive vision, promulgated most famously in the Vienna Declaration5, insists that civil, political, economic, social and cultural rights are inherently complementary and equal in importance and that the violation of one damages the achievement of the others [31]. This assertion of the interdependence and indivisibility of human rights has far-reaching consequences for ethical assessments of AI. For example, it would suggest that AI technologies which enhance economic well-being must be implemented in a manner that protects civil and political rights. Thus, there can be no sacrificing of civil and political rights in the process of pursuing economic ones. This focus on interdependence and indivisibility also allows the intersecting inequalities that underpin systemic injustice to be foregrounded.
The recognition that all persons are both rights-bearers and duty-holders and that human rights bring corresponding enforceable obligations on states and other actors to promote and protect the human rights of all individuals equally forms an essential part of the human rights framework. This, too, is important in the context of the regulation of AI [32]. There is also a growing recognition that human rights frameworks require participatory and inclusive processes through which the specific requirements of human rights in each context are identified [33,34]. Furthermore, as evolving global moral and legal frameworks, human rights have the capacity to respond to new challenges and serve as the basis for new legislation. In this context, the articulation of (the controversially termed) third- and fourth-generation rights is important. The rights to development, to share in the exploitation of the common heritage of humankind and the rights of future generations are particularly relevant in establishing the parameters of ethical AI, as are the recently articulated neuro-rights, which include the rights to mental privacy, mental integrity and cognitive liberty, rights that are especially important in the context of generative AI [35].
Human rights-based moral and legal frameworks are thus structured around the values of human dignity, equality and freedom and establish a set of inalienable and interdependent rights to which all persons are entitled and which must be respected. Codes of ethics that reflect moral frameworks of this kind provide a means through which the substantive moral values of human dignity, equality and freedom are protected in the development and deployment of AI. They cannot function as empty vessels into which an industry can funnel its interests or values or through which non-moral values of efficiency or innovation are prioritised. The fact that this moral discourse is underwritten by and given legal force through a range of intersecting national, regional and international legal frameworks gives it further appeal as a language through which ethical AI can be advanced. Moreover, debates about universality, sincerity and enforcement notwithstanding, the fact that the Universal Declaration of Human Rights has been endorsed by all 193 UN Member States suggests that human rights-based frameworks have potential for the global governance of AI.

4. Protection of Fundamental Rights in the AI Act: An Ambivalent EU Competence?

In this section, we evaluate the extent to which the European Union’s approach to AI governance measures up to the concept of a human rights-based regulation of AI. As a preliminary remark, we should elaborate on the terminological difference between the concepts of human and fundamental rights. International frameworks, such as the UN Universal Declaration of Human Rights or the European Convention on Human Rights (ECHR), embrace the term ‘human rights’. In contrast, the term ‘fundamental rights’ used in the European Union derives from the tradition of these rights being historically protected by the national constitutions of the Member States. As explained by Fabbrini [36], it is particularly consistent with the German tradition, where the Constitution is called the Grundgesetz, i.e., the ‘basic’, ‘ground’, ‘foundational’ or indeed ‘fundamental’ law. Note that the earliest discussion over the place of these basic rights in the EU legal order took place in a dispute between the Court of Justice of the EU and the German Constitutional Court over the principle of primacy of EU law in the Solange saga [37]. This wording has been preserved throughout the CJEU’s case law and eventually made its way into the EU’s flagship human rights instrument, i.e., the EU Charter of Fundamental Rights6. In this article, we respect these terminological differences and, consequently, refer to ‘fundamental rights’ when discussing the EU framework and ‘human rights’ when addressing international frameworks such as the UN or the ECHR.
The EU opted to regulate AI through a regulation, mirroring its approach in the General Data Protection Regulation (Regulation 2016/679, GDPR) and the Digital Services Act (Regulation 2022/2065). As per Article 288 of the Treaty on the Functioning of the European Union (TFEU), a regulation is directly applicable and does not require transposition into national law, thereby ensuring uniformity across Member States. Accordingly, the AI Act will take precedence over any national AI legislation, except in narrowly defined areas where Member States are expressly permitted to enact supplementary provisions7. Unlike instruments of international law, EU regulations are binding not only on Member State governments but also on all entities operating within the EU single market, including private sector actors such as technology companies. Once in force, regulations possess a direct horizontal effect, enabling individuals to invoke their provisions in national courts, including in disputes against other private parties. In Robin-Olivier’s words, this amounts, to some extent, to ‘the submission of private actors to TFEU, and to internal market rules in particular’ in the name of the effectiveness of EU law [38]. The EU’s regulatory impact is rooted in its establishment of a nouvel ordre juridique (Case 26–62 Van Gend en Loos), which is distinct from traditional international law in that it directly permeates the domestic legal orders of its Member States. Eckes notes that, in the view of the Court of Justice of the European Union, EU law is ‘autonomous and no longer rooted in and depending on the sovereignty of the Member States’ [39].
While undoubtedly more impactful in its effectiveness in regulating AI than the UNESCO Recommendation on the Ethics of AI, we argue that the EU AI Act is rather ambivalent in its framing of fundamental rights. On the surface, fundamental rights feature prominently in the AI Act, and yet the Act cannot be characterised as a fundamental rights-based measure. On one hand, De Gregorio and Dunn argue that the attempt to guarantee an optimal balance between innovation and the protection of rights is the fil rouge connecting the EU’s recent initiatives in technology law: the GDPR, the Digital Services Act, and the AI Act [40]. The ‘explanatory memorandum’ accompanying the proposed AI Act engaged with many of the rights enshrined in the EU Charter of Fundamental Rights. Examples include the right to human dignity, respect for private life and protection of personal data, non-discrimination and equality between women and men, consumer protections, children’s rights, the rights of persons with disabilities, and the presumption of innocence (p. 11). Indeed, the AI Act, in its preamble (rec 1), identifies fundamental rights as ‘overriding reasons of public interest’. Moreover, many of the amendments to the Commission’s draft introduced by the European Parliament further strengthened the role of fundamental rights in the AI Act. Notably, the Parliament proposed an obligation on most providers of high-risk AI systems to carry out a fundamental rights impact assessment (amendment 413). With the embrace of the discourse of fundamental rights, the EU AI Act has replaced the idea of ethical impact assessments (utilised in the UNESCO Recommendation) with the requirement, laid down in Article 27, to carry out an ex ante fundamental rights impact assessment.
On the other hand, however, despite the many references to fundamental rights, the EU AI Act is not entirely grounded in a fundamental rights basis. As has been observed by various scholars, the EU chose to adopt a risk-based approach resembling some of its product safety legislation [40,41,42] and to place AI within the framework of the free movement of goods and services. From a pragmatic perspective, this approach is strategic, as product safety is one of the most established and enforceable domains of EU law, permitting substantial regulatory intervention while remaining within the scope of the Union’s competences. This framework benefits from a well-developed array of enforcement mechanisms, enhancing its regulatory efficacy.
The AI Act’s twofold legal basis—Articles 16 and 114 TFEU—reflects a rather complex orientation. While Article 16 TFEU pertains to the EU’s competence in safeguarding the right to privacy, thereby anchoring the regulation in a fundamental right, ensuring AI’s compliance with this right, notably with the GDPR, will likely present significant practical challenges. In contrast, the second legal basis, Article 114 TFEU, is focused on harmonising the internal market and, thus, ensuring a level playing field that will enable the EU to remain a competitive global player in the field of innovation. To illuminate the EU’s approach to regulating AI, it is worth reiterating the AI Act’s purpose, as explained in Article 1(1):
The purpose of this Regulation is to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection, against the harmful effects of artificial intelligence systems in the Union and supporting innovation.
The purpose of the EU AI Act is, thus, fourfold and involves balancing the following four goals: (1) improving the functioning of the internal market, (2) promoting human-centric and trustworthy AI, (3) protecting fundamental rights and health and safety, and (4) supporting innovation. While grounding legislation in internal market considerations is often necessary simply to justify EU intervention in line with the principle of subsidiarity, in litigation the CJEU’s interpretation relies heavily on the legal basis and purpose of the measure at issue.
Some would argue that, given the original purpose and function of the EU, fundamental rights could not have been given priority over single market harmonisation in the AI Act. While authors such as Burgess and Mason remind us of historical evidence which suggests that the EU’s founders had always seen it as more than merely a community of economic interests [43,44], at its inception, it was precisely that—a European Coal and Steel Community, followed by a European Economic Community. Abbott highlights that in the early days, it was seen as Europe’s response to the GATT (General Agreement on Tariffs and Trade), the WTO’s predecessor, and, therefore, the inclusion of human rights within EU law has historically been seen as peripheral to its primary economic focus [45,46]. De Búrca and Scott contend that both organisations were established ‘primarily to promote trade between states’ [47].
This is not to say that individual rights were not an important consideration but rather that they were perceived as sufficiently safeguarded by the constitutional traditions of the Member States. Therefore, in the early case law, the EU Court of Justice was cautious to avoid exceeding its jurisdiction and instead deferred to the constitutional frameworks of Member States when addressing fundamental rights concerns (case 11–70, Internationale Handelsgesellschaft). It is important to stress that those cases arose in disputes concerning customs and tariffs and rights such as the freedom to conduct business, which—while fundamental to the Member States’ constitutional traditions—were not considered human rights as such. For issues specifically concerning human rights, the prevailing assumption was that they would fall under the purview of the European Court of Human Rights, given that all EU Member States are also signatories to the European Convention on Human Rights. Additionally, there is a legal presumption that the protection of fundamental rights in the EU is equivalent to that offered by the ECHR (Case C-84/95 Bosphorus, note art 53 CFR), and the 2007 Treaty of Lisbon8 granted the Convention an equal status to that of the EU Treaties (art 6 TEU) [48].
However, the role of the EU in the sphere of fundamental rights protection has substantially increased over the years, with the Court of Justice at the forefront of this trend, which has been described by Muir as an ‘unsettling’ EU competence [49]. This has been reflected in the adoption of the EU Charter of Fundamental Rights, which has had binding force since the Treaty of Lisbon’s entry into effect in 2009. As De Búrca notes, the rationale behind the Charter was to make the protection of fundamental rights in the EU more visible to its citizens [50]. As EU law increasingly regulates emerging policy areas, intersections with fundamental rights have become more pronounced, which reflects the Charter’s impact on the Union’s policymaking, with the AI Act serving as a salient example. As mentioned above, the AI Act refers to many fundamental rights enshrined in the Charter, which act as guiding principles for the EU legislator. Moreover, due to EU law’s distinct attributes—namely, direct effect and the principle of primacy—EU law is highly effective in practice. This holds promise for fundamental rights protection, as illustrated by case C-362/14 Schrems, in which concerns over the protection of personal data led to the invalidation of the Safe Harbour framework for data transfers. In litigation, however, the fundamental rights instruments at the EU’s disposal, and especially the Charter of Fundamental Rights, have traditionally been considered rather limited in terms of horizontal direct effect, i.e., the capacity to be relied upon in the courts of the Member States in disputes against private parties [51,52]. Thus, in the CJEU’s case law, the Charter remains considerably underused, with only a few exceptions (e.g., case C-414/16 Egenberger).
In accordance with Article 52(1) of the EU Charter of Fundamental Rights, any limitation on the exercise of fundamental rights must not only comply with the principle of proportionality but must also respect the essence of those rights. This provision has contributed to the increasing prominence of the contested doctrine of the ‘essence’ of fundamental rights, which conceptualises each right as comprising an inviolable core and a peripheral component that may be subject to lawful limitations [53,54,55]. Given the absence of existing case law addressing the intersection of AI and fundamental rights, any consideration of how this balancing act might manifest remains speculative. However, positioning fundamental rights as overriding reasons justifying restrictions on free movement means framing them, in De Cecco’s words, merely as ‘trump’ cards [55]. Thus, it follows that—except for the right to privacy—every measure taken to protect fundamental rights will have to be balanced against the overarching objective of facilitating the deployment of AI systems as products and services on the EU market, with the principle of proportionality as the most likely standard of review. The burden of this balancing act will be placed on the courts, and particularly the CJEU, should it be called upon to interpret the provisions of the AI Act in the future.
While AI governance is uncharted territory in EU law, the balancing of fundamental rights with other EU priorities, such as economic expansion and competitiveness, has, over the years, been an issue in other areas of EU law, particularly EU labour law. In this vein, a parallel might be drawn to the longstanding tension between the free movement of services and fundamental rights, including worker protection and the right to collective bargaining, which has in the past manifested itself in an (in)famous line of CJEU case law known as the ‘Laval quartet’9. The central issue in the Laval case pertained to the cross-border provision of lower-cost labour among EU Member States. By relying on internal market EU legislation, foreign service providers were able to circumvent nationally binding collective agreements, thus undermining the fundamental right to collective bargaining. The conflict in Laval was therefore between, on the one hand, the free movement of services, one of the four core freedoms constituting the EU single market, and, on the other, the fundamental right to collective bargaining. While the CJEU in Laval confirmed that the Union had ‘not only an economic but also a social purpose’ (Laval, para 105), it framed the social purpose as secondary to the economic one and dismissed the trade union’s argument. Labour law scholars have argued that the Laval quartet significantly disrupted the European trade union landscape, which had previously regarded itself as insulated from EU market intervention [56]. It has also been contended in numerous commentaries on the Laval judgment that by positioning economic integration and social rights as fundamentally conflicting objectives, the rulings exposed a structural tension within the EU legal order, challenging the assumption that social protections could coexist harmoniously with market liberalisation [57,58,59].
In the context of the AI Act, it is foreseeable that the enforcement of fundamental rights will encounter challenges analogous to those faced by the trade union movement in Laval, given that the Act’s legal basis is primarily rooted in single market harmonisation. However, the inherent tension between market integration and fundamental rights is not static; it is influenced by political dynamics, and the EU has increasingly been characterised as a community of values [60,61]. This evolution is evident in the domain of labour law, where recent CJEU jurisprudence indicates a gradual departure from its historically market-centric stance [62]. Consequently, in other areas of EU law, including AI governance, the CJEU may exhibit greater sensitivity to fundamental rights when adjudicating conflicts between regulatory objectives and fundamental rights considerations.
To conclude, the architecture of EU law, including the issue of the Union’s competences and the principle of subsidiarity, precludes the AI Act from being explicitly framed as a fundamental rights-based measure. Even though concerns over these rights have been of vital importance to the EU legislator, in future litigation based on the AI Act, any limitation to AI based on fundamental rights concerns will have to be balanced proportionately against the other aims of the AI Act, i.e., the improvement of the single market and support for innovation. Therefore, the achievement of a fundamental rights-based ethical regulation of AI is conditional upon adopting a purposive interpretation of the AI Act. The next section will discuss possible avenues that would facilitate explicit references to the European values underpinning the EU’s regulation of AI.

5. European Values and the AI Act

The AI Act represents a pioneering regulatory initiative. As the first legislative framework globally to impose outright prohibitions on specific AI applications deemed incompatible with fundamental rights and EU values, it establishes a significant precedent. However, as argued in the previous section, the enactment of the AI Act alone does not guarantee a fundamental rights-based ethical regulation of AI in the EU. In response to increasing pressure from the tech industry, the EU will likely need to substantiate its specific regulatory measures with well-founded justifications, particularly in the context of litigation before the Court of Justice, even if these justifications will continue to be attacked in certain quarters. This section evaluates some interpretative frameworks that could be employed to strengthen the fundamental rights dimension of the EU AI Act.
Raz, in ‘On the Nature of Rights’, posits that rights function as ‘intermediate conclusions in arguments from ultimate values to duties’ [63], arguing that rights are indeed underpinned by moral considerations. However, because societal consensus on these underlying values may be contested, they are often not explicitly articulated within the right itself, as explained by De Cecco and Gardner [55,64]. This ambiguity enables rights to protect individual interests while circumventing potential value-based disagreements. In our articulation, rights are understood as a moral and legal language through which specific values pertaining to human dignity are advanced in society; although this account has its differences, it chimes with Raz’s and Gardner’s perspectives above. Moreover, the ambiguity highlighted by Gardner, which enables rights to protect individual interests while circumventing potential value-based disagreements, also aligns with our view that a core strength of human rights lies in their capacity to accommodate a plurality of moral frameworks, allowing diverse justifications to coexist under a unified legal construct. However, a purposive interpretation of the AI Act is of critical importance to achieving the goal of fundamental rights-based AI governance in the EU. Therefore, contrary to the tradition of separating rights from values, we argue that it will be necessary for the CJEU to link the fundamental rights protected by the Charter to the values that underpin them.
Indeed, the AI Act diverges somewhat from the traditional concept of separating rights from values, as it not only focuses on rights but also incorporates explicit justifications for these rights. This is consistent with our assertion that the AI Act, and EU law in general, is grounded in natural law approaches. Consequently, EU legislation is typically preceded by a preamble consisting of recitals laying out the justification behind the legislative measure at issue. Klimas and Vaiciukaite explain that while recitals in EU law are not themselves legally binding, the CJEU—which employs a purposive approach to interpreting legislation—frequently cites the preambles of various EU measures to substantiate its interpretation [65]. Accordingly, in Recital 7 to the AI Act, the EU institutions assert that the common rules governing high-risk AI are to be consistent with the EU Charter of Fundamental Rights and should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade (OJ C 23/1), as well as the above-mentioned ethics guidelines for trustworthy AI of the High-Level Expert Group (HLEG) on Artificial Intelligence. In the context of AI governance, a purposive interpretation of the AI Act might incorporate the seven ethical principles articulated in 2019 by the Commission’s High-Level Expert Group, as discussed by Larsson [66]: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental well-being, and (7) accountability. The AI Act addresses these in Recital 27, stressing that they should ‘serve as a basis for the drafting of codes of conduct’ under the Act.
In addition to the seven ethical principles of trustworthy AI, of critical importance to a purposive interpretation of the AI Act is Article 2 TEU, which enshrines respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights. The Preamble to the Treaty reiterates that the EU draws inspiration from Europe’s heritage, ‘from which have developed the universal values of the inviolable and inalienable rights of the human person, freedom, democracy, equality and the rule of law’. This wording, added by the 2007 Lisbon Treaty, echoes an earlier, unsuccessful attempt to ratify the 2004 Constitution for Europe [67]. However, the EU’s commitment to the protection of European values extends well beyond these more recent developments. A growing body of research utilising a historical approach to the study of EU law and integration reveals a deeper trajectory of embedding human rights discourse—and, by extension, European values—into the foundational structures of the European Communities, which later evolved into the EU. Within this context, Delledonne and Fabbrini contend that ‘the rise of an EU human rights jurisprudence should be seen as the result of a transnational development consisting of greater sensitivity towards human rights at all levels of government’ [68].
Although the explicit emphasis on values within the AI Act is limited, the EU legislator systematically references these values alongside the fundamental rights codified in the Charter, indicating their underlying significance in the regulatory framework. For example, with regard to prohibited uses of AI, Recital 15 of the Preamble states that AI can ‘be misused and provide novel and powerful tools for manipulative, exploitative and social control practices’, with the EU legislator going on to explain that such practices should be prohibited ‘because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights (…)’. Therein lies the EU’s justification for banning certain uses of AI. While the values provision now contained in Article 2 TEU has been present in the EU Treaty since the 1997 Amsterdam reform, recent years and the rule of law crisis have seen it gradually gain in importance. The values contained in Article 2 TEU have further been interpreted by the CJEU in case law related to the independence of the judiciary in Hungary, Poland and Portugal [69,70]. While in some quarters doubts over the normativity of these values and their justiciability persist [71], including in the context of AI as highlighted by Kusche [72], it is important to note that the reference to the common EU values in the AI Act opens up another potential pathway to advocate for ethical AI in addition to the protection of individual fundamental rights. In this vein, the above-mentioned European Declaration on Digital Rights and Principles for the Digital Decade, which has also been referenced in the Preamble to the AI Act, opens with the following statement:
The European Union (EU) is a ‘union of values’, as enshrined in Article 2 of the Treaty on European Union, founded on respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. Moreover, according to the Charter of Fundamental Rights of the European Union, the EU is founded on the indivisible, universal values of human dignity, freedom, equality and solidarity.
In the context of the AI Act, a pertinent example involves AI systems designed to influence the outcomes of elections or referenda, as outlined in Annex III of the AI Act. Although potential infringements might be associated with Articles 39 and 40 of the Charter of Fundamental Rights, the application of these provisions is limited. Firstly, these articles safeguard only an individual’s personal right to vote, which would be exceptionally challenging to establish as violated in cases where election results are manipulated through online content. Secondly, under the principle of subsidiarity, the Charter’s protection of voting rights is limited to the European Parliament and municipal elections. Consequently, contesting such AI systems would likely necessitate invoking the values enshrined in Article 2 TEU as an interpretative context, providing an additional set of normative expectations for the Court of Justice of the EU.

6. Conclusions

Prior to the enactment of the AI Act, AI governance had been dominated by industry-led initiatives favouring self-regulation, often criticised as ‘ethics washing’ for using ethical commitments to bypass formal legal constraints, and frequently promoting utilitarian frameworks in which the ends sought are efficiency, profit-maximisation or innovation. In this paper, we have argued that this can be avoided by adopting a human rights-based ethical framework for AI regulation structured around the values of human dignity, equality and freedom, thereby establishing a set of inalienable and interdependent rights.
Our analysis of the provisions of the AI Act concludes that the attempt to integrate fundamental rights appears to be only partially successful. While the Act acknowledges the significance of fundamental rights through requirements such as fundamental rights impact assessments for high-risk AI systems, it ultimately remains a regulatory instrument grounded in risk management rather than in a comprehensive rights-based approach. Thus, enforcement of fundamental rights in the context of AI governance will be largely subject to self-regulation by those involved in AI provision and/or deployment, with issues regarding the appropriate balancing of rights likely emerging before courts, including the CJEU.
Nevertheless, the inclusion of fundamental rights and their intersection with broader concepts, including ethical principles and the European values enshrined in Article 2 TEU, holds promise for embedding ethical considerations within the EU’s legal framework via a purposive interpretative framework. Since the AI Act functions within a legal system that has a commitment to the protection of fundamental rights and foundational European values, it has the potential to establish a new normative standard for AI governance that balances regulatory oversight with this commitment, thereby setting a precedent for future regulation in this domain.

Author Contributions

Conceptualisation, L.H. and M.L.-M.; writing—original draft, L.H. and M.L.-M.; writing—review and editing, L.H. and M.L.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Science Foundation Ireland grant number 13/RC/2106_P2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the preparation of this manuscript.

Notes

1
2
3
See P Davies, ‘“Potentially disastrous” for innovation: Tech sector reacts to the EU AI Act saying it goes too far’ (Euronews, 15 December 2023) available at https://www.euronews.com/next/2023/12/15/potentially-disastrous-for-innovation-tech-sector-says-eu-ai-act-goes-too-far (accessed 3 October 2024).
4
See especially Chapters 1–3, which discuss the role of practical reason in Aristotle (pp. 10–28), Hume (pp. 29–44) and Kant (pp. 45–60), respectively.
5
Vienna Declaration and Programme of Action (A/CONF.157/23) of 25 June 1993.
6
7
For example, in accordance with Article 2(11) of the AI Act, Member States may introduce provisions which are more favourable to workers in respect of the use of AI systems by employers, or to allow collective agreements which are more favourable to workers.
8
Treaty of Lisbon amending the Treaty on European Union and the Treaty establishing the European Community [2007] OJ C 306/1.
9
The quartet consists of cases C-341/05 Laval un Partneri Ltd. v Svenska Byggnadsarbetareförbundet and others [2007] ECR I-11767; C-438/05 International Transport Workers’ Federation and Finnish Seamen’s Union v Viking Line [2007] ECR I-10779; C-346/06 Dirk Rüffert v Land Niedersachsen [2008] ECR I-01989; and C-319/06 Commission v Grand Duchy of Luxembourg [2008] ECR I-04323.

References

  1. Smuha, N.A. The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence. Comput. Law Rev. Int. 2019, 20, 97–106. [Google Scholar] [CrossRef]
  2. Cath, C. Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20180080. [Google Scholar] [CrossRef] [PubMed]
  3. Scassa, T. Regulating AI in Canada: A Critical Look at the Proposed Artificial Intelligence and Data Act. Can. B Rev. 2023, 101, 1–30. [Google Scholar]
  4. Park, D.H.; Cho, E.; Lim, Y. A Tough Balancing Act—The Evolving AI Governance in Korea. East Asian Sci. Technol. Soc. Int. J. 2024, 18, 135–154. [Google Scholar] [CrossRef]
  5. Metcalf, J.; Moss, E. Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Soc. Res. Int. Q. 2019, 86, 449–476. [Google Scholar] [CrossRef]
  6. Mittelstadt, B. Principles Alone Cannot Guarantee Ethical AI. Nat. Mach. Intell. 2019, 1, 501–507. [Google Scholar] [CrossRef]
  7. Nemitz, P. Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20180089. [Google Scholar] [CrossRef]
  8. De Laat, P.B. Companies Committed to Responsible AI: From Principles towards Implementation and Regulation? Philos. Technol. 2021, 34, 1135–1193. [Google Scholar] [CrossRef] [PubMed]
  9. Yeung, K.; Howes, A.; Pogrebna, G. AI Governance by Human Rights–Centered Design, Deliberation, and Oversight. In The Oxford Handbook of Ethics of AI; Oxford University Press: Oxford, UK, 2020; pp. 77–106. [Google Scholar]
  10. Varkey, B. Principles of Clinical Ethics and Their Application to Practice. Med. Princ. Pract. 2021, 30, 17–28. [Google Scholar] [CrossRef] [PubMed]
  11. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. In Ethics, Governance, and Policies in Artificial Intelligence; Floridi, L., Ed.; Philosophical Studies Series; Springer International Publishing: Cham, Switzerland, 2021; Volume 144, pp. 19–39. ISBN 978-3-030-81906-4. [Google Scholar]
  12. Pavlidis, G. Unlocking the Black Box: Analysing the EU Artificial Intelligence Act’s Framework for Explainability in AI. Law Innov. Technol. 2024, 16, 293–308. [Google Scholar] [CrossRef]
  13. Jobin, A.; Ienca, M.; Vayena, E. Artificial Intelligence: The Global Landscape of Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  14. Blackburn, S. Ethics: A Very Short Introduction; Oxford University Press: New York, NY, USA, 2003; Volume 80. [Google Scholar]
  15. Audi, R. Practical Reasoning and Ethical Decision; Routledge: Oxfordshire, UK, 2006. [Google Scholar]
  16. Russell, D.C. Practical Intelligence and the Virtues; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
  17. Alston, P. Statement on Visit to the United Kingdom, by Professor Philip Alston, United Nations Special Rapporteur on Extreme Poverty and Human Rights; Office of the United Nations High Commissioner for Human Rights: Geneva, Switzerland, 2018. [Google Scholar]
  18. Fukuda-Parr, S.; Gibbons, E. Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines. Glob. Policy 2021, 12, 32–44. [Google Scholar] [CrossRef]
  19. Pizzi, M.; Romanoff, M.; Engelhardt, T. AI for Humanitarian Action: Human Rights and Ethics. Int. Rev. Red Cross 2020, 102, 145–180. [Google Scholar] [CrossRef]
  20. Wagner, B. Ethics as an Escape from Regulation. From “Ethics-Washing” to Ethics-Shopping? Amsterdam University Press: Amsterdam, The Netherlands, 2018. [Google Scholar]
  21. Bietti, E. From Ethics Washing to Ethics Bashing: A Moral Philosophy View on Tech Ethics. J. Soc. Comput. 2021, 2, 266–283. [Google Scholar] [CrossRef]
  22. Van Maanen, G. AI Ethics, Ethics Washing, and the Need to Politicize Data Ethics. Digit. Soc. 2022, 1, 9. [Google Scholar] [CrossRef]
  23. Stanton-Ife, J. The Limits of Law. 2022. Available online: https://plato.sydney.edu.au/entries/law-limits/ (accessed on 28 November 2024).
  24. Kaveny, C. A Culture of Engagement: Law, Religion, and Morality; Georgetown University Press: Washington, DC, USA, 2016. [Google Scholar]
  25. Baehr, A.R. Liberal Feminism. In Stanford Encyclopedia of Philosophy; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2013. [Google Scholar]
  26. Sartor, G. Artificial Intelligence and Human Rights: Between Law and Ethics. Maastricht J. Eur. Comp. Law 2020, 27, 705–719. [Google Scholar] [CrossRef]
  27. Hogan, L. Justifying Human Rights: Plural Foundations, Embedded Universalism. In Die Freiheit der Menschenrechte: Festschrift für Heiner Bielefeldt zum 65. Geburtstag; Wochenschau Wissenschaft: Schwalbach, Germany, 2023; p. 13. [Google Scholar]
  28. Sen, A. The Idea of Justice; Penguin Books: London, UK, 2009. [Google Scholar]
  29. Shue, H. Basic Rights: Subsistence, Affluence, and US Foreign Policy; Princeton University Press: Princeton, NJ, USA, 2020. [Google Scholar]
  30. Kirchschläger, P.G. Digital Transformation and Ethics; Nomos Verlagsgesellschaft mbH & Co. KG: Baden-Baden, Germany, 2021. [Google Scholar]
  31. Minkler, L.; Sweeney, S. On the Indivisibility and Interdependence of Basic Rights in Developing Countries. Hum. Rights Q. 2011, 33, 351–396. [Google Scholar] [CrossRef]
  32. Young, K.G. Rights and Obligations. In International Human Rights Law, 4th ed.; Oxford University Press: Oxford, UK, 2022; pp. 129–148. [Google Scholar]
  33. Koh, H.H.; Slye, R.C. Deliberative Democracy and Human Rights; Yale University Press: New Haven, CT, USA, 1999. [Google Scholar]
  34. Young, A. Dialogue, Deliberation and Human Rights; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  35. Ienca, M.; Andorno, R. Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sci. Soc. Policy 2017, 13, 5. [Google Scholar] [CrossRef]
  36. Fabbrini, F. Fundamental Rights in Europe; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  37. Stone, J.H.D. Agreeing to Disagree: The Primacy Debate Between the German Federal Constitutional Court and the European Court of Justice. Minn. J. Int. Law 2016, 25, 127. [Google Scholar]
  38. Robin-Olivier, S. The Evolution of Direct Effect in the EU: Stocktaking, Problems, Projections. Int. J. Const. Law 2014, 12, 165–188. [Google Scholar] [CrossRef]
  39. Eckes, C. The Autonomy of the EU Legal Order. Eur. World Law Rev. 2020, 4, 1–19. [Google Scholar] [CrossRef]
  40. De Gregorio, G.; Dunn, P. The European Risk-Based Approaches: Connecting Constitutional Dots in the Digital Age. Common Mark. Law Rev. 2022, 59, 473–500. [Google Scholar] [CrossRef]
  41. De Cooman, J. Humpty Dumpty and High-Risk AI Systems: The Ratione Materiae Dimension of the Proposal for an EU Artificial Intelligence Act. Mark. Compet. Law Rev. 2022, 6, 49–88. [Google Scholar]
  42. Schuett, J. Risk Management in the Artificial Intelligence Act. Eur. J. Risk Regul. 2023, 15, 367–385. [Google Scholar] [CrossRef]
  43. Burgess, J.P. What’s So European About the European Union?: Legitimacy Between Institution and Identity. Eur. J. Soc. Theory 2002, 5, 467–481. [Google Scholar] [CrossRef]
  44. Mason, H.L. The European Coal and Steel Community: Experiment in Supranationalism; Springer: Dordrecht, The Netherlands, 2013. [Google Scholar]
  45. Abbott, F.M. GATT and the European Community: A Formula for Peaceful Coexistence. Mich. J. Int. Law 1990, 12, 1. [Google Scholar]
  46. Abbott, F.M. Integration without Institutions: The NAFTA Mutation of the EC Model and the Future of the GATT Regime. Am. J. Comp. Law 1992, 40, 917–949. [Google Scholar] [CrossRef]
  47. De Búrca, G.; Scott, J. The EU and the WTO: Legal and Constitutional Issues; Bloomsbury Publishing: London, UK, 2002. [Google Scholar]
  48. Craig, P. The Lisbon Treaty: Law, Politics, and Treaty Reform; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  49. Muir, E. Fundamental Rights: An Unsettling EU Competence. Hum. Rights Rev. 2014, 15, 25–37. [Google Scholar] [CrossRef]
  50. De Búrca, G. The Drafting of the EU Charter of Fundamental Rights. Eur. Law Rev. 2001, 26, 126–138. [Google Scholar]
  51. Frantziou, E. The Horizontal Effect of the Charter of Fundamental Rights of the EU: Rediscovering the Reasons for Horizontality. Eur. Law J. 2015, 21, 657–679. [Google Scholar] [CrossRef]
  52. Frantziou, E. The Horizontal Effect of Fundamental Rights in the European Union: A Constitutional Analysis; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
  53. Dawson, M.; Lynskey, O.; Muir, E. What Is the Added Value of the Concept of the “Essence” of EU Fundamental Rights? Ger. Law J. 2019, 20, 763–778. [Google Scholar] [CrossRef]
  54. Tridimas, T.; Gentile, G. The Essence of Rights: An Unreliable Boundary? Ger. Law J. 2019, 20, 794–816. [Google Scholar] [CrossRef]
  55. De Cecco, F. The Trouble with Trumps: On How (and Why) Not to Define the Core of Fundamental Rights. Common Mark. Law Rev. 2023, 60, 1551–1578. [Google Scholar] [CrossRef]
  56. Syrpis, P.; Novitz, T. Economic and Social Rights in Conflict: Political and Judicial Approaches to Their Reconciliation. Eur. Law Rev. 2008, 33, 411. [Google Scholar]
  57. Barnard, C. Viking and Laval: An Introduction. Camb. Yearb. Eur. Leg. Stud. 2008, 10, 463–492. [Google Scholar] [CrossRef]
  58. Davies, A.C. One Step Forward, Two Steps Back? The Viking and Laval Cases in the ECJ. Ind. Law J. 2008, 37, 126–148. [Google Scholar] [CrossRef]
  59. Reich, N. Free Movement v. Social Rights in an Enlarged Union-the Laval and Viking Cases before the ECJ. Ger. Law J. 2008, 9, 125–161. [Google Scholar] [CrossRef]
  60. Oshri, O.; Sheafer, T.; Shenhav, S.R. A Community of Values: Democratic Identity Formation in the European Union. Eur. Union Politics 2016, 17, 114–137. [Google Scholar] [CrossRef]
  61. Akaliyski, P.; Welzel, C.; Hien, J. A Community of Shared Values? Dimensions and Dynamics of Cultural Integration in the European Union. J. Eur. Integr. 2022, 44, 569–590. [Google Scholar] [CrossRef]
  62. Lasek-Markey, M. No Turning Back from Social Europe: A New Interpretation of the Refurbished Posted Workers Directive in Hungary and Poland. Ind. Law J. 2022, 51, 194–218. [Google Scholar] [CrossRef]
  63. Raz, J. On the Nature of Rights. In Theories of Rights; Routledge: Oxfordshire, UK, 2017; pp. 39–59. [Google Scholar]
  64. Gardner, J. Simply in Virtue of Being Human: The Whos and Whys of Human Rights. J. Ethics Soc. Philos. 2006, 2, 1–22. [Google Scholar] [CrossRef]
  65. Klimas, T.; Vaiciukaite, J. The Law of Recitals in European Community Legislation. ILSA J. Int. Comp. Law 2008, 15, 61. [Google Scholar]
  66. Larsson, S. On the Governance of Artificial Intelligence through Ethics Guidelines. Asian J. Law Soc. 2020, 7, 437–451. [Google Scholar] [CrossRef]
  67. Pernice, I. The Treaty of Lisbon: Multilevel Constitutionalism in Action. Colum. J. Eur. Law 2008, 15, 349. [Google Scholar]
  68. Delledonne, G.; Fabbrini, F. The Founding Myth of European Human Rights Law: Revisiting the Role of National Courts in the Rise of EU Human Rights Jurisprudence. Eur. Law Rev. 2019, 44, 178–195. [Google Scholar]
  69. Spieker, L.D. Breathing Life into the Union’s Common Values: On the Judicial Application of Article 2 TEU in the EU Value Crisis. Ger. Law J. 2019, 20, 1182–1213. [Google Scholar] [CrossRef]
  70. Scheppele, K.L.; Kochenov, D.V.; Grabowska-Moroz, B. EU Values Are Law, after All: Enforcing EU Values through Systemic Infringement Actions by the European Commission and the Member States of the European Union. Yearb. Eur. Law 2020, 39, 3–121. [Google Scholar] [CrossRef]
  71. Spieker, L.D. EU Values Before the Court of Justice: Foundations, Potential, Risks; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  72. Kusche, I. Possible Harms of Artificial Intelligence and the EU AI Act: Fundamental Rights and Risk. J. Risk Res. 2024, 1–14. [Google Scholar] [CrossRef]