Article

A Quantitative Legal Support System for Transnational Autonomous Vehicle Design

1 Institute of Logic and Cognition, Department of Philosophy, Sun Yat-sen University, Guangzhou 510275, China
2 School of Law, Old College, University of Edinburgh, Edinburgh EH8 9YL, UK
3 Department of Philosophy, Xiamen University, Xiamen 361005, China
4 Information Systems Department, Institute of Data Science and Intelligent Decision Support, Beijing Jiaotong University, Beijing 100044, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Drones 2025, 9(4), 316; https://doi.org/10.3390/drones9040316
Submission received: 17 March 2025 / Revised: 10 April 2025 / Accepted: 18 April 2025 / Published: 20 April 2025
(This article belongs to the Special Issue Unmanned Traffic Management Systems)

Abstract:
One of the key expectations of AI product manufacturers for their products is the ability to scale to larger markets, especially across legal systems, with fewer prototypes and lower adaptation costs. This paper focuses on the increasingly dynamic legal compliance challenges faced by designers of AI products in achieving this goal. Based on non-monotonic reasoning, we design an automated reasoning tool to help them better understand the legal implications of their designs in a transnational context and, ultimately, adjust the design of AI products more flexibly. This tool supports the quantitative representation of the strength of legal significance to help designers better understand the reasons for their decisions from their own perspective. To illustrate this functionality, a case study on traffic regulations across the UK, France, and Japan demonstrates the system’s ability to resolve legal conflicts—such as driving-side mandates and speed radar detector prohibitions—through quantitative evaluation.

1. Introduction

The development of autonomous vehicles is rapidly changing the transportation industry. In addition to the expectations of improved road safety and efficient driving, the industry and society are paying close attention to the stable and continuous service provided by autonomous vehicles over a wide area, especially across countries. The economic benefits of applying autonomous vehicles to the global logistics industry are considered to be huge and revolutionary [1]. Meanwhile, manufacturers in transport industries, such as Waymo, Uber, Tesla, and Cruise, have researched and deployed autonomous driving applications across borders or listed them as key directions in their corporate reports. These applications include long-distance transport that continuously spans multiple countries and global taxis deployed in multiple regions (Waymo’s self-driving truck: https://waymo.com/blog/2022/02/enabling-autonomous-freight-movement, accessed on 30 December 2024; Uber’s global taxi and truck: https://www.uber.com/gb/en/autonomous/, accessed on 30 December 2024; Tesla’s electric truck: https://www.tesla.com/semi, accessed on 30 December 2024; Cruise’s experiment in multiple places: https://www.getcruise.com/rides/, accessed on 30 December 2024). From a cost perspective, the expectation of automated vehicle cross-border applications is reasonable. There are currently two major unavoidable costs for auto manufacturers and the transportation industry: (1) Delays and costs associated with constantly changing and verifying vehicles and drivers during cross-border transportation [2]. (2) In order to meet the regulatory requirements of different markets, manufacturers need to design and manufacture multiple different prototypes and production lines, as well as train and manage the corresponding staff. On the one hand, autonomous vehicles can drive for a long time with stable driving performance. 
On the other hand, some adjustments to autonomous driving, such as driving modes and whether certain hardware is activated, can be made more efficiently without relying on a complete redesign and the replacement of drivers.
To achieve this goal, the manufacturer’s core pursuit is to use the autonomous vehicle continuously or simultaneously in multiple regions with fewer design adjustments, that is, fewer prototypes and production lines. However, the unique aspect of autonomous driving is that it disrupts the division of concepts such as tools, designers, and users and changes the requirements imposed on it by law. According to some existing regulations, such as the EU AI Act [3] and the Law Societies of England and Wales proposal, manufacturers must provide engineering solutions to demonstrate their products’ ability to comply with local regulations. This means that designers need to provide this solution in a logical and explainable way. At the same time, this also makes autonomous driving face more complex ethical questions when convincing the wider market and society, that is, the user community [4]. However, the problem is that, whether it is legal requirements or ethical concepts, the requirements of different countries and people are divided rather than unified [4,5]. This forces manufacturers to consider how to meet these diverse requirements, especially dynamic ones [6,7], using a more efficient design. Additionally, in combination with their own design or business needs, they will make trade-offs among many design possibilities. If they fail to do so, they may need to revert to the traditional approach of arranging production and management for each distinct region by completely redesigning the vehicle. This would forfeit the design-efficiency advantage of autonomous driving.
Against this background, this study represents an attempt to design an automatic reasoning tool based on a quantitative argumentation system to assist designers in obtaining legal tests and suggestions for designs more flexibly and efficiently. In existing research, there are two popular approaches to intelligent legal reasoning: (1) Conceptualize all legal knowledge to be evaluated from a judge’s perspective [8,9], i.e., create an electronic judge. (2) Enable the AI product itself to have sufficient legal knowledge (using methods such as machine learning) to make timely and correct decision reasoning [10], i.e., create a perfect citizen. Both methods actually attempt to conduct complete legal reasoning and decision making using existing technical capabilities and solve problems from the two most direct perspectives of the judge’s ex-post evaluation and the behavior of the parties, that is, the vehicle. However, both methods have encountered problems in current practice. The former ex-post perspective is unsuitable for guiding design, and the technical difficulty of the electronic judge idea is also evidenced by the fact that expert systems have never been effectively applied in industry. The latter’s explainability weakness makes it difficult to meet the many legal and ethical requirements for legal intelligence technology [11]. Therefore, we chose to improve this problem from the perspective of designers and assist them in their legal compliance tasks by providing smarter legal information during the design process. We have also positioned the legal reasoning system as a neutral, assistive tool based on pre-input information. In other words, the core function of this system is to provide reasoning capabilities based on user needs and legal requirements so that designers can obtain instant help during fast-paced and high-complexity design adjustments. It does not independently make legal assertions, nor does it replace any legal professional’s work. 
It is, by nature, a supplement to the workflow of automated vehicle design adjustments.
To address the complexities of practical/normative reasoning—exemplified by legal reasoning—various tools in non-monotonic reasoning have been developed (e.g., [12,13,14,15]). Among these, computational argumentation has emerged as a promising approach that bridges the gap between human and machine reasoning [16], also known as formal argumentation. It offers two key advantages:
  • It enables the computational implementation of argument evaluation, allowing for systematic and structured reasoning;
  • It closely aligns with human discourse, effectively modeling natural language arguments [17].
Additionally, formal argumentation enhances human interpretability through transparent reasoning mechanisms, making it well suited for explainable AI. It also provides a foundation for modeling legal–theoretical explanations [18,19]. Moreover, formal argumentation is particularly effective in handling reasoning within dynamic contexts while flexibly maintaining computational efficiency [20].
By building on formal argumentation frameworks [21], we have made some progress in addressing the core concerns of this paper by developing a legal support system for autonomous vehicles (LeSAC) [22,23]. These works provide functionalities for the compliance testing of design solutions in inconsistent and uncertain legal contexts and preliminary attempts to offer design recommendations for cross-border adjustments to autonomous vehicles. This study extends these efforts in two key directions. First, while the reasoning system in previous works only supported converting a design solution from one country’s legal framework to another, this study enhances the system to dynamically adapt to the legal requirements of multiple countries simultaneously. Second, earlier works provided only qualitative evaluations of arguments constructed from legal norms—that is, they offered answers regarding whether an argument was justifiable, but lacked the ability to perform more nuanced evaluations. From the perspective of the designer, this limitation hindered a deeper understanding and practical application of the reasoning results, reducing their reference value. Moreover, this approach may not fully align with the actual functioning of legal systems. To address these issues, this paper introduces a quantified method based on argumentation theory for measuring the strength and conflict level of legal arguments.
The quantitative argumentation system proposed in this paper derives initial weights based on the stringency of legal clauses, and by considering conflicts among relevant legal clauses across different countries, we construct an abstract argumentation framework for computational argument evaluation [14]. This framework facilitates the computation of acceptance and rejection levels for arguments derived from legal clauses, providing quantitative, gradual outputs across two dimensions under different semantics. Users can filter conclusions based on their desired threshold levels. As a result, the system’s reasoning results can visually express the degree of acceptability and rejectability of a specific design in relation to particular norms. Additionally, designers can trace the process of generating these values to understand how multiple legal rules influence decisions. In this paper, we demonstrate this function by categorizing strength according to keywords related to the degree of legal enforcement, reflecting the perspective of designers’ concerns.
In summary, this paper contributes in the following aspects:
  • Proposing a quantitative argumentation system that resolves cross-border legal conflicts in autonomous vehicle design through the structured modeling of legal clause stringency and computational evaluation, enabling dynamic adaptation to multi-country regulatory requirements.
  • Extending prior legal compliance frameworks by introducing a quantified method to measure argument strength and conflict levels, addressing limitations of qualitative evaluations while enhancing designers’ capacity to interpret nuanced legal trade-offs.
  • Providing actionable compliance pathways through threshold-based design recommendations (e.g., cost minimization or market coverage optimization), supported by transparent reasoning processes that trace decision influences to specific legal rules.
The remainder of this paper is organized as follows: Section 2 provides a review of relevant studies in the field. Section 3 delineates the formalization of traffic regulations and proposes methodologies for assessing their basic weights. Section 4 introduces an argumentation-based, non-monotonic reasoning framework for quantitatively evaluating arguments derived from norms, along with a case study illustrating the evaluation methodology. Section 5 provides an analysis and discussion of our methodology, grounded in the experimental results presented in Section 4. Finally, Section 6 concludes the paper by summarizing the key contributions and outlining directions for future research.

2. Related Work

Our research focuses on transforming fragmented global regulations into a computable logical framework, offering potential benefits to AV manufacturers in the following areas:
  • Reducing compliance costs: dynamically adapting to multi-national regulations;
  • Accelerating market expansion: facilitating rapid vehicle design adjustments through computational tools;
  • Mitigating legal risks: prioritizing compliance with high-weighted regulations through quantitative analysis.
Accordingly, we discuss the related literature from four key perspectives:
  • Global landscape of AV regulations;
  • Comparative studies of regulatory frameworks;
  • Cross-border legal challenges;
  • Technical approaches to legal reasoning and conflict resolution.

2.1. Global Landscape of Autonomous Vehicle Regulations

The rapid advancement of AV technology has prompted the development of legal and regulatory frameworks worldwide. Alawadhi et al. [24] conducted a systematic literature review to analyze the importance of autonomous vehicle liability, identifying key challenges across jurisdictions and highlighting the need for comprehensive insurance and liability frameworks. Ilková and Ilka [25] provided an early overview of the legal aspects of AVs, examining fundamental questions of liability, insurance requirements, and the adaptation of existing transportation laws to accommodate this emerging technology. Bonnefon et al. [26] presented comprehensive recommendations from the European Commission on connected and automated vehicles, addressing critical areas including road safety, privacy, fairness, explainability, and stakeholder responsibility distribution.
The ethical dimensions of AV deployment have gained significant attention in the literature. Bonnefon et al. [5] introduced the ‘trolley problem’ into AV ethics, empirically demonstrating (through surveys) that while people generally approve of utilitarian AVs that minimize overall casualties, they prefer not to use such vehicles themselves, revealing a social dilemma. Himmelreich [4] argued that focusing exclusively on dramatic ethical dilemmas such as the trolley problem overlooks more common ethical challenges in everyday driving situations; they proposed that designers should prioritize addressing these mundane but frequent ethical decisions. Martinho et al. [27] analyzed how the AV industry addresses ethical issues, finding significant differences between theoretical academic discussions and practical industry approaches, with companies focusing more on safety and technical reliability than philosophical, ethical dilemmas. These three papers go beyond a purely technical or legal analysis of autonomous vehicles by incorporating a more comprehensive ethical perspective. Together, they establish a framework that highlights the multiple challenges facing autonomous driving technology, including the following: (1) The tension between individual preferences and collective interest—revealing that purely technical solutions cannot fully address the social coordination problems inherent in autonomous vehicle deployment. (2) Systemic ethical challenges in everyday driving scenarios—demonstrating that these challenges are more prevalent and impactful than extreme, high-stakes dilemmas often discussed in ethical debates. (3) The gap between academic ethical discussions and industry priorities—emphasizing the need for more practical ethical frameworks that align with real-world implementation and decision making in the autonomous vehicle industry.
Although it is not the current focus of this paper, our work builds upon a previously proposed legal support system for AVs (referred to as LeSAC, cf. [22]). This system supports the prioritization of norms based on the significance of ethical and legal principles. It allows for establishing preferences between arguments in conflicting contexts, enabling one argument to defeat another. This facilitates conflict resolution and outputs arguments and designs aligned with higher-priority ethical or legal principles. Therefore, when integrated with several past works [22,23,28], the present study is expected to simultaneously address reasoning related to prioritizing ethical and legal principles while providing explanations.

2.2. Comparative Studies of Regulatory Frameworks

Despite extensive research on individual countries’ legal frameworks for AVs, comparative cross-national studies remain relatively scarce. Taeihagh and Lim [29] highlighted this research gap while examining emerging governance responses to AVs across multiple countries, identifying varying approaches to safety, liability, privacy, cybersecurity, and industry risk management. Ki et al. [30] performed a targeted comparative analysis of AV policy evolution in Korea, Japan, and France, revealing substantial differences in official measures, policy actions, and legislative approaches that reflect each country’s unique technological priorities and governance structures.
Eastman et al. [31] provided comprehensive comparative analyses of regulatory frameworks for AVs, detailing legal approaches in Australia, China, France, Germany, Japan, and the United Kingdom. Their research uncovered the varying approaches to market access conditions, data privacy requirements, and insurance liability frameworks across these jurisdictions, identifying both convergences and divergences in regulatory strategies. Costantini et al. [32] specifically examined AVs through the lens of data protection regulations, comparing how different regions implement GDPR-like protections for the substantial data generated by AVs and highlighting the tension between innovation and privacy protection.

2.3. Cross-Border Legal Challenges

Research on the legal issues that AVs encounter while crossing borders remains particularly limited despite its growing importance in an interconnected world. Cyras et al. [33] identified complex challenges in applying private international law to cross-border traffic accidents involving AVs with V2X technology, noting the difficulties victims face in determining appropriate legal frameworks across different jurisdictions, vehicle registration countries, and accident locations. Dhabu [34] specifically examined the legal compliance challenges faced in cross-border commercial applications of AI products, highlighting how manufacturers need to develop capabilities to dynamically meet varying legal requirements across jurisdictions, contrasting with traditional, more stable compliance environments.
Several scholars have explored potential solutions to these international regulatory challenges. Cihon [35] analyzed how international standards could enable global coordination in AI research and development, proposing standardization as a pathway to harmonizing cross-border AI governance, including considerations for AVs. Daly et al. [36] offered global perspectives on AI governance and ethics, examining how different cultural and regional regulatory approaches might be reconciled to create more consistent international frameworks. Alic [37] conducted a comparative analysis of data protection and cybersecurity regulations affecting AI systems in the European Union, the United States, and China, identifying both convergences and fundamental differences in regulatory philosophies that complicate cross-border operations.
Research in the two aforementioned directions provides both a contextual background and a theoretical foundation for the application-driven concerns addressed in this paper.

2.4. Technical Approaches to Legal Reasoning and Conflict Resolution

Our work builds upon logic-based argumentation methods in non-monotonic reasoning [21,38,39,40], particularly the weighted argumentation approach proposed by Wang and Shen [41]. While similar approaches have been explored in the literature (e.g., [42,43,44]), this approach introduces rejectability degrees alongside traditional acceptability semantics, providing a more comprehensive framework for evaluating conflicting arguments. Our previous work [23] addressed conflicts between AV design plans and the traffic regulations of target countries by prioritizing compliance with the regulatory frameworks of destination countries. This approach generates coherent design adjustments to ensure compliance with local laws, offering a practical solution for manufacturers deploying AVs across borders. However, the challenge of quantifying regulatory incompatibilities remains unresolved. In the current paper, we focus on the systematic comparison of arguments constructed using the conflicting regulations of different countries. This evaluation is based on a quantified structured argumentation framework, which computes both the acceptability and rejectability of arguments derived from these regulations. Consequently, we aim to provide a valuable reference for AV designers navigating cross-border legal complexities. For instance, designers can prioritize adjustments for conflicts with a higher degree of mutual exclusivity using the method outlined in [23]. Conversely, conflicts with a lower degree of mutual exclusivity may be addressed through gradual adjustments or by ensuring that AV passengers are adequately informed of potential legal risks. This nuanced approach enables tailored and practical responses to regulatory conflicts, potentially enhancing both compliance and user experience in cross-border AV operations.

3. Legal Strength and Formalization

3.1. Legal Strength Analysis

As stated in the introduction, one of the goals of this paper is to provide a description of the different degrees of conflicts between design and law for the designers of autonomous vehicles. In our reasoning system, this function can be achieved using quantitative weights and, ultimately, determining the acceptability and rejectability of an argument. Each particular design associated with a particular legal rule has an initial weight representing its importance or mandatory strength in the legal system. The higher the mandatory strength, the higher its initial weight, and the more influential it is on other design choices. For manufacturers, this can be interpreted to mean that the higher the initial weight, the greater the strength of the resulting conflicts, and the more it needs to be taken into account and adapted. This is in line with the reality that legal rules with a higher degree of compulsion tend to be more valued by the country in which they are located and have more immediate and serious legal consequences. They tend to allow less room for individual preferences or negotiated compromises; this can be an important area for manufacturers to avoid legal penalties. Through an investigation of regulations on vehicle design and driving behavior in a number of countries, we have categorized the initial weighting into five tiers based on the strength of the mandatory force and how the legal consequences are triggered, as shown below:
Mandatory rules [1, 1]: Mandatory rules refer to those behaviors that are demanded or prohibited by the law with the utmost enforcement power, i.e., those duties that the law considers inescapable for citizens when conditions permit. Rules in this category usually use the modal verbs must, must not, shall, shall not, etc., to express the strength of the binding force. Citizens found in violation of such rules are often stopped and punished immediately. For example, under British law, vehicles ‘must’ drive along the left-hand side of the road. This means that as soon as a police officer spots a motorist driving along the right-hand side of the road, the officer will immediately stop the behavior and impose the corresponding penalty. Manufacturers of autonomous vehicles need to do their best to remain compliant with such rules, as breaching them is too costly for the proper operation of autonomous vehicles. A design that violates these rules can even lead to vehicles being barred from deployment in the corresponding country.
Requisite rules (0.5, 1): This category refers to legal declarations of behaviors citizens should perform or avoid in a given situation. The degree of enforcement is usually expressed by modal verbs such as should, should not, and so on. Although this is also a category of rules with a high degree of enforcement, accompanied by corresponding penalties, it still differs from the first category. Rules in this category usually do not treat the act itself as the basis for punishment but rather apply a flexible judgment based on the consequences caused. For example, the laws of most countries state that drivers should maintain a safe following distance. However, there are usually no police officers dedicated to monitoring whether this is being fulfilled. Only in the case of an accident, such as a rear-end collision, do the police decide whether the accident was caused by following the car too closely and whether to impose a penalty. For manufacturers of autonomous vehicles, the risks of these types of rules also need to be avoided, but there is some room for balance. For example, in traffic jams or slow-speed situations, autonomous vehicles can be exempted from strictly maintaining a long following distance. How this is designed depends largely on the criteria for triggering legal penalties and the designer’s preferences. Therefore, this type of rule is given a slightly lower weight than the maximum weight.
Suggestive rules (0, 0.5]: Suggestive rules refer to behaviors that are explicitly recommended by the law to citizens but do not contain the semantics of an obligation. They are often expressed in terms of suggestions, recommendations, and so on. Violations of such rules do not usually carry punitive consequences. However, such rules have a high degree of necessity in terms of normal and safe participation in traffic. Failure to observe such rules does not in itself lead to legal consequences but may lead to other accidents. For example, the laws of some countries recommend switching on special fog lights in low visibility. This rule is merely advisory in tone, but failure to switch on fog lights does pose a significant safety risk. Therefore, if an accident is caused by not switching on fog lights, it may also have adverse effects, such as insurance issues and public opinion problems. Consequently, the designers of automated vehicles need to consider the balance between driving efficiency and the possible consequences of such rules. For example, assuming that an autonomous vehicle has a more accurate way of monitoring road conditions in low-visibility climates and can do so independently of visibility, it may choose to store more energy instead of switching on its fog lights. This type of rule lies near the tipping point of legal enforcement, i.e., it is not legally enforceable but has a strong reference value. Therefore, it is assigned weights in the range (0, 0.5].
Permissive rules [0, 0.5): A permissive rule is one in which the law explicitly states in the text that the behavior is permissible. They are not recommended or mandatory but generally have some practical value. Failure to comply with such rules will not result in legal penalties and is unlikely to cause an accident but may result in reduced efficiency or affect the experience of other drivers. For example, traffic regulations in China allow drivers to turn right when there is only a straight-ahead indicator and when the light is red. Not turning right in this situation does not raise any serious issues but may cause inefficiencies or block the traffic behind. For designers, this type of rule is worth considering but depends more on their own needs, so it holds a lower weight.
Non-explicit rules [0, 0]: This category of rules refers to behaviors for which the law does not specifically express approval or disapproval or to which the principle ‘what is not prohibited by law may be done’ applies. As long as it does not conflict with any existing rule, the designer can make decisions based solely on personal preference. For example, the law does not dictate what color the seats of a vehicle should be, so the designer can decide this for themselves in the design of an autonomous vehicle. Such a rule has a weight of zero, i.e., the regulation need not be considered at all.
Table 1 shows the initial weights at each level and their indicator words. Although these terms do appear directly in most legal texts, the indicators require only that the semantics of the terms are expressed, not that the terms themselves appear verbatim.
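The five-tier scheme above can be sketched as a simple keyword classifier. This is an illustrative sketch only: the tier names and indicator keywords follow Section 3.1, but the keyword-matching strategy and the concrete representative weight chosen within each range are our own assumptions, not the paper’s implementation.

```python
# Illustrative sketch of the five-tier weighting scheme (Section 3.1).
# Matching is naive substring search; a real system would use proper
# keyword extraction over the legal text. Representative weights within
# each tier's range are assumptions for demonstration.

TIERS = [
    ("mandatory",  ["must", "must not", "shall", "shall not"], 1.0),  # [1, 1]
    ("requisite",  ["should", "should not"],                   0.8),  # (0.5, 1)
    ("suggestive", ["recommended", "suggested", "advised"],    0.5),  # (0, 0.5]
    ("permissive", ["permitted", "allowed", "may"],            0.3),  # [0, 0.5)
]

def classify(provision_text: str) -> tuple:
    """Return (tier, initial weight) for a simplified legal provision.

    Falls back to the non-explicit tier (weight 0) when no indicator
    keyword is found, mirroring 'what is not prohibited may be done'.
    """
    text = provision_text.lower()
    for tier, keywords, weight in TIERS:
        if any(kw in text for kw in keywords):
            return tier, weight
    return "non-explicit", 0.0

print(classify("Vehicles must drive on the left side of the road"))
# ('mandatory', 1.0)
```

Applied to the provisions in Section 3.2, this sketch reproduces the assigned weights, e.g., the UK radar-detector rule (‘It is permitted to install…’) falls in the permissive tier with weight 0.3.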

3.2. Case of Traffic Rules

The following demonstrates relevant provisions derived from the traffic regulations of the United Kingdom, France, and Japan. These provisions are simplified for illustration and assigned respective weights consistent with the summary in Table 1.
In the United Kingdom:
  • Driving Side: Vehicles must drive on the left side of the road (1);
  • Speed Radar Detectors: It is permitted to install speed radar detectors (0.3);
  • Highway Night Driving: High beam lights should not be used unnecessarily on highways at night (0.8);
  • Speed Limit: The speed on urban roads must not exceed 48 km/h (1);
  • Reflective Gear: It is recommended to wear reflective gear in case of an emergency stop (0.5).
In France:
  • Driving Side: Vehicles must drive on the right side of the road (1);
  • Speed Radar Detectors: The use of speed radar detectors is prohibited (1);
  • Highway Night Driving: Drivers can decide whether to use high beam lights on highways at night (0);
  • Speed Limit: The speed on urban roads must not exceed 50 km/h (1);
  • Reflective Gear: Reflective gear must be worn in case of an emergency stop (1).
In Japan:
  • Driving Side: Vehicles must drive on the left side of the road (1);
  • Speed Radar Detectors: It is permitted to install speed radar detectors (0.3);
  • Highway Night Driving: High beam lights should be used on highways at night (0.8);
  • Speed Limit: The speed on urban roads must not exceed 40 km/h (1);
  • Reflective Gear: Reflective gear must be worn in case of an emergency stop (1).
To illustrate the transformation process from legal text to system-processable parameters, we provide a structured framework depicted in Figure 1. This workflow begins with raw legal provisions (e.g., traffic regulations from the UK, France, and Japan), where keyword extraction identifies critical terms such as ‘must’, ‘should not’, or ‘recommended’. These terms are then classified into predefined legal strength categories (e.g., Mandatory, Requisite), aligning with the hierarchical weight ranges defined in Section 3.1. Subsequently, weight assignment assigns numerical values (e.g., 1 for Mandatory rules) to reflect enforceability levels. Finally, the classified rules are formalized into logical expressions (e.g., DriveSide ⇒ DriveLeft) compatible with the system’s reasoning engine. This structured approach ensures transparency and reproducibility in handling cross-jurisdictional conflicts, enabling designers to trace how legal norms influence quantitative evaluations. The color-coded stages in Figure 1 further clarify each step’s role, emphasizing the systematic transition from textual ambiguity to computational precision.
Based on this transformation process, the traffic regulations of the three countries mentioned above can be formalized as shown in Table 2.
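As a concrete illustration of the keyword-to-weight step in this workflow, the following Python sketch classifies a provision and assigns a weight. The function name, category labels, and keyword table are our assumptions for demonstration, not the paper's implementation; the weights mirror the values listed for the three countries above.

```python
# Illustrative sketch (assumed names and categories, not the authors' code):
# map deontic indicator words to strength categories and numeric weights,
# following the keyword-extraction / classification / weight-assignment
# stages of the Figure 1 workflow.
KEYWORD_CATEGORIES = {
    "must": ("Mandatory", 1.0),
    "is prohibited": ("Mandatory", 1.0),
    "should": ("Requisite", 0.8),
    "is recommended": ("Advisory", 0.5),
    "is permitted": ("Permissive", 0.3),
    "can decide": ("Discretionary", 0.0),
}

def classify_provision(text: str):
    """Return (category, weight) for the first matching indicator keyword."""
    lowered = text.lower()
    for keyword, (category, weight) in KEYWORD_CATEGORIES.items():
        if keyword in lowered:
            return category, weight
    return "Unclassified", None

cat, w = classify_provision("Reflective gear must be worn in case of an emergency stop")
print(cat, w)  # Mandatory 1.0
```

A weight so obtained then accompanies the formalized rule (e.g., a Mandatory provision carries weight 1), which is how the values in Table 2 are produced.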

4. Results

4.1. Legally Grounded Quantitative Reasoning Framework

In this subsection, we define the reasoning tool based on formal argumentation [21,41].
To avoid confusion, we first clarify the concepts of norms and regulations. This research focuses on how legal information guides the driving behavior of autonomous vehicles and the design behavior of designers, so, from a logical point of view, it falls within the scope of normative reasoning. In legal terms, the cases used in this paper focus on the rule level, that is, on formalizing rules of behavior at the implementation level. However, the logical expressions and reasoning method of the system also support reasoning about legal norms at a more abstract level (cf. [45]). Achieving this requires extracting legal norms into corresponding antecedent–consequent behavioral forms as needed and inputting them into the system as the basis for reasoning. It also requires a multi-level analysis of the relationships among legal normative theory, legal regulations, and specific designs, as well as more complex logical connections to support the correct identification of conflicts. Since the focus of this paper is on demonstrating the reasoning function of the system, this part of the work has not yet been implemented; it will, however, be a necessary direction for future research.
Based on the approach provided in Section 3, we can obtain a set of formalized traffic regulations that can be structured as a normative theory with weights (QuT).
Definition 1 
(Normative theory). A normative theory with weights is a triple QuT = ⟨L, N, W⟩, where the following applies:
  • L is a formal language closed under negation (¬);
  • N is a set of legal norms;
  • W = {w₁, …, wₙ} is a set of weights for norms, where each wᵢ (1 ≤ i ≤ n) is a function from N to [0, 1].
Our evaluation mechanism under conflict contexts is primarily implemented based on the Abstract Argumentation Frameworks (AFs) initially proposed by Dung in his highly regarded paper [14]. The set of arguments and the conflict relations between the arguments are fundamental elements of an AF.
With respect to the structure of an argument, we will employ the notation shown in Table 3 in the following text.
Based on the set of norms, arguments representing corresponding designs can be constructed in a recursive way, defined as follows:
Definition 2 
(Argument). Let QuT = ⟨L, N, W⟩ be a normative theory. An argument, A, has the following form: A₁, …, Aₙ ⇒ ψ, where A₁, …, Aₙ (with n ≥ 0) are arguments, Conc(A) = ψ, and there exists a norm Conc(A₁), …, Conc(Aₙ) ⇒ ψ. Then,
  • Sub(A) = Sub(A₁) ∪ … ∪ Sub(Aₙ) ∪ {A};
  • ProperSub(A) = Sub(A) \ {A};
  • Context(A) = {Conc(Aᵢ) | Aᵢ ∈ ProperSub(A)};
  • Norms(A) = Norms(A₁) ∪ … ∪ Norms(Aₙ) ∪ {Conc(A₁), …, Conc(Aₙ) ⇒ ψ};
  • TopNorm(A) = Conc(A₁), …, Conc(Aₙ) ⇒ ψ.
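The recursive construction of Definition 2 can be sketched in Python as follows. The encoding (norms as triples of antecedents, consequent, and weight; facts as premise arguments with zero antecedents) is our illustrative assumption, not the authors' implementation; note that the weight recorded for each derived argument is that of its top norm.

```python
# Illustrative sketch of Definition 2: build arguments bottom-up from
# given facts, applying norms until no new conclusion can be derived.
def build_arguments(facts, norms):
    """facts: literals taken as given (arguments with n = 0 antecedents).
    norms: list of (antecedents: tuple, consequent: str, weight: float).
    For a derived argument, 'norm' is its TopNorm and 'weight' the norm's
    weight, which also serves as the argument's initial weight."""
    args = [{"conc": f, "norm": None, "weight": None} for f in facts]
    concs = set(facts)
    changed = True
    while changed:
        changed = False
        for ante, cons, w in norms:
            if set(ante) <= concs and cons not in concs:
                args.append({"conc": cons, "norm": (ante, cons), "weight": w})
                concs.add(cons)
                changed = True
    return args

# UK rule from Table 2 (weight 1): DriveSide => DriveLeft
uk_args = build_arguments(facts=["DriveSide"],
                          norms=[(("DriveSide",), "DriveLeft", 1.0)])
print([a["conc"] for a in uk_args])  # ['DriveSide', 'DriveLeft']
```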
According to Section 3, it can be observed that traffic regulations in different countries may offer different provisions for the same circumstance, leading to conflicts between these regulations. These conflicts can manifest as the attack relation among arguments constructed based on the involved norms, that is, an argument, A, attacks another argument, B, if and only if their conclusions are incompatible while the contexts are identical. Formally, we present the following definitions.
Definition 3 
(Incompatible). Two literals, φ, ψ ∈ L, are incompatible if and only if at least one of the following holds:
φ ∈ ψ̄ or ψ ∈ φ̄,
where ψ̄ denotes the complement or negation set of ψ, i.e., the set of literals that contradict ψ.
Basically, we have ¬φ ∈ φ̄ and, symmetrically, φ belongs to the complement set of ¬φ. Another example is that DriveLeft belongs to the complement set of DriveRight, and vice versa (cf. Table 2).
Definition 4 
(Attack). Let A and B be arguments constructed based on a normative theory. The argument A attacks B if and only if Conc(A) belongs to the complement set of Conc(B) and Context(A) = Context(B).
An abstract argumentation framework consists of a set of arguments and the attack relation among these arguments. Let A denote the set of all arguments constructed based on a normative theory, and let → denote the set of attacks. We present the following definition.
Definition 5 
(AF). An abstract argumentation framework (AF) is a tuple, ⟨A, →⟩, where A is a set of arguments based on a normative theory, QuT = ⟨L, N, W⟩, and → ⊆ A × A is a set of attacks.
For any arguments A, B ∈ A, we say that A attacks B if and only if (A, B) ∈ →, which can also be denoted as A → B.
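Definitions 3–5 can be sketched together as follows. The incompatibility pairs mirror the radar-detector scenario from Table 2, but the function names, the argument encoding, and the literal `NotInstall` (standing in for ¬Install) are our illustrative assumptions:

```python
# Sketch of Definitions 3-5 (illustrative encoding, not the paper's code):
# an attack holds when conclusions are incompatible under identical contexts.
INCOMPATIBLE = {
    ("DriveLeft", "DriveRight"), ("DriveRight", "DriveLeft"),
    ("Install", "NotInstall"), ("NotInstall", "Install"),
}

def attacks(a, b):
    """Definition 4: Conc(a) contradicts Conc(b) and contexts coincide."""
    return ((a["conc"], b["conc"]) in INCOMPATIBLE
            and a["context"] == b["context"])

def build_af(arguments):
    """Definition 5: return the AF as (arguments, attack relation)."""
    rel = [(a["name"], b["name"])
           for a in arguments for b in arguments if attacks(a, b)]
    return arguments, rel

# Radar-detector scenario: B2 (Install) vs. F2 (NotInstall), same context.
B2 = {"name": "B2", "conc": "Install", "context": frozenset({"SpeedRadarDetector"})}
F2 = {"name": "F2", "conc": "NotInstall", "context": frozenset({"SpeedRadarDetector"})}
_, rel = build_af([B2, F2])
print(sorted(rel))  # [('B2', 'F2'), ('F2', 'B2')]
```

The mutual attack between B2 and F2 is exactly the kind of conflict the weighted evaluation below is designed to arbitrate.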
The classic Dung-style abstract argumentation theory (cf. [14]) evaluates arguments based on argumentation semantics to obtain a set of simultaneously acceptable arguments under specific semantics. For example, according to the following argumentation semantics, we can derive a maximally consistent set of credulously justified arguments, thereby obtaining the corresponding set of acceptable conclusions—commonly termed the preferred semantics (cf. [14]).
A set of arguments, E ⊆ A, is considered a maximal set (with respect to set inclusion) of credulously justified arguments if it satisfies the following two conditions:
  • Conflict-free: There are no arguments A, B ∈ E such that A → B;
  • Defensible: For any argument A ∈ E, if there exists C ∈ A such that C → A, then there exists B ∈ E such that B → C.
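The two conditions above can be checked by brute force on small AFs; the following sketch (exponential in the number of arguments, intended only for illustration) enumerates the maximal conflict-free, self-defending sets:

```python
# Brute-force sketch of the preferred-semantics conditions listed above:
# enumerate all conflict-free, defensible sets and keep the maximal ones.
from itertools import combinations

def preferred_extensions(args, attack_pairs):
    att = set(attack_pairs)

    def conflict_free(E):
        return not any((a, b) in att for a in E for b in E)

    def defensible(E):
        # every attacker of a member of E is counter-attacked from within E
        for a in E:
            for c in args:
                if (c, a) in att and not any((b, c) in att for b in E):
                    return False
        return True

    admissible = [set(E) for r in range(len(args) + 1)
                  for E in combinations(args, r)
                  if conflict_free(E) and defensible(E)]
    return [E for E in admissible if not any(E < F for F in admissible)]

# A mutual attack yields two preferred extensions (credulous acceptance).
exts = preferred_extensions(["B2", "F2"], [("B2", "F2"), ("F2", "B2")])
print(sorted(sorted(e) for e in exts))  # [['B2'], ['F2']]
```

The two-extension outcome illustrates why a purely extension-based view is less helpful for design comparison: both conflicting designs are credulously acceptable, with no gradation between them.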
This approach helps achieve consistent conclusions that align with user attitudes, such as credulous or skeptical perspectives. However, it is not very conducive to the goals of design comparison outlined in this paper. On the other hand, classical quantitative argumentation methods (e.g., [42,46,47]) generally focus on the ultimate acceptability of arguments, which does not directly reflect the degree of conflict or exclusion between legal norms. Therefore, we adopt the method from [41], which introduces bilateral gradual semantics for weighted argumentation graphs, and we extend the evaluation procedure of the standard AF by integrating a weighted mechanism. Hereafter, we refer to this method as bilateral evaluation for brevity. It is termed ‘bilateral’ because it computes two values for each argument within the AF: its acceptability and its rejectability, taking into account both the acceptability and rejectability of the argument’s attackers.
In the current paper, the initial/basic weights of the arguments can be derived from the weights of the legal norms involved, assessed according to their legal strength, as discussed in Section 3 and summarized in Table 2.
Since the last norm applied in an argument is directly related to the actions or requirements represented by its conclusion, we consider the strength of the last norm used in an argument (referred to as TopNorm)—noting that the principle of comparing the ‘last link’ is more applicable in normative reasoning, as discussed in Ref. [48]—as its initial weight, defined as follows:
Definition 6 
(Initial weight of arguments). Let QuT = ⟨L, N, W⟩ be a normative theory and A be an argument constructed based on it; the initial weight of A, denoted by W(A), is given, for each wᵢ ∈ W, by W(A) = wᵢ(TopNorm(A)).
The acceptability of an argument is calculated by considering its own weight and the acceptability and rejectability of its attackers, thereby providing a comprehensive reflection of the positive acceptability of the relevant regulation. Conversely, the rejectability of an argument is determined by considering the acceptability of its attackers, which may more purely reflect the degree to which the regulation in question is mutually exclusive with other conflicting regulations. Therefore, these two values can offer users with different aims more informative references for decision making. Additionally, Ref. [41] explores different specific semantics for evaluating an argument under bilateral evaluation and provides corresponding computational methods:
  • When prioritizing the quality of an argument's attackers, AR-max-based semantics (ARM) can be used;
  • When prioritizing the number of attackers, AR-card-based semantics (ARC) are more suitable;
  • When considering both the quality and quantity of attackers, AR-hybrid-based semantics (ARH) provide a more comprehensive approach.
Once an AF and the initial weights of arguments are given, these semantics can be computed using the Python algorithm provided by this link: https://github.com/ZongshunWang/WAG_BGS_Calculation, accessed on 31 January 2025.
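The exact ARM, ARC, and ARH update equations are defined in [41] and implemented in the linked repository. To convey the fixed-point style of such bilateral evaluations, the sketch below uses simple stand-in update rules (our assumption, not the published semantics): acceptability is the initial weight discounted by the attackers' summed acceptability, and rejectability grows with it.

```python
# Generic fixed-point sketch of a bilateral gradual evaluation. The update
# rules are illustrative stand-ins; the actual ARM/ARC/ARH equations are
# given in [41] and in the linked repository.
def bilateral_eval(weights, attackers, iters=100):
    """weights: name -> initial weight W(A).
    attackers: name -> list of attacker names.
    Returns (acceptability, rejectability) dictionaries."""
    acc = dict(weights)
    rej = {a: 0.0 for a in weights}
    for _ in range(iters):
        new_acc, new_rej = {}, {}
        for a in weights:
            pressure = sum(acc[b] for b in attackers.get(a, []))
            # acceptability: own weight discounted by attackers' strength
            new_acc[a] = weights[a] / (1.0 + pressure)
            # rejectability: how strongly accepted the attackers are
            new_rej[a] = pressure / (1.0 + pressure)
        acc, rej = new_acc, new_rej
    return acc, rej

# Second scenario: F2 (weight 1) in mutual attack with B2 and J2 (weight 0.3).
w = {"F2": 1.0, "B2": 0.3, "J2": 0.3}
att = {"F2": ["B2", "J2"], "B2": ["F2"], "J2": ["F2"]}
acc, rej = bilateral_eval(w, att)
print(acc["F2"] > acc["B2"], rej["B2"] > rej["F2"])  # True True
```

Even under these simplified rules, the qualitative pattern discussed in Section 5 emerges: the highly mandatory French prohibition (F2) keeps high acceptability, while its weaker attackers accumulate higher rejectability.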
Based on the results of argument evaluation and the diverse requirements of users, the system provides design recommendations by linking arguments to specific design parameters. Figure 2 illustrates the workflow of our methodology from input to output.
Next, we demonstrate this process through the case of traffic rules presented in Section 3.2.

4.2. Case Study

To better demonstrate the system's assistance capabilities, we assume that an autonomous vehicle developer is trying to introduce a design that has already been successfully used in the UK to the French and Japanese markets. This may involve enabling the vehicle to continuously drive between the UK and France by dynamically adjusting the driving mode, or it may involve the hope that, with as little adjustment as possible, the prototype can be deployed in all three countries. The correspondence of this design with the British regulations can be seen in Table 4. In this context, we have established the corresponding arguments for designs under different regulations based on rules provided in Section 3.2. To distinguish these, we denote the arguments for designs constructed based on the British, French, and Japanese traffic regulations as Bᵢ, Fᵢ, and Jᵢ, respectively.
According to the formalized rules provided in Table 2, we can derive AFs with attack relations and the initial weight of the arguments, as shown in Figure 3.
In Figure 3, the five different scenarios from the case studies mentioned in Section 3 are denoted by Drive Side, Speed Radar Detector, Highway Night Driving, Speed Limit, and Reflective Gear; these are displayed under their respective AFs and are numbered sequentially.
Based on the AFs and the initial weights of the arguments shown in Figure 3, and by applying the bilateral evaluation for a weighted AF, we can obtain the acceptability and rejectability of each argument in each scenario under the three semantics proposed in [41]. These results are presented in Table 5, where we use the subscripts acc and rej to denote acceptability and rejectability under the ARM, ARC, and ARH semantics for argument evaluation, respectively.

5. Discussion

In this section, we conduct an in-depth analysis and discussion by building upon the reasoning process and case study results demonstrated in Section 4.
From Table 5, we can observe that, according to the bilateral evaluation, under ARM semantics, which considers the strength of attackers, the acceptability and rejectability of the arguments constructed based on each country's regulations are the same in the first and fourth scenarios, since the initial weight of all attackers is 1. However, when using ARC semantics, which considers the number of attackers, the acceptability of the arguments with more attackers decreases, and their rejectability increases. The results under ARH semantics are consistent with this: due to the mutual attacks between arguments B₄ and J₄, constructed based on the regulations of the UK and Japan, their acceptability decreases compared to the first scenario, and their rejectability increases.
In the second scenario, the attack relationships are similar to those in the first scenario: arguments F₁ and F₂, constructed based on French regulations, conflict with arguments B₁, B₂ and J₁, J₂, which are based on British and Japanese regulations, respectively. However, since the weights of F₂'s attackers are relatively low, in otherwise similar conflict situations, the acceptability of F₂ is higher than that of F₁ under all semantics, and the rejectability of F₂ is lower than that of F₁.
In the third scenario, argument F₃, constructed based on French regulations, does not conflict with any argument based on the regulations of other countries. Similarly, in the fifth scenario, the arguments based on the regulations of the three countries do not conflict with each other, so their acceptability remains at the initial value, and their rejectability is 0.
In the fourth scenario, since the initial weights of the arguments based on the regulations of the three countries are the same and each attacks the others, the acceptability and rejectability of the three arguments are equal under all three semantics. As they show no difference in incompatibility among themselves, they can only be meaningfully compared with arguments from other scenarios.
Beyond the abovementioned points, based on the results in Table 5, we can also observe that, when we pay more attention to the strongest attacker, the argument with the highest acceptability (excluding arguments in the fifth scenario that do not conflict with any other arguments; these are also excluded in the subsequent discussion) is F₂ from the second scenario, which is constructed based on French regulations. Meanwhile, the arguments with the highest rejectability are B₂ and J₂ from the same scenario, which are based on regulations from the UK and Japan. This indicates that arguments with a higher basic (or initial) weight and weaker attackers have the greatest advantage: when a design derived from a highly mandatory regulation conflicts with a design derived from a less mandatory one, the former prevails. Such an observation can help users identify the most dominant designs from a legal compliance perspective. For manufacturers of autonomous vehicles, this insight highlights the most challenging design modifications: on the one hand, it points out the designs with the highest non-compliance costs and the legal reasons behind them; on the other hand, it indicates where the most extensive modifications are required.
When more attention is paid to the number of attackers, the attackers' basic weights no longer significantly impact the results (as can be seen by comparing the values for F₁ and F₂). Under these semantics, the arguments with notably high acceptability are B₁ and J₁, which have higher basic weights and are attacked by only one argument each. In contrast, the arguments with notably high rejectability are F₁, B₄, F₄, and J₄, as they are all attacked by two arguments. This suggests that, when the initial weight of an argument is comparatively high, having fewer attackers provides a strong advantage. This insight helps identify the designs that cause the fewest conflicts with the rules of other regions and can guide autonomous vehicle manufacturers in determining the most favorable legal environments. Designers can use these semantics when trying to reduce the number of modifications, i.e., when seeking existing design solutions that conflict least with the target country's regulations.
When the results are compared comprehensively, they remain similar to those observed when prioritizing the number of attackers, but the differences between values become more pronounced. This further reinforces the advantage of arguments with both high basic weights and fewer attackers.
Manufacturers and AV users can utilize these insights to balance efficiency and cost according to their own preferences and determine the optimal strategy for decision making. For example, when a manufacturer attempts to move a vehicle from the UK to France, they may identify a conflict between the regulation permitting the installation of speed radar detectors in the UK and the regulation prohibiting them in France. When using the bilateral evaluation (especially under ARM semantics), the role and priority of highly mandatory rules (such as the prohibition on radar detectors) are more explicitly highlighted. As a result, manufacturers may opt to remove the speed radar detector before operating the vehicle in France or have the autonomous vehicle disable its speed radar detector when driving from the UK into France.
Overall, by mapping regulations to corresponding design solutions, autonomous vehicle manufacturers can identify key design elements that require special attention and establish design priorities. This process helps manufacturers develop designs that align with global regulatory requirements more efficiently, make strategic business decisions regarding development planning, and, in the long run, achieve cost-effectiveness while mitigating legal risks. For instance, manufacturers may adopt a rigid embedding + hardware-level assurance strategy for high-weight regulations, a software-defined + parameterized configuration approach for medium-weight regulations, and a cloud-based service + post-installation adaptation strategy for low-weight regulations.
Additionally, the system incorporates mechanisms for traceability and explanation while offering quantified references. For instance, manufacturers may opt not to include a speed radar detector in the hardware when considering the specific countries involved. This decision is supported by the argument F₂, which derives the conclusion ¬Install and holds the highest acceptability across all semantics. This preference stems from its initial weight, W(F₂) = 1, which matches the weight of the regulation SpeedRadarDetector ⇒ ¬Install on which it is based. In contrast, the conflicting arguments B₂ and J₂, which advocate for the conclusion Install, are grounded in regulations with lower weights; thus, their initial weights are W(B₂) = W(J₂) = 0.3. This retrospective process can help manufacturers of automated vehicles provide an engineering explanation for the legal reasons behind their designs, and it can also help designers better understand the legal meaning behind their design changes.
By understanding the weight of each country’s regulations and the degree of conflicts with design, autonomous vehicle manufacturers can construct a multi-dimensional decision-making framework and achieve a precise allocation of compliance resources. For example, by cross-analyzing these two types of data, a strategic priority matrix can be created, in which the regulatory weight is located on the horizontal axis and the degree of conflict is located on the vertical axis, providing clearer strategic guidance. The technical approach proposed in this study not only offers methodological innovation for helping the global deployment of AVs but also holds potential as a reference for compliance solutions in other AI-driven transnational industries, such as medical devices or fintech. This framework could help these industries navigate complex regulatory landscapes, optimize resource allocation, and accelerate market entry while maintaining compliance across jurisdictions.
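The strategic priority matrix described above might be sketched as follows; the axis thresholds, quadrant labels, and function name are hypothetical illustrations, not part of the proposed system:

```python
# Hypothetical sketch of the strategic priority matrix: regulatory weight
# on one axis, degree of conflict on the other. Thresholds are assumed.
def priority_quadrant(weight, conflict, w_cut=0.7, c_cut=0.5):
    """Place a regulation in a 2x2 strategy matrix."""
    col = "high-weight" if weight >= w_cut else "low-weight"
    row = "high-conflict" if conflict >= c_cut else "low-conflict"
    return f"{col}/{row}"

# e.g., a mandatory driving-side rule in heavy conflict across markets
print(priority_quadrant(1.0, 0.9))  # high-weight/high-conflict
```

A high-weight/high-conflict quadrant would then map to the rigid embedding + hardware-level assurance strategy mentioned earlier, while low-weight/low-conflict regulations could be deferred to post-installation adaptation.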

6. Conclusions

This paper focuses on the application domain of cross-border autonomous vehicles and proposes a non-monotonic reasoning-based legal support system to compare designs facing different countries’ legal requirements. We provide a detailed analysis and explanation for the formalization of traffic regulations and the assignment of weights based on legal modalities, present definitions for the reasoning theory and the construction of the system, and then demonstrate the application of three quantified bilateral semantics for the evaluation proposed in [41] using case studies. The analysis suggests that our system may offer legal reference support for diverse stakeholders with varying objectives, particularly in assisting AV manufacturers during compliance planning. The system can be integrated with our previously proposed reasoning system for legal support (i.e., LeSAC, cf. [22,23]). For example, it can integrate the priority or weight of ethical and legal principles underlying legal norms and the weights derived from legal modalities to formulate a design solution.
From a broader perspective, this study provides a legal assistance system specifically designed to assist designers in flexibly adjusting design solutions around compliance in a multi-national legal context. Combined with our previous work, the system proposed in this paper helps achieve the following core functions:
1. Multi-country conflicting-law detection and quantification: Through structured modeling and weight allocation, it identifies conflicts between design prototypes and the laws of multiple target countries and quantifies the degree of conflict according to given criteria (e.g., the difference between 'absolute prohibition' and 'recommended provisions').
2. Individualized compliance path generation: Based on the designer's needs, targeted design advice is given under different task objectives, such as 'continuous driving mode adjustment', 'minimizing modification costs', or 'covering the largest number of target markets'. To illustrate task-specific adaptation, consider a manufacturer prioritizing market coverage over cost. The system may recommend retaining hardware-level compliance for high-weight regulations (e.g., driving side) while using software updates for medium-weight rules (e.g., speed limits). Conversely, a cost-sensitive designer might opt for region-specific hardware variants only where strictly necessary.
3. Dynamic regulatory adaptation and explainability: The system features robust capabilities for dynamically updating legal information, ethical rules, and user preferences to align with evolving regulatory landscapes. This approach offers an explainable alternative to opaque AI-driven compliance tools and post hoc judicial evaluation systems, allowing reasoning principles to be adjusted according to specific user needs.
The assignment of initial strength and the calculation of subsequent effects in this paper mainly rely on judging the degree of the mandatory nature of the corresponding regulations. This judgment is based on the indicator words directly used in the rule or when the rule expresses the same semantics as the indicator words. In this regard, we have plans for future research as follows:
First, the current capture of indicator words or related semantics is conducted manually. On the one hand, this still heavily relies on the manual judgment of legal experts. On the other hand, it also poses efficiency challenges for the preliminary preparation of the system: when encountering a novel cross-regional driving case, substantial time must be invested to individually represent the various legal documents involved, transforming them into formulations that a logical system can compare and evaluate. Moreover, categorizing the semantic nuances of conflicts continues to rely on human intervention. To reduce reliance on manual annotation, we propose leveraging fine-tuned large language models (LLMs) with prompt engineering and Retrieval-Augmented Generation (RAG). This hybrid approach will automate the extraction of legal norms, classify clauses by enforceability (e.g., 'must' vs. 'should'), and resolve ambiguities using context-aware reasoning.
Moreover, the importance of legal rules and their strength of influence on design is not only reflected in their degree of enforceability. The interaction between design and law should not be seen as discrete or unidimensional but rather as continuous and multi-dimensional. In determining the actual meanings of conflicts, factors such as scope, frequency of use, the nature of the regulations to which they belong, and the severity of penalties should be considered comprehensively. In particular, the sources and reasoning of legal norms for different design tasks are different. Therefore, in the future, we will introduce more professional legal analysis to consider how autonomous vehicles interact with different laws from multiple dimensions in more detail. We will describe the types and degrees of conflict more precisely, taking into account the needs of different design tasks. Furthermore, future iterations will incorporate multi-dimensional legal impact factors (e.g., penalty severity, jurisdictional scope) to refine weight assignment. A composite scoring model will aggregate these dimensions, providing designers with a holistic view of regulatory risks. The system’s argumentation structure aligns with Hage’s theory of legal coherence [49,50], which emphasizes resolving conflicts through priority relations among norms. By formalizing these relations (e.g., lex superior, lex specialis), the framework dynamically adapts to hierarchical legal principles, ensuring compliance with both local regulations and overarching transnational standards.
While the current study establishes a foundational framework for transnational legal compliance in autonomous vehicle design, several avenues remain to enhance its real-world applicability. First, we plan to collaborate with legal institutions and AV manufacturers to empirically validate the system’s practicality through pilot deployments, ensuring alignment with industry needs and regulatory expectations. Second, to address scalability and reduce human bias in weight assignment, we aim to develop automated mechanisms leveraging natural language processing (NLP) and machine learning (ML) techniques. These tools will dynamically interpret legal texts, infer rule stringency from contextual semantics, and adapt to linguistic nuances across jurisdictions. Third, recognizing the fluid nature of legal systems, we intend to integrate real-time regulatory updates via modular architectures and governmental APIs, enabling the framework to autonomously adjust to evolving norms while maintaining computational efficiency, as supported by argumentation-based dynamic reasoning methods [20]. Together, these advancements are anticipated to contribute to the development of a more practical and scalable solution for global AV deployment, advancing the transition from theoretical innovation to real-world application.

Author Contributions

Conceptualization, Z.Y., Y.L. and H.Z.; data curation, Z.W.; formal analysis, Z.Y.; funding acquisition, Z.Y. and Y.L.; investigation, Y.L. and Y.Y.; methodology, Z.Y., Y.L. and Y.Y.; project administration, Z.Y. and H.Z.; resources, Y.L.; software, Z.W.; supervision, Z.Y. and Y.Y.; validation, Z.Y., Y.L. and Y.Y.; visualization, Z.Y.; writing—original draft, Z.Y., Y.L. and H.Z.; writing—review and editing, Z.Y., Y.L. and Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Fund of China grant number 21&ZD065.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

DURC Statement

Current research is limited to legal informatics and AI-driven regulatory compliance, which is beneficial for enhancing legal compliance efficiency, reducing cross-border operational costs, and improving transparency and explainability in autonomous system design. The authors acknowledge the dual-use potential of the research involving computational legal reasoning tools and confirm that all necessary precautions, including strict data anonymization protocols, access controls, and oversight by institutional ethics review boards, have been taken to prevent potential misuse. As an ethical responsibility, the authors strictly adhere to relevant national and international laws and guidelines concerning ethical AI development and deployment. The authors advocate for responsible deployment, ethical considerations, regulatory compliance, and transparent reporting to mitigate misuse risks and foster beneficial outcomes.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Willems, L. Understanding the impacts of autonomous vehicles in logistics. In The Digital Transformation of Logistics: Demystifying Impacts of the Fourth Industrial Revolution; Wiley: Hoboken, NJ, USA, 2021; pp. 113–127. [Google Scholar]
  2. Brown, W.M. How much thicker is the Canada–US border? The cost of crossing the border by truck in the pre-and post-9/11 eras. Res. Transp. Bus. Manag. 2015, 16, 50–66. [Google Scholar]
  3. European Union. Proposal for a Regulation of the European Parliament and the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. EUR-Lex-52021PC0206. 2021. Available online: https://artificialintelligenceact.eu/the-act/ (accessed on 25 December 2024).
  4. Himmelreich, J. Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations. Ethical Theory Moral Pract. 2018, 21, 669–684. [Google Scholar] [CrossRef]
  5. Bonnefon, J.F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576. [Google Scholar] [CrossRef]
  6. Pattinson, J.A.; Chen, H. A barrier to innovation: Europe’s ad-hoc cross-border framework for testing prototype autonomous vehicles. Int. Rev. Law Comput. Technol. 2020, 34, 108–122. [Google Scholar] [CrossRef]
  7. De Bruyne, J.; Vanleenhove, C. The rise of self-driving cars: Is the private international law framework for non-contractual obligations posing a bump in the road? IALS Stud. Law Rev. 2018, 5, 14–26. [Google Scholar] [CrossRef]
  8. Susskind, R.E. Expert systems in law: A jurisprudential approach to artificial intelligence and legal reasoning. Mod. Law Rev. 1986, 49, 168–194. [Google Scholar] [CrossRef]
  9. Mills, M. Artificial Intelligence in Law: The State of Play; Thomson Reuters Legal Executive Institute: Toronto, ON, Canada, 2016. [Google Scholar]
  10. Leith, P. The rise and fall of the legal expert system. Int. Rev. Law Comput. Technol. 2016, 30, 94–106. [Google Scholar] [CrossRef]
  11. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
  12. Reiter, R. A logic for default reasoning. Artif. Intell. 1980, 13, 81–132. [Google Scholar] [CrossRef]
  13. Nute, D. Defeasible logic. In Handbook of Logic in Artificial Intelligence and Logic Programming (Vol. 3): Nonmonotonic Reasoning and Uncertain Reasoning; Oxford University Press, Inc.: Oxford, UK, 1994; pp. 353–395. [Google Scholar]
  14. Dung, P.M. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 1995, 77, 321–357. [Google Scholar] [CrossRef]
  15. Bench-Capon, T.J.M. Persuasion in Practical Argument Using Value-based Argumentation Frameworks. J. Log. Comput. 2003, 13, 429–448. [Google Scholar] [CrossRef]
  16. Kakas, A.; Michael, L.; Dietz, E. (Eds.) Computational Argumentation: A Foundation for Human-Centric AI; Frontiers Media SA: Lausanne, Switzerland, 2024. [Google Scholar] [CrossRef]
  17. Yu, Z.; Ju, S.; Chen, W. Context-based argumentation frameworks and multi-agent consensus building. J. Log. Comput. 2024, 34, 199–228. [Google Scholar] [CrossRef]
  18. Bench-Capon, T.; Prakken, H.; Sartor, G. Argumentation in Legal Reasoning. In Argumentation in Artificial Intelligence; Simari, G., Rahwan, I., Eds.; Springer: Boston, MA, USA, 2009; pp. 363–382. [Google Scholar] [CrossRef]
  19. Rotolo, A.; Sartor, G. Argumentation and explanation in the law. Front. Artif. Intell. 2023, 6, 1130559. [Google Scholar] [CrossRef] [PubMed]
  20. Liao, B.; Jin, L.; Koons, R.C. Dynamics of argumentation systems: A division-based method. Artif. Intell. 2011, 175, 1790–1814. [Google Scholar] [CrossRef]
  21. Modgil, S.; Prakken, H. A general account of argumentation with preferences. Artif. Intell. 2013, 195, 361–397. [Google Scholar] [CrossRef]
  22. Lu, Y.; Yu, Z.; Lin, Y.; Schafer, B.; Ireland, A.; Urquhart, L. An argumentation and ontology based legal support system for AI vehicle design. In Proceedings of the 35th International Conference on Legal Knowledge and Information Systems (JURIX 2022), Saarbrücken, Germany, 14–16 December 2022; pp. 213–218. [Google Scholar]
  23. Lu, Y.; Yu, Z.; Lin, Y.; Schafer, B.; Ireland, A.; Urquhart, L. A Legal System to Modify Autonomous Vehicle Designs in Transnational Contexts. In Proceedings of the 36th International Conference on Legal Knowledge and Information Systems 2023, Maastricht, The Netherlands, 18–20 December 2023; pp. 347–352. [Google Scholar]
  24. Alawadhi, M.; Almazrouie, J.; Kamil, M.; Khalil, K.A. Review and analysis of the importance of autonomous vehicles liability: A systematic literature review. Int. J. Syst. Assur. Eng. Manag. 2020, 11, 1227–1249. [Google Scholar] [CrossRef]
  25. Ilková, V.; Ilka, A. Legal Aspects of Autonomous Vehicles—An Overview. In Proceedings of the 2017 21st International Conference on Process Control (PC), Strbske Pleso, Slovakia, 6–9 June 2017; pp. 428–433. [Google Scholar] [CrossRef]
  26. Bonnefon, J.F.; Černy, D.; Danaher, J.; Devillier, N.; Johansson, V.; Kovacikova, T.; Martens, M.; Mladenovic, M.; Palade, P.; Reed, N.; et al. Ethics of Connected and Automated Vehicles: Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility; European Commission: Brussels, Belgium, 2020. [CrossRef]
  27. Martinho, A.; Herber, N.; Kroesen, M.; Chorus, C. Ethical issues in focus by the autonomous vehicles industry. Transp. Rev. 2021, 41, 556–577. [Google Scholar] [CrossRef]
  28. Lu, Y.; Yu, Z.; Lin, Y.; Schafer, B.; Ireland, A.; Urquhart, L. Handling Inconsistent and Uncertain Legal Reasoning for AI Vehicles Design. In Proceedings of the Workshop on Methodologies for Translating Legal Norms into Formal Representations (LN2FR 2022), Saarbrücken, Germany, 14 December 2022; pp. 76–89. [Google Scholar]
  29. Taeihagh, A.; Lim, H.S.M. Governing autonomous vehicles: Emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transp. Rev. 2019, 39, 103–128. [Google Scholar] [CrossRef]
  30. Ki, J. A Comparative Analysis of Autonomous Vehicle Policies Among Korea, Japan, and France; Discussion Paper Series #20-02; Fondation France-Japon de l’École des Hautes Études en Sciences Sociales (FFJ): Paris, France, 2020. [Google Scholar]
  31. Eastman, B.; Collins, S.; Jones, R.; Martin, J.; Blumenthal, M.S.; Stanley, K.D. A Comparative Look at Various Countries’ Legal Regimes Governing Automated Vehicles. J. Law Mobil. 2023, 2023. Available online: https://repository.law.umich.edu/jlm/vol2023/iss1/2 (accessed on 25 December 2024).
  32. Costantini, F.; Thomopoulos, N.; Steibel, F.; Curl, A.; Lugano, G.; Kováčiková, T. Autonomous Vehicles in a GDPR Era: An International Comparison. In Advances in Transport Policy and Planning; Shiftan, Y., Kamargianni, M., Eds.; Academic Press: New York, NY, USA, 2020; Volume 5, pp. 191–213. [Google Scholar] [CrossRef]
  33. Čyras, K.; Rago, A.; Albini, E.; Baroni, P.; Toni, F. Argumentative XAI: A Survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021; pp. 4392–4399. [Google Scholar]
  34. Dhabu, A.C. Legal Implications of Artificial Intelligence in Cross-Border Transactions. Master’s Thesis, Lund University, Lund, Sweden, 2024. [Google Scholar]
  35. Cihon, P. Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development; Future of Humanity Institute, University of Oxford: Oxford, UK, 2019; Volume 40, pp. 340–342. [Google Scholar]
  36. Daly, A.; Hagendorff, T.; Hui, L.; Mann, M.; Marda, V.; Wagner, B.; Wang, W.; Witteborn, S. Artificial intelligence governance and ethics: Global perspectives. arXiv 2019, arXiv:1907.03848. [Google Scholar] [CrossRef]
  37. Alic, D. The Role of Data Protection and Cybersecurity Regulations in Artificial Intelligence Global Governance: A Comparative Analysis of the European Union, The United States, and China Regulatory Framework. Master’s Thesis, Central European University, Vienna, Austria, 2021. [Google Scholar]
  38. Dung, P.M.; Kowalski, R.A.; Toni, F. Assumption-Based Argumentation. In Argumentation in Artificial Intelligence; Simari, G., Rahwan, I., Eds.; Springer: Boston, MA, USA, 2009; pp. 199–218. [Google Scholar]
  39. García, A.J.; Simari, G.R. Defeasible Logic Programming: An Argumentative Approach. Theory Pract. Log. Program. 2004, 4, 95–138. [Google Scholar] [CrossRef]
  40. Besnard, P.; Hunter, A. A logic-based theory of deductive arguments. Artif. Intell. 2001, 128, 203–235. [Google Scholar] [CrossRef]
  41. Wang, Z.; Shen, Y. Bilateral Gradual Semantics for Weighted Argumentation. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 10732–10739. [Google Scholar]
  42. Amgoud, L.; Ben-Naim, J.; Doder, D.; Vesic, S. Acceptability Semantics for Weighted Argumentation Frameworks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), Melbourne, Australia, 19–25 August 2017; pp. 56–62. [Google Scholar] [CrossRef]
  43. Amgoud, L.; Doder, D.; Vesic, S. Evaluation of argument strength in attack graphs: Foundations and semantics. Artif. Intell. 2022, 302, 103607. [Google Scholar] [CrossRef]
  44. Baroni, P.; Rago, A.; Toni, F. From fine-grained properties to broad principles for gradual argumentation: A principled spectrum. Int. J. Approx. Reason. 2019, 105, 252–286. [Google Scholar] [CrossRef]
  45. Prakken, H.; Sartor, G. Law and logic: A review from an argumentation perspective. Artif. Intell. 2015, 227, 214–245. [Google Scholar] [CrossRef]
  46. Hunter, A.; Thimm, M. Probabilistic reasoning with abstract argumentation frameworks. J. Artif. Intell. Res. 2017, 59, 565–611. [Google Scholar] [CrossRef]
  47. Li, H.; Oren, N.; Norman, T.J. Probabilistic Argumentation Frameworks. In Proceedings of the Theory and Applications of Formal Argumentation; Modgil, S., Oren, N., Toni, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–16. [Google Scholar]
  48. Modgil, S.; Prakken, H. The ASPIC+ framework for structured argumentation: A tutorial. Argum. Comput. 2014, 5, 31–62. [Google Scholar] [CrossRef]
  49. Hage, J.C. Reasoning with Rules: An Essay on Legal Reasoning and Its Underlying Logic; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997. [Google Scholar] [CrossRef]
  50. Hage, J. Formalizing Legal Coherence. In Proceedings of the 8th International Conference on Artificial Intelligence and Law (ICAIL), St. Louis, MO, USA, 21–25 May 2001; pp. 22–31. [Google Scholar]
Figure 1. Rule formalization process.
Figure 2. Workflow of legal compliance evaluation for autonomous vehicle design.
Figure 3. Argumentation frameworks in different contexts.
Table 1. Legal strength levels and indicators.

| Category | Weight Interval | Indicators |
|---|---|---|
| Mandatory rules | [1, 1] | must, must not, shall, shall not, be prohibited, not allowed |
| Requisite rules | (0.5, 1) | should, should not, need |
| Suggestive rules | (0, 0.5] | be suggested, be recommended, be not suggested, be not recommended |
| Permissive rules | [0, 0.5) | can, may, be permitted, be allowed, need not |
| Non-explicit rules | [0, 0] | – |
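The keyword-based classification in Table 1 can be sketched as a simple ordered phrase lookup. This is an illustrative encoding, not the paper's implementation: the function name `classify_rule` and the phrase ordering are ours, and the open/closed interval endpoints of Table 1 are flattened into plain (low, high) tuples.

```python
# Hypothetical sketch: map the deontic indicator phrases of Table 1 to
# legal-strength weight intervals. Phrases are checked in order, so negated
# forms ("must not", "need not") win over their stems ("must", "need").
INDICATORS = [
    ("must not", "mandatory", (1.0, 1.0)),
    ("shall not", "mandatory", (1.0, 1.0)),
    ("be prohibited", "mandatory", (1.0, 1.0)),
    ("not allowed", "mandatory", (1.0, 1.0)),
    ("must", "mandatory", (1.0, 1.0)),
    ("shall", "mandatory", (1.0, 1.0)),
    ("should not", "requisite", (0.5, 1.0)),
    ("should", "requisite", (0.5, 1.0)),
    ("need not", "permissive", (0.0, 0.5)),   # checked before "need"
    ("need", "requisite", (0.5, 1.0)),
    ("be not suggested", "suggestive", (0.0, 0.5)),
    ("be not recommended", "suggestive", (0.0, 0.5)),
    ("be suggested", "suggestive", (0.0, 0.5)),
    ("be recommended", "suggestive", (0.0, 0.5)),
    ("be permitted", "permissive", (0.0, 0.5)),
    ("be allowed", "permissive", (0.0, 0.5)),
    ("can", "permissive", (0.0, 0.5)),
    ("may", "permissive", (0.0, 0.5)),
]

def classify_rule(text):
    """Return (category, weight interval) for the first matching indicator,
    else fall back to the non-explicit category [0, 0]."""
    lowered = text.lower()
    for phrase, category, interval in INDICATORS:
        if phrase in lowered:
            return category, interval
    return "non-explicit", (0.0, 0.0)

print(classify_rule("Drivers must not use speed radar detectors"))
```

A real extraction pipeline would need tokenization and scope handling; substring matching is only a first approximation.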
Table 2. Formalization and weights.

| Formalization | UK | FR | JP |
|---|---|---|---|
| DriveSide ⇒ DriveLeft | 1 | – | 1 |
| DriveSide ⇒ DriveRight | – | 1 | – |
| SpeedRadarDetector ⇒ Install | 0.3 | – | 0.3 |
| SpeedRadarDetector ⇒ ¬Install | – | 1 | – |
| Highway, NightDriving, HighBeamLight ⇒ Use | 0 | – | 0.8 |
| Highway, NightDriving, HighBeamLight ⇒ ¬Use | 0.8 | – | 0 |
| Urban ⇒ SpeedLimit ≤ 48 km/h | 1 | – | – |
| Urban ⇒ SpeedLimit ≤ 50 km/h | – | 1 | – |
| Urban ⇒ SpeedLimit ≤ 40 km/h | – | – | 1 |
| EmergencyStop, ReflectiveGear ⇒ Wear | 0.5 | 1 | 1 |
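Jurisdiction-specific norm weights like those in Table 2 can be held in a plain lookup structure. The sketch below is our own encoding, not the paper's system: the rule keys and the `norm_weight` helper are illustrative names, only a few unambiguous Table 2 rows are included, and a missing entry is treated as a non-explicit rule of weight 0.

```python
# Illustrative encoding of part of Table 2: each norm (antecedent, consequent)
# maps to a per-country weight; None marks the absence of an explicit rule.
NORM_WEIGHTS = {
    ("DriveSide", "DriveLeft"):           {"UK": 1.0,  "FR": None, "JP": 1.0},
    ("DriveSide", "DriveRight"):          {"UK": None, "FR": 1.0,  "JP": None},
    ("SpeedRadarDetector", "Install"):    {"UK": 0.3,  "FR": None, "JP": 0.3},
    ("SpeedRadarDetector", "NotInstall"): {"UK": None, "FR": 1.0,  "JP": None},
    ("EmergencyStop", "WearReflective"):  {"UK": 0.5,  "FR": 1.0,  "JP": 1.0},
    # remaining Table 2 rows omitted
}

def norm_weight(antecedent, consequent, country):
    """Weight of a norm in a given jurisdiction; absent rules count as
    non-explicit, i.e., weight 0."""
    w = NORM_WEIGHTS.get((antecedent, consequent), {}).get(country)
    return 0.0 if w is None else w
```

For example, `norm_weight("SpeedRadarDetector", "NotInstall", "FR")` returns the mandatory weight 1.0, reflecting the French prohibition on radar detectors.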
Table 3. Notations.

| Function | Returns |
|---|---|
| Conc(A) | The final conclusion of an argument A |
| Sub(A) | The set of all sub-arguments of A |
| ProperSub(A) | The set of all proper sub-arguments of A (i.e., excluding A itself) |
| Context(A) | The set of conclusions from all proper sub-arguments of A ¹ |
| Norms(A) | The set of all norms applied in A |
| TopNorm(A) | The last norm applied in A |
| W(A) | The weight of A |

¹ This encompasses all previous conclusions that contribute to the final conclusion, collectively reflecting the circumstances leading to a particular requirement or action.
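The notation in Table 3 presupposes a recursive argument structure. A minimal sketch of one possible encoding follows; the class name, field names, and helper functions mirror the table but are our own choices, not identifiers from the paper.

```python
# Hypothetical recursive argument structure behind Table 3's functions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    conclusion: str        # Conc(A)
    norm: str = ""         # TopNorm(A): the last norm applied, if any
    premises: tuple = ()   # immediate sub-arguments of A
    weight: float = 0.0    # W(A)

def proper_sub(a):
    """ProperSub(A): all sub-arguments of A, excluding A itself."""
    out = set()
    for p in a.premises:
        out.add(p)
        out |= proper_sub(p)
    return out

def context(a):
    """Context(A): the conclusions of all proper sub-arguments of A."""
    return {s.conclusion for s in proper_sub(a)}
```

For instance, an argument concluding that high beams are not used, built from the premises "Highway" and "NightDriving", has those two premise conclusions as its context, reflecting the circumstances leading to the requirement.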
Table 4. Arguments and corresponding designs constructed based on the (last) norms.

| Argument | Design |
|---|---|
| B1 | AVs drive on the left side of the road. |
| B2 | AVs are equipped with speed radar detectors. |
| B3 | AVs do not use high beam lights when driving on highways at night. |
| B4 | AVs operate at speeds not exceeding 48 km/h on urban roads. |
| B5 | AVs use reflective equipment during emergency stops. |
Table 5. Bilateral semantics.

| Argument | ARM_acc | ARM_rej | ARC_acc | ARC_rej | ARH_acc | ARH_rej |
|---|---|---|---|---|---|---|
| B1 | 0.675157 | 0.403011 | 0.485183 | 0.524664 | 0.463084 | 0.561070 |
| F1 | 0.675157 | 0.403011 | 0.311318 | 0.699109 | 0.278303 | 0.745292 |
| J1 | 0.675157 | 0.403011 | 0.485183 | 0.524664 | 0.463084 | 0.561070 |
| B2 | 0.168098 | 0.473013 | 0.145287 | 0.525797 | 0.137244 | 0.568034 |
| F2 | 0.897557 | 0.143922 | 0.326427 | 0.677091 | 0.314953 | 0.694611 |
| J2 | 0.168098 | 0.473013 | 0.145287 | 0.525797 | 0.137244 | 0.568034 |
| B3 | 0.565240 | 0.361144 | 0.383943 | 0.530071 | 0.359096 | 0.576104 |
| F3 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
| J3 | 0.565240 | 0.361144 | 0.383943 | 0.530071 | 0.359096 | 0.576104 |
| B4 | 0.675157 | 0.403011 | 0.319870 | 0.688789 | 0.298778 | 0.722029 |
| F4 | 0.675157 | 0.403011 | 0.319870 | 0.688789 | 0.298778 | 0.722029 |
| J4 | 0.675157 | 0.403011 | 0.319870 | 0.688789 | 0.298778 | 0.722029 |
| B5 | 0.500000 | 0.000000 | 0.500000 | 0.000000 | 0.500000 | 0.000000 |
| F5 | 1.000000 | 0.000000 | 1.000000 | 0.000000 | 1.000000 | 0.000000 |
| J5 | 1.000000 | 0.000000 | 1.000000 | 0.000000 | 1.000000 | 0.000000 |
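The scores in Table 5 are produced by the paper's bilateral semantics. As a rough illustration of how gradual acceptance degrees are computed in weighted argumentation generally, the following sketches the weighted h-categorizer semantics of Amgoud et al. [42], not the paper's exact semantics: iterate s(a) = w(a) / (1 + Σ s(b)) over all attackers b of a until a fixed point. Function and variable names are ours.

```python
# Illustrative fixed-point computation of the weighted h-categorizer
# semantics (Amgoud et al. [42]); NOT the bilateral semantics of Table 5.
def h_categorizer(weights, attacks, iters=100):
    """weights: {arg: base weight in [0, 1]};
    attacks: set of (attacker, target) pairs.
    Returns the gradual acceptance score of each argument."""
    scores = dict(weights)
    for _ in range(iters):
        scores = {
            a: weights[a] / (1.0 + sum(scores[b] for (b, t) in attacks
                                       if t == a))
            for a in weights
        }
    return scores

# Tiny example: b attacks a, both with base weight 1. The unattacked
# argument b keeps its base weight; a's score is dampened by the attack.
s = h_categorizer({"a": 1.0, "b": 1.0}, {("b", "a")})
```

Here `s["b"]` converges to 1.0 and `s["a"]` to 0.5, showing how an attack from a strong argument lowers, without eliminating, the acceptance of its target.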

Share and Cite

MDPI and ACS Style

Yu, Z.; Lu, Y.; Zhan, H.; Yu, Y.; Wang, Z. A Quantitative Legal Support System for Transnational Autonomous Vehicle Design. Drones 2025, 9, 316. https://doi.org/10.3390/drones9040316
