Article

AFTEA Framework for Supporting Dynamic Autonomous Driving Situation

by Subi Kim 1,†, Jieun Kang 1,† and Yongik Yoon 2,*
1 Department of IT Engineering, Sookmyung Women’s University, 100, Cheongpa-ro 47-gil, Yongsan-gu, Seoul 04310, Republic of Korea
2 Department of Artificial Intelligence Engineering, Sookmyung Women’s University, 100, Cheongpa-ro 47-gil, Yongsan-gu, Seoul 04310, Republic of Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2024, 13(17), 3535; https://doi.org/10.3390/electronics13173535
Submission received: 30 July 2024 / Revised: 28 August 2024 / Accepted: 4 September 2024 / Published: 6 September 2024

Abstract:
The accelerated development of AI technology has brought about revolutionary changes in various fields of society. Recently, it has been emphasized that fairness, accountability, transparency, and explainability (FATE) should be considered to support the reliability and validity of AI-based decision-making. However, autonomous driving technology is directly related to human life and requires real-time adaptation and response to various changes and risks in the real world, so environmental adaptability must be considered in a more comprehensive and converged manner. To derive definitive evidence for each object in a convergent autonomous driving environment, various types of road environment information for driving objects and driving assistance must be collected and provided transparently, and driving technology must be constructed that adapts to various situations by considering all uncertainties in the real-time changing driving environment. This allows for unbiased and fair results based on flexible contextual understanding, even in situations that do not conform to rules and patterns, by considering the convergent interactions and dynamic situations of the various objects that can appear in a real-time road environment. Transparent, environmentally adaptive, and fairness-based outcomes provide the basis for the decision-making process and support clear interpretation and explainability of decisions. Together, these processes enable autonomous vehicles to draw reliable conclusions and take responsibility for their decisions in autonomous driving situations. Therefore, this paper proposes an adaptability, fairness, transparency, explainability, and accountability (AFTEA) framework to build a stable and reliable autonomous driving environment in dynamic situations. The paper defines the elements of AFTEA, explains their roles and necessity in artificial intelligence technology, and highlights their value when applied and integrated into autonomous driving technology. The AFTEA framework, with its emphasis on environmental adaptability, supports the establishment of a sustainable autonomous driving environment in dynamic settings and aims to provide a direction for building stable and reliable AI systems that adapt to various real-world scenarios.

1. Introduction

The accelerating development of AI technology has revolutionized many fields, including healthcare, education, transport (autonomous driving), and finance [1], and is significantly improving people’s quality of life. Because AI technology can solve simple and complex problems, make decisions, and automate tasks more accurately and faster than humans, innovative AI-based technologies are spreading and being applied in various domains.
However, the complexity and diversity of the real world mean that rule-based learning, which relies on specific data and scenarios, is limited in its ability to represent and reason about the convergent and diverse situations and scenes of the real world. In addition, AI technologies that focus on perfecting algorithm design and the accuracy of conclusions are limited in their ability to produce results that are fair and adaptable and to transparently explain the causes and rationale behind those results in a way that reflects the various social norms and rules shaped by humans [2].
Training data should reflect the variety of social phenomena, but if the data do not represent the situation of a particular group, or if AI is trained on data that encode historically and socially unfair customs and practices, the resulting systems cannot adapt to different social environments or solve social problems across domains, which can lead to social disruption. If models are trained with a bias towards certain data or are designed with algorithms that do not fit the problem they are meant to solve, they lack the ability to adapt to diverse data and cannot be used to address complex social problems. Moreover, because AI algorithms are expected to reflect complex social phenomena and situations, their structure becomes very complex, and it is difficult to intuitively interpret the input–output relationship through which they solve such problems.
Therefore, for the sustainable use and improvement of AI algorithms, a clear rationale for their results must be provided. However, if the learning process applied to the input data and the causes and rationale for each result cannot be accurately visualized and explained, the results remain ambiguous, leading to a black-box transparency challenge. It should be made transparent which data the AI model is trained on, how it is trained in stages, how the data contribute to each layer, and how predictions are made.
The problem with most AI models is that they have difficulty clearly expressing the complex interrelationships between data and social issues in the learning process and clearly visualizing and explaining the causes and rationale for the results obtained. Explainable artificial intelligence (XAI) [3,4,5] is one of the concepts that has emerged to address and complement this problem; XAI aims to secure high reliability and stability by making the entire structure of decision-making explicable and interpretable.
However, current XAI typically derives a verbal description of visualized data or the contribution of each data characteristic applied to the learning process. Rather than being limited to such surface descriptions and probabilistic contributions, XAI needs to be able to derive the cause of each characteristic that shaped the result and to explain what predictive outcomes would follow when characteristics change in real time. In addition, the social environment in which AI technology is applied may ultimately diverge from existing rules and methods, and many unexpected and sudden situations may occur. Conclusions and predictions based on existing rules and scenarios are difficult to adapt in real time to a changing social environment and are limited in their ability to produce flexible results. In a social environment where various laws, rules, and values are applied, AI needs to draw adaptive conclusions suited to the actual field of application.
Moreover, different laws, rules, and values apply in each social environment where AI is deployed, and AI must be able to draw stable and flexible conclusions through contextual understanding and reasoning, even in dilemmas. AI technology can contribute to understanding social phenomena and to overcoming and solving the problems and difficulties that arise in society. However, laws, rules, and values vary widely across social environments, and rules and customs from different contexts are intertwined, so data and learning limited to specific data and scenarios can lead to unfair judgments and biased decisions. It is therefore evident that if AI does not accurately reflect the various social structures and values present within a given social environment, it will be unable to reflect the diversity of that society or to adapt and respond to social values and environments in a convergent manner. In the absence of flexible understanding and reasoning about such situations, AI will be constrained in its ability to respond to a range of situations and environments in real time [6].
In order to overcome these limitations of AI and ensure its reliability and safety, fairness, accountability, and transparency (FAT) research has been conducted [7,8]. FAT supports the construction of AI models that consider fairness, accountability, and transparency as a means of addressing the biases present in data and algorithms, the limited interpretability and explainability of models due to their complexity, and the lack of accountability for the issues caused by AI models. However, the concept of transparency in FAT entails that the implementation of the model’s functioning can be interpreted in a transparent manner, thereby focusing solely on the derivation of results from the model. This model-centered approach aims to elucidate the inputs employed to generate the outputs at each neural network stage, but it has limitations in clearly delineating the data contributions and causal factors that led to a result. Consequently, following FAT [7,9], the fairness, accountability, transparency, and explainability (FATE) architecture [10,11,12,13] was proposed, which incorporates explanations such as the interpretation of input data and the interrelationships between data, as opposed to model-centered explanations.
In FATE [10,11,14], explainability refers to the ability to interpret results. Shin et al. (2020) [11] emphasize that a predicted outcome should be interpretable in a way a human would understand. Quttainah et al. (2024) [15] consider both transparency and explainability so that the entire process of an AI algorithm is available for visualization, explanation, and interpretation. Shaban-Nejad et al. (2024) [11] explain that most current AI systems offer no clear explanation of how the AI produced certain values, so the interpretation and explanation of resulting values is essential for advancing AI in various fields.
In the real world, however, a number of factors are at work, including a variety of objects, complex and unpredictable interrelationships between objects, and dynamic situations. Existing FAT and FATE models do not take environmental adaptation into account, which limits their ability to understand dynamic situations and respond to changes reliably. Without the capacity to recognize novel situations or to derive intricate combinations of disparate data, it becomes challenging to regulate and respond effectively to the perceived data. To ensure the stable application of AI in the real world, it is essential to derive and infer intricate relationships between objects and to possess a clear understanding of novel situations together with a rationale for appropriate responses to them. Therefore, contextual adaptive factors, which make it possible to understand new environments and situations by analyzing the interrelationships of perceived situations and objects and to infer appropriate response and control methods, are essential for stable response and control in dynamic social environments.
In this paper, a new architectural framework is proposed, namely the adaptability, fairness, transparency, explainability, and accountability (AFTEA) architecture. This framework is designed to address convergent situations simultaneously. While numerous architectures and studies have considered the elements of an adaptive environment either alone or in combination with some elements of FATE, relatively few have considered and utilized all five elements simultaneously.
AFTEA addresses the need to consider fairness, accountability, transparency, explainability, and environmental adaptability simultaneously and in combination across the entire process of contextualization, understanding, reasoning, decision-making, and control. In order to support this necessity, this paper defines and offers an explanation of the manner in which the elements of AFTEA are defined and how they should be utilized in each process and stage of the entire AI process, from data collection to control.
In the next section, FAT and FATE are described along with related work on how each element has been applied in AI. Then, in Section 3, each element of AFTEA is defined and explained, and the AFTEA proposed in this paper is described in detail. Finally, Section 4 summarizes and concludes the paper.

2. Related Works

2.1. Adaptability

Within the interrelationship between autonomous vehicles (AVs) and human-driven vehicles (HVs), rule-based driving learning, which does not reflect the personal characteristics, personality, and preferences of human drivers, struggles to predict environments that fall outside the rules.
Valiente et al. (2022) [16] proposed an adaptive driving framework that combines human experience-driven driving with a rule base. First, the decentralized reinforcement learning framework optimizes experience-based utility and exposes learning agents to a wide range of driver behaviors, becoming robust to human driver behavior and enabling cooperative and competitive control independent of the HV’s aggression level and social preferences. Second, it prioritizes safety to avoid high-risk behaviors that could compromise driving safety. To this end, agent partial observability enables adaptive driving for moving objects with different characteristics in the environment.
Zhao et al. (2024) [17] proposed a human-like decision-making framework for adapting to uncertain human behavior in order to overcome the limitations of the ability of autonomous driving vehicles (ADVs) to balance personal efficiency against public safety. They built a Stackelberg game model and a social potential field model that took into account social value orientation (SVO), which measures the degree of social value altruism, and used a reverse reinforcement learning-based algorithm to capture social interactions. This was followed by recursive horizon optimization (RHO), which allowed it to be adaptive to its environment by learning to reproduce human behavior in various social changes.
Wang et al. (2023) [18] introduced a motion tracking framework for learning physically plausible human dynamics from real-world driving scenarios to minimize the error between real and simulated human behavior in safety-critical applications. The objective was to infer the human body dynamics of occluded parts of uneven trajectories based on visible motion. Therefore, it enabled motion prediction and adaptive driving by reasoning about dynamic human motion and adapting to the inferred motion. By proposing a model that is adaptable not only to visible data but also to occluded or hard-to-see data, the model became adaptable to various human movement scenarios.
Liao et al. (2024) [19] presented a novel dynamic geometric graph approach that eliminated the need for manual labeling during driving, integrating centrality metrics and behavior recognition criteria based on traffic psychology, decision theory, and driving dynamics to represent driving behavior that is adaptive to the driving vehicle and the driving environment. In experiments using the NGSIM, HighD, RounD, and MoCAD datasets, the approach demonstrated exceptional robustness and adaptability in various traffic scenarios such as highways, roundabouts, and busy urban locales, and it efficiently adapted to different scenarios by accounting for the uncertainty inherent in the predictions.
Li et al. (2023) [20] pointed out the difficulty of reasoning in a rule-centric paradigm and the challenge faced by manually generated, limited, rule-based autonomous driving technologies in responding to situations that require decisions out of step with the complexity and diversity of the real world. Their work therefore proposed a framework for constructing scenario semantic spaces in the human cognitive system to organize knowledge about new information and enable logical adaptation and interpretation.

2.2. Fairness

Mehrabi et al. (2021) [21] investigated a number of real-world applications that were biased in different ways, examining the various biases and sources that can affect AI applications. They then categorized the fairness definitions that researchers have used to avoid existing biases in AI systems and provided a taxonomy of them. The study identified different areas and subsections of AI with unfair consequences and described how researchers have tried to address them.
Ye et al. (2022) [22] categorized data bias and algorithmic bias as two potential reasons for unfairness in machine learning results. The study categorized data bias into two types: distortions that bear upon what the machine learning algorithm learns and distortions that prevent it from making fair decisions based on the algorithms’ operational behavior. In the context of decision-making in an autonomous environment, it proposed FairLight based on reinforcement learning, which emphasizes the importance of making decisions that are not biased towards any particular group. FairLight optimized overall traffic performance by considering the fairness of the individual vehicles and enabled fair and efficient traffic control through the allocation of efficient signal duration.
Roh et al. (2023) [23] presented the problem of bias in the training and distribution data of current AI models and the bias between distribution labels and sensitive groups. This study introduced a correlation shift to account for fairness algorithms and group fairness. Next, the authors proposed a new preprocessing step. They sampled the input data to reduce correlation shifts and formalized the data ratio adjustment between labels and sensitive groups to overcome the limitations of processing. This allowed them to perform pre-processing correlation adjustments and unfairness mitigation on the processed data.
Njoku et al. (2023) [24] introduced the concept of an impartial judge for vehicle accidents and proposed an accident monitoring algorithm together with an accident investigation and analysis solution, supporting fair decision-making in autonomous vehicle accidents.

2.3. Transparency

Pokam et al. (2019) [25] emphasize that information communicated to humans in autonomous vehicles must be displayed transparently to enable environmental understanding, and that human–machine interface (HMI) transparency for accurate and efficient communication is a critical factor in promoting safe driving. They evaluated five HMIs that integrated some or all of the following functions: information gathering, information analysis, decision-making, and action execution. To verify the transparency principle, the transparency-based HMIs were evaluated for situational awareness, discomfort, and participant preference.
Llorca et al. (2024) [26] presented a new problem of uncertainty by applying AI decision-making processes to autonomous driving. They argued that the autonomous driving environment comprises several interrelated components, including cybersecurity, robustness, fairness, transparency, privacy, and accountability, and therefore, the various issues that arise in such a complex framework need to be resolved. Accordingly, an algorithm for the explainability of AV testing and validation in the localization, perception, planning, control, human–vehicle interaction, and system management phases of autonomous vehicles was presented. By providing an understanding of the underlying algorithms and decision-making processes, system safety and reliability are ensured, and evaluating performance in different scenarios is made possible, which promotes transparency and accountability.

2.4. Explainability

Xu et al. (2020) [27] proposed the need for artificial intelligence (AI) to understand and reason about scene objects in the same way that humans solve problems. For scene understanding, only objects directly related to the driving task need attention and finite descriptions. In response, the authors proposed BDD object-induced action (BDD-OIA) to determine what drives an action and to output relevant pairs of actions and descriptions to maximize explanatory power. BDD-OIA is a novel architecture for collaborative action/explanation prediction, implemented with an object detection module based on Faster R-CNN and a global scene context module based on a multi-task CNN. The study also utilized SG2VEC, a spatiotemporal scenario embedding methodology that recognizes visual scenes and predicts future collisions through graph neural network (GNN) and long short-term memory (LSTM) layers.
Atakishuai et al. (2024) [28] proposed the need to describe AI-based decision-making processes that enable not only safe real-time decision-making in autonomous vehicles but also compliance across multiple jurisdictions. The study provided a thorough overview of current and emerging approaches for XAI-based autonomous driving. It presented a future direction and a new paradigm for XAI-based autonomous driving that could improve transparency, trustworthiness, and socialization by proposing an end-to-end approach to explanation.
Omeiza et al. (2021) [5] proposed a novel multimodal deep learning architecture to generate textual descriptions of driving scenarios that can be used as human-understandable explanations, collaboratively modeling the correlation between images (driving scenarios) and language (descriptions). This study confirmed that autonomous vehicles can effectively mimic the learning process of a human driver, and generating legitimate sentences for a given driving scenario enables appropriate driving decisions.
According to Omeiza et al. (2021) [29], AI technologies should explain and justify decisions in an understandable way in order to build trust and increase public acceptance. However, existing explanation techniques have primarily focused on explaining data-driven models (e.g., machine learning models) and are not well suited to complex goal-based systems like autonomous vehicles. Furthermore, these explanations are often only useful to experts and are not easily utilized by general users. The paper proposed an interpretable, user-oriented approach to explanation provision in autonomous driving with the goals of providing clarity and accountability. In an autonomous driving environment, a combination of object identification techniques, traffic conditions, scene graph representations of behavior, and road rules was used to generate descriptions. The work focused on how this combination generated different descriptions (i.e., road rules) and suggested how different descriptions could be created and in which types of driving conditions descriptions were relevant.

2.5. Accountability

Two major challenges to overcome with autonomous vehicles are safety and liability. Society must hold autonomous vehicles accountable just as it holds human drivers accountable for their behavior. One way to do this is to define terms and conditions for autonomous vehicles. Rizaldi et al. (2015) [30] proposed five ways to make self-driving technology accountable: accepting responsibility by logging in before driving, configuring driving preferences to be accountable through scenarios, maintaining some level of awareness of driving, inputting driving preferences, and a certification mark indicating that self-driving car manufacturers are accountable.
As described in Section 2, these components are already considered in AI technology and are being developed into robust AI techniques. Each component has a powerful impact individually, and when the components are considered convergently, more advanced AI technology can be expected. In this case, it is essential to consider transparency and explainability, which enable a transparent description of the situation and the provision of evidence-based criteria, in order to derive clear validity and sustainable causality of the results. Based on transparent explainability, it becomes possible to understand results and to justify predictions, inferences, decision-making, and control, and accountability is necessary for achieving reliable performance and results. When all of the elements of AFTEA interact convergently, robust control and predictable response are achieved. In the following subsections of Section 3, each component is defined and described in detail, and the value of each component and the necessity of building an AFTEA architecture that emphasizes the convergence of these components are introduced.

3. AFTEA Architecture

AFTEA Architecture consists of adaptability, fairness, transparency, explainability, and accountability, as shown in Figure 1 and Figure 2.
Figure 1 is an architecture that highlights the inclusiveness of the five elements and illustrates their importance in the process of implementing AI. The shapes shown within “Adaptability” and “Fairness” represent data collected in the real world, which are the most important factors to consider in order to be fair and adaptable to the environment and situational context. In addition, as shown in the “Transparency” part of Figure 1, adaptability and fairness must be applied at the same time to ensure trustworthiness in the process of converging new data with the knowledge that flexibly represents the dynamic real world. Accountability encompasses all elements, and AFTEA demonstrates that by ensuring responsibility in each applied element, it is possible to achieve reliable outcomes in comprehension, inference, prediction, decision-making, and action execution.
Figure 2 illustrates the horizontal, organic relationship of the elements, in contrast to the inclusiveness shown in Figure 1, along with the key characteristics of each element described in Section 3. Figure 2 thus represents the interconnectedness and cooperative relationship between the elements during AI operation, so that each stage of AI can be performed in a highly complementary manner. These complementary AFTEA processes allow for adaptive, real-time responses to diverse environments and require fairness, transparency, and accountability for stable decision-making in dynamic environments. Furthermore, accountability for the validity of the results must be considered to ensure that each element remains complementary and applicable at all times. Fairness eliminates bias and ensures that AI performance and decision-making are transparent and accountable in all environments and situations. Transparency enables clear explanations, and accountability improves adaptability to dynamic environments and ensures explanations for fair and transparent results.
Thus, Figure 1 and Figure 2 show that the elements of AFTEA are not applied independently at each stage and throughout the entire AI process; rather, inclusion relationships exist between the stages of each element, and reliable AI is achieved when the interactions and harmonious balance of all elements are verified.
In the real world, it is essential to collect data from many different objects and then to construct knowledge on the current state and situation in the environment. In the data collection stage, adaptability and fairness are the most important considerations for clarity and contextualization in the dynamic real world. Fairness and adaptability are also considered at the stage of inferring new states and situations by generating knowledge that flexibly expresses the convergent real world and combining new data and existing knowledge.

3.1. Adaptability

Adaptability refers to converging existing information from the real world with new information in order to anticipate situations that may occur in different environments. It requires an inferential fusion of existing information and varying information acquired in new situations to represent the dynamic real world and respond to various changes immediately.
To express such a state of affairs, the concepts of knowledge and ontology are converged, and graph neural network (GNN) [31,32,33] and artificial general intelligence (AGI) [34,35,36] technologies are being researched to understand the complexity and diversity of the real world, to express convergent relationships, and to respond to immediate changes. Knowledge represents a situation in a given circumstance by storing complex structured data; it therefore allows one to derive relationships and rules between objects in order to infer a new situation from a known one. To derive the relationships and rules of objects, the concept of ontology is introduced, and ontology is applied to derive the interactions and relationships between objects by interconnecting with knowledge. In the real world, various objects and pieces of information are interactively and convergently connected, so knowledge-graphing technology [37,38,39] is applied to interconnect knowledge. Knowledge graphs not only derive the relationships between objects by connecting them but also express the relationships between the characteristics and meanings contained in the objects. With the development of graph neural networks, knowledge graphs are evolving into adaptive AI that represents the dynamic real world and enables reasoning about and response to various changes in situations. Adaptability is therefore crucial for building knowledge from various objects and characteristics in the real world and for inferring associations and causality between data in a dynamic environment that changes in real time. In particular, in the context of real-world driving, a multitude of rules and patterns coexist, reflecting the diversity of social environments. The interrelationships and combinations of objects within these environments give rise to a multitude of contexts and situations that deviate from the established rules and patterns.
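To make this concrete, the following Python sketch (illustrative only; the object names, relation types, and the yield rule are assumptions rather than part of any AFTEA specification) builds a small driving-scene knowledge graph with networkx and infers a new relation from existing ones, which is the kind of rule-and-pattern extension discussed above.

```python
# Minimal sketch: a driving-scene knowledge graph with one hand-written inference rule.
# All entities, relation names, and the rule itself are illustrative assumptions.
import networkx as nx

# Nodes are perceived objects; edges are typed relations derived from perception.
scene = nx.MultiDiGraph()
scene.add_edge("ego_vehicle", "crosswalk_1", relation="approaches")
scene.add_edge("pedestrian_1", "crosswalk_1", relation="near")
scene.add_edge("vehicle_2", "lane_2", relation="drives_in")

def infer_yield_relations(graph):
    """If the ego vehicle approaches a crosswalk that a pedestrian is near,
    add an inferred 'should_yield_to' relation (an example of deriving a new
    relation from existing ones)."""
    inferred = []
    for _, crosswalk, data in graph.out_edges("ego_vehicle", data=True):
        if data.get("relation") != "approaches":
            continue
        for neighbor, _, ndata in graph.in_edges(crosswalk, data=True):
            if ndata.get("relation") == "near" and neighbor != "ego_vehicle":
                graph.add_edge("ego_vehicle", neighbor, relation="should_yield_to")
                inferred.append(("ego_vehicle", "should_yield_to", neighbor))
    return inferred

print(infer_yield_relations(scene))
# -> [('ego_vehicle', 'should_yield_to', 'pedestrian_1')]
```

In a real system, such rules would themselves be learned or validated rather than hand-written, but the structure illustrates how relations between objects, rather than isolated detections, drive adaptive inference.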
The driving environment requires adaptive decision-making not only for situations that occur in a regular pattern but also for new situations that are irregular (outside the regular pattern) and different from previous experience. It is therefore evident that in order to gain knowledge of the driving environment, it is necessary to develop the ability to construct a framework that incorporates the established rules and patterns of experience and to derive knowledge that enables the inference of and adaptation to new situations in real time, thereby facilitating an immediate response to the changing environment. In order to generate adaptive conclusions to the situation by constructing a graph, it is necessary to derive a flexible knowledge graph that accurately understands the existing knowledge graph and the new situation, allowing for complex convergence. It is extremely valuable to be able to infer and derive new knowledge graphs that are compatible with existing knowledge graphs in order to infer possible actions in new situations and make the appropriate decisions. In other words, AI needs to be adaptable to a variety of contextual information and multiple socio-environmental factors through flexible perception and interrelationships with unconventional objects.
Therefore, it is necessary to continuously establish and converge knowledge that enables the description of various situations so that new patterns and rules arising from the knowledge graph of existing experiences become available for inference and appropriate response. Continuous and convergent knowledge graph construction allows for adaptive decision-making in various environments and contributes to establishing a safe autonomous driving environment.
The application of autonomous driving technology that is able to adapt to dynamic situations in real time is expandable by applying it to various domains that interact with dynamic environments. In other words, in order to build adaptive algorithms that can be used in various social domains beyond the autonomous driving environment, it is essential to perform convergent contextualization, reasoning, and decision-making in various situations through consideration of the overall background and context of the current situation as well as deriving relationships between objects.

3.2. Fairness

Fairness is a critical element in AI system-based decision-making processes, as it ensures that data and algorithms are unbiased and operate equitably across all scenarios and objects. The various environments in which AI systems are applied can be classified into dynamic and static environments. Static environments are those with minimal changes, where existing rule-based patterns can be applied without significant errors in the outcomes. However, in dynamic environments, such as autonomous driving, where objects move randomly and situations change in real time, achieving stable training effects is challenging, and maintaining consistency in feature-based patterns is difficult [40]. For unstructured and dynamic data, it is important to ensure that all situations and objects are considered fairly so that unbiased decisions are made. Therefore, applying existing feature-based patterns in real-time dynamic situations is likely to inadequately consider environmental changes and the irregularity of moving objects, potentially leading to inappropriate situations that do not guarantee fairness. To ensure fairness, it is essential that AI systems are meticulously designed and implemented to operate impartially, encompassing all stages from data collection to algorithm design and application [41]. As illustrated in the “Fairness” section of Figure 1, maintaining unbiased decision-making is critical throughout the process. This process includes collecting and analyzing diverse data from a real-time changing road environment, developing purpose-driven algorithms (e.g., knowledge graphs), and applying the resulting models in actual environments. Each of these steps must be executed with the goal of ensuring fair and unbiased outcomes.
First, to address fairness issues stemming from dataset characteristics, it is crucial to collect data from diverse sources in order to ensure representativeness and verify that the data are not biased towards specific situations or groups [21,23]. This approach helps in constructing a balanced dataset that includes a wide range of scenarios and groups. Particularly in real-time environments like autonomous driving, where data may be skewed towards specific weather conditions, time periods, or regions, it is important to diversify and collect data across different times and locations in order to build a balanced dataset. For already collected datasets, it is necessary to check for overfitting caused by historical unfair practices reflected in the data and to review and refine the labeling process to eliminate any bias. If these biased data characteristics are not considered in advance, it can negatively impact the model’s performance and fairness.
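As one small illustration of the kind of dataset audit described above (the metadata fields and the 10% threshold are assumed for illustration and are not prescribed by the paper), the following sketch counts samples per weather/time/region group and flags underrepresented groups:

```python
# Illustrative sketch of a simple balance audit over collected driving samples.
# The metadata fields and the 10% threshold are assumptions, not AFTEA requirements.
from collections import Counter

samples = [
    {"weather": "clear", "time": "day", "region": "urban"},
    {"weather": "clear", "time": "day", "region": "urban"},
    {"weather": "rain",  "time": "night", "region": "highway"},
    # ... in practice, this would be the metadata of the full training set
]

def audit_balance(samples, keys=("weather", "time", "region"), min_share=0.10):
    """Report the share of each condition group and flag groups below min_share."""
    groups = Counter(tuple(s[k] for k in keys) for s in samples)
    total = sum(groups.values())
    report = {}
    for group, count in groups.items():
        share = count / total
        report[group] = {"share": round(share, 3), "underrepresented": share < min_share}
    return report

for group, info in audit_balance(samples).items():
    print(group, info)
```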
Using biased data can cause the algorithm to produce results that are skewed towards specific categories, deviating from its original intended function and potentially failing to provide fair outcomes for all users. Efforts are made to minimize bias by training AI algorithms with the most fair datasets possible in order to avoid unfair results, but even after addressing data bias, the algorithm design process itself can still introduce unfairness [22,42]. Algorithm design focuses on problem-solving by extracting features of objects through comparative analysis of their histories and learning from these to predict unique situations. However, during the learning process used to achieve results in a specific direction, the algorithm can become confined to a single domain-based scenario. This creates a risk that the objective function the algorithm aims to optimize may be biased under certain conditions.
If the specific features of the input data contain unbalanced or discriminatory information against certain groups, the model structure may react excessively sensitively to some features. This can lead to the objective function that the algorithm aims to optimize not ensuring overall fairness or acting unfavorably towards certain groups. Additionally, since existing rules and patterns extract and learn features based on past data, it is difficult to discover new variables for unseen elements and changed situations. These existing patterns often include biases, and following them as they are can undermine fairness.
In a truly dynamic environment, multiple objects are interrelated, and the algorithm, through the comparative analysis of various features, may infer completely new types of information from numerous interactions, sometimes discovering incorrect correlations and making predictions based on them. For autonomous vehicles, where the environment in which the algorithm is trained can differ greatly from the environment in which it is applied, data collected from specific roads or conditions might not reflect other conditions. Consequently, the algorithm may make accurate predictions in some environments but malfunction in others. Such data bias and algorithm bias can be particularly pronounced in variable and unpredictable environments like autonomous driving. In other words, model fairness is an essential element for trustworthy AI. The process of building datasets and the functioning and operation of algorithms must be designed to be understandable and explainable. In addition, fair decision-making should be ensured by producing flexible and adaptive results that can merge with and adapt to new situations that deviate from existing rules and patterns, thereby avoiding biased outcomes. To achieve fairness, a situationally integrative approach is essential. This approach aims to produce unbiased outcomes by flexibly responding to new situations rather than being constrained by existing patterns. In an integrative approach, it is essential to combine various factors and variables in diverse situations in order to understand the overall context and formulate appropriate strategies. A situationally integrative approach involves comprehensively analyzing the current situation and flexibly applying existing rules. This method is crucial for adapting to new situations and maintaining fairness.
Therefore, by ensuring fairness through three approaches—data bias, algorithmic bias, and environmental adaptive bias—AI in autonomous driving environments can produce unbiased results in all conditions, deliver accurate and reliable outcomes, and operate fairly for diverse users. As a result, these approaches enhance the transparency and reliability of AI systems, helping them to operate fairly for diverse user groups. This ensures that AI systems can adapt to dynamically changing real-time environments and respond effectively to various situations. Ultimately, AI systems that ensure fairness can build social trust and contribute to the increased utilization and acceptance of AI technology.

3.3. Accountability

Accountability, in Figure 1, refers to the justification of actions and decisions and the adjustment of behavior according to current situations. As shown in Figure 1, “Accountability”, which is the outermost section and contains all the elements, requires that when conclusions are generated by applying artificial intelligence technology, there must be justification for the generated conclusions and that the AI must be able to be safely and reliably controlled in various situations. Accountability requires transparency for learning and reasoning that leads to unbiased outcomes as well as clear explanation of the reasoning and evidence behind results.
It is crucial to maintain responsibility for achieving stable results and making effective decisions in every circumstance, regardless of prior exposure. From a societal perspective, it is necessary to ensure that legal regulations and rules are correctly and validly applied to the domains in which AI algorithms are utilized and that these algorithms are generally applicable by providing reliable results to the users of the applications. Therefore, accountability in AI must combine both the technical and the societal perspective.
When AFTEA is applied in the autonomous driving environment, it requires accountability for safe outcomes since human lives are involved. Unlike rule-based environments and learned situations, autonomous environments encounter many dynamic situations, including situations in which the rules are out of line or different from anticipated situations. It is necessary to be responsible for ensuring that fair conclusions are achieved that are adapted to a variety of changing factors, such as the surrounding environment and road driving regulations. Also, AI processes require accountability for transparent rationale and cause-based explanations as well as for ensuring that decisions are implemented reliably in order to enable stable driving control.
Fairness requires accountability for bias in data and models. To eliminate bias in datasets, it is important to be able to assess whether the representation of each group in the datasets is fair, to determine whether the data within groups are unbalanced, and to identify the groups from which biased data values are derived. If datasets contain data labels, it is also important to determine whether there is bias in the names of the labels and in their distribution. For model bias, it is necessary to evaluate the extent to which the model’s predicted probabilities are consistent with actual consequences. To ensure a sustainable model, it should be possible to determine how well predicted outcomes match actual outcomes and to derive a clear error rate for the predicted values along with a process for refinement.
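These checks can be made concrete with simple metrics; the sketch below (the group labels, bin count, 0.5 decision threshold, and toy data are illustrative assumptions) computes a positive-prediction rate per group as a coarse bias indicator and a calibration table comparing mean predicted probability with observed outcomes:

```python
# Illustrative sketch: per-group prediction rates and a coarse calibration table.
# Group names, the number of bins, the 0.5 threshold, and the toy data are assumptions.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive predictions per group (a simple bias indicator)."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred >= 0.5)
    return {g: positives[g] / counts[g] for g in counts}

def calibration_table(probabilities, outcomes, n_bins=5):
    """Compare mean predicted probability with observed frequency per probability bin."""
    bins = defaultdict(list)
    for p, y in zip(probabilities, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = {}
    for b, pairs in sorted(bins.items()):
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(y for _, y in pairs) / len(pairs)
        table[b] = {"mean_predicted": round(mean_p, 3), "observed": round(observed, 3)}
    return table

probs = [0.1, 0.4, 0.8, 0.9, 0.7, 0.2]
labels = [0, 0, 1, 1, 0, 0]
groups = ["A", "A", "B", "B", "A", "B"]
print(positive_rate_by_group(probs, groups))
print(calibration_table(probs, labels))
```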
Transparency and explainability require clear indicators of how the results were derived, what the results are, what factors led to them, and the rationale behind them. From a descriptive perspective, this calls for generalized metrics for validating the accuracy of the model and its description of the results, as well as for validating conclusions against existing knowledge and areas of expertise. Accountability should be assessed by evaluating the ability to learn effectively from new environments and new data and to fuse old and new data to produce reliable results and decisions in a variety of environments.
To ensure accountability for AI’s environmental adaptability, it can be divided into the aspects of robustness to changes in data and models and robustness to changes in the situation and environment in which AI technology is applied. In the case of data, it is necessary to be responsible for how robust and stable it is in the face of various changes in input data, noise, and attacks. If the reliability and robustness of the model are guaranteed when the same patterns and features of the input data are input during the learning process, then it will produce highly accurate results for the input data.
However, in the real world, where there are various environmental changes, data with subtle changes and patterns and characteristics that do not exactly match the data utilized for training are input, so it is essential to be able to accurately reflect these data changes and produce results that are appropriate for the changed data. An AI model should be able to understand new environments and reason about outcomes based on existing learned results while simultaneously assessing whether it can adapt reliably to rare and uncommon scenarios. In an autonomous environment, accountability must be considered throughout the entire process, from object recognition to driving reasoning and decision making, in order to ensure reliable results. It must be ensured that the driving data of the ego vehicle, the surrounding objects, and the hazards and elements to be avoided are accurately recognized. This is a crucial element in the adaptation of knowledge to dynamic scenarios and the fusion of objects based on fair criteria to produce a stable response in various driving environments. The knowledge appropriate for different scenarios and the creation of clearly defined criteria to respond to them enable accountability for transparency, sustainable comprehension, and decision-making. Providing dynamic driving response and avoidance criteria considering accountability in unexpected road driving environments such as construction sites, road closures, traffic light failures, etc., allows for transparent explainability when building convergence knowledge and then stable decisions and actions.
The robust decision-making and actions mean that beyond autonomous environments, in all domains where AI technology is applied, accountability is not just about the consequences of the results produced. It must be able to provide a clear justification for learning, understanding, reasoning, situation awareness, prediction, decision-making, and control that takes into account all the elements. It requires the ability to give reliable validity to the results obtained and to judge the rightness or wrongness of actions and controls when making decisions so that clear criteria for control and response can be established for the domain in which the AI technology is applied. AI technology must utilize all elements of AFTEA to provide valid assessments and must draw conclusions in which the reliability and robustness of all these components are ensured.

3.4. Transparency

Transparency is a crucial concept in many fields, signifying that information and processes are clear and publicly accessible. It is an important factor in enhancing reliability and fairness, and it enables the explanation and verification of results. In this paper, transparency is categorized into information and algorithmic transparency, transparency in result derivation and decision-making processes, and transparency for accountability. As shown in the “Transparency” section of Figure 1, it is emphasized that datasets, algorithms, derived results, decision-making, and the overall implementation process, all of which are applied differently to AI systems and autonomous driving technologies, must be transparently disclosed.
First, information and algorithmic transparency refers to the concept that datasets and algorithms should be publicly available. Making it transparent which datasets the system analyzed and learned from to make informed decisions builds trust in the results it produces. This allows others to evaluate the fairness of the system and trace the source of errors if they occur. Understanding the relationship between data and algorithms and explaining the decision-making process of machine learning models is important and can contribute to increasing the reliability of AI systems and promoting the development of socially transparent technologies [43,44].
Second, transparency in result derivation and decision-making processes refers to clearly showing how results are generated and transparently presenting the outcomes. Especially in autonomous driving systems, it is essential to explain why the vehicle chose a particular action [45]. This provides information about the system’s state and helps users understand and trust the automated system’s operating principles and decision-making rationale. For example, if an autonomous car suddenly slows down, it should be able to clearly explain whether the reason was an obstacle on the road or the movement of another vehicle. Through these principles of transparency, it is essential to clearly define and provide the necessary information so that users can understand the system’s intentions [25,46]. In other words, clearly explaining the reasons behind the system’s decisions is crucial for enhancing human trust [47].
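As an illustration of this kind of decision-level explanation (the perception fields and message templates below are assumptions, not an existing autonomous-vehicle interface), a minimal rule-based sketch might map perceived causes to a human-readable statement:

```python
# Minimal sketch: mapping a deceleration decision and its perceived causes to a
# human-readable explanation. Field names and templates are illustrative assumptions.
def explain_deceleration(perception):
    reasons = []
    if perception.get("obstacle_ahead"):
        reasons.append("a stationary obstacle was detected in the driving lane")
    if perception.get("lead_vehicle_braking"):
        reasons.append("the vehicle ahead is braking")
    if perception.get("traffic_light") == "red":
        reasons.append("the upcoming traffic light is red")
    if not reasons:
        return "The vehicle reduced speed as a precaution (no single dominant cause identified)."
    return "The vehicle reduced speed because " + " and ".join(reasons) + "."

print(explain_deceleration({"lead_vehicle_braking": True, "traffic_light": "red"}))
# -> The vehicle reduced speed because the vehicle ahead is braking and the upcoming traffic light is red.
```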
Liu et al. (2022) [46] proposed a functional transparency (FT) assessment approach to address the limitations of existing human–machine interface (HMI) transparency evaluation methods that rely on the quantity of information. Unlike traditional transparency, which merely emphasizes the amount of information provided, functional transparency (FT) focuses on how well the HMI can be understood by the user after interaction. This approach evaluates how effectively the HMI design enables users to understand the environment based on the information transmitted, and it reexamines the effectiveness and importance of the information delivery methods.
Pokam et al. (2019) [25] aimed to extract the information needed by drivers in the design of HMI for autonomous vehicles, helping drivers to understand and trust the behavior of the autonomous driving system. This study considered driver–vehicle–environment (DVE) conditions and driver status, using a rule-based algorithm to visually clarify why an autonomous vehicle is reducing its speed, thereby aiding drivers in understanding the system’s intentions. This approach enhances drivers’ situational awareness (SA) and improves the transparency of the system.
Thirdly, transparency for accountability involves linking the results obtained and the supporting evidence to the system’s accountability by transparently disclosing them. This means that newly generated and accumulated data, derived results, and interpretations of decision-making must be openly and transparently available in real time. In the event of an accident in an autonomous system, this information should be transparently disclosed in order to reconstruct the chronological sequence of events to analyze the cause of the accident and interpret responsibility [24]. Through this, it should be clearly identified what data decisions were based upon and how those decisions were made in order to clearly determine responsibility and derive improvement measures for similar situations in the future.
The event data recorder (EDR) in an autonomous vehicle records data related to the operation of the vehicle. These data can reconstruct the events leading up to an accident and provide important information for legal proceedings and insurance claims. Researchers are developing algorithms to more accurately analyze post-accident data [48].
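To illustrate the kind of time-stamped, reconstructable record such transparency requires (the record fields, file name, and JSON-lines format are assumptions rather than an actual EDR standard), a minimal append-only decision log could be sketched as follows:

```python
# Illustrative sketch of an append-only, time-stamped decision record.
# The record fields and JSON-lines format are assumptions, not an EDR standard.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    timestamp: float
    detected_objects: list   # e.g., ["pedestrian_1", "traffic_light_3"]
    sensor_summary: dict     # e.g., {"speed_kmh": 38.5, "traffic_light": "red"}
    chosen_action: str       # e.g., "stop"
    rationale: str           # human-readable justification for the action

def append_record(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one record per line so the sequence of events can be reconstructed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record(DecisionRecord(
    timestamp=time.time(),
    detected_objects=["pedestrian_1", "traffic_light_3"],
    sensor_summary={"speed_kmh": 38.5, "traffic_light": "red"},
    chosen_action="stop",
    rationale="Red light and pedestrian near crosswalk detected.",
))
```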
J.N. Njoku et al. (2023) [24] proposed an innovative concept that used recorded data and location-based identification to ensure fair judgment in vehicle accidents. Their research demonstrated the feasibility of the proposed solution for accident investigation and analysis.
A. Rizaldi et al. (2015) [30] addressed the problems of ensuring that autonomous vehicles follow traffic rules and of clarifying responsibility in the event of a collision. To solve these problems, they proposed a method for datafying traffic rules and making them mechanically testable. They showed that if traffic rules are precise and unambiguous, vehicles can avoid collisions while obeying traffic rules, which is important for establishing liability. This contributed to the behavior of autonomous vehicles being transparently evaluated and responsibility being clearly identified.
D. Omeiza et al. (2021) [29] proposed an interpretable tree-based user-centered approach to describe autonomous driving behavior. One way to ensure multiple accountability is to provide a description of what the vehicle ‘saw’, did, and can do in a given scenario. To this end, based on hazard object identification in driving scenes and traffic object representation using scene graphs, the authors combined observations, autonomous vehicle behavior, and road rules to provide interpretable tree-based descriptions. A user study evaluating the types of explanations in different driving scenarios emphasized the importance of causal explanations, especially in safety-critical scenarios.
In this way, the data, results, and decision-making processes related to autonomous vehicles must be formalized and transparently disclosed so that responsibility can be clearly identified and improvements can be made in similar situations. A lack of transparency can lead to a variety of problems, including a lack of trust, questions about fairness, and difficulties in accurately analyzing system errors. In autonomous driving systems in particular, a lack of transparency can significantly undermine the trust of users and the general public. Since autonomous driving systems are critical systems that directly impact human lives, ensuring safety and reliability through transparency is essential. Therefore, autonomous driving systems need to ensure transparency through the disclosure of the algorithms and datasets used, clear explanations of how results are derived, and transparent interpretations of results and decisions. For example, in a situation in which an autonomous vehicle decides to stop at an intersection (using various sensor data to detect pedestrians, other vehicles, traffic light status, etc.), if the datasets used and the algorithms processing this data are transparently disclosed, users can clearly understand why the vehicle stopped. Additionally, if the decision-making process is recorded and available for later analysis, it plays a crucial role in accident investigation and accountability. This enhances the explainability of outcomes and helps provide reliable decision-making. Transparency is a crucial element that provides clarity and legitimacy to the results, playing a key role in ensuring the safety and reliability of autonomous driving systems.

3.5. Explainability

Explainability at the recognition stage means being able to identify results by visually deriving the elements of each object recognized from the data in the model. The visual representation and derivation of recognition results enables status tracking and continuous monitoring, and it provides clear causes and rationales for the results when controlling the AI and reporting on the situation. Providing a reasonable basis and causal factors for object recognition yields a clear visual explanation of the factors that contribute to normal and abnormal situations.
In an autonomous driving environment, surrounding objects are visualized in real time, and certain dangerous areas are color-coded to explain the real-time driving environment. For this purpose, lane recognition technology, road surface marking recognition, object recognition, motion prediction technology in the road environment, and depth estimation technology for object recognition based on distance information are applied to autonomous driving technology. In addition, clear explainability of the object recognition results can be obtained through the correlation between images and language that can expand the interrelatedness of the recognized scenarios and objects.
The real world is composed of convergent interactions between objects, but even though each individual object can be perceived and described, it is difficult to perceive and understand the situations that they constitute if their relationships are not deduced and described. Deriving the interrelation of objects can elucidate the grounds and causes for the perceived objects and situations. By explaining these grounds and causes, it is possible to infer the relationships between objects formed in various and new situations and to continue to provide valid grounds and causes for the perception of new situations. It can elucidate the priority and importance of objects in the formation of mutual relationships among objects and provide the basis for forming mutual relationships.
In the driving environment, it is difficult to make stable driving responses and decisions based on single-object recognition alone. For example, when a lane change or merge is required due to a merging section, it is important to recognize the road markings that permit a lane change and then decide when a stable merge is possible by recognizing the movement and direction of vehicles in the target lane in combination with the perceived situation. Even when it is necessary to drive in a pattern different from the existing rules, such as at a construction site or during a traffic light failure, the driving system must be able to derive a reasonable justification for driving in conflict with the existing rules and regulations by combining the perceived situation with the movement of surrounding vehicles.
Knowledge is needed to represent and reason about these interactions. In AI, such knowledge is represented as a knowledge graph, a way of describing contextual information that occurs in the real world. The knowledge graph becomes the basis for making decisions and exercising control over the situation by deriving the perceived situation information. The relationships between objects, formed in consideration of their interactions, construct various knowledge graphs for judging the situation. When a new situation is recognized, new knowledge for decision-making and control can be generated through a reasoning process suited to unlearned situations by fusing the interrelationships between objects with existing knowledge, and decision-making and control criteria for that situation can be presented.
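The following minimal sketch illustrates this idea with a knowledge graph reduced to (subject, relation, object) triples: observed relations that match existing knowledge yield known actions, while unseen relations are kept so they can be fused with existing knowledge and later added as new knowledge. The entities, relations, and actions are illustrative assumptions.

```python
# Existing knowledge: (subject, relation, object) triples mapped to decision criteria.
existing_knowledge = {
    ("traffic_light", "state", "red"): "stop",
    ("pedestrian", "located_in", "crosswalk"): "stop",
    ("lead_vehicle", "action", "braking"): "slow_down",
}

def infer_actions(observed_triples):
    """Fuse observed relations with existing knowledge and collect candidate actions."""
    fused = []
    for triple in observed_triples:
        if triple in existing_knowledge:
            fused.append((triple, existing_knowledge[triple]))
        else:
            # Unlearned relation: retain it so it can be linked to existing knowledge
            # and added as new knowledge with its own decision criterion.
            fused.append((triple, "needs_reasoning"))
    return fused

observation = [
    ("traffic_light", "state", "red"),
    ("traffic_officer", "gesture", "wave_through"),   # not yet in existing knowledge
]
for triple, action in infer_actions(observation):
    print(triple, "->", action)
```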
The knowledge formed by the convergent interrelationships between objects becomes the basis for decision-making and control in various situations, and the knowledge-formation process, including deriving labels and descriptions of what basis the derived knowledge provides for decision-making, is essential for reliable decision-making. By linking these interactions to the process in which they are applied, it should be possible to explain clearly how the interactions between the objects contained in the knowledge were used and which objects and weights were derived to build the knowledge appropriate to the situation. When a new situation is recognized, the situation can be expressed and understood by explaining the complex reasoning process that determines which existing knowledge graph should be selected and how it is fused with the interrelations of the new objects, and by deriving detailed criteria and specifics for situation-specific decision-making and control.
The AI model should be able to clearly identify the current driving location and environment, such as junctions, blind spots, and parking lots, and derive driving criteria through knowledge that infers, predicts, and responds to the movement of objects. At a junction, it should be possible to set weights on the movement and direction of vehicles that want to merge; in a blind spot or parking lot, it should be possible to set weights on values that allow the inference of suddenly appearing objects or unexpected movements, so that a definite basis and explanation for a stable driving response can be presented even in a rapidly changing driving environment. When a new situation occurs, the interrelations of the perceived objects enable an applied understanding of the situation through the convergence of existing knowledge, and new knowledge is constructed by combining new interrelations with existing knowledge, so the criteria for valid knowledge selection must be explained. When recognizing a situation, the system must present the basis and criteria for whether the interrelations of objects allow knowledge to be selected from learned situations. If no learned situation applies and new knowledge must be built from the cognition of the new situation, the criteria for the selection range should be presented through the interrelations of objects, together with how they are to be used. Then, in the composite fusion of new object interrelationships with existing knowledge, explaining how each part of the new relation is connected and fused with the existing knowledge improves the validity of the composite reasoning used to build new knowledge.
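A small sketch of such context-dependent weighting is shown below: each driving context assigns different weights to motion cues, and a weighted sum yields a caution score that can be reported alongside the decision. The contexts, cue names, and weight values are assumptions made purely for illustration.

```python
# Each driving context weights motion cues differently (illustrative values).
CONTEXT_WEIGHTS = {
    "junction":    {"merging_vehicle_motion": 0.6, "sudden_object": 0.2, "occlusion": 0.2},
    "blind_spot":  {"merging_vehicle_motion": 0.2, "sudden_object": 0.4, "occlusion": 0.4},
    "parking_lot": {"merging_vehicle_motion": 0.1, "sudden_object": 0.5, "occlusion": 0.4},
}

def caution_score(context: str, cue_values: dict) -> float:
    """Combine normalized cue values (0..1) using the weights of the current context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(weights[cue] * cue_values.get(cue, 0.0) for cue in weights)

# In a blind spot, the possibility of sudden or occluded objects dominates the score.
print(caution_score("blind_spot", {"sudden_object": 0.9, "occlusion": 0.7}))
```

Because both the weights and the cue values are explicit, the same structure can be used to explain which objects and weights contributed most to the resulting driving criterion.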
In a driving environment that suddenly deviates from the rules, such as a construction site or a traffic light failure, or when a detour is required due to a road closure, knowledge capable of responding stably to the current driving environment must be derived by fusing existing knowledge with the surrounding objects and their movements. If the existing knowledge can be clearly explained, the data elements required to generate knowledge fused with new data can be selected on a clear basis, and sustainable, explainable knowledge can be formed for sudden scenarios. Accordingly, even when driving outside the existing rules or patterns is required in the current environment, convergent knowledge that includes clear evidential elements can be generated and clear criteria for stable driving responses can be derived, enabling plausible composite-inference driving in various scenarios.
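As a hedged illustration of such rule-deviating but justified behavior, the sketch below fuses a base rule with new observations (for instance, a failed traffic light combined with an officer waving traffic through) and returns both the chosen action and its justification. The observation names and override logic are illustrative assumptions.

```python
def respond(base_rule_action: str, observations: set) -> tuple[str, str]:
    """Return (action, justification) so any deviation from the base rule stays explainable."""
    if "traffic_light_failure" in observations and "officer_wave_through" in observations:
        return ("proceed_slowly",
                "base rule 'stop at red' overridden: light has failed and an officer is waving traffic through")
    if "construction_zone" in observations and "lane_closed" in observations:
        return ("follow_temporary_lane",
                "base rule 'keep lane' overridden: lane closed, temporary markings and worker signals apply")
    return (base_rule_action, "no valid grounds to deviate from the base rule")

action, reason = respond("stop", {"traffic_light_failure", "officer_wave_through"})
print(action, "|", reason)
```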
Situations in the real world do not always follow a fixed pattern, owing to the diversity of perceived data and interactions. Explainability in AI should not only explain the basis and cause of conclusions but should also derive valid factors for prediction, decision-making, judgment, and control across the entire process, from data recognition and cognition to situation recognition, understanding, and reasoning. Explainability grounded in clear evidence and factors for the results derived at each stage enables richer information reasoning and interpretation, with detailed evidence and causal interpretation, and enables flexible, situation-specific responses and decision-making.

4. Conclusions

This paper proposed the AFTEA framework to define the essential factors to be considered in the learning, reasoning, and decision-making processes of artificial intelligence and suggested a direction for building a stable, reliable autonomous driving environment and sustainable AI technology. Current AI technology emphasizes the need for fairness, accountability, transparency, and explainability (or ethics). However, AI must also consider contextual and environmental adaptability, as it has to operate in highly dynamic settings such as the autonomous driving environment and the real world, where diverse changes take place in real time.
Much of the current research, however, is based on FAT and FATE, and there is a lack of research on architectures that converge these elements with an adaptability element. Therefore, this paper proposed the adaptability, fairness, transparency, explainability, and accountability (AFTEA) architecture to ensure a stable, clearly informed, and contextually adaptable approach to the dynamic real world.
This paper explained the need for AFTEA by defining the overall architecture of AFTEA and each of its components and by describing how each component is applied throughout the AI learning, reasoning, and decision-making process.
Adaptability is the ability to fuse existing information with new knowledge in the real world. It enables immediate responses through the inferential fusion of existing information and new situations, by predicting and reasoning about situations that occur in various environments.
Fairness is essential for trustworthy AI, making it possible to understand and explain how datasets are built and how algorithms function. It enables contextualized, context-adaptive, and flexible outcomes and fair decision-making in new situations that challenge established patterns and deviate from the rules.
Transparency refers to the clarity and public accessibility of information and processes and is an essential element for increasing reliability and fairness and enabling explanation and verification of results. Accordingly, this paper specifies the need for transparency in AI by dividing transparency into transparency of information and algorithms in general, transparency of results and decision-making processes, and transparency for accountability.
Explainability is the interpretation of the reasons and causes behind derived results in order to verify their validity and clarity. As it is a composite inference over the entire process, from data collection and data recognition in the situation and environment, through interaction between recognized objects and data, to situation awareness, it enables rational decision-making by presenting not only the basis for the final result but also a detailed explanation of the entire process by which that result was derived.
Finally, accountability is the ability to judge the legitimacy of decisions and control actions according to the situation; it is the factor that justifies the conclusions drawn by AI and enables the safe and reliable control of decisions in various situations.
As detailed above, this paper described and defined each element of AFTEA and proposed an architecture that can be applied convergently to the real world, including environmental adaptability. By including the environmental adaptability element, AFTEA offers a direction for improving upon existing algorithms that consider fairness, accountability, explainability, and transparency, toward algorithms that analyze situations occurring in various environments and can adapt and respond in real time.
The role of AI in autonomous environments and dilemma situations extends beyond rule-based reactions and requires highly adaptive and predictive capabilities to respond proactively to different situations. In particular, the development of generative AI increasingly emphasizes the importance of AI's ability to reason and predict in new, previously unexperienced environments and situations. To meet this requirement, AI technologies that integrate the adaptability factors proposed by AFTEA must be developed. Such technologies derive selection criteria from a variety of data and perceived objects, generate convergent information and knowledge, and enable sustainable decision-making. With human-like reasoning capabilities, they can also derive, in real time, the detailed characteristics of states and situations that require attention in various environments and perform real-time, environment-adaptive actions in response. This will expand the applicability of AFTEA beyond autonomous driving to other industrial and home domains that must adapt and respond to various situations and environments in real time, such as chatbots, emergency response, and robotics.
The current AFTEA research focuses on proposing the framework and the need to apply it to autonomous driving and therefore lacks quantitative metrics from experiments in real-world scenarios. Based on the AFTEA architecture, we will therefore extend this research to experiments that apply the architecture in a real autonomous driving environment. This research will be conducted to obtain numerical and visual validation of the AFTEA architecture in real-world scenarios. In addition, the AFTEA architecture will be enhanced to become a sustainable AI technology in various real-world domains beyond the autonomous driving environment.

Author Contributions

S.K. and J.K. conceptualized ideas and the framework, investigated related works, developed the theory, created and designed the architecture, and wrote the main manuscript text and figures. Y.Y. supervised the completion of the work, contributed to manuscript preparation, funding acquisition, and project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the IITP (Institute of Information & Communications Technology Planning & Evaluation)-ICAN (ICT Challenge and Advanced Network of HRD) grant (IITP-2024-RS-2022-00156299) and the Development of Hashgraph-based Blockchain Enhancement Scheme and Implementation of Testbed for Autonomous Driving program (IITP-2024-RS-2022-00207391) funded by the Korea government (Ministry of Science and ICT). This work was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2023R1A2C1005779).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Connor, S.; Li, T.; Roberts, R.; Thakkar, S.; Liu, Z.; Tong, W. Adaptability of AI for safety evaluation in regulatory science: A case study of drug-induced liver injury. Front. Artif. Intell. 2022, 5, 1034631. [Google Scholar] [CrossRef]
  2. Liu, H.; Wang, Y.; Fan, W.; Liu, X.; Li, Y.; Jain, S.; Liu, Y.; Jain, A.; Tang, J. Trustworthy AI: A computational perspective. ACM Trans. Intell. Syst. Technol. 2022, 14, 1–59. [Google Scholar] [CrossRef]
  3. Rane, N.; Choudhary, S.; Rane, J. Explainable Artificial Intelligence (XAI) approaches for transparency and accountability in financial decision-making. SSRN 2023, 4640316. [Google Scholar] [CrossRef]
  4. Alikhademi, K.; Richardson, B.; Drobina, E.; Gilbert, J. Can explainable AI explain unfairness? A framework for evaluating explainable AI. arXiv 2021, arXiv:2106.07483. [Google Scholar]
  5. Omeiza, D.; Webb, H.; Jirotka, M.; Kunze, L. Explanations in autonomous driving: A survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 10142–10162. [Google Scholar] [CrossRef]
  6. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2023, 39, 1871–1882. [Google Scholar] [CrossRef]
  7. Shin, D.; Park, Y.J. Role of fairness, accountability, and transparency in algorithmic affordance. Comput. Hum. Behav. 2019, 98, 277–284. [Google Scholar] [CrossRef]
  8. Lepri, B.; Oliver, N.; Letouzé, E.; Pentland, A.; Vinck, P. Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philos. Technol. 2018, 31, 611–627. [Google Scholar] [CrossRef]
  9. Ahmad, M.A.; Teredesai, A.; Eckert, C. Fairness, accountability, transparency in AI at scale: Lessons from national programs. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; p. 690. [Google Scholar]
  10. Quttainah, M.; Mishra, V.; Madakam, S.; Lurie, Y.; Mark, S. Cost, Usability, Credibility, Fairness, Accountability, Transparency, and Explainability Framework for Safe and Effective Large Language Models in Medical Education: Narrative Review and Qualitative Study. JMIR AI 2024, 3, e51834. [Google Scholar] [CrossRef]
  11. Shin, D. User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
  12. Pillai, V. Enhancing Transparency and Understanding in AI Decision-Making Processes. Iconic Res. Eng. J. 2024, 8, 168–172. [Google Scholar]
  13. Zhou, J.; Chen, F.; Holzinger, A. Towards explainability for AI fairness. In Proceedings of the International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, Vienna, Austria, 18 July 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 375–386. [Google Scholar]
  14. Shaban-Nejad, A.; Michalowski, M.; Brownstein, J.S.; Buckeridge, D.L. Guest editorial explainable AI: Towards fairness, accountability, transparency and trust in healthcare. IEEE J. Biomed. Health Inform. 2021, 25, 2374–2375. [Google Scholar] [CrossRef]
  15. Diakopoulos, N.; Koliska, M. Algorithmic transparency in the news media. Digit. J. 2017, 5, 809–828. [Google Scholar] [CrossRef]
  16. Valiente, R.; Toghi, B.; Pedarsani, R.; Fallah, Y.P. Robustness and adaptability of reinforcement learning-based cooperative autonomous driving in mixed-autonomy traffic. IEEE Open J. Intell. Transp. Syst. 2022, 3, 397–410. [Google Scholar] [CrossRef]
  17. Zhao, C.; Chu, D.; Deng, Z.; Lu, L. Human-Like Decision Making for Autonomous Driving with Social Skills. IEEE Trans. Intell. Transp. Syst. 2024, 25, 12269–12284. [Google Scholar] [CrossRef]
  18. Wang, J.; Yuan, Y.; Luo, Z.; Xie, K.; Lin, D.; Iqbal, U.; Fidler, S.; Khamis, S. Learning Human Dynamics in Autonomous Driving Scenarios. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 20796–20806. [Google Scholar]
  19. Liao, H.; Li, Z.; Shen, H.; Zeng, W.; Liao, D.; Li, G.; Xu, C. Bat: Behavior-aware human-like trajectory prediction for autonomous driving. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 10332–10340. [Google Scholar]
  20. Li, X.; Bai, Y.; Cai, P.; Wen, L.; Fu, D.; Zhang, B.; Yang, X.; Cai, X.; Ma, T.; Guo, J.; et al. Towards knowledge-driven autonomous driving. arXiv 2023, arXiv:2312.04316. [Google Scholar]
  21. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2021, 54, 1–35. [Google Scholar] [CrossRef]
  22. Ye, Y.; Ding, J.; Wang, T.; Zhou, J.; Wei, X.; Chen, M. Fairlight: Fairness-aware autonomous traffic signal control with hierarchical action space. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 42, 2434–2446. [Google Scholar] [CrossRef]
  23. Roh, Y.; Lee, K.; Whang, S.E.; Suh, C. Improving fair training under correlation shifts. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; PMLR: London, UK, 2023; pp. 29179–29209. [Google Scholar]
  24. Njoku, J.N.; Nwakanma, C.I.; Lee, J.M.; Kim, D.S. Enhancing Security and Accountability in Autonomous Vehicles through Robust Speaker Identification and Blockchain-Based Event Recording. Electronics 2023, 12, 4998. [Google Scholar] [CrossRef]
  25. Pokam, R.; Debernard, S.; Chauvin, C.; Langlois, S. Principles of transparency for autonomous vehicles: First results of an experiment with an augmented reality human–machine interface. Cogn. Technol. Work 2019, 21, 643–656. [Google Scholar] [CrossRef]
  26. Llorca, D.F.; Hamon, R.; Junklewitz, H.; Grosse, K.; Kunze, L.; Seiniger, P.; Swaim, R.; Reed, N.; Alahi, A.; Gómez, E.; et al. Testing autonomous vehicles and AI: Perspectives and challenges from cybersecurity, transparency, robustness and fairness. arXiv 2024, arXiv:2403.14641. [Google Scholar]
  27. Xu, Y.; Yang, X.; Gong, L.; Lin, H.C.; Wu, T.Y.; Li, Y.; Vasconcelos, N. Explainable object-induced action decision for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9523–9532. [Google Scholar]
  28. Atakishiyev, S.; Salameh, M.; Yao, H.; Goebel, R. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions. IEEE Access 2024, 12, 101603–101625. [Google Scholar] [CrossRef]
  29. Omeiza, D.; Webb, H.; Jirotka, M.; Kunze, L. Towards accountability: Providing intelligible explanations in autonomous driving. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 231–237. [Google Scholar]
  30. Rizaldi, A.; Althoff, M. Formalising traffic rules for accountability of autonomous vehicles. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015; pp. 1658–1665. [Google Scholar]
  31. Sadid, H.; Antoniou, C. Dynamic Spatio-temporal Graph Neural Network for Surrounding-Aware Trajectory Prediction of Autonomous Vehicles. IEEE Trans. Intell. Veh. 2024. [Google Scholar] [CrossRef]
  32. Bi, W.; Cheng, X.; Xu, B.; Sun, X.; Xu, L.; Shen, H. Bridged-gnn: Knowledge bridge learning for effective knowledge transfer. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 99–109. [Google Scholar]
  33. Ye, Z.; Kumar, Y.J.; Sing, G.O.; Song, F.; Wang, J. A Comprehensive Survey of Graph Neural Networks for Knowledge Graphs. IEEE Access 2022, 10, 75729–75741. [Google Scholar] [CrossRef]
  34. Goertzel, B. Artificial general intelligence: Concept, state of the art, and future prospects. J. Artif. Gen. Intell. 2014, 5, 1–48. [Google Scholar] [CrossRef]
  35. Goertzel, B.; Pennachin, C. Artificial General Intelligence; Springer: Berlin/Heidelberg, Germany, 2007; Volume 2. [Google Scholar]
  36. Baum, S. A survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy; Working Paper; Global Catastrophic Risk Institute: Washington, DC, USA, 2017; p. 17-1. [Google Scholar]
  37. Zhang, J.; Zan, H.; Wu, S.; Zhang, K.; Huo, J. Adaptive Graph Neural Network with Incremental Learning Mechanism for Knowledge Graph Reasoning. Electronics 2024, 13, 2778. [Google Scholar] [CrossRef]
  38. Feng, S.; Zhou, C.; Liu, Q.; Ji, X.; Huang, M. Temporal Knowledge Graph Reasoning Based on Entity Relationship Similarity Perception. Electronics 2024, 13, 2417. [Google Scholar] [CrossRef]
  39. Li, Y.; Lei, Y.; Yan, Y.; Yin, C.; Zhang, J. Design and Development of Knowledge Graph for Industrial Chain Based on Deep Learning. Electronics 2024, 13, 1539. [Google Scholar] [CrossRef]
  40. Lei, X.; Zhang, Z.; Dong, P. Dynamic path planning of unknown environment based on deep reinforcement learning. J. Robot. 2018, 2018, 5781591. [Google Scholar] [CrossRef]
  41. Bird, S.; Dudík, M.; Edgar, R.; Horn, B.; Lutz, R.; Milan, V.; Sameki, M.; Wallach, H.; Walker, K. Fairlearn: A Toolkit for Assessing and Improving Fairness in AI; Tech. Rep. MSR-TR-2020-32; Microsoft: Redmond, WA, USA, 2020. [Google Scholar]
  42. Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017; Volume 17, pp. 4691–4697. [Google Scholar]
  43. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9, 1–16. [Google Scholar] [CrossRef]
  44. Kemper, J.; Kolkman, D. Transparent to whom? No algorithmic accountability without a critical audience. Inf. Commun. Soc. 2019, 22, 2081–2096. [Google Scholar] [CrossRef]
  45. Oliveira, L.; Burns, C.; Luton, J.; Iyer, S.; Birrell, S. The influence of system transparency on trust: Evaluating interfaces in a highly automated vehicle. Transp. Res. Part F Traffic Psychol. Behav. 2020, 72, 280–296. [Google Scholar] [CrossRef]
  46. Liu, Y.C.; Figalová, N.; Bengler, K. Transparency assessment on level 2 automated vehicle HMIs. Information 2022, 13, 489. [Google Scholar] [CrossRef]
  47. Cysneiros, L.M.; Raffi, M.; do Prado Leite, J.C.S. Software transparency as a key requirement for self-driving cars. In Proceedings of the 2018 IEEE 26th International Requirements Engineering Conference (RE), Banff, AB, Canada, 20–24 August 2018; pp. 382–387. [Google Scholar]
  48. Kropka, C. “Cruise”ing for “Waymo” Lawsuits: Liability in Autonomous Vehicle Crashes. Richmond Journal of Law and Technology, 23 November 2016. [Google Scholar]
Figure 1. AFTEA architecture showing the inclusiveness of each element.
Figure 2. An architecture describing the interrelation of each element.
