1. Introduction
The convergence of IoT and AI is transforming modern cyber-physical systems, from smart homes and grids to autonomous healthcare and industrial automation [
1]. However, alongside these benefits arise significant challenges around trust, transparency, and accountability in AI-driven IoT systems. Stakeholders often lack visibility into how an AI arrives at a decision, making it difficult to trust automated actions in safety-critical or sensitive domains. Regulators and users are demanding greater explainability and traceability for AI decisions, and new laws reflect this need [
2]. For example, the European Union’s GDPR grants individuals a right to an explanation of algorithmic decisions [
3], and the EU AI Act mandates that high-risk AI systems maintain decision logs for oversight [
4]. These trends underscore the pressing need for robust audit mechanisms to record what decision an AI system made and why, especially in complex IoT environments where malfunctions or biases can lead to serious consequences.
Blockchain technology has emerged as a promising tool to address these trust gaps by providing a decentralized, tamper-evident ledger of events. Once data (such as an AI decision and its inputs) is recorded on a blockchain, it cannot be altered retroactively, creating an immutable audit trail that multiple stakeholders (device owners, service providers, and regulators) can trust without a central authority [
5,
6]. Despite growing interest in combining AI and blockchain, the specific idea of using blockchain ledgers to log the provenance of AI decisions in IoT remains under-explored. Many existing IoT deployments lack the comprehensive logging of AI behavior, and traditional logs—if they exist—are centralized and prone to tampering or loss. At the same time, much of the blockchain for IoT research has focused on device authentication, data integrity, or cryptocurrency applications, rather than detailed audit trails of algorithmic decision-making. Thus, there is a clear need to integrate AI decision provenance with blockchain-based immutability to fulfill emerging transparency requirements.
In this paper, we present a novel framework that fills this gap by logging every significant AI inference in an IoT system to a blockchain ledger along with key metadata to explain and reproduce that decision. This ensures a permanent, verifiable record exists for subsequent accountability—whether for debugging model errors, investigating incidents, demonstrating regulatory compliance, or providing explanations to users. For clarity, we define AI decision provenance as the record of all the information (inputs, model identifiers, etc.) necessary to trace how an AI decision was made, and we refer to the tamper-proof collection of these records as an immutable audit trail.
Our contributions can be summarized as follows: (i) We design a blockchain-based architecture for IoT that immutably records AI decisions and their provenance (inputs, model ID, and outputs) on a permissioned ledger. (ii) We present a working smart contract design to log decisions, and we evaluate its feasibility based on estimated performance metrics (throughput, latency, and storage overhead) on a representative IoT testbed. (iii) We analyze how our approach meets governance and compliance requirements, addressing privacy (via hashing and off-chain storage) and security threats, and we outline mechanisms for the GDPR’s “right to erasure” and system governance.
To our knowledge, this is one of the first works to intertwine blockchain logging with AI decision-making in IoT, going beyond prior efforts by focusing explicitly on the provenance of AI-driven actions. The remainder of this paper is organized as follows:
Section 2 reviews related work and situates our research within the existing literature on blockchain, AI, and IoT convergence, identifying the open gaps.
Section 3 details our proposed methodology and system architecture for blockchain-based AI decision provenance in IoT.
Section 4 discusses the relevant regulatory and governance landscape and how our approach addresses key requirements for trust and transparency.
Section 5 presents the conceptual framework and applies it to representative case studies to illustrate its usage.
Section 6 provides a critical discussion of the approach, including potential challenges (scalability, privacy, ethics, and security) and corresponding mitigation strategies. Finally,
Section 7 concludes the paper, summarizing the contributions and highlighting directions for future research and development.
2. Literature Review
2.1. Blockchain and IoT Convergence
Blockchain technology has been widely explored as a means to strengthen security and trust in IoT systems. Traditional IoT architectures rely on centralized brokers or cloud servers to aggregate device data, which creates single points of failure and raises concerns about data integrity and control [
5]. By contrast, blockchain offers a distributed ledger where data from IoT devices can be recorded in an immutable, append-only chain of blocks secured by cryptographic hashes. Researchers have shown that this approach can mitigate many IoT vulnerabilities by removing centralized control and making data tamper-evident [
7]. For example, Dorri et al. (2017) proposed an architecture for smart home IoT where blockchain maintains an access control list and transaction log for device interactions, improving security and privacy [
8]. In a similar vein, Banafa (2017) outlined the benefits and challenges of IoT-blockchain convergence, highlighting how decentralization can increase resilience and trust in IoT data exchanges [
9]. A recent survey by Fotia et al. (2022) further discusses decentralized trust management for IoT using blockchain, emphasizing that smart contracts can authenticate devices and verify data integrity in IoT’s multi-layered architecture [
10]. In parallel, Bhuva and Kumar (2023) [
11] demonstrated blockchain’s applicability beyond terrestrial IoT, using it to secure cognitive communication in space networks. Their approach showcases blockchain’s ability to establish distributed trust and mitigate tampering in high-stakes, autonomous systems, principles that are equally vital to intelligent IoT. Overall, the literature suggests that blockchain can provide security, transparency, and traceability to IoT networks, with promising applications in supply chain monitoring, smart cities, and sensor networks [
2]. However, much of this work has focused on general data or transaction logs (e.g., logging sensor readings or device access events on-chain), rather than the specific context of AI or machine learning processes running on IoT data.
2.2. AI Decision-Making and the Need for Audit Trails
As AI algorithms are increasingly deployed in IoT environments (for tasks such as anomaly detection, predictive maintenance, or autonomous control), concerns have grown regarding the opacity of AI decisions. The literature on explainable AI (XAI) and algorithmic accountability has repeatedly pointed out that AI systems require traceability and auditability to be trustworthy [
12,
13]. Users and regulators are wary of “black box” models whose inner workings are not transparent, especially in high-stakes applications like healthcare or autonomous vehicles. Several works have proposed frameworks for algorithmic auditing and logging. For instance, Ananny and Crawford (2018) discuss “algorithmic accountability” and the importance of audit mechanisms to understand AI outputs [
14]. Sokol et al. (2020) present methods for generating human-understandable explanations from AI systems to accompany decisions [
15]. However, ensuring these explanations and decisions are stored reliably over time remains a challenge. Traditional databases can log decisions, but those logs could be modified or deleted (intentionally or accidentally), undermining their usefulness in forensic analysis or compliance scenarios. This is where blockchain’s properties become attractive—by creating immutable records of AI system behavior, one can guarantee an audit trail that is secure from tampering. A blog by IBM described how blockchain could maintain an auditable log of the data and evidence that led an AI model to a particular prediction (illustrated with a simple fruit classification example) [
1]. Academic works such as “Blockchain as a platform for Artificial Intelligence Transparency” also suggest that transparent ledgers can help verify that AI models and their training data have not been tampered with [
16]. Complementing algorithmic explainability, recent work on continuous biometric authentication (Bhuva and Kumar, 2022) [
17] ensures that not only are AI decisions traceable, but the identities behind interactions with IoT systems are continuously verified. This reinforces the case for associating AI actions with authenticated actors in an immutable way. These discussions provide a foundation, but concrete implementations in IoT settings are still limited. Our work builds on this nascent area by focusing specifically on AI decision provenance in IoT—capturing not only that a decision was made, but contextual data about how it was made (inputs, model ID, etc.) in a way that is resilient and sharable.
2.3. Data Provenance and Audit Trails via Blockchain
The concept of using blockchains for data provenance (i.e., tracking the origin and evolution of data) has been explored in various domains, which we draw inspiration from. Liang et al. (2017) introduced ProvChain, a blockchain-based data provenance architecture in cloud environments that embeds provenance metadata (who accessed/modified data and when) into blockchain transactions [
18]. This allows cloud users to trace how their data is used in a multi-user environment with strong guarantees of integrity. ProvChain demonstrated that blockchain can serve as a robust audit log, resilient to deletion or forgery, and this concept can be analogously applied to IoT sensor data and AI outputs. In the IoT realm, researchers have proposed tailored provenance schemes: for example, Sharma (2023) describes a hierarchical blockchain-based data provenance system for IoT to address scalability, where multiple interconnected ledgers track provenance at different layers or domains of an IoT network [
19]. Also related is the work by Zyskind et al. (2015) on using blockchain for personal data management, where individuals could log data access events on a ledger for transparency [
20]. These efforts, while not AI-focused, show that blockchain is effective for audit trails in distributed systems. We leverage similar techniques (smart contracts, hashing of data, and event logs on chain) but target them to recording AI decision events and their provenance. Another relevant area is blockchain in accountable logging systems. Regueiro et al. (2021) present a blockchain-based audit trail mechanism for general IT systems, implementing a prototype where an enterprise’s system events are recorded on a private Ethereum-based chain [
21]. Their approach uses smart contracts to log events and a blockchain monitor for auditors to query the log. This shows the feasibility of integrating blockchain logging into existing software workflows. Our literature review indicates a clear research gap: while blockchain-based IoT data integrity and blockchain-based AI governance have been studied separately, integrating blockchain to provide an immutable audit trail for AI-driven IoT decisions remains largely unexplored and warrants original research.
In summary, the literature confirms that (a) IoT systems benefit from blockchain for secure data sharing and logging; (b) AI systems need better transparency and audit mechanisms; and (c) marrying blockchain with AI in IoT is a promising approach to achieve trustworthy and accountable autonomy. However, prior works have not fully realized an end-to-end solution that ties together IoT device data, AI decision-making, and blockchain logging. The novel perspective we pursue is to treat AI decisions in IoT as first-class transactions to be immutably recorded, akin to financial transactions in cryptocurrency but for “AI logic transactions”. By doing so, our approach goes beyond existing discussions, introducing a ledger of AI decisions as a novel infrastructural component of IoT systems. The next sections detail this approach and how it addresses the identified gaps.
3. Proposed Methodology
To realize immutable audit trails for AI decisions in IoT, we propose a comprehensive methodology that combines a carefully designed system architecture with blockchain mechanisms tailored to IoT constraints.
Figure 1 illustrates IoT devices and gateways logging AI decisions to a blockchain ledger, which is then accessed by users and auditors. The blockchain ensures the integrity and transparency of the logged decisions. This provides an overview of the envisioned framework, depicting how various components (IoT devices, AI decision engine, blockchain network, and users/auditors) interact to create a trusted ecosystem. At a high level, our approach works as follows: as IoT devices generate data and AI algorithms produce decisions or predictions based on that data, every decision event (along with essential provenance information) is encapsulated into a transaction and written to a blockchain ledger. Smart contracts on the blockchain define the structure of these decision records and enforce rules (e.g., who can write or read the log). The blockchain network—composed of distributed nodes operated by stakeholders (manufacturers, service providers, and perhaps even regulators)—reaches consensus on each new decision record, timestamping and immutably linking it in the chain. This produces a chronological, tamper-proof log of AI operations that can be queried or audited by authorized parties at any time.
4. System Architecture
Our proposed architecture is structured in layers, aligning with typical IoT deployments, and augmented with blockchain and AI components at appropriate layers. We delineate four main layers: (L1) data generation, (L2) data aggregation, (L3) data services and blockchain, and (L4) application and audit interface.
Figure 2 illustrates these layers and their key functionalities in the context of our framework.
Figure 2 shows the layered architecture of the proposed blockchain-based AI audit trail system for IoT (inspired by the LoRaTRUST architecture [
7]).
4.1. L1: Data Generation (IoT Devices)
This bottom layer consists of IoT sensors, actuators, and embedded devices that generate raw data and also often execute local AI models (especially with the rise of edge AI). Each device in L1 is provisioned with a unique cryptographic identity (public–private key pair or blockchain address) which it uses to sign the data or decisions it produces. Lightweight software on the device (or associated gateway) handles sensor data retrieval (component C1.1) and may perform on-device AI inference if capable. Before sending data upward, the device can encrypt and/or sign the data (C1.4 data encrypter, C1.3 data signer) to ensure confidentiality and authenticity. For example, an IoT camera with an AI model for object detection would capture an image, run the detection locally, and then package the detected objects (decision) along with a hash of the image and a timestamp, sign it with its key, and send it as an authenticated record. The device registration with the blockchain occurs at setup: smart contracts on the blockchain maintain a verifiable registry of valid IoT nodes, recording each device’s public key, type, and metadata [
7]. Only registered, authenticated devices are allowed to submit decisions to the audit trail, preventing illegitimate data injection.
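To illustrate the device-side packaging step described above, the following Python sketch shows how a device might hash its input, attach a timestamp, and sign the resulting record before sending it upward. It uses an HMAC with a per-device secret as a simplified stand-in for the asymmetric (public/private key) signature our architecture assumes; the field names are illustrative, not a fixed schema.

```python
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"per-device-secret"  # stand-in for the device's private signing key

def build_signed_record(device_id: str, model_id: str,
                        raw_input: bytes, decision: str) -> dict:
    """Package an AI decision with its provenance metadata and sign it."""
    record = {
        "deviceID": device_id,
        "modelID": model_id,
        "inputDataHash": hashlib.sha256(raw_input).hexdigest(),
        "decisionOutput": decision,
        "timestamp": int(time.time()),
    }
    # Sign a canonical serialization so the gateway (and later, auditors)
    # can verify the record was not altered in transit.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

# Example: an IoT camera logging an object-detection decision
rec = build_signed_record("cam-42", "objdet-v1", b"<jpeg bytes>", "person_detected")
```

In a deployment, the HMAC would be replaced by an ECDSA or Ed25519 signature keyed to the device identity registered on-chain, so that verification requires only the device's public key.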
4.2. L2: Data Aggregation (Edge/Fog Gateways)
This layer includes IoT gateways, edge servers, or fog nodes that collect data/decisions from L1 devices (often over local networks like LoRaWAN, WiFi, etc.). Components in L2 aggregate and preprocess incoming information (C2.1 data router) and can enforce access control policies (C2.2 ACL policy) to filter or control which data moves onward. In our framework, L2 also serves as the liaison to the blockchain network. Resource-constrained sensors may not run a blockchain client themselves, so a gateway can batch multiple device decisions into blockchain transactions or serve as a proxy node that uploads device-signed records to the blockchain. Importantly, the gateway also attaches its own identity and possibly additional context (e.g., location, network status) to the record, which is useful for provenance. We note that gateways too are registered on-chain (similar to devices) and their contributions can be audited [
7]. By the time data reaches the end of L2, it has been authenticated, optionally encrypted, and is ready to be committed to the ledger.
4.3. L3: Data Services and Blockchain
This is the core layer where the blockchain ledger and related services reside. It encompasses two sub-components: (L3.1) off-chain backend services and (L3.2) blockchain services. The off-chain services (if used) might include databases or storage for large data (like raw sensor readings or images) that are referenced from the blockchain to save space. For instance, if an AI decision involves large input data, the raw data might be stored in secure cloud storage or IPFS, and only its hash and a link are put on-chain. The blockchain service components are the smart contracts and network nodes that form the distributed ledger. We implement a smart contract (AuditTrailContract) that defines a structure for AI decision records: e.g., deviceID, gatewayID, modelID, inputDataHash, outputDecision, confidence, timestamp. When an IoT gateway calls this contract to log a new decision, the contract validates the identities (only registered device/gateway combos can log; this is enforced using the registry contract) and then emits an event and stores the record on-chain [
21]. We choose an appropriate blockchain platform here—for example, a permissioned blockchain (like Hyperledger Fabric or Quorum) is suitable to meet IoT performance needs and privacy, since participants are known entities. Using a permissioned chain means we avoid the expensive proof-of-work mining of public blockchains; instead, a lightweight consensus protocol (such as Practical Byzantine Fault Tolerance (PBFT) or Proof of Authority) is used among a set of validator nodes. This is crucial for IoT, as traditional proof of work is computationally and energetically prohibitive in this context. Our design aligns with prior research that calls for lightweight blockchain consensus for IoT to achieve low latency and low energy consumption [
22]. In fact, one could implement a specialized consensus like Hierarchical Proof of Contribution (HPoC) which was designed for IoT settings to reduce node load, or simply use a fixed set of authority nodes (e.g., the IoT platform operator and a few independent auditors). The outcome in L3 is that all submitted AI decisions are validated and recorded on the blockchain ledger, each block containing one or more decision records. The blockchain ensures ordering (time-sequenced log) and immutability (earlier blocks cannot be altered without breaking the chain’s cryptographic links). Additionally, we integrate a Blockchain Monitor API in this layer—a RESTful service that external applications or auditors in L4 can use to query the stored records without needing direct blockchain access (this concept is similar to the blockchain monitor in Regueiro et al.’s system [
21]). The monitor listens to events from the AuditTrailContract and caches relevant info for user-friendly retrieval.
4.4. L4: Application and Audit Interface
The top layer represents the end-user applications, analytics dashboards, or audit tools that make use of the recorded AI decision provenance. This layer includes components like user authentication (C4.1 Auth), secure data access interfaces (C4.2 Data Access) for pulling relevant logs, data verifiers or analytics engines (C4.3 Data Verifier) that can analyze the blockchain data (for example, to detect patterns of bias or errors in decisions), and reporting tools (C4.4 Reporting) to present audit trail information to human auditors or system managers. A variety of stakeholders interact at L4: IoT application users (such as a utility company in a smart grid scenario, or a doctor in healthcare) can retrieve the sequence of decisions affecting them to understand system behavior. Regulators or third-party auditors can be granted access to inspect the logs (likely via the monitor API or by running their own blockchain node) to ensure compliance and accountability. We enforce that only authorized parties can read potentially sensitive data—while the blockchain ensures integrity, it does not inherently make all data public if we use a permissioned chain. Thus, access control is layered on via the Auth and Data Access services (for instance, using role-based permissions where a patient can see their health device’s decision log, a doctor can see their patients’ logs, an auditor can see anonymized aggregate logs, etc.). The Data Verifier component can cross-check the provenance: for example, it might fetch a decision’s input hash from the blockchain and compare it to a recomputed hash of the input stored off-chain to ensure consistency (detecting any tampering in storage). It could also verify digital signatures on the decision to confirm it was indeed produced by the claimed device and model. These verification steps add an extra layer of trustworthiness and are automated via smart contracts and cryptographic protocols.
The synergy of these layers yields a system where AI decisions in IoT are born verifiable. From the instant a decision is made at the edge, it is signed and eventually anchored in an immutable ledger, along with information linking it to the raw data and the algorithm that produced it. This design addresses key requirements: tamper-resistance of evidence (blockchain guarantees immutability), real-time logging (edge gateways push decisions to the chain as they occur, with minimal delay), provenance richness (smart contracts ensure metadata like device and model IDs are included, enabling later explanation), and availability (the distributed ledger means even if some nodes fail, the audit trail is preserved across others).
5. Data Provenance and Smart Contract Design
A critical aspect of the methodology is how we represent and store the provenance of AI decisions on the blockchain. We define the provenance as all information necessary to trace the decision’s origin and context. Concretely, for each AI inference made in the IoT system, we record the unique ID of the device or sensor that provided input, the ID or version of the AI model/algorithm that made the decision (this could be a model hash or an index referencing a model registry), a summary or hash of the input data on which the decision was based, the decision output itself (e.g., a predicted label or control action), any confidence score or explanatory pointers (if available, such as an explanation vector or a link to an explanation artifact), the timestamp of the decision, and the identity of the entity that logged it (which might be the gateway or edge node). This information is structured as a tuple or object that the smart contract on the blockchain can store.
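To make the provenance tuple concrete, the sketch below models it as a Python dataclass as it might appear in gateway software before serialization to the chain. The field names mirror the elements enumerated above; `confidence` and `explanation_ref` are the optional elements, and the structure is illustrative rather than a fixed schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProvenanceRecord:
    device_id: str                         # device/sensor that supplied the input
    model_id: str                          # hash or version of the AI model used
    input_hash: str                        # hash (or summary) of the input data
    decision: str                          # the decision output (label, action)
    timestamp: int                         # when the decision was made
    logged_by: str                         # gateway or edge node that logged it
    confidence: Optional[float] = None     # optional confidence score
    explanation_ref: Optional[str] = None  # optional pointer to an explanation artifact

# Example: an anomaly-detection alert from a sensor, logged by a gateway
r = ProvenanceRecord("sensor-7", "anomaly-v3", "0xabc123",
                     "ALERT", 1_700_000_000, "gw-1", 0.92)
```

Freezing the dataclass mirrors the intent of the on-chain record: once constructed, the provenance tuple is treated as immutable.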
We implement a smart contract (in Ethereum Solidity, or equivalent chaincode on other platforms) called AuditTrailContract. The key parts of this contract are as follows:
Listing 1: Smart Contract for Decision Logging

struct DecisionRecord {
    bytes32 deviceID;
    bytes32 modelID;
    bytes32 inputDataHash;
    string  decisionOutput;
    uint256 timestamp;
    address loggedBy; // gateway or node that submitted the record
}

mapping(uint256 => DecisionRecord) public decisions; // log storage
uint256 public decisionCount;

event DecisionLogged(uint256 indexed id, bytes32 deviceID, bytes32 modelID,
    bytes32 inputDataHash, string decisionOutput, uint256 timestamp,
    address loggedBy);

function logDecision(bytes32 deviceID, bytes32 modelID, bytes32 inputDataHash,
    string memory decisionOutput) public
{
    require(isRegisteredDevice(deviceID), "Device not registered");
    require(isAuthorizedLogger(msg.sender, deviceID), "Not authorized");
    decisionCount += 1;
    decisions[decisionCount] = DecisionRecord(deviceID, modelID, inputDataHash,
        decisionOutput, block.timestamp, msg.sender);
    emit DecisionLogged(decisionCount, deviceID, modelID, inputDataHash,
        decisionOutput, block.timestamp, msg.sender);
}
In the above, isRegisteredDevice checks the device registry contract to ensure the device is known (preventing fake device entries) [
7], and isAuthorizedLogger ensures that the msg.sender (the blockchain account calling this function) is allowed to log for that device (for example, the gateway or the device itself if it has direct chain access). Each new decision gets an index decisionCount and is stored in the contract state as well as emitted as an event. Emitting an event (DecisionLogged) is useful for off-chain monitoring tools to catch the log without constantly querying the chain.
We assume that each IoT device or gateway holds credentials to authenticate to the blockchain (like an Ethereum account key). In a permissioned network, these might be tied to X.509 certificates or other identities. The registration of devices/gateways happens via an administrative process (either a privileged contract function called by the IoT platform admin at deployment time, or automatically when a device first joins using a secure join protocol). That registry contract essentially acts as a permission layer on who can write to the audit trail.
A design decision here is data storage vs. cost. Blockchain storage is typically expensive (in terms of throughput and, on public chains, fees). In permissioned chains, we do not have fees, but performance could still be an issue if we log very high-frequency events (like sensor readings every second). Our focus is on AI decisions, which usually occur less frequently than raw data generation (e.g., an anomaly detection model might raise an alert a few times a day, or a predictive model might make hourly predictions). Thus, the volume of transactions should be manageable. If needed, we can aggregate multiple decisions into one transaction (logging an array of DecisionRecords in one call) to reduce overhead. Alternatively, for bulk data provenance (e.g., every single sensor reading), it might be more efficient to store only hashes on-chain (for integrity) and keep the detailed log off-chain. In our methodology, we are flexible: high-value events (like a critical decision or a violation) can be fully logged on-chain, whereas routine data can be logged as hashed references. This approach aligns with the idea of using off-chain storage + on-chain hash to balance load, a pattern used in systems like Tierion or Factom in data auditing.
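The batching strategy above can be sketched in Python: routine records are hashed individually, and a single combined digest is anchored on-chain while the full records remain off-chain. This is a simplified flat-concatenation scheme, not a full Merkle tree; the function and record fields are illustrative assumptions.

```python
import hashlib
import json

def anchor_digest(records: list[dict]) -> str:
    """Combine a batch of decision records into one digest for on-chain anchoring.

    Each record is hashed individually, the hashes are concatenated in order,
    and a single SHA-256 digest over the sequence is anchored on-chain. Any
    change to any record (or to their order) changes the anchored digest.
    """
    leaf_hashes = [
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    ]
    return hashlib.sha256("".join(leaf_hashes).encode()).hexdigest()

batch = [{"deviceID": "s1", "decision": "OK"},
         {"deviceID": "s2", "decision": "ALERT"}]
digest = anchor_digest(batch)  # this single value goes on-chain
```

A Merkle tree over the leaf hashes would additionally allow membership proofs for individual records without revealing the whole batch, at the cost of slightly more bookkeeping.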
Another element is the consensus protocol used by the blockchain network. As mentioned, IoT settings benefit from permissioned consensus. We might configure the blockchain nodes (which could be running on edge servers, cloud servers of the IoT service provider, and perhaps nodes run by third-party auditors) to use PBFT or a variant. PBFT can achieve fast finality (confirmation of transactions) in small networks with known nodes, which is ideal for near-real-time logging. There is precedent in the literature for using PBFT-based private Ethereum networks for audit logging [
21]. An alternative is Proof of Authority (PoA), in which a few designated nodes validate all blocks, ensuring low latency (a few seconds per transaction), energy efficiency, and controlled access to blockchain membership. PoA is simpler to implement (e.g., Ethereum’s Clique PoA or Polygon Edge for IoT) and offers the kind of non-financial, lightweight consensus suggested by Nie et al. (2022) [
22]. We do not use Proof of Work (to avoid heavy computation), and even Proof of Stake may be unnecessary overhead if the participants are pre-vetted; however, if a consortium of organizations runs the audit trail, a stake-based mechanism could be introduced to decentralize trust among them.
To complement our architectural analysis, we estimated the on-chain overhead associated with the logDecision smart contract function. Based on Solidity code complexity and Ethereum gas benchmarks, this operation is projected to consume approximately 80,000 gas, a typical value for storing a compact struct with several bytes32 fields and emitting an event [
23]. In a permissioned blockchain network using Proof-of-Authority (PoA) consensus, block confirmation times average 5–6 s, with low variance and energy cost [
24]. On public Ethereum networks, confirmation latency typically ranges from 13 to 15 s, depending on network congestion [
25]. Although permissioned ledgers do not require transaction fees, we estimate the economic equivalent by applying public gas rates: at 20 Gwei, the logging transaction would cost approximately 0.0016 ETH (around 3 USD) [
26]. These estimates suggest that the system can support real-time or near-real-time AI decision logging in IoT environments without introducing significant latency or computational cost.
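The cost figure above is simple arithmetic, reproduced in the sketch below. The 80,000 gas and 20 Gwei figures come from the estimate in the text; the ETH/USD rate of 1,875 is an assumed price chosen so that 0.0016 ETH corresponds to roughly 3 USD.

```python
GAS_PER_LOG = 80_000    # estimated gas for one logDecision call (from the text)
GAS_PRICE_GWEI = 20     # assumed public-network gas price (from the text)
ETH_USD = 1_875         # assumed ETH price; makes 0.0016 ETH ~= 3 USD

# 1 Gwei = 1e-9 ETH, so cost in ETH = gas * price * 1e-9
cost_eth = GAS_PER_LOG * GAS_PRICE_GWEI * 1e-9
cost_usd = cost_eth * ETH_USD
print(f"{cost_eth:.4f} ETH ~= {cost_usd:.2f} USD per logged decision")
```

On a permissioned network with no fees, this figure is only an economic equivalent; the operational constraint is throughput, not cost.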
To accommodate large data volumes—such as images, sensor streams, or other detailed inputs—and to ensure compliance with privacy regulations, our framework adopts a hybrid model that separates on-chain metadata from off-chain content. Specifically, only essential metadata is stored on-chain, including a cryptographic hash of the input data, the model identifier, and a URI or content identifier (CID) if using decentralized storage like IPFS. The actual data, which may contain sensitive or high-volume information, is retained in secure off-chain repositories such as encrypted databases, IPFS, or cloud storage. This design ensures both data integrity and regulatory compliance; any unauthorized change to the off-chain data will produce a hash mismatch, making tampering detectable, while sensitive information remains inaccessible from the public ledger, thereby aligning with GDPR and similar privacy mandates. The inputDataHash field within the smart contract is derived from the key input features used in the AI inference. Authorized auditors can retrieve the corresponding data from off-chain storage and verify its authenticity against the hash stored on-chain—an approach inspired by established systems like Tierion and Factom, which use blockchain as an anchor for external data verification.
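The auditor-side verification step described above reduces to recomputing a hash and comparing it with the on-chain value. The sketch below illustrates this check; the function name and data are illustrative, and SHA-256 stands in for whichever hash function the deployment standardizes on.

```python
import hashlib

def verify_off_chain_data(raw_data: bytes, on_chain_hash: str) -> bool:
    """Recompute the hash of off-chain data and compare it with the
    inputDataHash stored on-chain; a mismatch indicates tampering."""
    return hashlib.sha256(raw_data).hexdigest() == on_chain_hash

original = b"soil_moisture=10%"
anchored = hashlib.sha256(original).hexdigest()  # value stored on-chain at log time

assert verify_off_chain_data(original, anchored)                   # data intact
assert not verify_off_chain_data(b"soil_moisture=20%", anchored)   # tampering detected
```

Because only the hash is on-chain, this check preserves confidentiality: a party holding just the ledger learns nothing about the raw data, while a party holding both can prove integrity.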
To make the methodology concrete, consider an IoT-based smart irrigation system [
27], detailed in
Table 1. Soil moisture sensors (devices) send readings to an edge AI model that decides whether water valves should open or remain closed. Each decision made by the AI triggers a logging event captured immutably on a blockchain ledger, enhancing transparency and auditability. As illustrated in
Table 1, Sensor 123 detects moisture at 10 percent and transmits this measurement to Gateway A. Gateway A forwards the data to the AI model, which, due to a predefined threshold of 15 percent (specified by the IrrigationModelv2), determines the appropriate action: “valve = OPEN” for field X. Gateway A then prepares a transaction containing critical metadata—including the deviceID (123), modelID (IrrigationModelv2), a hash of the input data (0xabc123), and the decision outcome (OPEN valve X)—signs it, and submits it to the blockchain. Blockchain nodes verify the authenticity and permissions of Sensor 123 and Gateway A, confirm they are properly registered to field X, and subsequently store the record permanently. An event is emitted to indicate successful logging. Later, an auditor or the farmer can query “why was water released at time T?”, retrieving the detailed transaction record from the blockchain. The hash (0xabc123) from the blockchain can then be cross-referenced with off-chain stored sensor data (stored on IPFS or Gateway A’s database) to confirm the moisture reading was indeed 10 percent. Additionally, referencing the model registry, the farmer can validate that the irrigation threshold was set at 15 percent moisture, confirming the AI’s decision was correct and justified. This structured process significantly enhances transparency: every automated action is verifiable and fully explainable through logged metadata, and blockchain’s immutability ensures that neither farm operators nor device manufacturers can secretly modify decision logs.
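The irrigation walkthrough can be condensed into a short end-to-end sketch: a thresholded edge model produces the decision, and the gateway assembles the log entry with the metadata fields named above. The 15 percent threshold and 10 percent reading come from the scenario; the function names and the hashing of the raw reading are illustrative assumptions.

```python
import hashlib
import time

THRESHOLD = 0.15  # IrrigationModelv2's moisture threshold (15%), per the scenario

def irrigation_decision(moisture: float) -> str:
    """Edge AI model: open the valve when soil moisture falls below threshold."""
    return "valve=OPEN" if moisture < THRESHOLD else "valve=CLOSED"

def build_log_entry(device_id: str, model_id: str,
                    reading: float, gateway_id: str) -> dict:
    """Gateway-side assembly of the decision record to submit on-chain."""
    decision = irrigation_decision(reading)
    return {
        "deviceID": device_id,
        "modelID": model_id,
        "inputDataHash": hashlib.sha256(str(reading).encode()).hexdigest(),
        "decisionOutput": decision,
        "timestamp": int(time.time()),
        "loggedBy": gateway_id,
    }

# Sensor 123 reports 10% moisture via Gateway A
entry = build_log_entry("123", "IrrigationModelv2", 0.10, "GatewayA")
# With moisture at 10% (< 15%), the logged decision is "valve=OPEN"
```

An auditor answering "why was water released at time T?" would fetch this entry from the ledger, recompute the hash of the stored reading, and check the model registry for the threshold in force at that time.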
In implementing this methodology, we also consider performance optimizations. IoT systems may produce bursts of events, so the smart contract should be optimized (e.g., pre-allocating storage and avoiding unnecessary writes). We can also utilize layer-2 scaling solutions if needed: for example, a local off-chain channel could collect decisions and periodically anchor a summary on the main chain (although, since we aim for auditability, we tread carefully with off-chain channels, as they introduce delay in finalizing logs). Given that our chosen examples (smart grid, healthcare, and industrial logs) typically have manageable event rates (compared to, say, a high-frequency trading AI that might produce thousands of decisions a second), our direct on-chain logging approach is practical. To ensure data integrity, each record’s critical fields (such as the input hash and output) are part of the transaction hash that is committed to the blockchain block. Thus, any attempt to change an output later would change the hash and break the chain, making tampering evident. Additionally, device signatures (where used) mean that even if a malicious gateway attempted to fabricate a decision from a device, it would lack the device’s private key to sign it, and the contract could detect the forgery. This cryptographic chain-of-custody from data generation to decision logging is at the heart of our provenance solution.
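The tamper-evidence property can be illustrated with a minimal hash-chained log in Python. This is a deliberate simplification of how transaction hashes bind records into blocks; the helper names are ours, not part of the framework.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash preceding the first record

def chain_records(records):
    """Link records by hashing each together with its predecessor's hash."""
    prev = GENESIS
    chained = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256((prev + body).encode("utf-8")).hexdigest()
        chained.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained):
    """Recompute every link; any altered record breaks the chain."""
    prev = GENESIS
    for entry in chained:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode("utf-8")).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Changing any logged output after the fact changes that record's digest and invalidates every subsequent link, which is the tamper-evidence the text describes.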
We have mentioned logging modelIDs—this implies that there is a way to identify which AI model made the decision. In practice, AI models in IoT could be periodically updated (new firmware, retrained model). We incorporate a simple model management scheme: whenever a model is deployed to devices or edge nodes, a hash of the model binary or its version number is registered on blockchain (this could be another contract or part of the device registry). Then the device includes that model identifier when logging decisions. This creates an immutable record of which model version was responsible for a decision, aiding in accountability (e.g., if later a model is found to be faulty, all decisions made by that version can be traced and reviewed). This idea aligns with emerging practices in AI governance where model provenance is tracked (sometimes called “model lineage”). By utilizing the same blockchain for model registration (or an interoperable one), we ensure that model provenance and decision provenance are linked. To strengthen trust in decision provenance, our architecture supports optional integration with continuous biometric authentication modules, as outlined by Bhuva and Kumar (2022) [
17], enabling fine-grained identity verification for decision-triggering agents at the device or user level.
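A minimal sketch of the model registration scheme follows, with an in-memory class standing in for the on-chain model registry contract (the class and method names are illustrative).

```python
import hashlib

class ModelRegistry:
    """In-memory stand-in for an on-chain model registry contract."""

    def __init__(self):
        self._models = {}  # model_id -> hash of the deployed model binary

    def register(self, model_id: str, model_binary: bytes) -> str:
        """Record the hash of a deployed model version on the ledger."""
        digest = hashlib.sha256(model_binary).hexdigest()
        self._models[model_id] = digest
        return digest

    def verify(self, model_id: str, model_binary: bytes) -> bool:
        """Check that a binary matches the registered version."""
        return self._models.get(model_id) == hashlib.sha256(model_binary).hexdigest()
```

Because each decision log carries the modelID, a faulty version can later be traced: all decisions logged under that identifier are candidates for review.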
In summary, our methodology sets up a secure pipeline: IoT data and AI decisions flow upward through layers, accruing signatures and context, and end up cemented in a blockchain ledger via smart contracts that enforce correctness of origin. The design addresses IoT constraints by using lightweight cryptography and consensus and ensures that the resulting audit trail is rich enough to meet explainability and compliance needs.
6. Regulatory and Governance Considerations
The technical approach described above is deeply influenced by and aligned with evolving regulatory frameworks and governance principles for AI and IoT systems. In this section, we discuss how our blockchain-based audit trail framework addresses key legal and ethical requirements, focusing on the European context (which is at the forefront of AI and IoT regulation) such as the EU AI Act, GDPR, and the Cyber Resilience Act, as well as broader principles of trust, transparency, and accountability in AI. We also highlight any compliance challenges and how the framework is designed to navigate them.
6.1. Regulatory Alignment
Table 2 details the regulatory requirements of key EU frameworks and explicitly maps how the proposed blockchain-based audit trail system meets these requirements. It also highlights potential compliance challenges and identifies strategies to mitigate these challenges.
This mapping demonstrates clear alignment with the legal obligations for AI-enabled IoT systems. Specifically, the framework directly addresses Article 12 of the AI Act by recording each AI decision and its relevant metadata. For the GDPR, we mitigate the challenges of blockchain immutability by hashing personal data and supporting erasure at the off-chain storage level. Security requirements under the CRA are addressed via cryptographic identity and the auditability of software updates.
6.2. Ethical and Governance Principles
Table 3 contextualizes the broader ethical and governance principles of trust, transparency, and accountability and demonstrates how the blockchain framework supports these principles through tangible system capabilities and practical impacts [
2].
6.3. Governance Model and Oversight
To ensure ethical deployment and prevent misuse, our framework incorporates a robust governance model within the blockchain consortium. This model clearly defines the roles and responsibilities of all stakeholders: IoT device operators are responsible for maintaining device registrations and submitting AI decision logs; blockchain node operators manage the network, validate transactions, and oversee upgrades; auditors and regulators have permissioned access to the logs for oversight purposes; and end users or data subjects benefit from the system’s transparency and privacy safeguards. A consortium governance board, composed of representatives from these stakeholder groups, establishes and enforces policies related to access control, node membership, and the acceptable use of audit data. For instance, usage policies may restrict audit logs strictly to safety and compliance monitoring, prohibiting activities such as user profiling or unauthorized surveillance.
To enhance accountability, all access to audit logs is itself logged, creating a meta-audit trail that can be examined to detect misuse. Periodic third-party audits ensure that access aligns with declared purposes. If violations are identified—such as unauthorized querying to monitor user behavior—the governance board can revoke access rights, impose penalties through consortium agreements, or initiate legal actions. The governance framework also manages onboarding and updates: new devices or organizations are incorporated through multi-party consensus, and any updates to smart contracts (e.g., in response to regulatory changes) must be approved collectively. This ensures that no single entity can alter system behavior unilaterally, preserving the integrity and transparency of the audit trail over time.
Our framework includes a clear process for cryptographic key lifecycle management and data privacy enforcement to address GDPR requirements. Each user or device is provisioned with a unique cryptographic identity (e.g., an Ethereum account or certificate) when registered. If a data subject invokes their right to erasure (a deletion request), the system responds by revoking that identity’s key and purging or anonymizing any personal data stored off-chain that could link the user to on-chain records. The key revocation mechanism (implemented via the consortium’s identity management smart contract and/or a certificate authority) marks the user’s public key as invalid, preventing any further submissions or queries by that identity. Critically, this revocation and the removal of off-chain reference data render any existing on-chain log entries effectively unusable for identifying the individual. Since on-chain records contain only pseudonymous identifiers (hashes or device IDs without personal details), once the mapping to the user is securely removed and the key is invalidated, those immutable records can no longer be traced back to the individual. In practice, this approach upholds privacy rights: the blockchain maintains an audit trail for accountability, but through key revocation and data management protocols, we ensure compliance with GDPR’s “right to be forgotten” by making any personal references inaccessible while preserving the integrity of the overall log.
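The key-revocation and erasure flow can be sketched as follows. `IdentityManager` is a hypothetical stand-in for the consortium's identity management contract together with its off-chain mapping store; a real deployment would split these across the contract and a certificate authority.

```python
class IdentityManager:
    """Sketch of consortium identity management with GDPR erasure support."""

    def __init__(self):
        self.keys = {}           # identity -> active public key
        self.revoked = set()     # identities whose keys are invalidated
        self.off_chain_map = {}  # pseudonymous identity -> personal data

    def register(self, identity, public_key, personal_data):
        self.keys[identity] = public_key
        self.off_chain_map[identity] = personal_data

    def erase(self, identity):
        """Right to erasure: revoke the key and purge the off-chain mapping.

        On-chain records keep only the pseudonymous identity; once this
        mapping is gone, they can no longer be linked to the individual.
        """
        self.revoked.add(identity)
        self.off_chain_map.pop(identity, None)

    def can_submit(self, identity):
        return identity in self.keys and identity not in self.revoked
```

After `erase`, the immutable on-chain entries remain for audit integrity, but the revoked identity can neither submit new records nor be resolved back to a person.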
In conclusion, our framework is a proactive response to regulatory trends. It is built to future-proof IoT AI systems against upcoming compliance demands by providing infrastructure for transparency and audit. It transforms compliance from a paperwork exercise into an automated technical feature. While some challenges (like GDPR data deletion and defining access policies) require careful implementation, the net effect is that an operator adopting this system is better prepared to meet legal obligations and to build user trust. By ensuring accountability at the design phase, we avoid costly retrofits later or the risk of non-compliance. As regulations like the AI Act come into force, solutions like ours could be an enabler for companies to continue using powerful AI in IoT (which they want for efficiency) without falling afoul of the law or public trust. It is a win–win since the technology both satisfies governance needs and improves the system’s reliability and trustworthiness for its own sake.
7. Framework and Case Studies
To demonstrate the applicability of our proposed audit trail framework, we outline a general blockchain-based technical framework and then delve into two representative case studies: healthcare IoT and industrial IoT. These scenarios were chosen because they involve critical decisions by AI, have multiple stakeholders who need trust, and are subject to regulatory oversight, making them ideal proving grounds for our approach. In each case study, we describe how the framework would be instantiated and discuss the benefits and considerations observed.
7.1. Generalized Framework for Blockchain-Based AI Audit Trails in IoT
Across different IoT domains, the core elements of our framework remain consistent. These can be summarized as follows:
7.1.1. Decentralized Identity and Registry
Every IoT device and AI model is registered on a blockchain ledger with a unique identity (public key or address) [
33]. This creates a root of trust, ensuring that any logged decision can be tied back to a known entity. In practice, an initial smart contract serves as the registry (as described in
Section 4) where an admin or an enrollment protocol adds new device IDs (with metadata like owner, type, and permissions) and model hashes.
Table 1 illustrates this concept with a verifiable registry acting as the trust provider to ensure that data originates from legitimate sources.
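A minimal in-memory sketch of such a registry follows, with admin-gated enrollment and a permission check. The class and method names are illustrative, not the contract's actual interface.

```python
class DeviceRegistry:
    """Minimal stand-in for the on-chain device registry contract."""

    def __init__(self, admin):
        self.admin = admin
        self.devices = {}  # device_id -> metadata (owner, type, permissions)

    def add_device(self, caller, device_id, owner, device_type, permissions):
        """Enroll a device; only the admin (or enrollment protocol) may do so."""
        if caller != self.admin:
            raise PermissionError("only the admin may enroll devices")
        self.devices[device_id] = {
            "owner": owner,
            "type": device_type,
            "permissions": set(permissions),
        }

    def is_authorized(self, device_id, action):
        """Check whether a registered device may perform an action."""
        meta = self.devices.get(device_id)
        return meta is not None and action in meta["permissions"]
```

The logging contract would consult such a registry before accepting a decision record, ensuring every log entry ties back to a known, permitted entity.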
7.1.2. On-Device Logging Trigger
IoT devices or edge gateways are configured to trigger a log event whenever an AI decision or other significant event occurs [
34]. The trigger could be implemented in software (e.g., a callback in the AI inference code that calls the blockchain logging function) or via an IoT platform rule engine. The point is to make logging an intrinsic part of the decision workflow, not an afterthought.
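One way to make logging intrinsic to the decision workflow is to wrap the inference call itself. A minimal Python sketch (the wrapper name and callback signature are our own, for illustration):

```python
def make_logged_inference(model, log_fn):
    """Wrap an inference function so every decision triggers a log event.

    `log_fn` would, in practice, submit a blockchain transaction; here it
    is any callable receiving the input features and the decision.
    """
    def infer(features):
        decision = model(features)
        log_fn(features, decision)  # logging happens on every decision
        return decision
    return infer
```

Because the wrapper is the only entry point to the model, no decision can be taken without a corresponding log event, which is precisely the "not an afterthought" property.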
7.1.3. Blockchain Network and Smart Contracts
A permissioned blockchain network is set up connecting the relevant parties. Smart contracts (such as the AuditTrailContract) reside on this network to accept decision records [
35]. Additional contracts might be present for specific functionality—e.g., a contract to manage incentives or payments, or a contract that automatically flags certain events. Because it is domain-agnostic, the same fundamental contracts could be reused across industries, with possible customization via configuration. For instance, the structure of a DecisionRecord might have optional fields that different deployments fill differently (healthcare might include “patientID” as a pseudonym, etc.).
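A DecisionRecord with optional, deployment-specific fields might be modeled as follows. Field names beyond those given in the text (e.g., `batch_id`) are illustrative placeholders for domain customization.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionRecord:
    """Domain-agnostic record structure with optional deployment fields."""
    device_id: str
    model_id: str
    input_data_hash: str
    decision: str
    timestamp: int
    patient_id: Optional[str] = None  # healthcare deployments (pseudonymous)
    batch_id: Optional[str] = None    # industrial deployments (illustrative)
```

Deployments fill only the optional fields relevant to their domain, so the same fundamental contract structure is reusable across industries.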
7.1.4. Access Control and Data Privacy Layer
Each framework deployment defines who can read or query the logs. Typically, writing to the log is restricted to the IoT devices/gateways (through the registry), but reading might be open to all permissioned nodes or limited by role [
Fine-grained access control can be achieved by encrypting sensitive fields before they are written on-chain, so that only authorized key holders can read them.
7.1.5. Audit and Analytics Tools
On top of the blockchain, we provide tools (which could be smart contracts or off-chain services) to analyze the logged data [
36]. In general, an interface is provided to filter and retrieve logs (e.g., “get all decisions by device X in last 24 h” or “find all instances where model Y gave output Z”). These tools might integrate with existing dashboards in that domain (e.g., a patient health record system) so that users may not even know a blockchain is behind the scenes—they just see a log of actions with an extra seal of trust (maybe an icon indicating the log entry is verified and immutable).
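A query such as "all decisions by device X in the last 24 h" reduces to simple filtering over the retrieved records. A sketch, assuming records are plain dictionaries with the fields logged above:

```python
def query_logs(records, device_id=None, since=None, decision=None):
    """Filter decision logs; all criteria are optional and combined with AND."""
    out = []
    for rec in records:
        if device_id is not None and rec["deviceID"] != device_id:
            continue
        if since is not None and rec["timestamp"] < since:
            continue
        if decision is not None and rec["decision"] != decision:
            continue
        out.append(rec)
    return out
```

An off-chain audit service would fetch records via contract events or state queries and then apply filters like this before rendering them in a domain dashboard.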
Using this general framework, we now tailor it to two scenarios to highlight the unique considerations and advantages in each.
7.2. Case Study 1: Healthcare IoT (Smart Healthcare and Medical Devices)
7.2.1. Scenario
In healthcare, IoT devices like wearables, implants, or monitoring sensors collect patient data (heart rate, blood glucose, etc.), and AI algorithms assist in diagnosis or triggering alerts [
37]. Examples include a smart insulin pump that uses AI to adjust dosage, a remote patient monitoring system that alerts doctors if vitals go out of range, or an AI diagnostic tool analyzing medical IoT sensor data (like an ECG patch streaming heart signals analyzed by an AI for arrhythmias). The decisions here are often life-critical and also sensitive from a privacy standpoint [
38]. There is heavy regulation (e.g., FDA in US, MDR in EU) for medical devices and liability concerns if something goes wrong.
7.2.2. Application of Framework
Each medical IoT device (e.g., a wearable ECG monitor) is registered on a permissioned blockchain shared by, say, the hospital and device manufacturer. The AI model that interprets ECGs is also registered (modelID perhaps corresponds to a certified algorithm version). Whenever the device’s AI flags an “arrhythmia detected” event, it logs it on blockchain with details: deviceID, modelID, inputDataHash, decision: arrhythmia alert, timestamp. The treating cardiologist, who has access to the system, can later see this log and also see any subsequent decisions (maybe “alert dismissed by patient” or “alert forwarded to ER”). If a patient suffers an adverse event, investigators can examine whether the AI gave a timely alert and whether it was acted upon. This is crucial for malpractice or product liability cases: the manufacturer could prove “our device did alert at time X as logged immutably, so if action was delayed, it was not due to device failure.” Conversely, if the device failed to alert and it should have, the log (or absence thereof) provides evidence. To illustrate the applicability of the design, we present a synthetic example of how the proposed logging framework would operate in a healthcare IoT scenario. No physical prototype was deployed; rather, we use conceptual models and representative data to simulate the behavior of the system under realistic conditions.
7.2.3. Benefits in Healthcare IoT
The immutable audit trail can literally save lives by ensuring that no critical alert is lost or tampered with. In traditional systems, a faulty device might not log an event at all (so no one knows it missed an alert). With our framework, even the act of logging (or its absence under expected conditions) is noticeable: because blockchain events can be monitored in real time, the system can automatically raise a flag if an expected periodic heartbeat or report is missing from the ledger. This could detect device malfunctions more quickly. For patients, the transparency can improve trust in AI-assisted care. Patients are understandably cautious about AI diagnosing them. Knowing that every AI decision is recorded and can be reviewed by humans may reassure them that the AI is not operating in a black box void of oversight. It also helps doctors trust AI outputs more, since they can retrospectively examine what data led to what diagnosis and, if needed, contest it with evidence. Over time, these logs can provide valuable data for improving algorithms or for regulatory audits to re-certify devices.
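The missing-heartbeat check described above can be sketched as a simple monitor over each device's last ledger timestamp. The grace factor and function name are illustrative choices, not part of the framework specification.

```python
def missing_heartbeats(last_seen, now, interval, grace=1.5):
    """Flag devices whose last ledger entry is older than grace * interval.

    `last_seen` maps device IDs to the timestamp of their most recent
    on-chain log entry; `interval` is the expected reporting period.
    """
    flagged = []
    for device_id, ts in last_seen.items():
        if now - ts > grace * interval:
            flagged.append(device_id)
    return flagged
```

A monitoring service subscribed to the contract's events would maintain `last_seen` and run this check periodically, surfacing silent device failures.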
Privacy is a major concern here. We take measures like hashing personal data on-chain and restricting access. The blockchain network would likely be confined to medical professionals bound by confidentiality and perhaps patient representatives. GDPR and health privacy laws (like HIPAA in the US) would necessitate strict access control. However, since the primary goal is audit, not broad sharing, we can tailor access so that only authorized auditors (such as a hospital’s internal review board or an external regulator in case of incidents) can view patient-specific logs. Patients themselves might be given access to their own data through a patient portal that presents the data from the blockchain, effectively giving them an immutable copy of their device history, which some patients may appreciate for personal record-keeping or for second opinions.
Another advantage is compliance with medical device regulations [
39]. Our system can largely automate that—each adverse event detected by the AI is logged immutably, which can feed into the manufacturer’s required reporting to authorities. If a recall or safety notice is needed, the manufacturer can query the ledger to find all affected decisions or instances. Additionally, if an AI model is updated (perhaps to fix a bug that caused misdiagnosis), the effect of the update can be tracked by comparing logged decisions before and after across patients.
7.3. Case Study 2: Industrial IoT (Manufacturing and Supply Chain)
7.3.1. Scenario
In an industrial IoT setting (often dubbed Industry 4.0), factories are equipped with sensors and AI-driven control systems for automation. Examples: a predictive maintenance AI monitors vibrations in a machine via IoT sensors [
40] and decides when to schedule maintenance; a robotics controller AI decides speed or action adjustments on an assembly line based on sensor inputs; or in supply chains, an AI might reroute shipments or adjust inventory based on IoT tracker data. Here, AI decisions directly impact physical processes, product quality, and safety. If a machine fails or a product defect occurs, companies need to trace back what went wrong. Also, these environments value uptime and efficiency, so quick diagnostics via logs are valuable. Moreover, there can be compliance needs (OSHA safety and ISO standards for quality) that require maintaining logs of operations.
7.3.2. Application of Framework
Each critical machine in a factory has IoT sensors and perhaps an edge controller running AI. All these controllers log their key decisions to a blockchain that is maintained within the factory (and possibly accessible to machine vendors or auditors). For instance, if an AI controlling a chemical process adjusts a valve, it logs device: ValveController42, model: ChemProcessAIv1.2, input: temp+pressure hash, decision: valve 10 percent open, timestamp. If later an out-of-spec batch is found, the quality engineers can review logs and see that at a certain time, the valve was opened incorrectly by the AI (or conversely, everything was normal, pointing to a raw material issue). This provides traceability in production, akin to a black box recorder for manufacturing lines. In the supply chain, imagine IoT trackers on cargo and an AI that decides to redirect a shipment due to predicted delays (like an AI logistic system). The decision “reroute container 123 via Route B” would be logged and shared among parties (supplier, carrier, and buyer). This ensures no disputes like “we didn’t authorize that reroute”—the log would show which AI or party did it, under what conditions. This case study is based on a conceptual simulation of industrial IoT control systems and designed to demonstrate how the blockchain logging framework would function in practice. While no actual system deployment was conducted, we use representative data flows and decision structures to illustrate the framework’s integration potential.
7.3.3. Benefits in Industrial IoT
The blockchain audit trail strongly enhances traceability and quality control. Many industries require a traceability chain for products (from raw materials to final product) especially in sectors like automotive or aerospace. By augmenting the traditional traceability with AI decision logs, one can not only trace which machine processed a part, but also what decision that machine’s AI made during processing [
41,
42]. This can reveal deeper insights; for example, noticing that an AI vision system rejected 5 percent of parts as defective might indicate an upstream issue if logged properly. Immutable logs also protect against any internal malpractice—e.g., if someone tries to alter maintenance records to hide negligence, the blockchain record would contradict them. This fosters a culture of accountability in operations. From a safety perspective, consider a factory accident: investigators will look at machine logs. With our system, they get reliable logs that cannot be doctored (unlike some cases where companies have been found to falsify logs after the fact to avoid liability). Knowing this immutability is in place may also act as a deterrent against cutting corners, because the record will endure. In terms of performance and scalability, factory networks are usually LAN-scale, and blockchains can be run efficiently. Modern factories already have historians (databases logging sensor data); we complement those with a tamper-proof layer for decisions. The volume of logged AI decisions might be much smaller than raw sensor data, so it is feasible to record them all on-chain within a factory consortium (perhaps one including the manufacturer and equipment suppliers, so that both can see the logs—helpful if, for example, a machine under warranty fails, since both the factory and the manufacturer then hold the same log evidence).
In the case studies above, the common thread is that the blockchain-based audit trail increases trust among stakeholders, whether they are consumers, patients, operators, or regulators, by providing a shared, indelible history of what the AI carried out. It also fulfills a documentary need that is otherwise cumbersome—replacing or supplementing manual logbooks or centralized databases that could be manipulated. The case studies show that, regardless of domain, the approach is flexible and valuable.
8. Discussion and Analysis
Implementing immutable blockchain-based audit trails for AI decisions in IoT systems offers clear benefits in terms of transparency and trust, as demonstrated in the case studies. However, it also raises important challenges and considerations. In
Table 4, we critically examine the key issues of scalability and performance, privacy concerns and data management, the ethical implications of increased transparency, and security aspects, along with strategies to mitigate these challenges.
While our current implementation addresses many of these challenges, there are trade-offs. Batching and hierarchical approaches improve scalability but introduce complexity and some delays in finalizing logs. Off-chain storage protects privacy but means auditors must trust that off-chain data is managed correctly (we mitigate this with hashes and consortium oversight). Strong governance can prevent misuse, but it relies on human agreements and enforcement, which must be maintained. Security measures raise the bar but cannot guarantee a breach will never occur; rather, they ensure resilience (for example, if a device is compromised, it cannot fake old logs or read others’ logs, and its misbehavior will be evident on-chain).
One limitation is the lack of empirical validation. A future step is to perform penetration testing on the entire system (e.g., have a red team attempt to compromise nodes or intercept log traffic) to further evaluate resilience. We have not presented formal proofs of security (as some academic works do); however, our design leverages well-vetted cryptographic practices and blockchain primitives known to provide integrity and non-repudiation.
Another discussion point is the interaction with Explainable AI (XAI). Our logs currently store decisions and basic metadata, but not full explanations. We acknowledge in the Future Work section that one could log explanation artifacts. This would increase storage and possibly expose intellectual property (the model’s logic) or sensitive attributes. The trade-off between richer logs and privacy will need careful balancing and perhaps new techniques (like logging explanations in a secure multiparty computed way or using zero-knowledge proofs to attest that a decision was made for certain reasons without revealing raw data).
Maintaining a blockchain is not free. In long-term deployment, nodes will incur operational costs, and someone must bear responsibility (likely shared by the consortium). We assume that the benefits (avoided compliance fines, faster audits, and fewer disputes) outweigh these costs. For instance, if a data breach or safety incident occurs, having this system could save millions in legal fees by quickly proving what happened. However, organizations will need to budget for the infrastructure. We have provided an estimate in the evaluation, but real deployments should conduct a detailed cost–benefit analysis. We also suggest possibly leveraging layer-2 solutions in production: for example, use a cheaper blockchain network for day-to-day logging and periodically anchor to a more secure network for ultimate safety, which can reduce cost and improve throughput. Recent advances in Ethereum Layer 2 (rollups) could be applicable: IoT gateways might send logs to a rollup, which then posts a single compressed proof to the consortium chain.
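The rollup-style anchoring idea reduces to committing a batch of log hashes under a single Merkle root; only that root is posted to the consortium chain. A minimal sketch (duplicate-last-node padding is one common convention, not a prescribed part of our design):

```python
import hashlib

def _h(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def merkle_root(leaves):
    """Compute a Merkle root over a batch of log entries.

    Anchoring only this digest on-chain still commits to every entry:
    changing any leaf changes the root.
    """
    if not leaves:
        return _h("")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # pad odd levels by duplicating the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Gateways can later supply a Merkle inclusion proof for any individual log entry against the anchored root, preserving auditability at a fraction of the on-chain cost.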
Finally, a critical discussion point is adoption and integration. Our framework is most useful if widely adopted across stakeholders (e.g., all device manufacturers agreeing on a logging standard). In practice, there may be reluctance—companies might fear exposing too much or increasing liability. We argue that regulation is moving in this direction regardless, so proactively adopting such a framework is beneficial. We have designed the system to be flexible: it could be run within a single organization at first (for internal accountability) and later opened to external auditors. Additionally, integration with existing systems is key: we provide APIs so that a hospital’s IT system or a factory’s MES (Manufacturing Execution System) can fetch blockchain logs and present them in familiar dashboards. This smooths adoption by not forcing users to learn a “new tool”—the blockchain works behind the scenes. While our analysis presents performance estimates and logging behavior based on system design and simulation, we did not perform a live prototype deployment. All results are based on theoretical modeling, pseudocode design, and benchmark-informed projections. A full-scale implementation and testbed evaluation is an important next step for future work.
To conclude this discussion, while challenges exist, they are accompanied by practical mitigation strategies. The combination of careful technical design (layering, off-chain hashing, etc.) and governance measures addresses most of the issues identified above. The remaining sections conclude the paper and outline future enhancements to further strengthen the framework.
9. Conclusions and Future Directions
In this paper, we presented a comprehensive approach to establishing immutable audit trails for AI decisions in IoT systems using blockchain technology. Our work is driven by the increasing demand for trust, transparency, and accountability in AI-driven IoT deployments—a demand shaped not only by practical needs such as diagnosing system failures and enforcing ethical behavior, but also by emerging regulations like the EU Artificial Intelligence Act and the GDPR. By combining the strengths of IoT, AI, and blockchain, our framework enables a synergistic system: IoT provides data and actuation, AI enables autonomous decision-making, and blockchain ensures that each decision is verifiably recorded and preserved.
We proposed a novel framework that, to our knowledge, is among the first to explicitly integrate blockchain-based logging with AI decision-making in IoT. Unlike the existing literature that primarily secures IoT data or uses blockchain for general data integrity, our focus is on the provenance of AI decisions. We developed a layered architecture, implemented smart contracts, and demonstrated how decisions flow from edge devices to the blockchain. Each decision is logged with structured metadata—such as device ID, model version, input hashes, and outputs—creating a transparent and tamper-resistant audit trail that is accessible to authorized stakeholders.
The implications of our work are significant for both industry and academia. For practitioners—including IoT system architects, AI engineers, and compliance officers—our framework offers a proactive way to build accountability-ready systems. Rather than retrofitting audit mechanisms in response to incidents or regulatory pressure, our design embeds traceability from the start, reducing compliance burden and enhancing stakeholder confidence. This requires interdisciplinary collaboration across blockchain design, AI interpretability, and legal governance to ensure that logs are both technically robust and legally defensible. As AI systems increasingly drive critical infrastructure and services, the need for such trust-enabling mechanisms will only intensify.
Our findings demonstrate that blockchain-based audit trails are a powerful tool for converting opaque AI decisions into transparent, verifiable, and immutable records. We have shown not only the feasibility of this approach, but also its value for governance, regulation, and operational integrity. As AI and IoT continue to evolve, we foresee that algorithmic audit systems will become as essential as financial accounting systems—mandatory for operational trust. This work lays the foundation for such a future, and we invite both academic and industry stakeholders to extend, adapt, and implement these ideas in real-world systems.
Future Work
While this paper establishes the foundational framework, several important directions remain for further research and development, including the integration of explainable AI (XAI) techniques so that not only the outcomes but also the rationale behind AI decisions can be logged. This shift—from “black box” to “glass box” audit trails—could involve storing key features, decision rules, or links to external explanation artifacts. Such enhancements would need to balance information richness against blockchain storage constraints.
To scale the system for high-frequency decision environments, we plan to explore layer-2 scaling solutions tailored for IoT, such as state channels or rollups. These approaches allow decisions to be processed off-chain while periodically anchoring summaries to the main blockchain for security and auditability.
Cross-domain interoperability is another promising area. Linking AI decision logs in IoT with those from other domains (e.g., smart city infrastructure, industrial automation) could enable end-to-end auditing across complex ecosystems. A key next step is the development and deployment of a functioning prototype based on our framework. This would allow for the real-world validation of system assumptions, performance metrics, and integration with domain-specific workflows (e.g., in smart healthcare). Our current work provides the conceptual and architectural groundwork for such an implementation.
Finally, future enhancements could include continuous device authentication, such as biometric verification mechanisms, to ensure that only legitimate devices log decisions. This would further strengthen the integrity of the audit trail. The framework could also be extended to emerging domains such as space-based IoT, where trust and communication resilience are critical.
Through these future explorations, we aim to evolve this work into a fully deployable, real-world solution for accountable and transparent AI in IoT—moving from theory to practice in building intelligent systems that are not only autonomous but also explainable and trustworthy by design.