Search Results (14)

Search Parameters:
Keywords = high-stakes AI systems

30 pages, 8330 KB  
Article
iBANDA: A Blockchain-Assisted Defense System for Authentication in Drone-Based Logistics
by Simeon Okechukwu Ajakwe, Ikechi Saviour Igboanusi, Jae-Min Lee and Dong-Seong Kim
Drones 2025, 9(8), 590; https://doi.org/10.3390/drones9080590 - 20 Aug 2025
Viewed by 274
Abstract
Background: The increasing deployment of unmanned aerial vehicles (UAVs) for logistics in smart cities presents pressing challenges related to identity spoofing, unauthorized payload transport, and airspace security. Existing drone defense systems (DDSs) struggle to verify both drone identity and payload authenticity in real time, while blockchain-assisted solutions are often hindered by high latency and limited scalability. Methods: To address these challenges, we propose iBANDA, a blockchain- and AI-assisted DDS framework. The system integrates a lightweight You Only Look Once 5 small (YOLOv5s) object detection model with a Snowball-based Proof-of-Stake consensus mechanism to enable dual-layer authentication of drones and their attached payloads. Authentication processes are coordinated through an edge-deployable decentralized application (DApp). Results: The experimental evaluation demonstrates that iBANDA achieves a mean average precision of 99.5%, recall of 100%, and an F1-score of 99.8% at an inference time of 0.021 s, validating its suitability for edge devices. Blockchain integration achieved an average network latency of 97.7 ms and an end-to-end transaction latency of 1.6 s, outperforming Goerli, Sepolia, and Polygon Mumbai testnets in scalability and throughput. Adversarial testing further confirmed resilience to Sybil attacks and GPS spoofing, maintaining a false acceptance rate below 2.5% and continuity above 96%. Conclusions: iBANDA demonstrates that combining AI-based visual detection with blockchain consensus provides a secure, low-latency, and scalable authentication mechanism for UAV-based logistics. Future work will explore large-scale deployment in heterogeneous UAV networks and formal verification of smart contracts to strengthen resilience in safety-critical environments. Full article
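
The dual-layer authentication described above pairs visual detection of the drone and its payload with an on-chain identity check. A minimal Python sketch of that control flow follows; detect_objects and LEDGER are hypothetical stand-ins for the YOLOv5s detector and the Snowball/PoS-backed registry, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "drone", "payload"
    confidence: float   # detector score in [0, 1]

# Hypothetical on-chain registry: drone_id -> authorised payload type.
LEDGER = {"DRN-042": "medical_parcel"}

def detect_objects(frame) -> list[Detection]:
    """Stand-in for a YOLOv5s-style detector running on the edge device."""
    return [Detection("drone", 0.97), Detection("payload", 0.91)]  # canned demo output

def authenticate(frame, drone_id: str, declared_payload: str, thr: float = 0.5) -> bool:
    dets = detect_objects(frame)
    saw_drone = any(d.label == "drone" and d.confidence >= thr for d in dets)
    saw_payload = any(d.label == "payload" and d.confidence >= thr for d in dets)
    on_chain = LEDGER.get(drone_id) == declared_payload  # consensus-backed record
    return saw_drone and saw_payload and on_chain        # both layers must pass

print(authenticate(frame=None, drone_id="DRN-042", declared_payload="medical_parcel"))  # True
```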

14 pages, 257 KB  
Article
Artificial Intelligence Anxiety and Patient Safety Attitudes Among Operating Room Professionals: A Descriptive Cross-Sectional Study
by Pinar Ongun, Burcak Sahin Koze and Yasemin Altinbas
Healthcare 2025, 13(16), 2021; https://doi.org/10.3390/healthcare13162021 - 16 Aug 2025
Viewed by 505
Abstract
Background/Objectives: The adoption of artificial intelligence (AI) in healthcare, particularly in high-stakes environments such as operating rooms (ORs), is expanding rapidly. While AI has the potential to enhance patient safety and clinical efficiency, it may also trigger anxiety among healthcare professionals due to uncertainties around job displacement, ethical concerns, and system reliability. This study aimed to examine the relationship between AI-related anxiety and patient safety attitudes among OR professionals. Methods: A descriptive, cross-sectional research design was employed. The sample included 155 OR professionals from a university and a city hospital in Turkey. Data were collected using a demographic questionnaire, the Artificial Intelligence Anxiety Scale (AIAS), and the Safety Attitudes Questionnaire–Operating Room version (SAQ-OR). Statistical analyses included t-tests, ANOVA, Pearson correlation, and multiple regression. Results: The mean AIAS score was 3.25 ± 0.8, and the mean SAQ score was 43.2 ± 10.5. Higher AI anxiety was reported by males and those with postgraduate education. Participants who believed AI could improve patient safety scored significantly higher on AIAS subscales related to learning, job change, and AI configuration. No significant correlation was found between AI anxiety and safety attitudes (r = −0.064, p > 0.05). Conclusions: Although no direct association was found between AI anxiety and patient safety attitudes, belief in AI’s potential was linked to greater openness to change. These findings suggest a need for targeted training and policy support to promote safe and confident AI adoption in surgical practice. Full article
(This article belongs to the Section Perioperative Care)
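
The study's central statistical check is the correlation between AIAS and SAQ-OR scores. A minimal sketch with made-up values (not the study's data) shows the shape of that analysis:

```python
import numpy as np
from scipy import stats

aias = np.array([3.1, 2.8, 3.6, 3.3, 2.9, 3.5])        # hypothetical AIAS totals
saq = np.array([45.0, 41.2, 44.8, 39.5, 47.1, 42.3])   # hypothetical SAQ-OR totals

r, p = stats.pearsonr(aias, saq)                        # the study reports r = -0.064, p > 0.05
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```
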
17 pages, 479 KB  
Article
Analyzing LLM Sentencing Variability in Theft Indictments Across Gender, Family Status and the Value of the Stolen Item
by Karol Struniawski, Ryszard Kozera and Aleksandra Konopka
Appl. Sci. 2025, 15(16), 8860; https://doi.org/10.3390/app15168860 - 11 Aug 2025
Viewed by 417
Abstract
As large language models (LLMs) increasingly enter high-stakes decision-making contexts, questions arise about their suitability in domains requiring normative judgment, such as judicial sentencing. This study investigates whether LLMs exhibit bias when tasked with sentencing decisions in Polish criminal law, despite clear legal norms that prohibit considering extralegal factors. We simulate sentencing scenarios for theft offenses using two leading open-source LLMs (LLaMA and Mixtral) and systematically vary three defendant characteristics: gender, number of children, and the value of the stolen item. While none of these variables should legally affect sentence length under Polish law, our results reveal statistically significant disparities, particularly in how female defendants with children are treated. Non-parametric tests (Kruskal–Wallis and Mann–Whitney U) and correlation analysis were applied to quantify these effects. Our findings raise concerns about the normative reliability of LLMs and their alignment with principles of fairness and legality. From a jurisprudential perspective, we contrast the implicit logic of LLM sentencing with theoretical models of adjudication, including Dworkin’s moral interpretivism and Posner’s pragmatism. This work contributes to ongoing debates on the integration of AI in legal systems, highlighting both the empirical risks and the philosophical limitations of computational legal reasoning. Full article
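
The reported group comparisons rest on standard non-parametric tests. A minimal sketch with invented sentence lengths (in months) illustrates how such tests are applied; the numbers are not the paper's data:

```python
from scipy import stats

# Hypothetical LLM-assigned sentences (months) for otherwise identical theft scenarios.
sentences_female = [6, 8, 6, 7, 5, 6]
sentences_male = [8, 9, 10, 8, 9, 7]

u, p = stats.mannwhitneyu(sentences_female, sentences_male, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")

# Kruskal-Wallis across more than two groups, e.g. number of children (0 / 1 / 3).
h, p = stats.kruskal([7, 8, 7], [6, 6, 5], [4, 5, 4])
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```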

22 pages, 557 KB  
Article
Using Blockchain Ledgers to Record AI Decisions in IoT
by Vikram Kulothungan
IoT 2025, 6(3), 37; https://doi.org/10.3390/iot6030037 - 3 Jul 2025
Viewed by 1455
Abstract
The rapid integration of AI into IoT systems has outpaced the ability to explain and audit automated decisions, resulting in a serious transparency gap. We address this challenge by proposing a blockchain-based framework to create immutable audit trails of AI-driven IoT decisions. In our approach, each AI inference, comprising key inputs, model ID, and output, is logged to a permissioned blockchain ledger, ensuring that every decision is traceable and auditable. IoT devices and edge gateways submit cryptographically signed decision records via smart contracts, resulting in an immutable, timestamped log that is tamper-resistant. This decentralized approach guarantees non-repudiation and data integrity while balancing transparency with privacy (e.g., hashing personal data on-chain) to meet data protection norms. Our design aligns with emerging regulations, such as the EU AI Act’s logging mandate and GDPR’s transparency requirements. We demonstrate the framework’s applicability in two domains: healthcare IoT (logging diagnostic AI alerts for accountability) and industrial IoT (tracking autonomous control actions), showing its generalizability to high-stakes environments. Our contributions include the following: (1) a novel architecture for AI decision provenance in IoT, (2) a blockchain-based design to securely record AI decision-making processes, and (3) a simulation-informed performance assessment based on projected metrics (throughput, latency, and storage) of the approach’s feasibility. By providing a reliable, immutable audit trail for AI in IoT, our framework enhances transparency and trust in autonomous systems and offers a much-needed mechanism for auditable AI under increasing regulatory scrutiny. Full article
(This article belongs to the Special Issue Blockchain-Based Trusted IoT)
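
The per-inference audit record described above reduces each decision to a hashed, signed, chained entry. The sketch below illustrates that record structure with an in-memory list standing in for the permissioned ledger and an HMAC standing in for the device signature; it is not the paper's smart-contract code:

```python
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"demo-device-key"   # placeholder for the device's signing key
ledger: list[dict] = []           # in-memory stand-in for the permissioned blockchain

def log_decision(model_id: str, inputs: dict, output: str) -> dict:
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        # Hash raw inputs so personal data never sits on-chain in the clear.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "prev_hash": prev_hash,   # chaining makes tampering evident
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record

log_decision("sepsis-alert-v3", {"hr": 128, "temp": 39.4}, "ALERT")
```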

29 pages, 1812 KB  
Article
Innovative Guardrails for Generative AI: Designing an Intelligent Filter for Safe and Responsible LLM Deployment
by Olga Shvetsova, Danila Katalshov and Sang-Kon Lee
Appl. Sci. 2025, 15(13), 7298; https://doi.org/10.3390/app15137298 - 28 Jun 2025
Viewed by 1888
Abstract
This paper proposes a technological framework designed to mitigate the inherent risks associated with the deployment of artificial intelligence (AI) in decision-making and task execution within management processes. The Agreement Validation Interface (AVI) functions as a modular Application Programming Interface (API) Gateway positioned between user applications and LLMs. This gateway architecture is designed to be LLM-agnostic, meaning it can operate with various underlying LLMs without requiring specific modifications for each model. This universality is achieved by standardizing the interface for requests and responses and applying a consistent set of validation and enhancement processes irrespective of the chosen LLM provider, thus offering a consistent governance layer across a diverse LLM ecosystem. AVI facilitates the orchestration of multiple AI subcomponents for input–output validation, response evaluation, and contextual reasoning, thereby enabling real-time, bidirectional filtering of user interactions. A proof-of-concept (PoC) implementation of AVI was developed and rigorously evaluated using industry-standard benchmarks. The system was tested for its effectiveness in mitigating adversarial prompts, reducing toxic outputs, detecting personally identifiable information (PII), and enhancing factual consistency. The results demonstrated that AVI reduced successful prompt injection attacks by 82%, decreased toxic content generation by 75%, and achieved high PII detection performance (F1-score ≈ 0.95). Furthermore, the contextual reasoning module significantly improved the neutrality and factual validity of model outputs. Although the integration of AVI introduced a moderate increase in latency, the overall framework effectively enhanced the reliability, safety, and interpretability of LLM-driven applications. AVI provides a scalable and adaptable architectural template for the responsible deployment of generative AI in high-stakes domains such as finance, healthcare, and education, promoting safer and more ethical use of AI technologies. Full article
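
A gateway of this kind wraps every LLM call in input and output validation. The following sketch shows that bidirectional filtering pattern with toy regex checks; the real AVI subcomponents (injection detection, toxicity scoring, PII detection, contextual reasoning) are far more capable than these placeholders:

```python
import re

INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal the system prompt"]
PII_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]        # toy example: US-SSN-like strings
BLOCKED_TERMS = ["<some toxic term>"]            # placeholder for a real toxicity check

def validate_input(prompt: str) -> str:
    if any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS):
        raise ValueError("possible prompt injection")
    return prompt

def validate_output(text: str) -> str:
    for p in PII_PATTERNS:
        text = re.sub(p, "[REDACTED]", text)     # redact PII before it reaches the user
    if any(term in text.lower() for term in BLOCKED_TERMS):
        raise ValueError("blocked content in model output")
    return text

def gateway(prompt: str, call_llm) -> str:
    """call_llm: any provider-specific function str -> str; the gateway stays LLM-agnostic."""
    return validate_output(call_llm(validate_input(prompt)))

print(gateway("Summarise the policy 123-45-6789.", lambda p: p.upper()))
```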

34 pages, 20058 KB  
Article
Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
by Grant Wardle and Teo Sušnjak
Big Data Cogn. Comput. 2025, 9(6), 149; https://doi.org/10.3390/bdcc9060149 - 3 Jun 2025
Viewed by 1464
Abstract
Our study investigates how the sequencing of text and image inputs within multi-modal prompts affects the reasoning performance of Large Language Models (LLMs). Through empirical evaluations of three major commercial LLM vendors—OpenAI, Google, and Anthropic—alongside a user study on interaction strategies, we develop and validate practical heuristics for optimising multi-modal prompt design. Our findings reveal that modality sequencing is a critical factor influencing reasoning performance, particularly in tasks with varying cognitive load and structural complexity. For simpler tasks involving a single image, positioning the modalities directly impacts model accuracy, whereas in complex, multi-step reasoning scenarios, the sequence must align with the logical structure of inference, often outweighing the specific placement of individual modalities. Furthermore, we identify systematic challenges in multi-hop reasoning within transformer-based architectures, where models demonstrate strong early-stage inference but struggle with integrating prior contextual information in later reasoning steps. Building on these insights, we propose a set of validated, user-centred heuristics for designing effective multi-modal prompts, enhancing both reasoning accuracy and user interaction with AI systems. Our contributions inform the design and usability of interactive intelligent systems, with implications for applications in education, medical imaging, legal document analysis, and customer support. By bridging the gap between intelligent system behaviour and user interaction strategies, this study provides actionable guidance on how users can effectively structure prompts to optimise multi-modal LLM reasoning within real-world, high-stakes decision-making contexts. Full article
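
The variable under study, modality order, comes down to how the content parts of a prompt are arranged. A minimal sketch follows; the content-part dictionaries assume an OpenAI-style message schema and would need adapting to other providers:

```python
def build_prompt(question: str, image_url: str, image_first: bool) -> list[dict]:
    text_part = {"type": "text", "text": question}
    image_part = {"type": "image_url", "image_url": {"url": image_url}}
    parts = [image_part, text_part] if image_first else [text_part, image_part]
    return [{"role": "user", "content": parts}]

question = "How many valves are open in this schematic?"
url = "https://example.com/schematic.png"
msgs_image_first = build_prompt(question, url, image_first=True)
msgs_text_first = build_prompt(question, url, image_first=False)
# Sending both variants of the same task to the same model isolates the sequencing effect.
```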

22 pages, 4445 KB  
Article
Trustworthiness of Deep Learning Under Adversarial Attacks in Power Systems
by Dowens Nicolas, Kevin Orozco, Steve Mathew, Yi Wang, Wafa Elmannai and George C. Giakos
Energies 2025, 18(10), 2611; https://doi.org/10.3390/en18102611 - 19 May 2025
Cited by 1 | Viewed by 1014
Abstract
Advanced as they are, DL models in cyber-physical systems remain vulnerable to attacks like the Fast Gradient Sign Method, DeepFool, and Jacobian-Based Saliency Map Attacks, calling system trustworthiness into question in high-stakes applications like power systems. In power grids, DL models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks are commonly utilized for tasks like state estimation, load forecasting, and fault detection, relying on their ability to learn complex, non-linear patterns in high-dimensional data such as voltage, current, and frequency measurements. Nevertheless, these models are susceptible to adversarial attacks, which could lead to inaccurate predictions and system failure. In this paper, the impact of these attacks on DL models is analyzed by employing defensive countermeasures such as Adversarial Training, Gaussian Augmentation, and Feature Squeezing to investigate vulnerabilities in industrial control systems with potentially disastrous real-world impacts. Emphasizing the inherent requirement for robust defense, this initiative lays the groundwork for follow-on efforts to incorporate security and resilience into ML and DL algorithms and ensure mission-critical AI system dependability. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Smart Grids)
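
One of the attacks named above, the Fast Gradient Sign Method, and one of the defenses, Gaussian augmentation, are compact enough to sketch directly; the toy linear model and random tensors below are placeholders, not the paper's networks or grid data:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """One-step FGSM: perturb the input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def gaussian_augment(x: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Gaussian augmentation defense: train on noisy copies of the measurements."""
    return x + sigma * torch.randn_like(x)

# Toy usage: a linear "model" over 10 grid measurements and 3 classes.
model = torch.nn.Linear(10, 3)
x, y = torch.randn(4, 10), torch.tensor([0, 1, 2, 0])
x_adv = fgsm(model, x, y)
x_noisy = gaussian_augment(x)
```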

35 pages, 5913 KB  
Article
Embedding Fear in Medical AI: A Risk-Averse Framework for Safety and Ethics
by Andrej Thurzo and Vladimír Thurzo
AI 2025, 6(5), 101; https://doi.org/10.3390/ai6050101 - 14 May 2025
Cited by 2 | Viewed by 2566
Abstract
In today’s high-stakes arenas—from healthcare to defense—algorithms are advancing at an unprecedented pace, yet they still lack a crucial element of human decision-making: an instinctive caution that helps prevent harm. Inspired by both the protective reflexes seen in military robotics and the human amygdala’s role in threat detection, we introduce a novel idea: an integrated module that acts as an internal “caution system”. This module does not experience emotion in the human sense; rather, it serves as an embedded safeguard that continuously assesses uncertainty and triggers protective measures whenever potential dangers arise. Our proposed framework combines several established techniques. It uses Bayesian methods to continuously estimate the likelihood of adverse outcomes, applies reinforcement learning strategies with penalties for choices that might lead to harmful results, and incorporates layers of human oversight to review decisions when needed. The result is a system that mirrors the prudence and measured judgment of experienced clinicians—hesitating and recalibrating its actions when the data are ambiguous, much like a doctor would rely on both intuition and expertise to prevent errors. We call on computer scientists, healthcare professionals, and policymakers to collaborate in refining and testing this approach. Through joint research, pilot projects, and robust regulatory guidelines, we aim to ensure that advanced computational systems can combine speed and precision with an inherent predisposition toward protecting human life. Ultimately, by embedding this cautionary module, the framework is expected to significantly reduce AI-induced risks and enhance patient safety and trust in medical AI systems. It seems inevitable for future superintelligent AI systems in medicine to possess emotion-like processes. Full article
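
A minimal sketch of the proposed caution mechanism, under the assumption of a Beta posterior over the probability of harm and an invented escalation threshold (the paper's actual Bayesian and reinforcement-learning machinery is richer):

```python
from scipy import stats

class CautionModule:
    """Beta posterior over the probability of an adverse outcome for a given action."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha, self.beta = alpha, beta   # Beta(1, 1): uninformative prior

    def update(self, harm_observed: bool) -> None:
        if harm_observed:
            self.alpha += 1
        else:
            self.beta += 1

    def decide(self, risk_cap: float = 0.05) -> str:
        # Escalate whenever the 95th-percentile credible risk exceeds the cap.
        upper = stats.beta.ppf(0.95, self.alpha, self.beta)
        return "escalate_to_clinician" if upper > risk_cap else "proceed"

m = CautionModule()
m.update(harm_observed=False)
print(m.decide())   # with so little evidence the module still escalates, mirroring early caution
```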

18 pages, 14637 KB  
Article
Enhancing Bottleneck Concept Learning in Image Classification
by Xingfu Cheng, Zhaofeng Niu, Zhouqiang Jiang and Liangzhi Li
Sensors 2025, 25(8), 2398; https://doi.org/10.3390/s25082398 - 10 Apr 2025
Viewed by 959
Abstract
Deep neural networks (DNNs) have demonstrated exceptional performance in image classification. However, their “black-box” nature raises concerns about trust and transparency, particularly in high-stakes fields such as healthcare and autonomous systems. While explainable AI (XAI) methods attempt to address these concerns through feature- or concept-based explanations, existing approaches are often limited by the need for manually defined concepts, overly abstract granularity, or misalignment with human semantics. This paper introduces the Enhanced Bottleneck Concept Learner (E-BotCL), a self-supervised framework that autonomously discovers task-relevant, interpretable semantic concepts via a dual-path contrastive learning strategy and multi-task regularization. By combining contrastive learning to build robust concept prototypes, attention mechanisms for spatial localization, and feature aggregation to activate concepts, E-BotCL enables end-to-end concept learning and classification without requiring human supervision. Experiments conducted on the CUB200 and ImageNet datasets demonstrated that E-BotCL significantly enhanced interpretability while maintaining classification accuracy. Specifically, two interpretability metrics, the Concept Discovery Rate (CDR) and Concept Consistency (CC), improved by 0.6104 and 0.4486, respectively. This work advances the balance between model performance and transparency, offering a scalable solution for interpretable decision-making in complex vision tasks. Full article
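
The bottleneck idea itself, concept activations computed against learned prototypes and fed to a linear classifier, can be sketched compactly; the contrastive training, attention localisation, and regularizers that make E-BotCL work are omitted here:

```python
import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    def __init__(self, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_concepts, feat_dim))  # learned concept prototypes
        self.head = nn.Linear(n_concepts, n_classes)                       # classifies from concepts only

    def forward(self, feats: torch.Tensor):
        # Concept activation = similarity of the image feature to each prototype.
        acts = torch.sigmoid(feats @ self.prototypes.t())   # (batch, n_concepts)
        return self.head(acts), acts                        # logits plus interpretable activations

model = ConceptBottleneck(feat_dim=512, n_concepts=20, n_classes=200)
logits, concept_acts = model(torch.randn(4, 512))            # features from any backbone
```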

21 pages, 3992 KB  
Article
Provable AI Ethics and Explainability in Medical and Educational AI Agents: Trustworthy Ethical Firewall
by Andrej Thurzo
Electronics 2025, 14(7), 1294; https://doi.org/10.3390/electronics14071294 - 25 Mar 2025
Cited by 4 | Viewed by 2297
Abstract
Rapid advances in artificial intelligence are transforming high-stakes fields like medicine and education while raising pressing ethical challenges. This paper introduces the Ethical Firewall Architecture—a comprehensive framework that embeds mathematically provable ethical constraints directly into AI decision-making systems. By integrating formal verification techniques, blockchain-inspired cryptographic immutability, and emotion-like escalation protocols that trigger human oversight when needed, the architecture ensures that every decision is rigorously certified to align with core human values before implementation. The framework also addresses emerging issues, such as biased value systems in large language models and the risks associated with accelerated AI learning. In addition, it highlights the potential societal impacts—including workforce displacement—and advocates for new oversight roles like the Ethical AI Officer. The findings suggest that combining rigorous mathematical safeguards with structured human intervention can deliver AI systems that perform efficiently while upholding transparency, accountability, and trust in critical applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
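
A pre-action gate of the kind described, explicit constraints checked before any action with escalation to a human on low confidence, can be sketched as follows; the toy rules stand in for the formally verified constraints the paper envisions:

```python
from typing import Callable

Constraint = Callable[[dict], bool]

# Toy constraints standing in for formally verified rules.
CONSTRAINTS: list[Constraint] = [
    lambda action: action.get("patient_consent") is True,
    lambda action: action.get("estimated_harm", 1.0) < 0.01,
]

def ethical_gate(action: dict, confidence: float, min_confidence: float = 0.9) -> str:
    if not all(rule(action) for rule in CONSTRAINTS):
        return "blocked"                 # hard constraint violated
    if confidence < min_confidence:
        return "escalate_to_human"       # emotion-like escalation path
    return "approved"

print(ethical_gate({"patient_consent": True, "estimated_harm": 0.001}, confidence=0.95))  # approved
```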

35 pages, 2798 KB  
Article
Risk Analysis of Artificial Intelligence in Medicine with a Multilayer Concept of System Order
by Negin Moghadasi, Rupa S. Valdez, Misagh Piran, Negar Moghaddasi, Igor Linkov, Thomas L. Polmateer, Davis C. Loose and James H. Lambert
Systems 2024, 12(2), 47; https://doi.org/10.3390/systems12020047 - 1 Feb 2024
Cited by 4 | Viewed by 3584
Abstract
Artificial intelligence (AI) is advancing across technology domains including healthcare, commerce, the economy, the environment, cybersecurity, transportation, etc. AI will transform healthcare systems, bringing profound changes to diagnosis, treatment, patient care, data, medicines, devices, etc. However, AI in healthcare introduces entirely new categories of risk for assessment, management, and communication. For this topic, the framing of conventional risk and decision analyses is ongoing. This paper introduces a method to quantify risk as the disruption of the order of AI initiatives in healthcare systems, aiming to find the scenarios that are most and least disruptive to system order. This novel approach addresses scenarios that bring about a re-ordering of initiatives in each of the following three characteristic layers: purpose, structure, and function. In each layer, the following model elements are identified: 1. Typical research and development initiatives in healthcare. 2. The ordering criteria of the initiatives. 3. Emergent conditions and scenarios that could influence the ordering of the AI initiatives. This approach is a manifold accounting of the scenarios that could contribute to the risk associated with AI in healthcare. Recognizing the context-specific nature of risks and highlighting the role of the human in the loop, this study identifies scenario s.06—non-interpretable AI and lack of human–AI communications—as the most disruptive across all three layers of healthcare systems. This finding suggests that AI transparency solutions primarily target domain experts, a reasonable inclination given the significance of “high-stakes” AI systems, particularly in healthcare. Future work should connect this approach with decision analysis and the quantification of the value of information. Future work will also explore the disruptions of system order in additional layers of the healthcare system, including the environment, boundary, interconnections, workforce, facilities, supply chains, and others. Full article
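
Risk as disruption of order can be made concrete by comparing an initiative ranking under a scenario with the baseline ranking. The sketch below uses Kendall's tau for that comparison; the initiative names, ranks, and scoring choice are illustrative assumptions, not the paper's method:

```python
from scipy import stats

initiatives = ["diagnosis-support", "triage-bot", "drug-discovery", "scheduling"]
baseline_rank = [1, 2, 3, 4]   # baseline priority order of the initiatives
scenario_rank = [3, 1, 4, 2]   # hypothetical re-ordering under an emergent scenario

tau, _ = stats.kendalltau(baseline_rank, scenario_rank)
disruption = (1 - tau) / 2     # 0 = identical order, 1 = fully reversed order
print(f"disruption score = {disruption:.2f}")
```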

20 pages, 917 KB  
Article
A Trustworthy Healthcare Management Framework Using Amalgamation of AI and Blockchain Network
by Dhairya Jadav, Nilesh Kumar Jadav, Rajesh Gupta, Sudeep Tanwar, Osama Alfarraj, Amr Tolba, Maria Simona Raboaca and Verdes Marina
Mathematics 2023, 11(3), 637; https://doi.org/10.3390/math11030637 - 27 Jan 2023
Cited by 24 | Viewed by 3556
Abstract
Over the last few decades, the healthcare industry has continuously grown, with hundreds of thousands of patients obtaining treatment remotely using smart devices. Data security becomes a prime concern with such a massive increase in the number of patients. Numerous attacks on healthcare data have recently been identified that can put the patient’s identity at stake. For example, the private data of millions of patients have been published online, posing a severe risk to patients’ data privacy. With the advent of Industry 4.0, medical practitioners can digitally assess the patient’s condition and administer prompt prescriptions. However, wearable devices are also vulnerable to numerous security threats, such as session hijacking, data manipulation, and spoofing attacks. Attackers can tamper with the patient’s wearable device and relay the tampered data to the concerned doctor. This can put the patient’s life at high risk. Since blockchain is a transparent and immutable decentralized system, it can be utilized for securely storing patients’ wearable data. Artificial Intelligence (AI), on the other hand, utilizes different machine learning techniques to classify malicious data from an oncoming stream of patients’ wearable data. An amalgamation of these two technologies would make tampering with the patient’s data extremely difficult. To mitigate the aforementioned issues, this paper proposes a blockchain and AI-envisioned secure and trusted framework (HEART). Here, a Long Short-Term Memory (LSTM) model is used to classify wearable devices as malicious or non-malicious. Then, we design a smart contract that forwards to the public blockchain network only the data of those patients whose wearable devices are classified as non-malicious. This information is then accessible to all involved in the patient’s care. We then evaluate HEART’s performance considering various evaluation metrics such as accuracy, recall, precision, scalability, and network latency. On the training and testing sets, the model achieves accuracies of 93% and 92.92%, respectively. Full article
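
The classification step, an LSTM over windows of wearable readings producing a malicious/non-malicious score, can be sketched as follows; shapes and hyperparameters are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

class WearableLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.fc(h[-1]))               # P(stream is malicious)

model = WearableLSTM()
p_malicious = model(torch.randn(16, 50, 8))   # 16 windows of 50 readings x 8 sensors
# Only windows scored non-malicious would be forwarded to the blockchain network.
```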

20 pages, 3436 KB  
Article
Deep Learning-Based Real Time Defect Detection for Optimization of Aircraft Manufacturing and Control Performance
by Imran Shafi, Muhammad Fawad Mazhar, Anum Fatima, Roberto Marcelo Alvarez, Yini Miró, Julio César Martínez Espinosa and Imran Ashraf
Drones 2023, 7(1), 31; https://doi.org/10.3390/drones7010031 - 1 Jan 2023
Cited by 32 | Viewed by 8082
Abstract
Monitoring tool conditions and sub-assemblies before final integration is essential to reducing processing failures and improving production quality in manufacturing setups. This research study proposes a real-time deep learning-based framework for identifying faulty components due to malfunctioning at different manufacturing stages in the aerospace industry. It uses a convolutional neural network (CNN) to recognize and classify intermediate abnormal states in a single manufacturing process. The manufacturing process for aircraft factory products comprises different phases; analyzing the components after integration is labor-intensive and time-consuming, which often puts the company at high risk. To overcome these challenges, the proposed AI-based system can perform inspection and defect detection and reduce the probability of components needing to be re-manufactured after assembly. In addition, it analyses the impact value, i.e., rework delays and costs, of manufacturing processes using a statistical process control tool on real-time data for various manufactured components. Defects are detected and classified using the CNN and a teachable machine in the single manufacturing process during the initial stage, prior to assembling the components. The results show the significance of the proposed approach in improving operational cost management and reducing rework-induced delays. Ground tests are conducted to calculate the impact value, followed by air tests of the final assembled aircraft. The statistical results indicate a 52.88% and 34.32% reduction in time delays and total cost, respectively. Full article
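
A compact CNN classifier of the kind described can be sketched as follows; layer sizes, input resolution, and defect classes are illustrative placeholders:

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)   # assumes 224x224 input images

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = DefectCNN()(torch.randn(1, 3, 224, 224))   # e.g. classes {ok, scratch, dent, missing-part}
```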

9 pages, 1842 KB  
Article
Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach
by Kaspars Sudars, Ivars Namatēvs and Kaspars Ozols
J. Imaging 2022, 8(2), 30; https://doi.org/10.3390/jimaging8020030 - 30 Jan 2022
Cited by 5 | Viewed by 3254
Abstract
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the traffic sign classifier of the Deep Neural Network (DNN) from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project for explainability. The resulting explanations were further used to compress the PRYSTINE CNN classifier’s vague (low-impact) kernels. Then, the precision of the classifier was evaluated in different pruning scenarios. The proposed classifier performance methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the network’s kernels was evaluated for explainability. For this task, a post-hoc, local, meaningful-perturbation-based forward explanation method was integrated into the model to evaluate the status of each kernel. This method enabled distinguishing high- and low-impact kernels in the CNN. Second, the vague kernels in the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network’s precision was evaluated at different kernel compression levels. It is shown that by using the XAI approach for network kernel compression, the pruning of 5% of kernels leads to a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is crucial where execution-time and processing-capacity constraints prevail. Full article
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)
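
The perturbation-based kernel assessment described above amounts to silencing one kernel at a time and measuring the accuracy drop. A minimal sketch follows; model, layer, and evaluate are placeholders for the PRYSTINE classifier, the layer before the fully connected head, and a validation-accuracy function:

```python
import torch

@torch.no_grad()
def kernel_impacts(model, layer: torch.nn.Conv2d, evaluate) -> list[float]:
    """Accuracy drop caused by silencing each output channel of `layer`, one at a time."""
    base_acc = evaluate(model)
    impacts = []
    for k in range(layer.out_channels):
        saved = layer.weight[k].clone()
        layer.weight[k].zero_()                  # perturb: silence one kernel
        impacts.append(base_acc - evaluate(model))
        layer.weight[k].copy_(saved)             # restore the original weights
    return impacts                               # near-zero impact -> "vague" kernel, prune candidate
```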
