Search Results (216)

Search Parameters:
Keywords = ISO/IEC 29192-2

30 pages, 712 KB  
Review
AI Risk Governance for Advancing Digital Sovereignty in Data-Driven Systems: An Integrated Multi-Layer Framework
by Segun Odion and Santosh Reddy Addula
Future Internet 2026, 18(4), 209; https://doi.org/10.3390/fi18040209 - 15 Apr 2026
Viewed by 257
Abstract
The integration of algorithmic systems into critical digital infrastructure is no longer peripheral to governance; it is governance. As AI-mediated decisions influence credit access, clinical diagnoses, criminal risk scores, and infrastructure routing, the question of who controls these algorithms and whether that control is meaningful has become a central concern for states and institutions at every level of development. Existing frameworks, including the NIST AI Risk Management Framework, ISO/IEC 42001, and the EU AI Act, have made real progress toward structured AI governance. However, none treats digital sovereignty as a first-order goal, nor do they provide integrated cross-layer guidance applicable across the diverse institutional landscape found worldwide. From a synthesis of these frameworks and the wider literature, we develop the Integrated AI Risk Governance Framework (IARGF): a four-layer structure covering policy and regulations, institutional oversight, technical controls, and operational execution, organized around five risk categories—technical, ethical, security, systemic, and sovereignty-related. A comparative analysis with major existing frameworks highlights the IARGF’s unique contributions, especially its explicit focus on sovereignty, adaptability across different institutional capacities, and recursive feedback mechanisms that connect all four governance layers. The framework is analyzed across three domains—healthcare AI, financial services, and critical infrastructure—to demonstrate its practical utility. Results confirm that governance effectiveness is a system property, not just a feature of individual layers; that digital sovereignty is both a governance goal and a distinct risk dimension with specific technical and institutional needs; and that context-aware, capacity-scaled governance is a design requirement, not a political compromise. The IARGF is presented as a conceptual governance model based on a systematic literature review rather than an empirically validated tool, and it remains to be tested in actual organizational settings. Its main contribution is the comprehensive theoretical integration of sovereignty, institutional capacity, and inter-layer governance dynamics, rather than proven performance advantages over existing models. Future research should aim to validate this framework through longitudinal case studies, expert panels, and retrospective failure analyses. Full article
(This article belongs to the Special Issue Security and Privacy in AI-Powered Systems)

18 pages, 1088 KB  
Article
Validation of a Duplex Digital PCR Assay for the Quantification of the NK603 Maize Event Across Three dPCR Platforms
by Daniela Verginelli, Katia Spinella, Sara Ciuffa, Raffaele Carrano, Davide La Rocca, Elisa Pierboni, Monica Borghi, Silvana Farneti and Ugo Marchesi
Foods 2026, 15(8), 1366; https://doi.org/10.3390/foods15081366 - 14 Apr 2026
Viewed by 271
Abstract
In the European Union, mandatory labeling of food and feed products is required when authorized genetically modified organisms (GMOs) exceed 0.9% per ingredient, necessitating reliable analytical methods for official control laboratories. Event-specific PCR assays validated according to ISO/IEC 17025 are the reference approach for GMO detection, identification, and quantification. The growing use of digital PCR (dPCR) has encouraged the adaptation of real-time PCR methods to dPCR-based strategies, as dPCR enables absolute quantification without calibration standards, shows reduced sensitivity to inhibitors, and allows for the design of a multiplex assay. In this study, an in-house validation of a duplex dPCR assay targeting the maize GM event NK603 and the HMG reference gene was performed on three platforms: Bio-Rad QX200™ (Pleasanton, CA, USA), Qiagen QIAcuity (Venlo, The Netherlands), and Thermo Fisher QuantStudio Absolute Q (Waltham, MA, USA). All validation parameters met the Joint Research Centre (JRC) acceptance criteria. In particular, this assay demonstrated high specificity, sensitivity (limit of quantification or LOQ < 35 copies per reaction), precision, and trueness (RSDr and bias <25%). The data indicate that the duplex dPCR assay can be used for routine GMO analysis and future collaborative validation studies. Full article
(This article belongs to the Section Food Analytical Methods)
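
The abstract notes that dPCR yields absolute quantification without calibration standards; the standard route is Poisson correction of the positive-partition count, and the GM content follows as the copy ratio of the event target to the reference gene. A minimal Python sketch of that calculation, with partition count, partition volume, and the duplex readout all invented for illustration (not the validation data):

```python
import math

def dpcr_copies(positive, total, partition_volume_ul):
    """Poisson-corrected absolute quantification for digital PCR.

    With k of n partitions positive, the mean copies per partition is
    lambda = -ln(1 - k/n); scaling gives total copies and concentration.
    """
    lam = -math.log(1.0 - positive / total)   # copies per partition
    copies = lam * total                      # copies in the whole reaction
    conc = lam / partition_volume_ul          # copies per uL of partitioned volume
    return copies, conc

# Hypothetical duplex readout: NK603 event vs. HMG maize reference gene,
# 20,000 partitions of ~0.85 nL (QX200-like geometry).
nk603, _ = dpcr_copies(positive=410, total=20000, partition_volume_ul=0.00085)
hmg, _ = dpcr_copies(positive=8200, total=20000, partition_volume_ul=0.00085)
print(f"GM content (copy ratio): {100 * nk603 / hmg:.2f}%")  # vs. the 0.9% EU threshold
```
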
29 pages, 4028 KB  
Article
Selecting a Cybersecurity Risk Analysis Methodology for MSMEs Using a Multi-Criteria Method (AHP)
by Gabriel Enrique Taborda Blandon, Juan Fernando Hurtado Rivera, Javier Mauricio Durán Vásquez, Maria José Monsalve Ruiz, Marco Tulio Silva Castillo and Hector Fernando Vargas Montoya
Technologies 2026, 14(4), 227; https://doi.org/10.3390/technologies14040227 - 14 Apr 2026
Viewed by 255
Abstract
In the current context of digital transformation, Micro-, Small-, and Medium-Sized Enterprises (MSMEs) are increasingly exposed to cybersecurity risks. This exposure is intensified by the limited adoption of international standards for identifying impacts, low budgets, and shortages of trained personnel, which collectively result in the absence of structured control plans for mitigating cyber risks. (1) This study proposes a mechanism for selecting a cybersecurity risk analysis and management methodology suited to Colombian MSMEs by applying the multi-criteria Analytic Hierarchy Process (AHP) method. (2) The approach is qualitative and follows the AHP procedure to select the methodology best suited to cybersecurity risk analysis. The selection process evaluated five standards against multiple criteria: ISO/IEC 27005:2022, NIST SP 800-30, OCTAVE-S, MAGERIT, and EBIOS-RM. (3) The AHP method supported, in a practical manner, the selection of OCTAVE-S as the primary methodology, complemented with elements from the other standards. Finally, the proposed methodology was implemented in a cloud-based web application called the Risk Analysis Module, integrated into the Keru IT security platform. It is concluded that the multi-criteria AHP method is effective and allows organizations to select the standards most appropriate to their needs, with potential applicability to other types of decisions. Full article
(This article belongs to the Special Issue Research on Security and Privacy of Data and Networks)
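
The AHP mechanics behind this selection are a reciprocal pairwise comparison matrix, a priority vector, and a consistency check. A minimal Python sketch under invented judgments (the matrix below is illustrative, not the paper's comparison data):

```python
import numpy as np

# Illustrative 5x5 pairwise comparison of the candidate standards
# (ISO/IEC 27005, NIST SP 800-30, OCTAVE-S, MAGERIT, EBIOS-RM) on a
# single criterion such as suitability for resource-constrained MSMEs.
A = np.array([
    [1,   1/2, 1/3, 1,   2  ],
    [2,   1,   1/2, 2,   3  ],
    [3,   2,   1,   3,   4  ],
    [1,   1/2, 1/3, 1,   2  ],
    [1/2, 1/3, 1/4, 1/2, 1  ],
])

# Priority vector via the geometric-mean method, then normalize.
w = np.prod(A, axis=1) ** (1 / A.shape[0])
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n)/(n - 1), RI = 1.12 for n = 5.
n = A.shape[0]
lam_max = float(np.mean((A @ w) / w))
cr = ((lam_max - n) / (n - 1)) / 1.12
print("priorities:", np.round(w, 3), "| CR:", round(cr, 3))  # CR < 0.10 is acceptable
```

A full AHP study repeats this per criterion and aggregates with criterion weights; here the third row (OCTAVE-S) dominates by construction.
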

37 pages, 1352 KB  
Review
Stability and Degradation of Perovskite Solar Cells in Space Environments: Mechanisms and Protocols
by Aigerim Akylbayeva, Yerzhan Nussupov, Zhansaya Omarova, Yevgeniy Korshikov, Abdurakhman Aldiyarov and Darkhan Yerezhep
Int. J. Mol. Sci. 2026, 27(8), 3459; https://doi.org/10.3390/ijms27083459 - 12 Apr 2026
Viewed by 282
Abstract
Perovskite solar cells (PSCs) have rapidly achieved a certified record energy conversion efficiency of 27.3% for single-junction cells, while offering the low mass, thin-film form factor, and high specific power that make them attractive for space energy systems. However, their long-term reliability in extraterrestrial environments is not adequately ensured by terrestrial qualification routes, and standardized space-related test protocols remain insufficiently developed. This review critically summarizes the current understanding of PSC degradation under the key environmental factors in space: ionizing and non-ionizing radiation, thermal vacuum exposure and thermal cycling, AM0 ultraviolet radiation, and atomic oxygen in low orbits. The central task of the work is to justify and develop specialized PSC test protocols for space applications, since existing ground standards do not reflect the multifactorial nature and extreme loads of orbital operation. It is shown that thermal vacuum accelerates ion migration, interphase reactions, and degassing, while AM0 UV and atomic oxygen introduce additional photochemical and oxidative degradation mechanisms; at the same time, stressors often act synergistically and are not detected by single-factor tests. The limitations of current IEC and ISOS protocols are then discussed, and an approach to their extension is formulated through the ISOS-T-Space and ISOS-LC-Space protocols, which integrate high vacuum, AM0 illumination, extended temperature ranges, and controlled particle irradiation. It is concluded that the development and interlaboratory validation of such space-oriented protocols is a key condition for the correct qualification of PSCs and for the targeted optimization of materials and interfaces to meet the requirements of space energy systems. Full article
23 pages, 3318 KB  
Article
Effectiveness Assessment of a Multi-Functional Neonatal Incubator in the NICU
by Hyeonkyeong Choi and Wonseuk Jang
Healthcare 2026, 14(7), 949; https://doi.org/10.3390/healthcare14070949 - 4 Apr 2026
Viewed by 355
Abstract
Background/Objectives: Preterm and critically ill neonates in neonatal intensive care units (NICUs) require multiple medical devices, including incubators, radiant warmers, phototherapy systems, and patient monitors. The coexistence of standalone devices without interoperability increases cognitive and operational burdens for healthcare providers and leads to spatial inefficiency. This study aimed to develop and evaluate a multi-functional neonatal incubator integrating these core functions into a single platform, using user-centered design (UCD) and usability engineering principles. Methods: By synthesizing and analyzing international standards (ISO 13485, IEC 62366-1, IEC 62366-2, and ISO 9241-210), a four-phase design process was established. Following the development of the monitoring system, the design was iteratively refined and validated through repeated formative usability evaluations. A summative usability evaluation was then conducted with 20 NICU clinicians in a simulated NICU environment, using 13 scenarios comprising 39 tasks. Outcome measures included task success rate, the After-Scenario Questionnaire (ASQ), the NASA Task Load Index (NASA-TLX), and the System Usability Scale (SUS). Results: The overall task success rate was 95.64%. When analyzed by function, success rates were 94.63% for incubator-related tasks, 98.33% for patient monitoring, 96.67% for radiant warmer tasks, and 98.33% for phototherapy tasks. The mean SUS score was 78.63, exceeding the benchmark score of 68 that indicates good usability. In addition, no statistically significant differences were observed in workload (NASA-TLX) or usability (SUS) scores according to clinical role or length of clinical experience. Conclusions: The multi-functional neonatal incubator developed in this study demonstrated high usability despite the integration of multiple medical device functions. The findings suggest that this integrated system has the potential to enhance clinical workflow efficiency, optimize spatial utilization, and improve patient safety in NICU settings. Full article
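
The SUS result cited here (78.63 against the benchmark of 68) follows the standard scoring rule for the 10-item questionnaire. A small Python sketch of that rule, applied to one hypothetical respondent rather than study data:

```python
def sus_score(responses):
    """System Usability Scale over 10 Likert items rated 1-5.

    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score);
    the sum is scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# One hypothetical respondent; scores above 68 are conventionally "good".
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```
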

17 pages, 1873 KB  
Article
Assessment of Air Permeability and Watertightness of Commercial Windows and Doors from the Perspective of Building Envelope Performance
by Milda Jucienė, Jurga Kumžienė, Vaida Dobilaitė and Karolis Banionis
Buildings 2026, 16(7), 1421; https://doi.org/10.3390/buildings16071421 - 3 Apr 2026
Viewed by 288
Abstract
This research investigates the air permeability and watertightness performance of commercially available windows and doors based on laboratory tests conducted in accordance with the EN 1026 and EN 1027 standards. All tests were carried out under controlled environmental conditions, and the results were validated following relevant ISO procedures to ensure reliability and consistency. Such tests are essential for evaluating the air permeability and watertightness of commercial windows and doors and thus for ensuring the overall performance, energy efficiency, and durability of the building envelope. The reported results cover 244 samples (93 doors and 151 windows) tested between 2018 and 2025 in an accredited laboratory complying with EN ISO/IEC 17025. The results show that most doors achieved the highest air permeability class (Class 4) according to EN 12207, with shares ranging from 50% to 80% and exceeding 65% in most years. Window performance was similarly strong, with more than 74% of samples classified as Class 4, indicating consistently high airtightness and compliance with stringent energy efficiency requirements. Watertightness tests revealed that 59% of products were resistant to water penetration, while 41% were permeable. Among watertight products, windows predominated (67%), while doors accounted for a larger share of water-permeable cases. The results support informed decision making in manufacturing, construction practices, and early-stage building design, contributing to improved building durability and energy efficiency. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

45 pages, 3695 KB  
Article
Towards a Reference Architecture for Machine Learning Operations
by Miguel Ángel Mateo-Casalí, Andrés Boza and Francisco Fraile
Computers 2026, 15(4), 218; https://doi.org/10.3390/computers15040218 - 1 Apr 2026
Viewed by 540
Abstract
Industrial organisations increasingly rely on machine learning (ML) to improve quality, maintenance, and planning in Industry 4.0/5.0 ecosystems. However, turning experimental models into reliable services on the production floor remains complex due to the heterogeneity of operational technologies (OTs) and information technologies (ITs), including implementation constraints, latency in edge-fog-cloud scenarios, governance requirements, and continuous performance degradation caused by data drift. Although Machine Learning Operations (MLOps) provides lifecycle practices for deployment, monitoring, and retraining, the evidence is fragmented across tool-centric descriptions, case-specific pipelines, and conceptual architectures, offering limited guidance on which industrial constraints should inform architectural decisions and how to evaluate solutions. This work addresses that gap through a PRISMA-guided systematic review of 49 studies on industrial MLOps (with the search and screening primarily targeting Industry 4.0/IIoT operationalisation contexts, as reflected in the search strategy and corpus) and an evidence-based synthesis of principles, challenges, lifecycle practices, and enabling technologies. From this synthesis, industrial requirements are derived that encompass OT/IT integration, edge-fog-cloud orchestration, security and traceability, and observability-based lifecycle control. On this basis, a reference architecture is proposed that maps these requirements to functional layers, data and control flows, and verifiable responsibilities. To support reproducibility and practical inspectability, the article also presents an open-source architectural instantiation aligned with the proposed decomposition. Finally, the evaluation is illustrated through a predictive maintenance use case (tool breakage) in a single CNC machining cell, where the objective is to demonstrate end-to-end feasibility under realistic operational constraints rather than cross-scenario superiority or broad industrial generalisability. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)

25 pages, 1873 KB  
Article
An Empirical Assessment of Digital Forensic Process Reliability Using Integrated ISO/IEC 27037 and 27041 Standards
by Zlatan Morić, Vedran Dakić and Ivana Ogrizek Biškupić
J. Cybersecur. Priv. 2026, 6(2), 57; https://doi.org/10.3390/jcp6020057 - 30 Mar 2026
Viewed by 581
Abstract
The escalating scale and complexity of cybercrime necessitate standardized digital forensic protocols to ensure the integrity and admissibility of digital evidence. This study empirically assesses the use of ISO/IEC 27037 and ISO/IEC 27041 through three real-world digital forensic case studies conducted in organizational settings. A multi-case methodology was employed, encompassing a multinational corporate criminal investigation, an internal employee misbehaviour probe, and an examination of mobile- and cloud-based data leaks. The effect of synchronized standard implementation was evaluated using audit-based and quantitative indicators that measure forensic process quality as a system attribute. The findings demonstrate that the systematic implementation of ISO/IEC 27037 and ISO/IEC 27041 improves investigative traceability, documentation quality, and evidentiary robustness. In the multinational corporate case study, documentation completeness increased by 18%, and all digital evidence was deemed admissible in judicial proceedings, surpassing the institutional baseline admissibility rate of 82%. In the other cases, evidence gathered within the same framework was acknowledged in organizational or disciplinary review processes, with similar enhancements in documentation quality and procedural consistency, notwithstanding technological and organizational limitations. The paper develops and empirically substantiates an integrated procedural validation model that connects evidence-handling practices with method and instrument validation. The results indicate that the synchronized implementation of ISO/IEC forensic standards improves the transparency, dependability, and auditability of digital forensic investigations. Full article
(This article belongs to the Section Security Engineering & Applications)

23 pages, 7096 KB  
Article
Research and Application of Functional Model Construction Method for Production Equipment Operation Management and Control Oriented to Diversified and Personalized Scenarios
by Jun Li, Keqin Dou, Jinsong Liu, Qing Li and Yong Zhou
Machines 2026, 14(4), 368; https://doi.org/10.3390/machines14040368 - 27 Mar 2026
Viewed by 324
Abstract
Production equipment operation management and control (PEOMC) is a complex systems engineering problem involving multiple stakeholders, multi-objective collaboration, and multiple spatiotemporal scales; its components, logical structure, and functional mechanisms can be generalized through functional modelling to support dynamic analysis and intelligent decision-making in the industrial internet environment. To address the diversity of PEOMC scenarios and objectives, a hierarchical construction method for the PEOMC functional model based on IDEF0 is proposed. By analysing relevant international standards, such as ISO 55010, ISO/IEC 62264, and OSA-CBM, the generic functional modules for the first and second layers of the functional model are identified and defined. On the basis of semi-supervised machine learning, topic clustering is used to extract the components, functional mechanisms, and logical relationships of PEOMC from approximately 200 standard texts and to construct a reference resource pool for the third-layer functional modules. On this basis, an interface matching and recursive traversal algorithm for functional modules is designed, and a composition and orchestration strategy of functional modules for specific scenarios is provided to support the flexible construction of diversified and personalized PEOMC scenarios. The proposed construction and application method was validated through an engineering case study in an aero-engine transmission unit manufacturing workshop: the average process capability index of the enterprise’s production equipment steadily increased from 1.28 to approximately 1.60, the mean time to repair (MTTR) of production equipment failures decreased significantly from 8 h to 3 h, and the average overall equipment effectiveness (OEE) increased from 56.43% to a stable 68.57%, demonstrating the method’s effectiveness and practicality. Full article
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
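
The OEE figures quoted above (56.43% rising to 68.57%) follow the conventional decomposition of OEE as the product of availability, performance, and quality. A brief Python sketch of that decomposition with hypothetical shift figures, since the abstract does not publish the underlying inputs:

```python
def oee(planned_h, downtime_h, ideal_cycle_s, total_count, good_count):
    """Conventional OEE: availability x performance x quality."""
    run_h = planned_h - downtime_h
    availability = run_h / planned_h                             # uptime share
    performance = ideal_cycle_s * total_count / (run_h * 3600)   # speed share
    quality = good_count / total_count                           # first-pass yield
    return availability * performance * quality

# Hypothetical 8 h shift on one machining cell (not the paper's data):
print(f"OEE = {oee(8, 1.0, 45, 480, 462):.2%}")  # -> ~72%
```
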

22 pages, 1384 KB  
Article
Deriving Empirically Grounded NFR Specifications from Practitioner Discourse: A Validated Methodology Applied to Trustworthy APIs in the AI Era
by Apitchaka Singjai
Information 2026, 17(3), 304; https://doi.org/10.3390/info17030304 - 22 Mar 2026
Cited by 1 | Viewed by 309
Abstract
Specifying non-functional requirements (NFRs) for rapidly evolving domains such as trustworthy APIs in the AI era is challenging, as best practices emerge through practitioner discourse faster than traditional requirements engineering can capture them. We present a systematic methodology for deriving prioritized NFR specifications from multimedia practitioner discourse, combining AI-assisted transcript analysis, grounded theory principles, and Theme Coverage Score (TCS) validation. Our five-task approach integrates purposive sampling, automated transcription with speaker diarization, grounded theory coding extracting stakeholder-specific themes with TCS quantification, MoSCoW prioritization using empirically derived thresholds (Must Have ≥85%, Should Have 65–84%, Could Have 45–64%, and Won’t Have <45%), and NFR specification consistent with ISO/IEC 25010:2023 principles of stakeholder perspective, measurable quality criteria, and explicit rationale. Applying this methodology to 22 expert presentations on trustworthy APIs yields a Weighted Coverage Score of 0.71 and 30 prioritized NFR specifications across five trustworthiness dimensions. MoSCoW classification produces 11 Must Have requirements (Robustness and Transparency), 9 Should Have, 6 Could Have, and 4 Won’t Have. The analysis reveals systematic disparities: Fairness contributes zero Must Have or Should Have requirements due to insufficient practitioner consensus. Each NFR emphasizes stakeholder perspective, measurable quality criteria, and explicit rationale, enabling systematic verification. The validated methodology, with a complete replication package, enables empirically grounded, prioritized NFR derivation from practitioner discourse in any rapidly evolving domain. Full article
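
The MoSCoW thresholds quoted in the abstract map directly onto a simple classifier. A Python sketch using those published bands, with hypothetical theme-coverage values standing in for the study's measurements:

```python
def moscow_class(tcs_percent):
    """Map a Theme Coverage Score (%) onto the paper's MoSCoW bands."""
    if tcs_percent >= 85:
        return "Must Have"
    if tcs_percent >= 65:
        return "Should Have"
    if tcs_percent >= 45:
        return "Could Have"
    return "Won't Have"

# Hypothetical coverage values; note how low consensus (e.g. fairness)
# never reaches the Must/Should bands, mirroring the reported disparity.
for theme, tcs in [("robustness", 91), ("transparency", 87),
                   ("privacy", 72), ("fairness", 38)]:
    print(f"{theme}: {tcs}% -> {moscow_class(tcs)}")
```
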

18 pages, 785 KB  
Article
Bayesian Networks for Cybersecurity Decision Support: Enhancing Human-Machine Interaction in Technical Systems
by Karla Maradova, Petr Blecha, Vendula Samelova, Tomáš Marada and Daniel Zuth
Appl. Sci. 2026, 16(6), 3053; https://doi.org/10.3390/app16063053 - 21 Mar 2026
Viewed by 291
Abstract
The increasing digitization of manufacturing and the integration of CNC and industrial control systems into the Industry 4.0 environment have introduced new cybersecurity risks that directly affect operational reliability. Traditional deterministic risk-assessment methods used for securing ICS—such as SCADA, PLC, and CNC systems—struggle to address uncertainty, dynamic operating conditions, and complex dependencies between technical and organizational factors. To overcome these limitations, this study develops a Bayesian Network (BN) model that captures probabilistic relationships between machine-level configuration parameters, network conditions, and potential security incidents. The model is applied to a CNC machining center (ZPS MCG1000i), where it supports scenario-based prediction of cybersecurity risks and provides interpretable outputs suitable for operator decision-making and human–machine interaction. The results demonstrate that BNs are effective in environments with limited data availability and high uncertainty, offering transparent and quantifiable insights into how specific misconfigurations—such as active remote access or irregular firmware updates—elevate overall system exposure. The proposed approach aligns with current regulatory and standardization requirements, including the NIS2 Directive (EU 2022/2555), ISO/IEC 27001:2022, ISO/IEC 27005:2022, and Regulation (EU) 2024/2847 (Cyber Resilience Act), which define cybersecurity obligations for products with digital elements. The study provides a reproducible and future-oriented methodology for integrating cybersecurity into machinery-safety evaluation in modern industrial environments. Full article
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)
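
To give a flavor of the inference such a model performs, here is a toy two-parent Bayesian network evaluated by exact enumeration in Python. The structure echoes the misconfigurations named in the abstract, but every probability is an illustrative assumption, not a value from the study:

```python
from itertools import product

p_remote = {True: 0.30, False: 0.70}     # P(remote access enabled)
p_stale = {True: 0.40, False: 0.60}      # P(firmware updates irregular)
p_incident = {                           # P(incident | remote, stale)
    (True, True): 0.35, (True, False): 0.15,
    (False, True): 0.08, (False, False): 0.02,
}

def incident_probability(remote=None, stale=None):
    """Exact inference by enumerating the unobserved parent variables."""
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        if remote is not None and r != remote:
            continue
        if stale is not None and s != stale:
            continue
        joint = p_remote[r] * p_stale[s]
        num += joint * p_incident[(r, s)]
        den += joint
    return num / den

print(f"P(incident)             = {incident_probability():.3f}")
print(f"P(incident | remote on) = {incident_probability(remote=True):.3f}")
```

In this toy setup, observing active remote access raises the predicted exposure from about 0.10 to 0.23, which is the kind of interpretable, scenario-based output the paper targets.
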

25 pages, 769 KB  
Article
Standard-Oriented Architecture for AI-Powered Information Security Risk Management
by Oleksii Chalyi, Kęstutis Driaunys, Šarūnas Grigaliūnas and Rasa Brūzgienė
Electronics 2026, 15(6), 1282; https://doi.org/10.3390/electronics15061282 - 19 Mar 2026
Viewed by 428
Abstract
This paper presents a standard-oriented architecture for automating information security risk management (ISRM) using artificial intelligence. The study first evaluates eight international frameworks (including COBIT 2019, NIST SP 800-53, and ISO 31000) for automation suitability, identifying ISO/IEC 27005 as the optimal structural foundation. Based on these findings, an architecture integrating Natural Language Processing and machine learning to automate risk identification, assessment, and treatment is proposed. A core component is a decision-making module that combines expert reasoning with a Multi-LLM consensus mechanism to ensure reliability. To provide exploratory support for the proposed architecture, a comparative study using five state-of-the-art Large Language Models (ChatGPT, Gemini Advanced, Grok, Microsoft Copilot, and DeepSeek Chat) was conducted on a standardized risk identification task. The results highlight strong cross-model consensus patterns, providing exploratory evidence that LLMs may support expert-informed risk identification and reasoning tasks, while acknowledging current limitations in complex reasoning. The proposed approach offers a transparent architectural foundation for AI-driven ISRM whose scalability must be established through future prototype-based evaluation, thereby bridging the gap between rigid compliance standards and generative AI capabilities. Full article
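
The abstract does not specify how the Multi-LLM consensus mechanism aggregates model outputs; one plausible minimal reading is quorum voting over normalized risk labels. A Python sketch under that assumption, with invented model outputs:

```python
from collections import Counter

def consensus_risks(model_outputs, quorum=3):
    """Keep a candidate risk only if at least `quorum` models proposed it.

    A simple stand-in for the paper's Multi-LLM consensus mechanism;
    illustrative only, not the authors' implementation.
    """
    votes = Counter(risk for risks in model_outputs.values()
                    for risk in set(risks))
    return sorted(risk for risk, v in votes.items() if v >= quorum)

# Hypothetical normalized risk labels from the five models studied:
outputs = {
    "chatgpt":  ["phishing", "ransomware", "insider-threat"],
    "gemini":   ["phishing", "ransomware", "supply-chain"],
    "grok":     ["phishing", "misconfiguration"],
    "copilot":  ["phishing", "ransomware"],
    "deepseek": ["ransomware", "supply-chain", "phishing"],
}
print(consensus_risks(outputs))  # -> ['phishing', 'ransomware']
```
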

33 pages, 1175 KB  
Article
Security Compliance as a Catalyst for Sustainable Partnerships: A Design Science Approach for SMEs
by Francisco Conceição, Manuel Rocha and Fernando Almeida
J. Cybersecur. Priv. 2026, 6(2), 53; https://doi.org/10.3390/jcp6020053 - 13 Mar 2026
Viewed by 629
Abstract
Small-and-medium-sized enterprises (SMEs) increasingly depend on business partnerships to access markets and scale operations, yet they often face trust barriers during contract formation because partners find it difficult to verify their cybersecurity posture and compliance status. This problem is intensified by rising regulatory expectations, notably the EU Cyber Resilience Act (CRA), which many SMEs struggle to interpret and operationalize under constraints of budget, skills, and fragmented responsibilities. This study adopts a Design Science Research approach to design and evaluate a lightweight mapping framework that links commonly implemented security controls to CRA requirements and to widely recognized benchmarks (ISO/IEC 27001 and CIS). Grounded in Institutional Theory and Socio-Technical Systems Theory, the artefact translates regulatory obligations into actionable, evidence-backed controls and produces partner-facing outputs that support transparency in negotiations and service level agreements. The framework is iteratively co-created with a multidisciplinary expert community. Expected contributions include a practical mechanism for making cybersecurity maturity visible, accelerating partnership formation, and enabling sustainable interorganizational relationships while remaining feasible for resource-constrained SMEs. Full article
(This article belongs to the Section Security Engineering & Applications)

40 pages, 5583 KB  
Article
Traceable Time-Domain Photovoltaic Module Modeling with Plane-of-Array Irradiance and Solar Geometry Coupling: White-Box Simulink Implementation and Experimental Validation
by Ciprian Popa, Florențiu Deliu, Adrian Popa, Narcis Octavian Volintiru, Andrei Darius Deliu, Iancu Ciocioi and Petrică Popov
Energies 2026, 19(6), 1437; https://doi.org/10.3390/en19061437 - 12 Mar 2026
Viewed by 315
Abstract
Accurate time-domain photovoltaic (PV) models are needed to evaluate performance under outdoor variability beyond STC datasheet conditions. This paper presents a traceable modeling workflow based on the standard single-diode formulation, implemented in MATLAB/Simulink (R2023a) as a modular white-box architecture that explicitly resolves photocurrent generation and loss mechanisms (diode recombination, shunt leakage, and series resistance effects) with temperature-consistent propagation through VT(T) and saturation-current terms. The method couples optical boundary conditions to the electrical model by embedding plane-of-array (POA) excitation via the incidence angle θ(t) and roof albedo directly into the photocurrent source term, preserving the causal chain from mounting geometry to electrical response. Calibration is separated from prediction by initializing key parameters using the standard Simulink PV block and then freezing them for time-domain evaluation. The workflow is validated on a 395 W rooftop prototype using 1 min resolved POA irradiance (ISO 9060:2018 Class A radiometric chain) and module temperature (IEC 60751 Class A Pt100), synchronized with electrical measurements. Over a multi-week campaign, the model exhibits high fidelity, with a worst-case relative current error of ~1.1% and consistently low bias and dispersion, quantified by ME, MAE, RMSE, σe, and thresholded MAPE. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
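
The single-diode formulation is implicit in the module current, so time-domain evaluation needs a numerical solve at every operating point. A compact Python sketch using bisection, with placeholder parameters for a ~395 W module (not the paper's calibrated Simulink values):

```python
import math

K_B, Q_E = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def single_diode_current(v, g_poa, t_cell_c,
                         i_ph_stc=13.5, i_0=2e-10, n=1.1,
                         r_s=0.25, r_sh=300.0, n_cells=72, alpha_i=0.0005):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh.

    Photocurrent scales with plane-of-array irradiance and temperature;
    all parameter values here are placeholders, not calibrated ones.
    """
    v_t = K_B * (t_cell_c + 273.15) / Q_E                  # thermal voltage
    i_ph = (g_poa / 1000.0) * i_ph_stc * (1 + alpha_i * (t_cell_c - 25.0))

    def residual(i):
        v_d = v + i * r_s
        return i_ph - i_0 * math.expm1(v_d / (n * n_cells * v_t)) - v_d / r_sh - i

    lo, hi = 0.0, i_ph + 1.0           # residual is monotone decreasing in i
    for _ in range(80):                # bisection to high precision
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Current near a typical operating voltage at 800 W/m^2 POA and 45 C:
print(f"I = {single_diode_current(v=35.0, g_poa=800.0, t_cell_c=45.0):.2f} A")
```
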

24 pages, 1959 KB  
Article
LLM-Augmented Algorithmic Management: A Governance-Oriented Architecture for Explainable Organizational Decision Systems
by Nikolay Hinov and Maria Ivanova
AI 2026, 7(3), 102; https://doi.org/10.3390/ai7030102 - 10 Mar 2026
Viewed by 1047
Abstract
Algorithmic management systems increasingly coordinate work, allocate resources, and support decisions in corporate, public sector, and research environments. Yet many such systems remain opaque: they optimize and score effectively but struggle to communicate rationales that are contextual, auditable, and defensible under emerging governance expectations. Large language models (LLMs) can help bridge this gap by translating quantitative signals into human-readable explanations and enabling interactive clarification. However, LLM integration also introduces new risks—hallucinated rationales, bias amplification, prompt-based security failures, and automation dependence—that must be governed rather than merely engineered. This article proposes a governance-oriented architecture for LLM-augmented algorithmic management. The model combines three elements: an algorithmic decision core; an LLM-based cognitive interface for explanation and dialogue; and a verification and governance layer that enforces policy constraints, provenance, audit trails, and human-in-command oversight. The framework is developed through targeted conceptual synthesis and normative alignment with key governance instruments (e.g., the EU AI Act, GDPR, and ISO/IEC 42001). It is illustrated through cross-domain scenarios and complemented by a demonstrative synthetic-trace simulation that highlights transparency–latency trade-offs under verification controls. Using the demonstrative simulation (n = 120 decision events), the framework illustrates a mean baseline latency of 100.3 ms and a mean LLM-augmented latency of 115.8 ms (≈15.5% increase), a mean explanation validity proxy of 85.6%, and a simulated constraint-satisfaction rate of 94.2% (113/120 events), with failed cases routed to review. These values are presented as design-level indicators of operational plausibility and governance trade-offs, not empirical performance benchmarks or state-of-the-art comparisons. The paper contributes a conceptual and governance-oriented architectural blueprint for integrating generative AI into organizational decision systems without sacrificing accountability, compliance, or operational reliability. Full article
