Search Results (1,625)

Search Parameters:
Keywords = automated connection

14 pages, 22466 KB  
Article
Automatic SEA Substructuring on Shell Meshes Using Physical Discontinuity Detection
by Yifan Xue, Li Tang, Hao Zan and Chen Qiang
Appl. Sci. 2026, 16(6), 2941; https://doi.org/10.3390/app16062941 (registering DOI) - 18 Mar 2026
Abstract
Statistical Energy Analysis (SEA) requires a physically meaningful subsystem definition, whereas manual partitioning of complex shell structures is often time-consuming and strongly dependent on engineering experience. To address this issue, this study proposes an automatic initial subsystem partitioning framework for shell FE models based on explicit prior attributes available in the model definition. The method unifies four classes of physical discontinuities—geometric discontinuity, thickness discontinuity, material/property discontinuity, and topological discontinuity—within a single adjacency evaluation procedure. The shell FE mesh is represented through element adjacencies, and adjacencies crossing any identified physical discontinuity are removed so that the remaining connected components define the partitioned subsystems. In this way, the framework generates partitioning results with explicit boundaries and traceable origins without relying on posterior response-field analysis or manually prescribed subsystem boundaries. Because the procedure operates directly on existing large-scale shell FE models and does not require additional response-feature construction or complex pre-partitioning, it provides a lightweight, repeatable, and practically executable automation path for SEA-related front-end modeling. The resulting partitions are intended as physically explicit initial partitioning results that provide a reliable boundary basis for higher-level statistical modeling objectives. When a coarser subsystem representation is required for subsequent modeling, further aggregation may be introduced as an optional enhancement according to the modeling objective, rather than as a prerequisite for the validity of the present method. Full article
(This article belongs to the Section Acoustics and Vibrations)
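The adjacency-removal procedure described in this abstract can be sketched in a few lines: build a union-find over shell elements, keep only adjacencies that cross no physical discontinuity, and read off the connected components as subsystems. This is a minimal illustration on a toy mesh with a single per-element thickness attribute, not the authors' implementation; the data layout and tolerance are assumptions.

```python
# Sketch of discontinuity-based partitioning (hypothetical data layout):
# adjacencies crossing a thickness jump are removed, and the remaining
# connected components become the initial SEA subsystems.

def partition(elements, adjacencies, thickness, tol=1e-6):
    """elements: list of element ids; adjacencies: iterable of (a, b) pairs;
    thickness: dict id -> shell thickness. Returns sorted subsystem lists."""
    parent = {e: e for e in elements}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in adjacencies:
        # keep the adjacency only if no physical discontinuity is crossed
        if abs(thickness[a] - thickness[b]) <= tol:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

    groups = {}
    for e in elements:
        groups.setdefault(find(e), []).append(e)
    return sorted(sorted(g) for g in groups.values())
```

For a four-element strip with a thickness jump between elements 2 and 3, this yields two subsystems, `[1, 2]` and `[3, 4]`; the other discontinuity classes (geometric, material, topological) would simply add further predicates to the adjacency test.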
17 pages, 266 KB  
Article
The Engineered Messiah: Islamic Theology as Source Code in the Post-Cybernetic Universe of Dune
by Nimetullah Aldemir and Emrullah Ataseven
Religions 2026, 17(3), 372; https://doi.org/10.3390/rel17030372 - 17 Mar 2026
Abstract
Frank Herbert’s Dune (1965) establishes a universe defined by the “Butlerian Jihad”, a historical crusade that banned artificial intelligence and created a vacuum filled by religious engineering. This paper argues that in this post-cybernetic setting, religion functions as a sociological operating system designed for political control rather than a metaphysical connection to the divine. The study analyzes the Missionaria Protectiva to demonstrate how the Bene Gesserit order creates belief systems by co-opting and re-engineering Islamic theology. It suggests that the order’s manual of superstitions serves as a library of cultural scripts that primes the indigenous population to accept a manufactured Messiah, specifically the Mahdi. Consequently, the protagonist Paul Atreides is reinterpreted not as a traditional “White Savior” or authentic religious prophet but as a “hacker” who utilizes these pre-planted Islamic codes to access and manipulate the social infrastructure of Arrakis. His prescience functions as a form of biological predictive analytics that traps him in a deterministic loop of his own calculation. Ultimately, this reading suggests that Dune offers a critique of “techno-theology” by showing how the instrumentalization of the Mahdi figure transforms the concept of Jihad from a spiritual struggle into an unstoppable, automated algorithm of violence. Full article
(This article belongs to the Special Issue Religion in 20th- and 21st-Century Fictional Narratives)
16 pages, 1557 KB  
Article
A Graph-Theoretical and Machine Learning Approach for Predicting Physicochemical Properties of Anti-Cancer Drugs
by Haseeb Ahmad and Alaa Altassan
Mathematics 2026, 14(6), 1003; https://doi.org/10.3390/math14061003 - 16 Mar 2026
Abstract
Topological graph theory provides a quantitative approach to understanding the structural complexities of sulfonamide compounds, which are prominent for their therapeutic importance in cancer treatment. A new computational scheme is proposed to predict the physicochemical and biological functions of sulfonamide derivatives, based on connection numbers and connection-based topological indices as alternatives to conventional degree-based indices. A set of structurally diverse sulfonamide compounds is considered as chemical graphs, and the relevant graph descriptors are computed using different connection numbers. Because of the complexity of calculating connectivity and related indices, algorithms were developed in Python 3.12.12 to automate their extraction and computation. QSPR analysis was performed using supervised machine-learning models, such as linear regression, together with various statistical techniques, to gain insight into the relationships between structural descriptors and measured molecular properties such as melting point and molecular weight. The results demonstrate the strong predictive capability of connection-based indices in assessing pharmacological efficacy and molecular behavior. This holistic setting links topological modeling to data-driven prediction and provides a window into the rational design and optimization of sulfonamide-based cancer therapeutics. Full article
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
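The connection number used in this abstract is the count of vertices at distance exactly two from a given vertex, and connection-based indices aggregate these counts over the graph. A minimal sketch under that definition follows; the first Zagreb connection index shown is one commonly used connection-based index, and the specific indices in the paper may differ.

```python
from collections import deque

def connection_numbers(adj):
    """adj: dict vertex -> set of neighbours. Returns dict vertex ->
    number of vertices at distance exactly 2 (the 'connection number')."""
    tau = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == 2:
                continue  # no need to expand beyond distance 2
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        tau[s] = sum(1 for d in dist.values() if d == 2)
    return tau

def first_zagreb_connection_index(adj):
    # ZC1(G) = sum over vertices of tau(v)^2
    return sum(t * t for t in connection_numbers(adj).values())
```

On the path graph with four vertices, every vertex has connection number 1, so the index evaluates to 4; for a molecular graph the vertex set would be the non-hydrogen atoms and the adjacency the bond skeleton.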
23 pages, 831 KB  
Article
Security Aspects of Zones and Conduits in IEC 62443
by Martin Gilje Jaatun, Mary Ann Lundteigen, Christoph Thieme, Lars Halvdan Flå, Karin Bernsmed, Roald Lygre and Fredrik Gratte
J. Cybersecur. Priv. 2026, 6(2), 52; https://doi.org/10.3390/jcp6020052 - 12 Mar 2026
Abstract
The IEC 62443 standard defines that, based on risk assessment, different parts of an Industrial Automation and Control System (IACS) may have different security levels, and that parts with the same security level can be designated as separate zones. Furthermore, communication between different zones, both intra-IACS and inter-IACS, can be done via conduits. In this article, we argue that zones and particularly conduits can benefit from more detailed discussions of their architecture and implementation. Consequently, as novel contributions we (1) describe detailed principles for implementing conduits; (2) outline a process for connecting zones with potentially different Security Levels (SLs), expressed in the form of a flow chart; and (3) discuss challenges related to the application of zones and conduits in practice. Full article
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
19 pages, 4400 KB  
Article
Enhancing Fire Safety Education Through PLC and HMI-Driven Interactive Learning
by Musa Al-Yaman, Miral AlMashayeikh, Majd AlFedailat, Ahmad M. A. Malkawi and Majid Al-Taee
Fire 2026, 9(3), 121; https://doi.org/10.3390/fire9030121 - 12 Mar 2026
Abstract
Fire safety plays a vital role in protecting lives, property, and the environment, and it keeps communities and organizations running safely. Many existing fire pump control systems fall short in educational and small-to-medium industrial settings: they often control only one pump at a time, rely heavily on manual monitoring, and come with high costs that limit accessibility. To address these gaps, we developed an affordable, hands-on educational kit that brings real-world fire safety systems into the classroom using modern automation technology. The system is built around a Delta DVP12SA211R PLC chosen for its built-in real-time clock, integrated RS-232/RS-485 ports for reliable communication, and expanded with DVP16SP11R digital I/O and DVP04AD-S2 analog input modules to interface with simulated sensors mimicking smoke detection and water pressure. Students interact with the system through a Delta DOP-110IS HMI, which features Ethernet connectivity for remote observation, electrical isolation for safe operation, and a 200 ms screen update rate to ensure responsive, realistic feedback. The kit enables learners to explore critical emergency scenarios, including automatic switching between jockey and main pumps, low-pressure alerts, and system failover, transforming theoretical concepts into tangible skills. In user evaluations, 57.1% of students with no prior experience reported that the simulations closely mirrored real-world systems, while 80% of those with a fire safety background found the kit reinforced their existing knowledge; notably, 57.1% of instructors rated it as highly effective for teaching core fire safety principles across diverse learner profiles. By integrating industrial-grade hardware with scenario-based learning, this tool not only deepens understanding of fire protection systems but also better prepares future engineers for the practical demands of fire safety and industrial automation careers. Full article
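The jockey/main pump switching scenario described in this abstract amounts to a simple threshold controller: a small pressure drop starts the jockey pump, a large drop starts the main fire pump. The sketch below is a hypothetical illustration of that logic only; the thresholds and the structure of the authors' PLC ladder program are assumptions, not taken from the paper.

```python
def pump_command(pressure, jockey_on_below=6.0, main_on_below=4.0):
    """Return which pumps should run for a given line pressure (bar).
    Thresholds are illustrative, not from the paper's PLC program."""
    if pressure < main_on_below:
        return {"jockey": False, "main": True}   # large drop: fire demand
    if pressure < jockey_on_below:
        return {"jockey": True, "main": False}   # small leak: jockey tops up
    return {"jockey": False, "main": False}      # system at pressure
```

A real controller would add hysteresis, failover to a standby pump, and alarm outputs, which is exactly what the PLC/HMI kit exposes to students as scenarios.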
19 pages, 1106 KB  
Article
Clinical Prediction of Functional Decline in Multiple Sclerosis Using Volumetry-Based Synthetic Brain Networks
by Alin Ciubotaru, Alexandra Maștaleru, Thomas Gabriel Schreiner, Cristiana Filip, Roxana Covali, Laura Riscanu, Robert-Valentin Bilcu, Laura-Elena Cucu, Sofia Alexandra Socolov-Mihaita, Diana Lăcătușu, Florina Crivoi, Albert Vamanu, Ioana Martu, Lucia Corina Dima-Cozma, Romica Sebastian Cozma and Oana-Roxana Bitere-Popa
Life 2026, 16(3), 459; https://doi.org/10.3390/life16030459 - 11 Mar 2026
Abstract
Background: Disability progression in multiple sclerosis (MS) is increasingly recognized as a consequence of large-scale brain network disruption rather than isolated regional damage. Although diffusion tensor imaging (DTI) is the reference method for assessing structural connectivity, its limited availability restricts widespread clinical application. There is therefore a critical need for alternative approaches capable of capturing network-level alterations using routinely acquired MRI data. Objective: This study aimed to determine whether synthetic structural connectivity matrices derived from standard regional volumetric MRI can capture clinically meaningful network alterations in MS and predict subsequent functional progression, particularly upper limb decline. Methods: Regional brain volumetry was obtained from routine T1-weighted MRI using an automated, clinically approved volumetric pipeline. Synthetic structural connectivity matrices were generated by integrating principles of structural covariance, distance-dependent connectivity, and disease-specific vulnerability patterns. Graph-theoretical network metrics were extracted to characterize global and regional topology. Machine learning models including logistic regression, support vector machines, random forests, and gradient boosting were trained to predict clinical progression defined by worsening on the 9-Hole Peg Test. Dimensionality reduction was performed using principal component analysis, and model performance was evaluated using balanced accuracy, AUC-ROC, and resampling-based validation. Feature importance analyses were conducted to identify network vulnerability patterns. Results: Synthetic connectivity networks exhibited biologically plausible properties, including preserved but attenuated small-world organization. Global efficiency showed a strong inverse correlation with disability severity (EDSS). Patients with clinical progression demonstrated marked reductions in network integration and segregation, alongside increased characteristic path length. Machine learning models achieved robust prediction of upper limb functional decline, with ensemble-based methods performing best (balanced accuracy > 80%, AUC-ROC up to 0.85). A limited subset of connections accounted for a disproportionate share of predictive power, predominantly involving frontoparietal associative networks, thalamocortical pathways, and inter-hemispheric connections. In a longitudinal subset, network-level alterations preceded measurable clinical deterioration by several months. Conclusions: Synthetic structural connectivity derived from routine volumetric MRI captures clinically relevant network-level disruption in multiple sclerosis and enables accurate prediction of functional progression. By bridging network neuroscience with widely accessible imaging data, this framework provides a pragmatic alternative for connectomic analysis when diffusion imaging is unavailable and supports a network-based understanding of disease evolution in MS. Full article
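Two of the graph metrics this abstract relies on, global efficiency and characteristic path length, have simple BFS-based definitions. The sketch below shows those definitions on an unweighted binary network; the paper's synthetic matrices are weighted, so this is a definitional illustration rather than a reproduction of their pipeline.

```python
from collections import deque
from itertools import combinations

def _shortest_paths(adj, s):
    """BFS distances from s in an unweighted graph (dict vertex -> set)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    # mean of 1/d(s, t) over all ordered pairs; unreachable pairs add 0
    n = len(adj)
    total = 0.0
    for s in adj:
        d = _shortest_paths(adj, s)
        total += sum(1.0 / d[t] for t in d if t != s)
    return total / (n * (n - 1))

def characteristic_path_length(adj):
    # mean shortest-path length over connected unordered pairs
    dists = []
    for s, t in combinations(adj, 2):
        d = _shortest_paths(adj, s)
        if t in d:
            dists.append(d[t])
    return sum(dists) / len(dists)
```

Lower global efficiency and a longer characteristic path length both signal reduced network integration, which is the direction of change the study reports in progressing patients.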
31 pages, 28983 KB  
Article
Safety Validation of Connected Autonomous Driving Systems in Urban Intersections Using the SUNRISE Safety Assurance Framework
by Mohammed Shabbir Ali, Alexis Warsemann, Pierre Merdrignac, Mohamed-Cherif Rahal, Amar Mokrani and Wael Jami
Vehicles 2026, 8(3), 55; https://doi.org/10.3390/vehicles8030055 - 11 Mar 2026
Abstract
Ensuring the safety of Autonomous Driving Systems (ADS) at urban intersections remains challenging due to complex interactions between vehicles and traffic management infrastructure. This study validates an ADS equipped with connected perception using Infrastructure-to-Vehicle (I2V) communication within a combined virtual and hybrid testing approach. The validation follows the overall structure and methodology of the SUNRISE Safety Assurance Framework (SAF), which is applied in detail where required by the scope of the study. Five representative urban intersection scenarios, covering both nominal driving conditions and safety-critical edge cases, are evaluated using virtual simulations in MATLAB/Simulink (2014b) and hybrid experiments integrating OMNeT++ (5.7.1)/Veins (5.2)/SUMO (1.12.0) with real-world components. Key Performance Indicators (KPIs) related to safety, decision-making, longitudinal control, passenger comfort, and V2X communication performance are analyzed. The results show strong consistency between virtual and hybrid testing, with ego vehicle speed deviations below 2 km/h and trigger distance differences under 3 m. V2X communication achieves a near-perfect Cooperative Awareness Message (CAM) delivery ratio, with an average latency of approximately 142 ms. While this latency remains within the tolerance of the deployed ADS, the overall end-to-end delay highlights opportunities for further optimization. The study demonstrates how the SUNRISE SAF can effectively structure ADS validation, identifies critical scenarios such as right-of-way violations by non-priority obstacles, and provides insights into improving connectivity handling and low-speed braking behavior for Cooperative, Connected, and Automated Mobility (CCAM) systems in urban environments. Full article
27 pages, 5110 KB  
Article
HAIS-SegFormer: A Lightweight Underwater Crack Segmentation Network Based on Hybrid Attention and Feature Inhibition
by Gang Li, Junchi Zhang and Kun Hu
J. Mar. Sci. Eng. 2026, 14(6), 526; https://doi.org/10.3390/jmse14060526 - 10 Mar 2026
Abstract
Underwater crack detection is critical for the structural health monitoring of concrete dams; however, complex turbid environments and limited computational resources on underwater robots pose significant challenges. This study proposes HAIS-SegFormer, a lightweight segmentation network utilizing a Mix Transformer backbone. We introduce a tandem Hybrid Attention mechanism—cascading Coordinate Attention (CoordAtt) and Convolutional Block Attention Modules (CBAM)—to preserve long-range topological connectivity and refine local edge details. Furthermore, a Feature Inhibition Module (FIM), modeled after biological lateral inhibition, is designed to actively suppress high-frequency background noise such as water plants. Experimental results on an underwater crack dataset demonstrate that HAIS-SegFormer achieves a favorable trade-off between segmentation accuracy (71.66% mIoU) and computational efficiency (73 FPS, 3.80 M parameters). The proposed framework provides a robust and resource-efficient solution for automated underwater inspections. Full article
(This article belongs to the Section Ocean Engineering)
13 pages, 707 KB  
Review
Smart Solutions for Small Ruminants: The Role of Artificial Intelligence (AI) and Precision Livestock Farming in Smallholder Goat Husbandry
by Nelly Kichamu, Putri Kusuma Astuti and Szilvia Kusza
AgriEngineering 2026, 8(3), 103; https://doi.org/10.3390/agriengineering8030103 - 9 Mar 2026
Abstract
Goats are important livestock species in most rural households and were amongst the first species to be domesticated. Despite this, their production is based on extensive systems, exposing them to numerous challenges affecting their productivity. This review examines the applications of precision livestock farming (PLF) and AI-driven technologies in goat management, focusing on their impacts on productivity, welfare, genetic potential, health monitoring, feeding efficiency and sustainability outcomes and identifying challenges for their adoption in smallholder and extensive systems. Unlike previous reviews that focus mainly on cattle raised under intensive systems, this review synthesizes their use in goat production and highlights technological, socio-economic and infrastructural constraints. A conventional literature review approach is used, with studies retrieved from major databases using relevant keywords. The selected studies are evaluated to assess technological applications, benefits and adoption challenges, followed by a SWOT analysis. Engineering aspects of precision livestock farming—including sensors, data connectivity, system integration, automation and scalability—are also discussed. Ideally, these technologies operate as integrated decision-support systems that jointly improve productivity, animal welfare and sustainability, rather than performing isolated tasks. However, many PLF solutions remain at low technology-readiness levels and are constrained by infrastructure gaps, sensor reliability and compatibility issues, which collectively limit adoption in smallholder systems. Future research should focus on the development of cost-effective, reliable PLF systems for smallholder producers, while policy and capacity-building initiatives are needed to enhance infrastructure, training and technology adoption for scalable implementation. Full article
29 pages, 1565 KB  
Article
Integer Intelligence: A Reproducible Path from Training to FPGA
by Manjusha Shanker and Tee Hui Teo
Electronics 2026, 15(5), 1117; https://doi.org/10.3390/electronics15051117 - 8 Mar 2026
Abstract
A transparent, end-to-end pathway from learning-level training to deployable fixed-point hardware is presented and framed as gradients to gates. A didactic XOR convolutional network is first employed so that backpropagation, post-training quantization in INT8, and fixed-point arithmetic can be made concrete and verified with exact checks. The same methodology was applied to a compact LeNet-5 case study. On the software side, the training-to-export flow was formalized, and a bit-accurate Python reference was constructed for the quantized network. On the hardware side, a synthesizable INT8 datapath was implemented in Verilog, including multiply–accumulate units, sigmoid activation stages, and per-layer requantization with rounding and saturation. Test benches are provided so that the exported weights and activations can be ingested, and layer-wise matches can be reported. A co-simulation harness was used to coordinate framework inference, quantization, file conversion, HDL simulation, and regression checks, which enabled deterministic comparisons of the activations, partial sums and outputs. The complete loop was mapped to Artix-7 on the CMOD A7 development board, and the resource usage, maximum clock frequency, inference latency, and throughput were determined. The approach aligns with an educational HDL-to-Caffe pipeline by using reusable parameterized Verilog primitives for convolution, pooling, activation, and fully connected layers, training in Colab with AccDNN, Caffe, quantization, and an automated bit-for-bit verification regime before FPGA synthesis. Methodological contributions are provided, including a minimal and auditable XOR CNN that exposes scales, shifts, and saturation; a practical quantization recipe with INT32 accumulation and unit tests that guarantee agreement within one least significant bit between RTL and the INT8 reference; and a scalable mapping to LeNet-5 using a row-stationary and line-buffered dataflow on an Artix-7 FPGA. Empirical evidence shows feasibility at 100 MHz with representative utilization, millisecond-scale latency and zero mismatches across large test sets, which validates the quantization configuration and the verification strategy. Full article
(This article belongs to the Special Issue Recent Advances in AI Hardware Design)
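The per-layer requantization with rounding and saturation that this abstract describes has a standard fixed-point shape: multiply the INT32 accumulator by an integer multiplier, shift right with round-half-up, then clamp to the INT8 range. A minimal sketch of that step follows; the multiplier and shift constants are illustrative, not the paper's exported scales.

```python
def requantize(acc32, mult, shift):
    """Requantize an INT32 accumulator to INT8: integer multiply by `mult`,
    arithmetic right shift by `shift` (>= 1) with round-half-up, then
    saturate to [-128, 127]. Constants are illustrative placeholders."""
    prod = acc32 * mult
    rounding = 1 << (shift - 1)       # adds 0.5 ulp before the shift
    val = (prod + rounding) >> shift  # arithmetic shift = floor division
    return max(-128, min(127, val))   # saturation to the INT8 range
```

A bit-accurate software reference like this is what makes the paper's "agreement within one least significant bit" check between RTL and the INT8 model possible, since both sides then apply identical rounding and saturation rules.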
27 pages, 2780 KB  
Review
The Evolving Landscape of NMR Structural Elucidation
by Josep Saurí
Molecules 2026, 31(5), 888; https://doi.org/10.3390/molecules31050888 - 7 Mar 2026
Abstract
Nuclear Magnetic Resonance (NMR) spectroscopy has long been a cornerstone in the structural elucidation of molecules, offering unique insights into atomic-level connectivity, conformation, and dynamics. Over the past decades, methodological and technological advances have significantly expanded its capabilities and applications. This manuscript charts the evolution of NMR from classical 1D/2D experiments to modern methods empowered by ultrahigh magnetic fields, cryogenic probes, non-uniform sampling, new methodologies, and hyperpolarization. We emphasize the growing synergy between experiment and computation, where automated analysis, quantum chemical calculations, and machine learning are dramatically enhancing the accuracy and efficiency of structure determination. We also highlight NMR’s broadening scope in areas ranging from complex mixtures and natural products to biomolecular and materials science. Full article
(This article belongs to the Special Issue A Theme Issue in Honor of Professor Gary E. Martin's 75th Birthday)
25 pages, 357 KB  
Article
AI-Enabled Management of Transfer Pricing Documentation: A Sustainable Governance Framework Integrating Compliance, Digitalization, and CSRD Requirements
by Marius Boiță, Florin Cornel Dumiter, Erika Loučanová, Luminița Păiușan, Gheorghe Pribeanu and Ionela Mihaela Milutin
Sustainability 2026, 18(5), 2528; https://doi.org/10.3390/su18052528 - 5 Mar 2026
Abstract
Tax administrations are undergoing rapid digitalisation, while sustainability requirements are increasingly embedded in corporate governance frameworks. These parallel transformations are raising new expectations for transfer pricing (TP) documentation, which must be accurate, transparent, and audit-ready. This paper investigates the extent to which artificial intelligence (AI)—specifically natural language processing (NLP), robotic process automation (RPA), and machine-learning techniques—can support a sustainability-oriented governance framework for TP documentation in multinational enterprises. Using a longitudinal case study of the OMEGA Group, operating across 21 jurisdictions, we analyse an AI-enabled documentation architecture that streamlines data extraction, enhances comparability analysis, and strengthens audit preparedness, in line with the OECD Transfer Pricing Guidelines and relevant European Union regulatory requirements. The empirical evidence indicates substantial improvements in documentation efficiency (−68.3%), a significant reduction in processing errors (−81.5%), and higher audit acceptance rates (+27%). Beyond compliance, AI-driven digital workflows contribute to sustainability objectives by reducing resource consumption, improving data traceability, and facilitating alignment with CSRD-related reporting requirements. Overall, the findings demonstrate that AI-enabled TP documentation can evolve into a strategic pillar of sustainable tax governance, provided that its outputs remain explainable, auditable, and grounded in professional judgment. The study proposes an integrated governance framework that connects digital transformation, regulatory compliance, and sustainability within contemporary TP management practices. Full article
(This article belongs to the Section Sustainable Management)
32 pages, 5003 KB  
Article
A Novel Hybrid IK Architecture for Robotic Arms: Iterative Refinement of Soft-Computing Approximations with Validation on ABB IRB-1200 Robotic Arm
by Meenalochani Jayabalan, Karunamoorthy Loganathan and Palanikumar Kayaroganam
Machines 2026, 14(3), 292; https://doi.org/10.3390/machines14030292 - 4 Mar 2026
Abstract
Adaptive Neuro-Fuzzy Inference System (ANFIS)-based inverse kinematics (IK) is highly accurate for trained poses but often yields approximations for unseen inputs due to non-standardized training data. This research addresses these limitations through two novel contributions designed for any generic Degrees of Freedom (DoF) serial revolute robotic arm. First, a structured training methodology is introduced using workspace decomposition and cubic path planning. Instead of random sampling, the workspace is partitioned into cubic regions where 28 unique trajectories (12 edges, 12 face diagonals, four space diagonals) connect the eight vertices using cubic polynomial interpolation. This ensures physically consistent data mirroring real-world point-to-point (PTP) movements. Although validated on an ABB IRB-1200 robotic arm, this modular design is inherently scalable, allowing the local cubic expertise to be extended to cover the entire reachable workspace. Second, a two-stage hybrid IK framework is proposed, where an initial ANFIS approximation is refined via Jacobian-based iterative methods. Three hybrid frameworks were evaluated: Framework-1 (ANFIS + Jacobian Gradient), Framework-2 (ANFIS + Jacobian Pseudoinverse/Newton–Raphson), and Framework-3 (ANFIS + Damped Least Squares). The results show that all three hybrid IK frameworks achieve reliable convergence, while the DLS-based hybrid provides the best trade-off between accuracy, convergence speed, and numerical stability. This generic architecture, which requires no closed-form analytical solution, provides a computationally efficient approach even in a hybrid scenario, bridging the gap between offline structured training and online, real-time refinement for digital twin synchronization and industrial automation. Full article
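The damped-least-squares refinement favoured in this abstract has a compact form: starting from a coarse joint estimate (the role ANFIS plays in the paper), each step applies Δq = Jᵀ(JJᵀ + λ²I)⁻¹e, where e is the Cartesian error. The sketch below works a 2-link planar arm rather than the 6-DoF IRB-1200, with link lengths and damping λ chosen for illustration.

```python
import math

def fk(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a 2-link planar arm (end-effector x, y)."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

def dls_step(q1, q2, tx, ty, lam=0.1, l1=1.0, l2=1.0):
    """One damped-least-squares IK step toward target (tx, ty)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    # analytic Jacobian of the planar 2-link forward kinematics
    J = [[-l1 * s1 - l2 * s12, -l2 * s12],
         [ l1 * c1 + l2 * c12,  l2 * c12]]
    x, y = fk(q1, q2, l1, l2)
    ex, ey = tx - x, ty - y
    # A = J Jᵀ + λ² I, a symmetric 2x2 solved in closed form
    a = J[0][0] ** 2 + J[0][1] ** 2 + lam * lam
    b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
    d = J[1][0] ** 2 + J[1][1] ** 2 + lam * lam
    det = a * d - b * b
    u = (d * ex - b * ey) / det
    v = (-b * ex + a * ey) / det
    # Δq = Jᵀ (J Jᵀ + λ² I)⁻¹ e
    return q1 + J[0][0] * u + J[1][0] * v, q2 + J[0][1] * u + J[1][1] * v
```

Iterating `dls_step` from a nearby initial guess drives the Cartesian error to zero; the damping term λ²I keeps the update bounded near singular poses, which is the stability advantage the paper reports for its DLS-based hybrid.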

21 pages, 14880 KB  
Article
Beyond the Black Box: Interpretable Multi-Trait Essay Scoring with Trait-Aware Transformer
by Xiaoyi Tang
Electronics 2026, 15(5), 1066; https://doi.org/10.3390/electronics15051066 - 4 Mar 2026
Viewed by 214
Abstract
The rapid advancement of automated essay scoring (AES) has been constrained by a representation bottleneck, where monolithic models collapse diverse facets of writing constructs into a single, uninterpretable signal, undermining the pedagogical value of multi-dimensional rating traits. To address this limitation, the RoBERTa-based Trait-Aware Transformer (RoBERTa-TAT) is introduced. This architectural reframing replaces unified pooling with parallel, trait-specific attention streams, preserving and disentangling critical features such as conceptual depth and mechanical precision. Tested on the ASAP Dataset-7, RoBERTa-TAT attains a new state-of-the-art Quadratic Weighted Kappa (QWK) of 0.936, outperforming sequential baselines and conventional Transformer variants. Beyond gains in accuracy, this trait-specialized architecture recasts scoring from a black-box prediction into a transparent diagnostic tool, enabling actionable, fine-grained feedback on different rating traits. High-resolution inspection reveals that the model’s internal representations correlate with specific linguistic markers—such as discourse connectives for organization—suggesting a degree of structural alignment with expert judgment. By aligning high-capacity representation learning with the granular demands of formative assessment, RoBERTa-TAT provides a practical, interpretable blueprint for deploying accountable AI in education and broadening access to expert diagnostic insight. Full article
(This article belongs to the Section Artificial Intelligence)
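A minimal sketch of the trait-specific attention pooling the abstract describes: each trait owns its own attention query and score head, so the same token embeddings are pooled differently per trait. The toy dimensions and random weights stand in for RoBERTa-TAT's learned parameters and are purely illustrative:

```python
import math
import random

random.seed(0)
seq_len, hidden, n_traits = 6, 8, 4

# Token embeddings from the shared encoder, plus per-trait parameters.
H = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(seq_len)]
W_att = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(n_traits)]
W_score = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(n_traits)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

scores = []
for t in range(n_traits):
    # Each trait attends over the same tokens with its own query vector...
    alpha = softmax([dot(h, W_att[t]) for h in H])
    # ...yielding a trait-specific pooled representation...
    pooled = [sum(a * h[d] for a, h in zip(alpha, H)) for d in range(hidden)]
    # ...and a separate scalar score per trait.
    scores.append(dot(pooled, W_score[t]))
```

Because the attention weights differ per trait, inspecting each `alpha` shows which tokens drove which trait score, which is the interpretability mechanism the abstract emphasizes.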

20 pages, 1126 KB  
Article
Semi-Supervised Vertebra Segmentation and Identification in CT Images
by You Fu, Jiasen Feng and Hanlin Cheng
Tomography 2026, 12(3), 33; https://doi.org/10.3390/tomography12030033 - 3 Mar 2026
Viewed by 160
Abstract
Background/Objectives: Automatic segmentation and identification of vertebrae in spinal CT are essential for assisting diagnosis of spinal disorders and for preoperative planning. The task is challenging due to the high structural similarity between adjacent vertebrae and the morphological variability of vertebrae. Most existing methods rely on fully supervised deep learning and, constrained by limited annotations, struggle to remain robust in complex scenarios. Methods: We propose a semi-supervised approach built on a dual-branch 3D U-Net. Mamba modules are inserted between the encoder and decoder to model long-range dependencies along the cranio–caudal axis. The identification branch employs a 3D convolutional block attention module (3D-CBAM) to enhance class discriminability. A unified semi-supervised objective is formulated via teacher–student consistency: for each unlabeled sample, weakly and strongly augmented views are generated, and cross-branch consistency is enforced, together with confidence-based filtering and class-frequency reweighting. In addition, a connected-component analysis is used to enforce anatomically plausible sequential continuity of vertebral indices in the outputs. Results: Experiments on VerSe 2019 and 2020 show that, on the public VerSe 2019 test set (with VerSe 2020 scans used as unlabeled training data), the supervised baseline achieved a Dice score of 89.8% and an identification accuracy of 92.3%. Incorporating unlabeled data improved performance to 91.6% Dice and 97.5% identification accuracy (gains of +1.8 and +5.2 percentage points). Compared with competing methods, the proposed semi-supervised model attains higher or comparable segmentation accuracy and the highest identification accuracy. Conclusions: Without additional annotation cost, the proposed method markedly improves the overall performance of vertebra segmentation and identification, offering more robust automated support for clinical workflows. Full article
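The confidence-filtered teacher–student objective can be sketched as follows; the threshold, the two-class setup, and all probability values are illustrative assumptions, not taken from the paper. The teacher's prediction on the weakly augmented view supervises the student's prediction on the strongly augmented view, but only where the teacher is confident enough:

```python
import math

tau = 0.9  # confidence threshold (illustrative value)

# Teacher softmax outputs on the weakly augmented view (one row per voxel/sample)
teacher_probs = [[0.95, 0.05],   # confident -> contributes to the loss
                 [0.60, 0.40],   # uncertain -> filtered out
                 [0.02, 0.98]]   # confident -> contributes to the loss
# Student softmax outputs on the strongly augmented view of the same samples
student_probs = [[0.80, 0.20],
                 [0.55, 0.45],
                 [0.10, 0.90]]

# Hard pseudo-labels from the teacher, plus a confidence-based keep mask.
pseudo = [max(range(2), key=lambda c: p[c]) for p in teacher_probs]
mask = [max(p) >= tau for p in teacher_probs]

# Masked cross-entropy of the student against the teacher's pseudo-labels,
# averaged over the kept samples only.
ce = [-math.log(student_probs[i][pseudo[i]]) for i in range(len(pseudo))]
kept = [c for c, m in zip(ce, mask) if m]
loss = sum(kept) / max(len(kept), 1)
```

The class-frequency reweighting mentioned in the abstract would multiply each kept term by a per-class weight before averaging; it is omitted here to keep the sketch minimal.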
