Search Results (8,053)

Search Parameters:
Keywords = semantic model

32 pages, 7135 KB  
Article
Evolutionary Multi-Objective Prompt Learning for Synthetic Text Data Generation with Black-Box Large Language Models
by Diego Pastrián, Nicolás Hidalgo, Víctor Reyes and Erika Rosas
Appl. Sci. 2026, 16(8), 3623; https://doi.org/10.3390/app16083623 (registering DOI) - 8 Apr 2026
Abstract
High-quality training data are essential for the performance and generalization of artificial intelligence systems, particularly in dynamic environments such as adaptive stream processing for disaster response. However, constructing large and representative datasets remains costly and time-consuming, especially in domains where real data are scarce or difficult to obtain. Large Language Models (LLMs) provide powerful capabilities for synthetic text generation, yet the quality of generated data strongly depends on the design of input prompts. Prompt engineering is therefore critical, but it remains largely manual and difficult to scale, particularly in black-box settings where model internals are inaccessible. This work introduces EVOLMD-MO, a multi-objective evolutionary framework for automated prompt learning aimed at generating high-quality synthetic text datasets using black-box LLMs. The proposed approach formulates prompt optimization as a multi-objective search problem in which candidate prompts evolve through genetic operators guided by two complementary objectives: semantic fidelity to reference data and generative diversity of the produced samples. To support scalable optimization, the framework integrates a modular multi-agent architecture that decouples prompt evolution, LLM interaction, and evaluation mechanisms. The evolutionary process is implemented using the NSGA-II algorithm, enabling the discovery of diverse Pareto-optimal prompts that balance semantic preservation and diversity. Experimental evaluation using large-scale disaster-related social media data demonstrates that the proposed approach consistently improves prompt quality across generations while maintaining a stable trade-off between fidelity and diversity. Compared with a single-objective baseline, EVOLMD-MO explores a significantly broader semantic search space and produces more diverse yet semantically coherent synthetic datasets. 
These results indicate that multi-objective evolutionary prompt learning constitutes a promising strategy for black-box LLM-driven data generation, with potential applicability to adaptive data analytics and real-time decision-support systems in highly dynamic environments, pending broader validation across domains and models. Full article
(This article belongs to the Special Issue Resource Management for AI-Centric Computing Systems)
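The Pareto-optimal prompt selection at the heart of the abstract above can be illustrated with a minimal sketch. The prompt names and scores below are invented for illustration; EVOLMD-MO scores candidates with an LLM-based evaluation pipeline and evolves them with NSGA-II, whereas this toy only shows the non-dominated filtering step over the two stated objectives, fidelity and diversity.

```python
def pareto_front(candidates):
    """Return names of non-dominated (fidelity, diversity) scored candidates.

    A candidate is dominated if another candidate is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for i, (name_i, fid_i, div_i) in enumerate(candidates):
        dominated = any(
            (fid_j >= fid_i and div_j >= div_i) and (fid_j > fid_i or div_j > div_i)
            for j, (_, fid_j, div_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(name_i)
    return front

# Hypothetical prompts with (fidelity, diversity) scores:
prompts = [
    ("p1", 0.90, 0.40),  # high fidelity, low diversity
    ("p2", 0.70, 0.80),  # balanced
    ("p3", 0.60, 0.60),  # dominated by p2
    ("p4", 0.30, 0.95),  # low fidelity, high diversity
]
print(pareto_front(prompts))  # → ['p1', 'p2', 'p4']
```

The surviving set illustrates the fidelity–diversity trade-off the paper describes: no single prompt wins on both objectives.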

25 pages, 7549 KB  
Article
Unseen-Crop Plant Disease Classification via Disentangled Representation Learning
by Zhenzhen Wu, Jianli Guo, Wei Hou, Kun Zhou, Kerang Cao and Hoekyung Jung
Electronics 2026, 15(8), 1553; https://doi.org/10.3390/electronics15081553 (registering DOI) - 8 Apr 2026
Abstract
Deep learning has accelerated progress in plant disease recognition, providing strong technical support for early diagnosis and precision management. However, models often lack robustness and generalization when confronted with novel crops absent from the training set, leading to a marked performance drop in cross-unseen-crop scenarios. Cross-crop generalization for plant disease recognition requires models to identify known disease categories in crop domains never observed during training. A central challenge is that disease symptoms are strongly coupled with crop-specific appearance cues, which severely degrades generalization. Here, TDC (Text-guided feature Disentanglement Contrast) is introduced as a feature-disentanglement framework for cross-crop plant disease recognition. The proposed method employs a dual-branch visual encoder to separately capture disease semantic representations and crop-domain representations, and it leverages a frozen CLIP text encoder to use disease and crop prompts for text-guided semantic anchoring. A semantic-anchor-only contrastive disentanglement strategy is further formulated under a hybrid label space, where crop-branch features are incorporated as stop-gradient hard negatives to suppress semantic–domain information leakage and strengthen the intra-class aggregation of the same disease across crops. Residual domain-discriminative cues are mitigated via domain-adversarial learning. During inference, only the disease branch is retained for classification, improving generalization while reducing deployment overhead. Experiments demonstrate that under the PlantVillage cross-crop setting, the method achieves 98.04% and 74.29% Top-1 accuracy on seen and unseen crop domains, respectively. Moreover, it attains 81.99% on a real-world field dataset of strawberry powdery mildew and 76.31% on a low-illumination degradation set, validating robustness under realistic imaging distribution shifts. Full article
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence, 2nd Edition)

21 pages, 320 KB  
Article
Xenoepistemics
by Jordi Vallverdú
Philosophies 2026, 11(2), 57; https://doi.org/10.3390/philosophies11020057 (registering DOI) - 8 Apr 2026
Abstract
Epistemology remains tacitly anthropocentric: it treats knowledge as something produced and validated through human cognitive capacities such as understanding, intuition, and transparent justification. Yet contemporary science and artificial intelligence increasingly depend on non-human systems that generate mathematically valid results, empirically successful models, and operationally reliable inferences that no human can fully survey or interpret. This article develops xenoepistemics, a structural theory of non-anthropocentric knowledge. The central claim is that epistemic evaluation must be reformulated in terms of system-level properties—reliability, robustness, counterfactual sensitivity, and domain transfer—rather than mentalistic notions such as belief or understanding. I offer (i) a definition of xenoepistemic systems as systems that track structure in a target domain without requiring human-style semantic access; (ii) a minimal account of epistemic agency without minds that avoids trivialization; and (iii) a non-circular trust framework that distinguishes empirical success from epistemic legitimacy using independent validation regimes. This paper addresses a reflexive worry—that a human-authored theory cannot dethrone human epistemology—by separating standpoint from object: xenoepistemics is articulated by humans but is not about human cognition. I discuss the pragmatic value of xenoepistemic knowledge production, the limits of independent verification for opaque systems, domain-relative thresholds for xenoepistemic authority, and the problem of constitutionally human-inaccessible knowledge. Finally, I diagnose and formalize the Marcusian regress paradox: recurrent goalpost-shifting, whereby every machine competence is reclassified as irrelevant once achieved. Xenoepistemics reframes this debate by treating non-human knowledge as a present reality requiring new norms, not as a future curiosity. Full article
(This article belongs to the Special Issue Intelligent Inquiry into Intelligence)

17 pages, 6586 KB  
Article
Harnessing Foundation Models for Optical–SAR Object Detection via Gated–Guided Fusion
by Qianyin Jiang, Jianshang Liao, Qiuyu Lin and Junkang Zhang
ISPRS Int. J. Geo-Inf. 2026, 15(4), 160; https://doi.org/10.3390/ijgi15040160 - 8 Apr 2026
Abstract
Remote sensing object detection is fundamental to Earth observation, yet remains challenging when relying on a single sensing modality. While optical imagery provides rich spatial and textural details, it is highly sensitive to illumination and adverse weather; conversely, Synthetic Aperture Radar (SAR) offers robust all-weather acquisition but suffers from speckle noise and limited semantic interpretability. To address these limitations, we leverage the potential of foundation models for optical–SAR object detection via a novel gated–guided fusion approach. By integrating transferable and generalizable representations from foundation models into the detection pipeline, we enhance semantic expressiveness and cross-environment robustness. Specifically, a gated–guided fusion mechanism is designed to selectively merge cross-modal features with foundational priors, enabling the network to prioritize informative cues while suppressing unreliable signals in complex scenes. Furthermore, we propose a dual-stream architecture incorporating attention mechanisms and State Space Models (SSMs) to simultaneously capture local and long-range dependencies. Extensive experiments on the large-scale M4-SAR dataset demonstrate that our method achieves state-of-the-art performance, significantly improving detection accuracy and robustness under challenging sensing conditions. Full article
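The gated–guided fusion idea above can be made concrete with a toy sketch. This is not the paper's implementation: here a sigmoid gate derived from a single scalar "prior" blends two modalities element-wise, whereas the actual network learns its gate from foundation-model features over full feature maps.

```python
import math

def gated_fusion(optical, sar, prior):
    """Element-wise gated fusion of two feature vectors.

    A sigmoid gate in (0, 1) decides how much each modality contributes:
    gate near 1 favors the optical stream, near 0 the SAR stream.
    `prior` is a hypothetical scalar stand-in for a learned guidance signal.
    """
    gate = 1.0 / (1.0 + math.exp(-prior))
    return [gate * o + (1.0 - gate) * s for o, s in zip(optical, sar)]

# Positive prior favors optical features, negative prior favors SAR:
print(gated_fusion([1.0, 1.0], [0.0, 0.0], 2.0))   # values near 0.88
print(gated_fusion([1.0, 1.0], [0.0, 0.0], -2.0))  # values near 0.12
```

The design point is that the gate is input-dependent: in scenes where one modality is unreliable (e.g., optical under cloud cover), the fusion can shift weight to the other stream rather than averaging blindly.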

18 pages, 11149 KB  
Article
LRES-YOLO: Target Detection Algorithm for Landslides on Reservoir Embankment Slopes
by Xiaohua Xu, Xuecai Bao, Zhongxi Wang, Haijing Wang and Xin Wen
Water 2026, 18(8), 889; https://doi.org/10.3390/w18080889 - 8 Apr 2026
Abstract
To address the urgent need for enhancing landslide risk monitoring in reservoir embankment slopes, a core component of water conservancy projects, this paper proposes the LRES-YOLO algorithm for real-time landslide detection on reservoir embankments. In LRES-YOLO, we first integrate coordinate attention into basic feature extraction convolutional blocks to form the CACBS attention module, which enhances the model’s ability to identify and locate landslide targets in complex reservoir terrain, overcoming positional information insensitivity in deep networks. Second, we add novel downsampling DP modules and ELAN-W modules to the backbone network, improving feature recognition efficiency for embankment slopes with diverse hydrological and topographical interference. Third, we optimize the feature fusion network with targeted concatenation and pooling operations, balancing semantic information enhancement with computational load reduction to mitigate overfitting in variable reservoir environments. Finally, we adopt Focal Loss and EIoU Loss to accelerate training convergence and strengthen target feature representation for small or obscured landslides on embankments. Experimental results show that LRES-YOLO outperforms traditional algorithms in detecting landslides across diverse reservoir embankment scenarios: it achieves an average improvement of 8.4 percentage points in mean mAP over the best-performing baseline across five independent trials, a detection speed of 8.2 ms per image, and memory usage of 139 MB. This lightweight design makes it suitable for edge computing devices, providing robust technical support for intelligent monitoring systems in water conservancy projects. Full article
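The Focal Loss adopted above can be sketched in a few lines. The defaults gamma=2 and alpha=0.25 follow the original Focal Loss formulation, not necessarily the values tuned in LRES-YOLO; the point is how the (1 - pt)^gamma factor down-weights easy examples so training focuses on hard ones, such as small or obscured landslide targets.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class, y: label in {0, 1}.
    pt is the probability assigned to the true class; the modulating factor
    (1 - pt)^gamma shrinks the loss of well-classified examples.
    """
    pt = p if y == 1 else 1.0 - p
    weight = alpha if y == 1 else 1.0 - alpha
    return -weight * (1.0 - pt) ** gamma * math.log(pt)

# An easy positive (p = 0.9) contributes far less than a hard one (p = 0.1):
print(focal_loss(0.9, 1))  # tiny loss
print(focal_loss(0.1, 1))  # orders of magnitude larger
```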

31 pages, 3254 KB  
Article
Working Memory, Attention Control, and Vocabulary Retention in AI (ChatGPT)-Assisted Foreign Language Learning: A Structural Cognitive Modelling Approach
by Mohammad Hamad Al-khresheh, Mayez Almayez and Shatha F. Alruwaili
J. Intell. 2026, 14(4), 62; https://doi.org/10.3390/jintelligence14040062 - 8 Apr 2026
Abstract
This study examined how working memory, attention control, and frequency of ChatGPT-4 use are structurally associated with vocabulary retention in foreign language learning. A quantitative cross-sectional survey design was employed, with data collected from 1002 EFL learners via stratified random sampling. Validated self-report instruments measured working memory, attention control, frequency of ChatGPT use, and vocabulary retention (immediate recall, delayed retention, semantic integration, and productive use). Structural equation modelling was used to test the proposed model. The results showed that working memory was strongly associated with attention control and exerted a direct effect on vocabulary retention across all dimensions. Attention control explained a substantial share of the relationship between working memory and retention, indicating that regulatory allocation of attention, rather than memory capacity alone, governs whether lexical information is stabilised during ChatGPT-assisted learning. The frequency of ChatGPT use conditioned these cognitive pathways by strengthening links between working memory and attention control, and between attention control and vocabulary retention, at higher levels of engagement. Frequency did not predict retention independently, indicating that repeated use supports learning only to the extent that it reinforces cognitive regulation rather than increasing exposure. Vocabulary learning with AI relies more on cognitive regulation and engagement than exposure. Full article
(This article belongs to the Section Studies on Cognitive Processes)

24 pages, 2056 KB  
Article
Study on the Public Perception Characteristics of Intangible Cultural Heritage in China from the Perspective of Social Media
by Xing Tu and Yu Xia
ISPRS Int. J. Geo-Inf. 2026, 15(4), 159; https://doi.org/10.3390/ijgi15040159 - 7 Apr 2026
Abstract
Exploring public awareness, participation, and emotional inclination toward intangible cultural heritage (ICH) clarifies public attitudes and demands toward traditional culture, providing a crucial basis for targeted ICH protection and inheritance. Based on ICH text big data collected from China’s mainstream social media platform Weibo, this study improves the TF-IDF algorithm, integrates LDA topic analysis for semantic feature mining, and trains a new sentiment analysis model to explore public emotional attitudes and their formation mechanisms. The study is geographically limited to China and covers the entire year of 2023. The results show that: (1) Public ICH perception is multi-dimensional, with close attention to crafts like paper-cutting and traditional Chinese medicine; action-oriented terms reflect dynamic inheritance demands. Public discussions focus on three dimensions: ICH inheritance and development (39%), introduction and promotion (45%), and public experience and participation (16%), with the latter accounting for a low proportion. (2) Public sentiment toward ICH is predominantly positive, with all regions scoring above 0.730 (full score = 1), and Zhejiang (0.751) and Jiangsu (0.750) ranking significantly higher. (3) Spatial econometric analysis reveals marked regional differences in ICH sentiment distribution, mainly affected by three key factors—the number of ICH projects, the number of inheritors, and regional GDP—with regression coefficients of 0.699, 0.632, and 0.458 (p < 0.01). This finding provides a basis for formulating targeted ICH protection strategies. Full article
(This article belongs to the Topic 3D Documentation of Natural and Cultural Heritage)
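The TF-IDF weighting that the study above builds on can be sketched as follows. This is the plain textbook formulation, not the paper's improved variant, and the two-word English "documents" are hypothetical stand-ins for tokenized Weibo posts.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain TF-IDF over pre-tokenized documents.

    tf = term count / document length; idf = ln(N / document frequency).
    Returns one {term: weight} dict per document.
    """
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in docs
    ]

docs = [["papercutting", "craft"], ["medicine", "craft"]]
weights = tf_idf(docs)
# The term unique to one document outweighs the one shared by all:
print(weights[0]["papercutting"] > weights[0]["craft"])  # → True
```

A term appearing in every document gets idf = ln(1) = 0, which is exactly why corpus-wide filler words are suppressed while distinctive ICH-related terms surface.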
33 pages, 3919 KB  
Article
BiLSTM Guided LPA Planning, Re-Planning, and Backtracking for Effective and Efficient Emergency Evacuation
by Ramzi Djemai, Hamza Kheddar, Mohamed Chahine Ghanem, Karim Ouazzane and Erivelton Nepomuceno
Smart Cities 2026, 9(4), 65; https://doi.org/10.3390/smartcities9040065 - 7 Apr 2026
Abstract
Emergency evacuation in complex and dynamic building environments requires robust and adaptive routing strategies capable of responding to evolving hazards, blocked passages, and changing crowd behaviour. Most existing evacuation planners rely on static geometric representations and lack semantic awareness of the environment, limiting their ability to perform informed re-planning and backtracking when routes become unsafe. This paper proposes a neuro-symbolic evacuation planning framework that integrates Lifelong Planning A* (LPA*) with ontology-driven semantic reasoning and a Bidirectional Long Short-Term Memory (BiLSTM) prediction model. The building’s spatial and semantic knowledge is represented using the Web Ontology Language (OWL) and Resource Description Framework (RDF), enabling automated inference of implicit connections and enforcement of safety policies. The BiLSTM model learns temporal patterns from ontology-consistent evacuation trajectories and provides guidance for remaining-cost estimation and early prediction of routes likely to require backtracking, which is combined with a bounded semantic heuristic to preserve admissibility and optimality guarantees. Simulation results in a multi-floor academic building show that the proposed BiLSTM-guided semantic LPA* framework reduces average evacuation time by up to 9.6%, decreases node expansions by up to 32%, and increases evacuation success rates to 96.2% compared with a purely semantic baseline. The BiLSTM model also achieves strong predictive performance, with a test AUC of 0.92 for backtracking prediction and a next-state accuracy of 87.1%. The proposed framework is designed to support explainable, policy-compliant, and incrementally adaptable evacuation guidance under rapidly evolving emergency conditions. Full article
35 pages, 10124 KB  
Article
An Integrated BIM–NLP Framework for Design-Informed Automated Construction Schedule Generation
by Mahmoud Donia, Emad Elbeltagi, Ahmed Elhakeem and Hossam Wefki
Designs 2026, 10(2), 43; https://doi.org/10.3390/designs10020043 - 7 Apr 2026
Abstract
Artificial intelligence has attracted increasing attention in the construction industry; however, automated time scheduling remains limited in practical applications. Schedule development remains manual, requiring planners to analyze project documents, define activities, estimate durations, and identify relationships based on logical sequence. This process primarily depends on individual experience and skills, making it both time-consuming and prone to human error. From an engineering design perspective, delayed or inconsistent schedule development weakens design-to-construction feedback, limiting the ability to evaluate constructability and time implications of alternative design decisions during early-stage planning. This study proposes an integrated BIM–Natural Language Processing (NLP) framework to automate activity identification, duration estimation, and logical sequencing for construction scheduling. The framework extracts project data from Revit, organizes it into a bill of quantities format, and then generates an activity list, each activity with a unique ID. Using Sentence-BERT (SBERT) embeddings, the framework estimates activity durations based on semantic similarity. The same semantic process is combined with rule-based reasoning to identify logical relationships, including sequences, supported by an Excel-based reference dictionary that includes logical relationships, productivity, and ID structure. Finally, the framework incorporates a crashing module that proportionally adjusts the duration of activities on the longest path to target the project’s completion time without violating relationships. The proposed framework was validated using real construction project data and produced reliable results. 
By producing a tool-ready schedule directly from design-model information, the proposed workflow enables earlier schedule feedback loops and supports design-informed planning by allowing designers and planners to assess the time consequences of model-driven scope changes. The results demonstrate that integrating BIM and NLP can transform conventional schedules into faster, more consistent processes, thereby supporting the construction industry. Full article
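The similarity-based duration estimation described above can be sketched as a nearest-neighbor lookup over embeddings. The two-dimensional vectors and day counts below are invented for illustration; the framework uses SBERT sentence embeddings of activity descriptions and a reference dictionary of productivity data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def estimate_duration(activity_vec, references):
    """Return the duration of the most semantically similar reference.

    references: list of (embedding, duration_days) pairs.
    """
    best_embedding, best_duration = max(references, key=lambda r: cosine(activity_vec, r[0]))
    return best_duration

# Hypothetical reference activities with known durations (days):
refs = [([1.0, 0.0], 5), ([0.0, 1.0], 12)]
print(estimate_duration([0.9, 0.1], refs))  # → 5
```

In the paper's setting the same similarity machinery is reused for sequencing: once an activity is matched to its closest reference, the rule-based dictionary supplies its logical relationships.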
24 pages, 4332 KB  
Article
Depth-Aware Adversarial Domain Adaptation for Cross-Domain Remote Sensing Segmentation
by Lulu Niu, Xiaoxuan Liu, Enze Zhu, Yidan Zhang, Hanru Shi, Xiaohe Li, Hong Wang, Jie Jia and Lei Wang
Remote Sens. 2026, 18(7), 1099; https://doi.org/10.3390/rs18071099 - 7 Apr 2026
Abstract
As a key task in remote sensing analysis, semantic segmentation of remote sensing images (RSI) underpins many practical applications. Despite its importance, obtaining dense pixel-wise annotations remains labor-intensive and time-consuming. Unsupervised domain adaptation (UDA) offers a promising solution by utilizing knowledge from labeled source domains for unlabeled target domains, yet its effectiveness is often compromised by significant distribution shifts arising from variations in imaging conditions. To address this challenge, we propose a depth-aware adaptation network (DAAN), a novel two-branch network that explicitly leverages complementary depth information from a digital surface model (DSM) to enhance cross-domain remote sensing segmentation. Unlike conventional UDA methods that primarily focus on semantic features, DAAN incorporates depth data to build a more generalized feature space. This network introduces three key components: an adaptive feature aggregator (AFA) for progressive semantic-depth feature fusion, a gated prediction selection unit (GPSU) that selectively integrates predictions to mitigate the impact of noisy depth measurements, and a misalignment-focused residual refinement (MFRR) module that emphasizes poorly aligned target regions during training. Experiments on the ISPRS and GAMUS datasets demonstrate the effectiveness of the proposed method. In particular, DAAN achieves an mIoU of 50.53% and an F1 score of 65.75% for cross-domain segmentation on ISPRS to GAMUS, outperforming models without depth information by 9.17% and 8.99%, respectively. These results demonstrate the advantage of integrating auxiliary geometric information to improve model generalization on unlabeled remote sensing datasets, contributing to higher mapping accuracy, more reliable automated analysis, and enhanced decision-making support. Full article

27 pages, 2665 KB  
Review
Toward Knowledge-Enhanced Geohazard Intelligence: A Review of Knowledge Graphs and Large Language Models
by Wenjia Li and Yongzhang Zhou
GeoHazards 2026, 7(2), 40; https://doi.org/10.3390/geohazards7020040 - 7 Apr 2026
Abstract
Geohazards such as landslides, earthquakes, debris flows, and floods are governed by complex interactions among geological, hydrological, and human processes. Traditional data-driven models have improved hazard prediction but often lack interpretability and adaptability. This review examines the evolution of knowledge-guided approaches in geohazard research, highlighting how knowledge representation and artificial intelligence have progressively converged to enhance understanding, reasoning, and model transparency. A bibliometric analysis of 1410 publications indexed in the Web of Science reveals an evolution from early ontology-based knowledge engineering for expert reasoning, to knowledge graph (KG) frameworks enabling multi-source data integration and relational inference, and more recently to large language model (LLM)-augmented systems for automated knowledge extraction and cognitive geoscience. This review synthesizes advances in knowledge representation, knowledge graphs, and LLM-based reasoning, demonstrating how hybrid models that embed physical laws and expert knowledge can improve the interpretability and generalization of machine learning. These developments enable new forms of knowledge-driven geohazard intelligence and support applications in hazard monitoring, early warning, and risk communication. Challenges remain, including semantic fragmentation, limited causal reasoning, and sparse data for extreme events. Future directions require unified knowledge–data–mechanism architectures, causality-aware modeling, and interoperable standards to advance trustworthy and explainable geohazard intelligence. Full article
(This article belongs to the Topic Big Data and AI for Geoscience)

27 pages, 26065 KB  
Article
AEFOP: Adversarial Energy Field Optimization for Adversarial Example Purification
by Heqi Peng, Shengpeng Xiao and Yuanfang Guo
Appl. Sci. 2026, 16(7), 3588; https://doi.org/10.3390/app16073588 - 7 Apr 2026
Abstract
As AI-driven educational systems increasingly rely on deep neural networks, their vulnerability to adversarial perturbations raises concerns about assessment integrity, fairness, and reliability. Adversarial example purification is attractive for such deployments because it removes input perturbations without modifying the already deployed models. However, most existing purification methods are inherently goal-free: denoising-based approaches apply blind heuristic operators, while reconstruction-based methods rely on stochastic sampling guided by natural image priors. These methods typically suppress perturbations at the cost of weakening semantic details or inducing structural distortions. To address this limitation, we propose a novel goal-directed purification framework, termed adversarial energy field optimization for adversarial example purification (AEFOP). AEFOP formulates purification as a constrained optimization problem by defining a learnable adversarial energy which quantifies how far an input deviates from the benign region. This allows adversarial examples to be explicitly pushed from high-energy regions toward low-energy benign regions along an interpretable descent trajectory. Specifically, we build an adversarial energy network and optimize the energy field via a two-stage strategy: adversarial energy field shaping, which enforces distance-like energy behavior and correct gradient directions, and task-driven energy field calibration, which unrolls the descent process to calibrate the field with classification-consistency and semantic-preservation objectives. Extensive experiments across multiple attack scenarios demonstrate that AEFOP achieves superior purification accuracy and high visual quality while requiring only a few gradient steps during inference, offering a practical and efficient robustness layer for vision-based AI services in education. Full article
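The energy-descent trajectory described above can be illustrated on a toy problem. AEFOP learns its adversarial energy field with a network trained in two stages; the sketch below merely substitutes a hand-picked quadratic energy with a known benign minimum, so that the "push from high-energy regions toward low-energy benign regions" is visible in one dimension.

```python
def purify(x, benign_center=0.0, step=0.2, iters=10):
    """Gradient descent on a toy adversarial energy E(x) = (x - c)^2.

    Each step moves the input down the energy gradient toward the benign
    region at `benign_center`. All names and values here are illustrative;
    the paper's energy is a learned, shaped-and-calibrated field over images.
    """
    for _ in range(iters):
        grad = 2.0 * (x - benign_center)  # dE/dx of the quadratic energy
        x = x - step * grad
    return x

# A "perturbed" input at x = 1.0 is driven close to the benign center:
print(purify(1.0))
```

Note that with step = 0.2 each iteration contracts the deviation by a factor of 0.6, so a few steps suffice, mirroring the abstract's claim that only a few gradient steps are needed at inference time.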

26 pages, 991 KB  
Article
Experimental Quantification of Authentication Enforcement Correctness and ACL Misconfiguration Impact in Standards-Compliant MQTT Deployments
by Nael M. Radwan and Frederick T. Sheldon
Appl. Sci. 2026, 16(7), 3583; https://doi.org/10.3390/app16073583 - 7 Apr 2026
Abstract
Message Queuing Telemetry Transport (MQTT) is a lightweight publish–subscribe protocol widely deployed in Internet of Things (IoT) systems. Although MQTT defines authentication and authorization mechanisms, their enforcement accuracy, configuration sensitivity, and operational cost under controlled misconfiguration conditions remain insufficiently quantified. This study experimentally quantifies authentication enforcement behavior and Access Control List (ACL) misconfiguration impact within a standards-compliant MQTT deployment under controlled laboratory conditions. Rather than benchmarking a specific software product, the work measures protocol-defined security behavior—including authentication success rate, false acceptance rate (FAR), false rejection rate (FRR), privilege-boundary preservation, authentication latency, and broker CPU utilization—across systematically constructed operational and failure scenarios. Username/password and mutual TLS authentication were evaluated under valid and stress-induced connection conditions, alongside structured ACL policies incorporating wildcard over-permission. Across repeated trials, username/password authentication achieved higher observed connection reliability (≈0.95), while TLS-based authentication provided stronger cryptographic identity assurance at the cost of increased authentication latency (≈42.6 ms vs. 14.8 ms) and higher CPU utilization (≈23.7% vs. 9.4%). No false acceptances were observed within 100 unauthorized trials per configuration, corresponding to a 95% confidence upper bound of <3% for FAR under a binomial model. Under controlled ACL misconfiguration, 22 of 100 evaluated authorization operations accessed topics beyond the originally intended least-privilege scope, yielding a reproducible privilege expansion rate of 0.22. This expansion resulted from wildcard policy semantics rather than an enforcement malfunction. The results provide controlled empirical quantification of reliability–security trade-offs and configuration-driven privilege-boundary behavior within a standards-compliant MQTT deployment. While the findings reflect enforcement behavior as realized in the evaluated implementation and laboratory environment, the proposed measurement framework establishes reproducible criteria for assessing MQTT security enforcement accuracy under controlled conditions. Full article
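The reported 95% upper bound of <3% for FAR follows from standard binomial reasoning. A minimal sketch of the exact zero-event (Clopper–Pearson) bound and the common "rule of three" approximation, using the trial counts stated in the abstract:

```python
# With zero false acceptances in n independent unauthorized attempts, the
# one-sided 95% upper confidence bound on FAR is the p solving
# (1 - p)^n = 0.05 (the exact Clopper-Pearson bound for k = 0 events),
# commonly approximated by the "rule of three" as 3/n.
n = 100
exact_upper = 1 - 0.05 ** (1 / n)      # exact bound, just under 3%
rule_of_three = 3 / n                  # 0.03

# Privilege expansion rate under the controlled ACL misconfiguration:
expansion_rate = 22 / 100              # 0.22, as reported
```

The rule-of-three value slightly overstates the exact bound, which is why the abstract can state an upper bound of <3% from 100 clean trials.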

20 pages, 3455 KB  
Article
FocusMamba: A Local–Global Mamba Framework Inspired by Visual Observation for Brain Tumor Segmentation
by Qiang Li, Tao Ni, Xueyan Wang and Hengxin Liu
Appl. Sci. 2026, 16(7), 3571; https://doi.org/10.3390/app16073571 - 6 Apr 2026
Abstract
Accurate brain tumor segmentation from magnetic resonance imaging (MRI) is crucial for brain tumor diagnosis, clinical treatment decisions, and advancing research. CNNs and Transformers have dominated this area, but CNNs struggle with long-range modeling, whereas Transformers are limited by the high computational costs of self-attention. Recently, Mamba has garnered significant attention due to its remarkable performance in long sequence modeling. However, the original Mamba architecture, designed primarily for 1D sequence modeling, fails to effectively capture the spatial and structural relationships essential for brain tumor segmentation. In this paper, we propose FocusMamba, a Mamba-based model inspired by human visual observation patterns, which jointly enhances local detail modeling and global contextual understanding. FocusMamba consists of three components: (i) a novel hierarchical and tri-directional Mamba unit that elevates attention from the global to the window level, reinforcing local semantic feature extraction, while simultaneously achieving window-level interactions to maintain broader global awareness, (ii) a large kernel convolution unit that captures long-range dependencies within whole-volume features, overcoming the limitations of Mamba’s single-scale context modeling, and (iii) a fusion unit that enhances the overall feature representation by fusing information from different levels. Extensive experiments on the BraTS 2023 and BraTS 2020 datasets demonstrate that FocusMamba achieves superior segmentation performance compared with several advanced methods. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
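The window-level versus global interaction that FocusMamba's hierarchical Mamba unit describes can be illustrated with a toy 1D linear recurrence standing in for the selective scan. The fixed decay constant, the window size, and the mean-pooled window summaries are simplifying assumptions for illustration, not the actual FocusMamba design:

```python
import numpy as np

def selective_scan_1d(x, a=0.9):
    # Simple linear recurrence h[t] = a*h[t-1] + x[t], a stand-in for
    # Mamba's state-space scan (real Mamba uses input-dependent parameters).
    h = np.zeros_like(x)
    acc = 0.0
    for t, v in enumerate(x):
        acc = a * acc + v
        h[t] = acc
    return h

def windowed_scan(x, window=4):
    # Window-level pass: scan each window independently (local detail),
    # then scan per-window summaries to exchange context between windows.
    n = len(x)
    local = np.concatenate([
        selective_scan_1d(x[i:i + window]) for i in range(0, n, window)
    ])
    summaries = np.array([x[i:i + window].mean() for i in range(0, n, window)])
    global_ctx = selective_scan_1d(summaries)
    # Broadcast the window-level context back to every position.
    ctx = np.repeat(global_ctx, window)[:n]
    return local + ctx

x = np.arange(8, dtype=float)
y = windowed_scan(x, window=4)
```

The point of the sketch is the two levels of recurrence: one confined to each window for local semantics, one over window summaries so distant windows still exchange information.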

35 pages, 4925 KB  
Article
Defect-Mask2Former: An Improved Semantic Segmentation Model for Precise Small-Sized Defect Detection on Large-Sized Timbers
by Mingming Qin, Hongxu Li, Yuxiang Huang, Xingyu Tong and Zhihong Liang
Sensors 2026, 26(7), 2254; https://doi.org/10.3390/s26072254 - 6 Apr 2026
Abstract
The precise segmentation of small-sized defects on wood surfaces is critical for the quality grading of glued laminated timber (GLT). Existing semantic segmentation models face core bottlenecks in this context: high miss rates, blurred boundary localization, and excessive size measurement errors. To address these issues, this paper proposes an improved Defect-Mask2Former model that integrates an Attention-Guided Pyramid Enhancement (AGPE) module and a Defect Boundary Calibration and Correction (DBCC) module. Through synergistic optimization, the model achieved pixel-level precise segmentation. To support model training and validation, a custom image acquisition device was designed, and the PlankDefSeg dataset was constructed, comprising 3500 pixel-level annotated images covering five defect types across six industrial wood species. Experimental results demonstrate that on the PlankDefSeg dataset, Defect-Mask2Former achieved a mean Intersection over Union (mIoU) of 85.34% for small-sized defects, a 17.84% improvement over the baseline Mask2Former. The miss rate was reduced from 20.78% to 5.83%, and the size measurement error was only 2.86%, strictly meeting the ≤3% accuracy requirement of the GB/T26899-2022 standard. The model achieved an inference speed of 27.6 FPS, satisfying real-time detection needs. By integrating the model into the GLT grading workflow, a grading accuracy of 94.3% was achieved, and the processing time per timber was reduced from 30 s to 1.5 s, a 20-fold efficiency improvement. This study provides reliable technical support for intelligent GLT quality grading and offers a reference solution for other industrial surface defect segmentation tasks. Full article
(This article belongs to the Section Smart Agriculture)
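The mean Intersection over Union (mIoU) figure reported for Defect-Mask2Former is computed per class and then averaged. A minimal sketch on tiny 1D label arrays (illustrative, not drawn from the PlankDefSeg dataset):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    # Per-class Intersection over Union, averaged over classes that
    # actually appear in the prediction or the ground truth.
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([0, 0, 1, 1, 2, 2])
target = np.array([0, 1, 1, 1, 2, 0])
miou = mean_iou(pred, target, n_classes=3)
```

In a segmentation setting the same function applies to flattened 2D masks; the per-class IoUs here are 1/3, 2/3, and 1/2, averaging to 0.5.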
