Search Results (532)

Search Parameters:
Keywords = entropy segmentation

21 pages, 4293 KiB  
Article
Temperature Compensation Method for MEMS Ring Gyroscope Based on PSO-TVFEMD-SE-TFPF and FTTA-LSTM
by Hongqiao Huang, Wen Ye, Li Liu, Wenjing Wang, Yan Wang and Huiliang Cao
Micromachines 2025, 16(5), 507; https://doi.org/10.3390/mi16050507 - 26 Apr 2025
Viewed by 210
Abstract
This study proposes a novel parallel denoising and temperature compensation fusion algorithm for MEMS ring gyroscopes. First, the particle swarm optimization (PSO) algorithm is used to optimize the time-varying filter-based empirical mode decomposition (TVFEMD), obtaining optimal decomposition parameters. Then, TVFEMD decomposes the gyroscope output signal into a series of product function (PF) signals and a residual signal. Next, sample entropy (SE) is employed to classify the decomposed signals into three categories: noise segment, mixed segment, and feature segment. According to the parallel model structure, the noise segment is directly discarded. Meanwhile, time–frequency peak filtering (TFPF) is applied to denoise the mixed segment, while the feature segment undergoes compensation. For compensation, the football team training algorithm (FTTA) is used to optimize the parameters of the long short-term memory (LSTM) neural network, forming a novel FTTA-LSTM architecture. Both simulations and experimental results validate the effectiveness of the proposed algorithm. After processing the MEMS gyroscope output signal using the PSO-TVFEMD-SE-TFPF denoising algorithm and the FTTA-LSTM temperature drift compensation model, the angular random walk (ARW) of the MEMS gyroscope is reduced to 0.02°/√h, while the bias instability (BI) decreases to 2.23°/h. Compared to the original signal, ARW and BI are reduced by 99.43% and 97.69%, respectively. The proposed fusion-based temperature compensation method significantly enhances the temperature stability and noise performance of the gyroscope. Full article
(This article belongs to the Section A: Physics)
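
As an illustration of the sample-entropy step used above to sort decomposed components into noise, mixed, and feature segments, the following minimal Python sketch computes SampEn(m, r) for a 1-D signal. The function name, default parameters, and the toy signals are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal (naive O(N^2) sketch)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(dim):
        # Build overlapping templates of length `dim` and count pairs whose
        # Chebyshev distance is below the tolerance r (self-matches excluded).
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

# Hypothetical usage: a higher SampEn indicates a noisier component.
noise_like = sample_entropy(np.random.randn(2000))
trend_like = sample_entropy(np.cumsum(np.random.randn(2000)) * 0.01)
print(noise_like, trend_like)
```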

22 pages, 1077 KiB  
Article
SECrackSeg: A High-Accuracy Crack Segmentation Network Based on Proposed UNet with SAM2 S-Adapter and Edge-Aware Attention
by Xiyin Chen, Yonghua Shi and Junjie Pang
Sensors 2025, 25(9), 2642; https://doi.org/10.3390/s25092642 - 22 Apr 2025
Viewed by 288
Abstract
Crack segmentation is essential for structural health monitoring and infrastructure maintenance, playing a crucial role in early damage detection and safety risk reduction. Traditional methods, including digital image processing techniques, have limitations in complex environments. Deep learning-based methods have shown potential but still face challenges, such as poor generalization with limited samples, insufficient extraction of fine-grained features, feature loss during upsampling, and inadequate capture of crack edge details. This study proposes SECrackSeg, a high-accuracy crack segmentation network that integrates an improved UNet architecture, Segment Anything Model 2 (SAM2), MI-Upsampling, and an Edge-Aware Attention mechanism. The key innovations include: (1) using a SAM2 S-Adapter with a frozen backbone to enhance generalization in low-data scenarios; (2) employing a Multi-Scale Dilated Convolution (MSDC) module to promote multi-scale feature fusion; (3) introducing MI-Upsampling to reduce feature loss during upsampling; and (4) implementing an Edge-Aware Attention mechanism to improve crack edge segmentation precision. Additionally, a custom loss function incorporating weighted binary cross-entropy and weighted IoU loss is utilized to emphasize challenging pixels. This function also applies Multi-Granularity Supervision by optimizing segmentation outputs at three different resolution levels, ensuring better feature consistency and improved model robustness across varying image scales. Experimental results show that SECrackSeg achieves higher precision, recall, F1-score, and mIoU scores on the CFD, Crack500, and DeepCrack datasets compared to state-of-the-art models, demonstrating its excellent performance in fine-grained feature recognition, edge segmentation, and robustness. Full article
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)
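
The custom loss described above combines weighted binary cross-entropy with a weighted IoU term. The PyTorch sketch below shows one common way to realize such a loss, with boundary-sensitive pixel weights derived from a local average of the target mask; the weighting scheme and constants are assumptions for illustration, not SECrackSeg's exact definition.

```python
import torch
import torch.nn.functional as F

def weighted_bce_iou_loss(logits, target, kernel_size=31):
    """Weighted BCE + weighted IoU loss; weights emphasize boundary pixels."""
    # Pixel weights: 1 + 5 * |local average of target - target| highlights
    # pixels whose neighbourhood disagrees with them (i.e. edges).
    weight = 1.0 + 5.0 * torch.abs(
        F.avg_pool2d(target, kernel_size, stride=1, padding=kernel_size // 2) - target
    )

    bce = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    wbce = (weight * bce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))

    prob = torch.sigmoid(logits)
    inter = (prob * target * weight).sum(dim=(2, 3))
    union = ((prob + target) * weight).sum(dim=(2, 3)) - inter
    wiou = 1.0 - (inter + 1.0) / (union + 1.0)

    return (wbce + wiou).mean()

# Hypothetical shapes: batch of 4 single-channel crack masks.
logits = torch.randn(4, 1, 128, 128)
target = (torch.rand(4, 1, 128, 128) > 0.9).float()
print(weighted_bce_iou_loss(logits, target).item())
```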

13 pages, 5722 KiB  
Article
Entropy-Assisted Quality Pattern Identification in Finance
by Rishabh Gupta, Shivam Gupta, Jaskirat Singh and Sabre Kais
Entropy 2025, 27(4), 430; https://doi.org/10.3390/e27040430 - 16 Apr 2025
Viewed by 332
Abstract
Short-term patterns in financial time series form the cornerstone of many algorithmic trading strategies, yet extracting these patterns reliably from noisy market data remains a formidable challenge. In this paper, we propose an entropy-assisted framework for identifying high-quality, non-overlapping patterns that exhibit consistent behavior over time. We ground our approach in the premise that historical patterns, when accurately clustered and pruned, can yield substantial predictive power for short-term price movements. To achieve this, we incorporate an entropy-based measure as a proxy for information gain: patterns that lead to high one-sided movements in historical data yet retain low local entropy are more “informative” in signaling future market direction. Compared to conventional clustering techniques such as K-means and Gaussian Mixture Models (GMMs), which often yield biased or unbalanced groupings, our approach emphasizes balance over a forced visual boundary, ensuring that quality patterns are not lost due to over-segmentation. By emphasizing both predictive purity (low local entropy) and historical profitability, our method achieves a balanced representation of Buy and Sell patterns, making it better suited for short-term algorithmic trading strategies. This paper offers an in-depth illustration of our entropy-assisted framework through two case studies on Gold vs. USD and GBPUSD. While these examples demonstrate the method’s potential for extracting high-quality patterns, they do not constitute an exhaustive survey of all possible asset classes. Full article
(This article belongs to the Section Multidisciplinary Applications)
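
To make the "low local entropy as a proxy for information gain" idea concrete, the sketch below scores a candidate pattern by the Shannon entropy of the directional outcomes that follow its historical occurrences; the function and the example data are hypothetical, and the paper's actual measure may be defined differently.

```python
import numpy as np

def outcome_entropy(forward_returns):
    """Shannon entropy (bits) of the up/down outcome distribution following
    occurrences of a candidate pattern; lower entropy = more one-sided."""
    outcomes = np.sign(forward_returns)
    probs = np.array([(outcomes == s).mean() for s in (-1.0, 1.0)])
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())

# Hypothetical example: a pattern followed 18 times by gains, 2 times by losses.
one_sided = outcome_entropy(np.array([0.4] * 18 + [-0.3] * 2))
balanced  = outcome_entropy(np.array([0.4] * 10 + [-0.3] * 10))
print(one_sided, balanced)   # ~0.47 bits vs 1.0 bit
```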

18 pages, 3318 KiB  
Article
A Cross-Modal Attention-Driven Multi-Sensor Fusion Method for Semantic Segmentation of Point Clouds
by Huisheng Shi, Xin Wang, Jianghong Zhao and Xinnan Hua
Sensors 2025, 25(8), 2474; https://doi.org/10.3390/s25082474 - 14 Apr 2025
Viewed by 471
Abstract
To bridge the modality gap between camera images and LiDAR point clouds in autonomous driving systems—a critical challenge exacerbated by current fusion methods’ inability to effectively integrate cross-modal features—we propose the Cross-Modal Fusion (CMF) framework. This attention-driven architecture enables hierarchical multi-sensor data fusion, achieving state-of-the-art performance in semantic segmentation tasks. The CMF framework first projects point clouds onto the camera coordinates through perspective projection to provide spatio-depth information for RGB images. Then, a two-stream feature extraction network is proposed to extract features from the two modalities separately, and multilevel fusion of the two modalities is realized by a residual fusion module (RCF) with cross-modal attention. Finally, we design a perceptual alignment loss that integrates cross-entropy with feature matching terms, effectively minimizing the semantic discrepancy between camera and LiDAR representations during fusion. The experimental results based on the SemanticKITTI and nuScenes benchmark datasets demonstrate that the CMF method achieves mean intersection over union (mIoU) scores of 64.2% and 79.3%, respectively, outperforming existing state-of-the-art methods in accuracy and exhibiting enhanced robustness in complex scenarios. The results of the ablation studies further validate that enhancing the feature interaction and fusion capabilities in semantic segmentation models through cross-modal attention and the perceptually guided cross-entropy loss (Pgce) is effective in improving segmentation accuracy and robustness. Full article
(This article belongs to the Section Sensing and Imaging)
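
The first stage described above projects LiDAR points into the camera frame to attach spatio-depth information to RGB pixels. A minimal NumPy sketch of such a perspective projection is given below; the calibration matrices, names, and conventions are assumptions, not the CMF implementation.

```python
import numpy as np

def project_points_to_image(points_lidar, T_cam_from_lidar, K, image_hw):
    """Project LiDAR points (N, 3) onto the image plane; returns (u, v, depth)."""
    h, w = image_hw
    # Homogeneous transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Keep points in front of the camera, then apply the pinhole model.
    in_front = pts_cam[:, 2] > 1e-3
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Keep projections that fall inside the image.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return np.hstack([uv[valid], pts_cam[valid, 2:3]])

# Hypothetical calibration: identity extrinsics, simple intrinsics.
K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
T = np.eye(4)
pts = np.random.uniform(-10, 10, size=(1000, 3)) + np.array([0, 0, 15.0])
print(project_points_to_image(pts, T, K, (480, 640)).shape)
```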

23 pages, 21374 KiB  
Article
ACMSlE: A Novel Framework for Rolling Bearing Fault Diagnosis
by Shiqian Wu, Weiming Zhang, Jiangkun Qian, Zujue Yu, Wei Li and Lisha Zheng
Processes 2025, 13(4), 1167; https://doi.org/10.3390/pr13041167 - 12 Apr 2025
Viewed by 292
Abstract
Precision rolling bearings serve as critical components in a range of diverse industrial applications, where their continuous health monitoring is essential for preventing costly downtime and catastrophic failures. Early-stage bearing defects present significant diagnostic challenges, as they manifest as weak, nonlinear, and non-stationary transient features embedded within high-amplitude random noise. While entropy-based methods have evolved substantially since Shannon’s pioneering work—from approximate entropy to multiscale variants—existing approaches continue to face limitations in their computational efficiency and information preservation. This paper introduces the Adaptive Composite Multiscale Slope Entropy (ACMSlE) framework, which overcomes these constraints through two innovative mechanisms: a time-window shifting strategy, generating overlapping coarse-grained sequences that preserve critical signal information traditionally lost in non-overlapping segmentation, and an adaptive scale optimization algorithm that dynamically selects discriminative scales through entropy variation coefficients. In a comparative analysis against recent innovations, our integrated fault diagnosis framework—combining Fast Ensemble Empirical Mode Decomposition (FEEMD) preprocessing with Particle Swarm Optimization-Extreme Learning Machine (PSO-ELM) classification—achieves 98.7% diagnostic accuracy across multiple bearing defect types and operating conditions. Comprehensive validation through a multidimensional stability analysis, complexity discrimination testing, and data sensitivity analysis confirms this framework’s robust fault separation capability. Full article
(This article belongs to the Section Automation Control Systems)
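
The time-window shifting strategy mentioned above replaces non-overlapping coarse-graining with overlapping (stride-1) windows so that less signal information is discarded. The sketch below contrasts the two schemes; it illustrates only the coarse-graining idea, not the full ACMSlE pipeline.

```python
import numpy as np

def overlapping_coarse_grain(x, scale):
    """Overlapping (moving-average) coarse-graining: every window shift is
    kept, unlike the classic non-overlapping scheme."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(scale) / scale          # moving average, stride 1
    return np.convolve(x, kernel, mode='valid')

def nonoverlapping_coarse_grain(x, scale):
    """Classic multiscale coarse-graining: non-overlapping block means."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

x = np.random.randn(1024)
print(len(overlapping_coarse_grain(x, 5)), len(nonoverlapping_coarse_grain(x, 5)))
# 1020 overlapping samples vs 204 block means at scale 5.
```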

24 pages, 24154 KiB  
Article
Multistage Threshold Segmentation Method Based on Improved Electric Eel Foraging Optimization
by Yunlong Hu, Liangkuan Zhu and Hongyang Zhao
Mathematics 2025, 13(7), 1212; https://doi.org/10.3390/math13071212 - 7 Apr 2025
Viewed by 177
Abstract
Multi-threshold segmentation of color images is a critical component of modern image processing. However, as the number of thresholds increases, traditional multi-threshold image segmentation methods face challenges such as low accuracy and slow convergence speed. To optimize threshold selection in color image segmentation, this paper proposes a multi-strategy improved Electric Eel Foraging Optimization (MIEEFO). The proposed algorithm integrates Differential Evolution and Quasi-Opposition-Based Learning strategies into the Electric Eel Foraging Optimization, enhancing its search capability, accelerating convergence, and preventing the population from falling into local optima. To further boost the algorithm’s search performance, a Cauchy mutation strategy is applied to mutate the best individual, improving convergence speed. To evaluate the segmentation performance of the proposed MIEEFO, 15 benchmark functions are used, and comparisons are made with seven other algorithms. Experimental results show that the MIEEFO algorithm outperforms other algorithms in at least 75% of cases and exhibits similar performance in up to 25% of cases. To further explore its application potential, a multi-level Kapur entropy-based MIEEFO threshold segmentation method is proposed and applied to different types of benchmark images and forest fire images. Experimental results indicate that the improved MIEEFO achieves higher segmentation quality and more accurate thresholds, providing a more effective method for color image segmentation. Full article
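
For reference, the Kapur entropy objective that the MIEEFO-based segmentation maximizes over candidate threshold sets can be written compactly as below; the histogram construction and the example threshold pairs are illustrative assumptions.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's entropy objective for a set of thresholds over a 256-bin
    grayscale histogram; an optimizer searches thresholds that maximize it."""
    p = hist.astype(float) / hist.sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

# Hypothetical bimodal histogram; evaluate two candidate threshold pairs.
hist = np.histogram(np.concatenate([np.random.normal(60, 10, 5000),
                                    np.random.normal(180, 15, 5000)]),
                    bins=256, range=(0, 256))[0]
print(kapur_entropy(hist, [100, 150]), kapur_entropy(hist, [30, 40]))
```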

13 pages, 3411 KiB  
Article
The Ongoing Epidemics of Seasonal Influenza A(H3N2) in Hangzhou, China, and Its Viral Genetic Diversity
by Xueling Zheng, Feifei Cao, Yue Yu, Xinfen Yu, Yinyan Zhou, Shi Cheng, Xiaofeng Qiu, Lijiao Ao, Xuhui Yang, Zhou Sun and Jun Li
Viruses 2025, 17(4), 526; https://doi.org/10.3390/v17040526 - 4 Apr 2025
Viewed by 374
Abstract
This study examined the genetic and evolutionary features of influenza A/H3N2 viruses in Hangzhou (2010–2022) by analyzing 28,651 influenza-like illness samples from two sentinel hospitals. Influenza A/H3N2 coexisted with other subtypes, dominating seasonal peaks (notably summer). Whole-genome sequencing of 367 strains was performed on GridION platforms. Phylogenetic analysis showed they fell into 16 genetic groups, with multiple clades circulating simultaneously. Shannon entropy indicated HA, NA, and NS gene segments exhibited significantly higher variability than other genomic segments, with HA glycoprotein mutations concentrated in antigenic epitopes A–E. Antiviral resistance showed no inhibitor resistance mutations in PA, PB1, or PB2, but NA mutations were detected in some strains, and most strains harbored M2 mutations. A Bayesian molecular clock showed the HA segment exhibited the highest nucleotide substitution rate (3.96 × 10−3 substitutions/site/year), followed by NA (3.77 × 10−3) and NS (3.65 × 10−3). Selective pressure showed A/H3N2 strains were predominantly under purifying selection, with only sporadic positive selection at specific sites. The Pepitope model demonstrated that antigenic epitope mismatches between circulating H3N2 variants and vaccine strains led to a significant decline in influenza vaccine effectiveness (VE), particularly in 2022. Overall, the study underscores the complex circulation patterns of influenza in Hangzhou and the global importance of timely vaccine strain updates. Full article
(This article belongs to the Section Human Virology and Viral Diseases)
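
A per-site Shannon entropy of the kind used above to compare the variability of genome segments can be computed as in the following sketch; the toy alignment and the function name are hypothetical.

```python
import numpy as np

def column_entropies(alignment):
    """Shannon entropy (bits) of each column of a nucleotide alignment;
    sequences are assumed to be equal-length aligned strings."""
    cols = np.array([list(seq) for seq in alignment]).T
    ents = []
    for col in cols:
        bases, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        ents.append(float(-(p * np.log2(p)).sum()))
    return np.array(ents)

# Hypothetical toy alignment: position 3 is variable, the rest conserved.
aln = ["ACGTA", "ACGAA", "ACGCA", "ACGTA"]
print(column_entropies(aln))   # high entropy only at index 3
```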

17 pages, 42731 KiB  
Article
ClipQ: Clipping Optimization for the Post-Training Quantization of Convolutional Neural Network
by Yiming Chen, Hui Zhang, Chen Zhang and Yi Liu
Appl. Sci. 2025, 15(7), 3980; https://doi.org/10.3390/app15073980 - 4 Apr 2025
Viewed by 358
Abstract
In response to the performance degradation that post-training quantization causes in mobile deployment, as well as the problem that the balanced treatment of quantization deviation by clipping optimization techniques limits the improvement of quantization accuracy, this article proposes a novel clipping optimization method named ClipQ, which pays different attention to the parameters, aiming to preferentially reduce the quantization deviation of important parameters. The attention of a weight is positively related to its absolute value. Channel information entropy and principal component analysis are used to characterize the channel attention and spatial attention of activations, respectively. In addition, the particle swarm algorithm is applied in weight clipping to adjust the search step size and direction adaptively. ClipQ achieves high-precision quantization with very few calibration samples (≤50) and low time cost. Meanwhile, it introduces no extra computation, which makes it hardware-friendly. The experimental evaluation on image classification, semantic segmentation, and object detection shows that ClipQ outperforms other state-of-the-art clipping techniques, such as KL, ACIQ, and MSE. In 8-bit quantization, the average precision loss is 0.31% for image classification and 0.22% for object detection. More notably, it achieves almost lossless accuracy in semantic segmentation tasks. Full article
(This article belongs to the Special Issue Big Data Analysis and Management Based on Deep Learning: 2nd Edition)
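
The channel information entropy used above to characterize the channel attention of activations can be approximated, in spirit, by histogramming each channel's activations and computing a Shannon entropy, as in this sketch; the binning choices and function name are assumptions rather than ClipQ's exact formulation.

```python
import numpy as np

def channel_entropy(activations, bins=128):
    """Per-channel information entropy of an activation tensor (N, C, H, W),
    used as a rough proxy for how much each channel contributes."""
    n, c = activations.shape[:2]
    ent = np.zeros(c)
    lo, hi = activations.min(), activations.max()   # common binning range
    for ch in range(c):
        vals = activations[:, ch].ravel()
        hist, _ = np.histogram(vals, bins=bins, range=(lo, hi))
        p = hist / hist.sum()
        p = p[p > 0]
        ent[ch] = -(p * np.log2(p)).sum()
    return ent

# Hypothetical activations: channel 0 nearly constant, channel 1 spread out.
acts = np.random.randn(8, 2, 16, 16)
acts[:, 0] *= 0.01
print(channel_entropy(acts))   # low entropy for channel 0, higher for channel 1
```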

43 pages, 37541 KiB  
Article
Hybrid Adaptive Crayfish Optimization with Differential Evolution for Color Multi-Threshold Image Segmentation
by Honghua Rao, Heming Jia, Xinyao Zhang and Laith Abualigah
Biomimetics 2025, 10(4), 218; https://doi.org/10.3390/biomimetics10040218 - 2 Apr 2025
Viewed by 236
Abstract
To better address the issue of multi-threshold image segmentation, this paper proposes a hybrid adaptive crayfish optimization algorithm with differential evolution for color multi-threshold image segmentation (ACOADE). Due to the insufficient convergence ability of the crayfish optimization algorithm in its later stages, it is challenging to find a better solution as optimization proceeds. ACOADE optimizes the maximum foraging quantity parameter p and introduces an adaptive foraging quantity adjustment strategy to enhance the randomness of the algorithm. Furthermore, the core formula of the differential evolution (DE) algorithm is incorporated to better balance ACOADE’s exploration and exploitation capabilities. To validate the optimization performance of ACOADE, the IEEE CEC2020 test function was selected for experimentation, and eight other algorithms were chosen for comparison. To verify the effectiveness of ACOADE for threshold image segmentation, the Kapur entropy method and Otsu method were used as objective functions for image segmentation and compared with eight other algorithms. Subsequently, the peak signal-to-noise ratio (PSNR), feature similarity index measure (FSIM), structural similarity index measure (SSIM), and Wilcoxon test were employed to evaluate the quality of the segmented images. The results indicated that ACOADE exhibited significant advantages in terms of objective function value, image quality metrics, convergence, and robustness. Full article
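
Two of the quantities named above, the Otsu between-class variance used as a segmentation objective and the PSNR used for evaluation, are sketched below for a 256-level grayscale histogram; the example image and thresholds are arbitrary.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance for a set of thresholds (the Otsu objective);
    higher is better."""
    p = hist.astype(float) / hist.sum()
    levels = np.arange(len(p))
    mu_total = (p * levels).sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    var_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        mu = (p[lo:hi] * levels[lo:hi]).sum() / w
        var_b += w * (mu - mu_total) ** 2
    return var_b

def psnr(original, segmented):
    """Peak signal-to-noise ratio between an 8-bit image and its quantized
    (threshold-mapped) version."""
    mse = np.mean((original.astype(float) - segmented.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

img = np.random.randint(0, 256, size=(64, 64))
hist = np.histogram(img, bins=256, range=(0, 256))[0]
print(otsu_between_class_variance(hist, [85, 170]))
print(psnr(img, (img // 128) * 128 + 64))
```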

19 pages, 6858 KiB  
Article
Application Possibilities of Orthophoto Data Based on Spectral Fractal Structure Containing Boundary Conditions
by József Berke
Remote Sens. 2025, 17(7), 1249; https://doi.org/10.3390/rs17071249 - 1 Apr 2025
Viewed by 317
Abstract
The self-similar structure-based analysis of digital images offers many new practical possibilities. The fractal dimension is one of the most frequently measured parameters if we want to use image data in measurable analyses in metric spaces. In practice, the fractal dimension can be measured well in simple files containing only image data. In the case of complex image data structures defined in different metric spaces, their measurement in metric space encounters many difficulties. In this work, we provide a practical solution for measuring orthophotos—as complex image data structures—based on the spectral fractal structure with boundary conditions (height, time, and temperature), and we present a further development of the related theoretical foundations. We discuss the determination of the optimal flight altitude in detail through practical examples. For this, in addition to the structural measurements on the images, we also use the well-known image entropy from information theory. The data obtained in this way can facilitate UAS operation that best suits further image processing tasks (e.g., classification, segmentation, and index analysis). Full article
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
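
As a simple point of reference for the fractal-dimension measurements discussed above, the sketch below estimates the box-counting dimension of a binary image; the spectral fractal structure with boundary conditions developed in the paper generalizes this, and the code is only the classical baseline.

```python
import numpy as np

def box_counting_dimension(binary_image):
    """Box-counting estimate of the fractal dimension of a binary image."""
    img = np.asarray(binary_image, dtype=bool)
    size = min(img.shape)
    sizes, counts = [], []
    s = size // 4                 # skip the coarsest, least reliable scales
    while s >= 2:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    # Slope of log(count) vs log(1/size) gives the dimension estimate.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Hypothetical test: a filled square should come out at roughly dimension 2.
square = np.zeros((256, 256), dtype=bool)
square[64:192, 64:192] = True
print(box_counting_dimension(square))
```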

19 pages, 6626 KiB  
Article
Action Recognition with 3D Residual Attention and Cross Entropy
by Yuhao Ouyang and Xiangqian Li
Entropy 2025, 27(4), 368; https://doi.org/10.3390/e27040368 - 31 Mar 2025
Viewed by 363
Abstract
This study proposes a three-dimensional (3D) residual attention network (3DRFNet) for human activity recognition by learning spatiotemporal representations from motion pictures. The core innovation integrates the attention mechanism into the 3D ResNet framework to emphasize key features and suppress irrelevant ones. In each 3D ResNet block, channel and spatial attention mechanisms generate attention maps for tensor segments, which are then multiplied by the input feature mapping to emphasize key features. Additionally, the integration of Fast Fourier Convolution (FFC) enhances the network’s capability to effectively capture temporal and spatial features. Simultaneously, we used the cross-entropy loss function to describe the difference between the predicted value and the ground truth (GT) to guide the model’s backpropagation. Subsequent experimental results have demonstrated that 3DRFNet achieved SOTA performance in human action recognition. 3DRFNet achieved accuracies of 91.7% and 98.7% on the HMDB-51 and UCF-101 datasets, respectively, which highlighted 3DRFNet’s advantages in recognition accuracy and robustness, particularly in effectively capturing key behavioral features in videos using both attention mechanisms. Full article
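
The cross-entropy loss that the abstract says guides backpropagation is the standard softmax cross-entropy over action classes; a generic NumPy version is sketched below with a hypothetical batch of clip logits, not the authors' training code.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Softmax cross-entropy between class logits and integer GT labels."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

# Hypothetical batch: 3 clips, 5 action classes.
logits = np.array([[2.0, 0.1, -1.0, 0.3, 0.0],
                   [0.2, 3.0, 0.0, -0.5, 1.0],
                   [0.0, 0.0, 0.0, 0.0, 4.0]])
labels = np.array([0, 1, 4])
print(softmax_cross_entropy(logits, labels))
```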

24 pages, 16730 KiB  
Article
LV-FeatEx: Large Viewpoint-Image Feature Extraction
by Yukai Wang, Yinghui Wang, Wenzhuo Li, Yanxing Liang, Liangyi Huang and Xiaojuan Ning
Mathematics 2025, 13(7), 1111; https://doi.org/10.3390/math13071111 - 27 Mar 2025
Viewed by 360
Abstract
Maintaining stable image feature extraction under viewpoint changes is challenging, particularly when the angle between the camera’s reverse direction and the object’s surface normal exceeds 40 degrees. Such conditions can result in unreliable feature detection. Consequently, this hinders the performance of vision-based systems. To address this, we propose a feature point extraction method named Large Viewpoint Feature Extraction (LV-FeatEx). Firstly, the method uses a dual-threshold approach based on image grayscale histograms and Kapur’s maximum entropy to constrain the AGAST (Adaptive and Generic Accelerated Segment Test) feature detector. Combined with the FREAK (Fast Retina Keypoint) descriptor, the method enables more effective estimation of camera motion parameters. Next, we design a longitude sampling strategy to create a sparser affine simulation model. Meanwhile, images undergo perspective transformation based on the camera motion parameters. This improves operational efficiency and aligns perspective distortions between two images, enhancing feature point extraction accuracy under large viewpoints. Finally, we verify the stability of the extracted feature points through feature point matching. Comprehensive experimental results show that, under large viewpoint changes, our method outperforms popular classical and deep learning feature extraction methods. The correct rate of feature point matching improves by an average of 40.1 percent, while speed increases by an average factor of 6.67. Full article
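
The dual-threshold step based on Kapur's maximum entropy can be illustrated by a brute-force search for the two gray levels that maximize the summed class entropies, as below; the coupling of these thresholds to the AGAST detector in LV-FeatEx is not reproduced, and the search itself is a naive O(L²) sketch under assumed 256-level images.

```python
import numpy as np

def kapur_dual_thresholds(gray_image, bins=256):
    """Exhaustive search for the two gray levels that maximize Kapur's
    entropy over a grayscale histogram (slow, illustrative only)."""
    hist = np.histogram(gray_image, bins=bins, range=(0, bins))[0]
    p = hist / hist.sum()

    def class_entropy(lo, hi):
        w = p[lo:hi].sum()
        if w <= 0:
            return 0.0
        q = p[lo:hi] / w
        q = q[q > 0]
        return float(-(q * np.log(q)).sum())

    best, best_pair = -np.inf, (0, 0)
    for t1 in range(1, bins - 1):
        for t2 in range(t1 + 1, bins):
            h = class_entropy(0, t1) + class_entropy(t1, t2) + class_entropy(t2, bins)
            if h > best:
                best, best_pair = h, (t1, t2)
    return best_pair

img = np.random.randint(0, 256, size=(64, 64))
print(kapur_dual_thresholds(img))
```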

17 pages, 27754 KiB  
Article
A Lightweight Entropy–Curvature-Based Attention Mechanism for Meningioma Segmentation in MRI Images
by Yifan Guan, Lei Zhang, Jiayi Li, Xiaolong Xu, Yu Yan and Leyi Zhang
Appl. Sci. 2025, 15(6), 3401; https://doi.org/10.3390/app15063401 - 20 Mar 2025
Viewed by 213
Abstract
Meningiomas are a common type of brain tumor. Due to their location within the cranial cavity, they can potentially cause irreversible damage to adjacent brain tissues. Clinical practice typically involves surgical resection for tumors that provoke symptoms and exhibit continued growth. Given the variability in the size and location of meningiomas, achieving rapid and precise localization is critical in clinical practice. Typically, meningiomas are imaged using magnetic resonance imaging (MRI), which produces 3D images that require significant memory resources for the segmentation task. In this paper, a lightweight 3D attention mechanism based on entropy–curvature (ECA) is proposed, which significantly enhances both parameter efficiency and inference accuracy. This attention mechanism uses a pooling method and two spatial attention modules to effectively reduce computational complexity while capturing spatial feature information. In terms of pooling, a tri-axis pooling method is developed to maximize information retention during the dimensionality reduction process of meningioma data, allowing the application of two-dimensional attention techniques to 3D medical images. Subsequently, this mechanism utilizes information entropy and curvature filters to filter noise and enhance feature information. Moreover, to validate the proposed method, the meningioma dataset from West China Hospital’s Department of Neurosurgery and the BraTS2021 dataset are used in our experiments. The results demonstrated superior performance compared to the state-of-the-art methods. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
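
The tri-axis pooling idea, collapsing a 3D MRI volume along each axis so that 2D attention modules can be applied, can be sketched as follows; mean pooling and the view names are assumptions, not necessarily the paper's exact operators.

```python
import numpy as np

def tri_axis_pooling(volume):
    """Pool a 3D volume (D, H, W) along each of its three axes to obtain
    three 2D maps suitable for 2-D attention modules."""
    vol = np.asarray(volume, dtype=float)
    axial    = vol.mean(axis=0)   # (H, W) view, collapsed along depth
    coronal  = vol.mean(axis=1)   # (D, W) view, collapsed along height
    sagittal = vol.mean(axis=2)   # (D, H) view, collapsed along width
    return axial, coronal, sagittal

# Hypothetical MRI-sized toy volume.
vol = np.random.rand(32, 128, 128)
print([m.shape for m in tri_axis_pooling(vol)])
# [(128, 128), (32, 128), (32, 128)]
```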

21 pages, 6151 KiB  
Article
REU-Net: A Remote Sensing Image Building Segmentation Network Based on Residual Structure and the Edge Enhancement Attention Module
by Tianen Yuan and Bo Hu
Appl. Sci. 2025, 15(6), 3206; https://doi.org/10.3390/app15063206 - 14 Mar 2025
Viewed by 544
Abstract
Building segmentation from high-resolution remote sensing images plays a crucial role in cadastral measurement, ecological monitoring, urban planning, and other applications. To address the current challenges in building segmentation from high-resolution remote sensing images, this paper proposes an improved deep learning-based network—REU-Net(2EEAM). The network replaces traditional convolutional blocks in U-Net with Residual Structures, deepening the network and alleviating the issue of vanishing gradients. Additionally, it substitutes the direct skip connections with two Edge Enhancement Attention Modules (EEAMs), enhancing the network’s ability to extract building edge information. Furthermore, a hybrid loss function combining edge consistency loss and binary cross-entropy loss is used to train the network, aiming to improve segmentation accuracy. Experimental results show that REU-Net(2EEAM) achieves optimal performance across multiple evaluation metrics (such as P, MPA, MIoU, and FWIoU), particularly excelling in the accurate recognition of building edges, significantly outperforming other network models. This method provides a reliable foundation for the further optimization of building segmentation algorithms for remote sensing images. Full article
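
A hybrid loss of the kind described above, binary cross-entropy plus an edge-consistency term, can be sketched in PyTorch by comparing Sobel edge maps of the prediction and the reference mask; the Sobel-based edge term and its weight are illustrative assumptions, not REU-Net(2EEAM)'s exact definition.

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edges(mask):
    """Gradient magnitude of a (N, 1, H, W) mask via Sobel filters."""
    gx = F.conv2d(mask, SOBEL_X, padding=1)
    gy = F.conv2d(mask, SOBEL_Y, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def hybrid_edge_bce_loss(logits, target, edge_weight=0.5):
    """BCE plus an edge-consistency term comparing predicted and reference
    building-mask edges."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    edge_term = F.l1_loss(edges(prob), edges(target))
    return bce + edge_weight * edge_term

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(hybrid_edge_bce_loss(logits, target).item())
```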

20 pages, 4707 KiB  
Article
Entropy-Optimized Dynamic Text Segmentation and RAG-Enhanced LLMs for Construction Engineering Knowledge Base
by Haiyuan Wang, Deli Zhang, Jianmin Li, Zelong Feng and Feng Zhang
Appl. Sci. 2025, 15(6), 3134; https://doi.org/10.3390/app15063134 - 13 Mar 2025
Viewed by 711
Abstract
In the field of construction engineering, there exists a dynamic evolution of extensive technical standards and specifications (e.g., GB/T and ISO series) that permeate the entire lifecycle of design, construction, and operation–maintenance. These standards require continuous version iteration to adapt to technological innovations. Engineers require specialized knowledge bases to assist in understanding and updating these standards. The advancement of large language models (LLMs) and Retrieval-Augmented Generation (RAG) technologies provides robust technical support for constructing domain-specific knowledge bases. This study developed and tested a vertical domain knowledge base construction scheme based on RAG architecture and LLMs, comprising three critical components: entropy-optimized dynamic text segmentation (EDTS), vector correlation-based chunk ranking, and iterative optimization of prompt engineering. This study employs an EDTS method to ensure information clarity and predictability within limited chunk lengths, followed by selecting 10 relevant chunks to form prompts for input into LLMs, thereby enabling efficient retrieval of vertical domain knowledge. Experimental validation using Qwen-series LLMs with a test set of 101 expert-verified questions from Chinese construction industry standards demonstrates that the overall test accuracy reaches 76%. The comparative experiments across model scales (1.5B, 3B, 7B, 14B, 32B, and 72B) quantitatively reveal the relationship between model size, answer accuracy, and execution time, providing decision-making guidance for computational resource–accuracy tradeoffs in engineering practice. Full article
(This article belongs to the Special Issue Natural Language Processing in the Era of Artificial Intelligence)
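
To give a flavor of entropy-bounded chunking for a RAG knowledge base, the toy sketch below packs sentences into chunks while keeping both the chunk length and the word-distribution entropy under caps; the caps, the greedy strategy, and the example sentences are assumptions, and the paper's EDTS optimization is more involved.

```python
import math
import re
from collections import Counter

def word_entropy(text):
    """Shannon entropy (bits) of the word distribution in a chunk."""
    words = re.findall(r"\w+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def entropy_bounded_chunks(sentences, max_len=400, max_entropy=7.0):
    """Greedy chunker: sentences are packed into a chunk until adding the
    next one would exceed either the length cap or the entropy cap."""
    chunks, current = [], ""
    for s in sentences:
        candidate = (current + " " + s).strip()
        if current and (len(candidate) > max_len or word_entropy(candidate) > max_entropy):
            chunks.append(current)
            current = s
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Hypothetical standard-like sentences.
sents = ["Concrete cover shall be at least 25 mm for interior members.",
         "Exposure class XC2 requires a minimum cover of 35 mm.",
         "Reinforcement spacing shall not exceed 300 mm in slabs.",
         "Curing shall continue for at least 7 days after casting."]
print(entropy_bounded_chunks(sents, max_len=120))
```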
