Search Results (3,777)

Search Parameters:
Keywords = discriminant image

23 pages, 2521 KB  
Article
Genetic Algorithm Based Band Relevance Selection in Hyperspectral Imaging for Plastic Waste Material Discrimination
by Carolina Blanch-Perez-del-Notario and Murali Jayapala
Sustainability 2025, 17(18), 8123; https://doi.org/10.3390/su17188123 (registering DOI) - 9 Sep 2025
Abstract
Hyperspectral imaging, in combination with microscopy, can increase material discrimination compared to standard microscopy. We explored the potential of discriminating pellet microplastic materials using a hyperspectral short-wavelength infrared (SWIR) camera, providing 100 bands in the 1100–1650 nm range, in combination with reflection microscopy. The identification of the most relevant spectral bands helps to increase system cost efficiency. The use of fewer bands reduces memory and processing requirements, and can also steer the development of sustainable, cost-efficient sensors with fewer bands. For this purpose, we present a genetic algorithm to perform band relevance analysis and propose novel algorithm optimizations. The results show that a few spectral bands (between 6 and 9) are sufficient for accurate (>80%) pixel discrimination of all 22 types of microplastic waste, contributing to sustainable development goals (SDGs) such as SDG 6 (‘clean water and sanitation’) or SDG 9 (‘industry, innovation, and infrastructure’). In addition, we study the impact of the classifier method and the width of the spectral response on band selection, neither of which has been addressed in the current state-of-the-art. Finally, we propose a method to steer band selection towards a more balanced distribution of classification accuracy, increasing its applicability in multiclass applications.
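
The abstract describes wrapper-style band selection with a genetic algorithm. As a rough illustration of that idea only (not the authors' algorithm or data), the sketch below evolves binary band masks and scores them by the cross-validated accuracy of a simple k-NN classifier on synthetic pixel spectra; the population size, mutation rate, and target subset size are arbitrary assumptions.

```python
# Hypothetical GA-based band-selection sketch; X, y, and all hyperparameters are toy values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select_bands(X, y, n_bands, pop_size=20, generations=25, target_k=8):
    # Each individual is a binary mask over the spectral bands.
    pop = (rng.random((pop_size, n_bands)) < target_k / n_bands).astype(int)
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        pop = pop[np.argsort(scores)[::-1]]              # elitist sort (best first)
        children = []
        while len(children) < pop_size // 2:
            p1, p2 = pop[rng.integers(0, pop_size // 2, 2)]
            cut = rng.integers(1, n_bands)               # single-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            flip = rng.random(n_bands) < 1.0 / n_bands   # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([pop[: pop_size - len(children)], *children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return np.flatnonzero(pop[int(np.argmax(scores))])

# Toy data: 200 "pixels", 100 bands, 4 material classes.
X, y = rng.random((200, 100)), rng.integers(0, 4, 200)
print("selected bands:", ga_select_bands(X, y, n_bands=100))
```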

18 pages, 2596 KB  
Article
Research on CNC Machine Tool Spindle Fault Diagnosis Method Based on Deep Residual Shrinkage Network with Dynamic Convolution and Selective Kernel Attention Model
by Xiaoxu Li, Jixuan Wang, Jianqiang Wang, Jiahao Wang, Jiamin Liu, Jiaming Chen and Xuelian Yu
Algorithms 2025, 18(9), 569; https://doi.org/10.3390/a18090569 - 9 Sep 2025
Abstract
Rolling bearing vibration signals are often severely affected by strong external noise, which can obscure fault-related features and hinder accurate diagnosis. To address this challenge, this paper proposes an enhanced Deep Residual Shrinkage Network with Dynamic Convolution and Selective Kernel Attention (DDRSN-SKA). First, one-dimensional vibration signals are converted into two-dimensional time–frequency images using the Continuous Wavelet Transform (CWT), providing richer input representations. Then, a dynamic convolution module is introduced to adaptively adjust kernel weights based on the input, enabling the network to better extract salient features. To improve feature discrimination, a Selective Kernel Attention (SKAttention) module is incorporated into the intermediate layers of the network. By applying a multi-receptive-field channel attention mechanism, the network can emphasize critical information and suppress irrelevant features. The final classification layer determines the fault types. Experiments conducted on both the Case Western Reserve University (CWRU) dataset and a laboratory-collected bearing dataset demonstrate that DDRSN-SKA achieves diagnostic accuracies of 98.44% and 94.44% under −8 dB Gaussian and Laplace noise, respectively. These results confirm the model’s strong noise robustness and its suitability for fault diagnosis in noisy industrial environments.
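
For readers unfamiliar with the preprocessing step, the snippet below shows one common way to turn a 1-D vibration signal into a 2-D time–frequency image with the Continuous Wavelet Transform using PyWavelets; the sampling rate, Morlet wavelet, scale range, and synthetic signal are illustrative assumptions, not the paper's settings.

```python
# Minimal CWT sketch (signal -> 2-D time-frequency image), not the full DDRSN-SKA pipeline.
import numpy as np
import pywt

fs = 12_000                                  # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 160 * t) + 0.5 * np.random.randn(t.size)  # noisy tone

scales = np.arange(1, 65)                    # 64 scales -> 64-row image
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

image = np.abs(coeffs)                       # magnitude scalogram
image = (image - image.min()) / (image.max() - image.min() + 1e-12)   # scale to [0, 1]
print(image.shape)                           # (64, len(signal)); ready to feed a CNN
```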

25 pages, 18797 KB  
Article
AEFusion: Adaptive Enhanced Fusion of Visible and Infrared Images for Night Vision
by Xiaozhu Wang, Chenglong Zhang, Jianming Hu, Qin Wen, Guifeng Zhang and Min Huang
Remote Sens. 2025, 17(18), 3129; https://doi.org/10.3390/rs17183129 - 9 Sep 2025
Abstract
Under night vision conditions, visible-spectrum images often fail to capture background details. Conventional visible and infrared fusion methods generally overlay thermal signatures without preserving latent features in low-visibility regions. This paper proposes a novel deep learning-based fusion algorithm to enhance visual perception in night driving scenarios. Firstly, a local adaptive enhancement algorithm corrects underexposed and overexposed regions in visible images, thereby preventing oversaturation during brightness adjustment. Secondly, ResNet152 extracts hierarchical feature maps from enhanced visible and infrared inputs. Max pooling and average pooling operations preserve critical features and distinct information across these feature maps. Finally, Linear Discriminant Analysis (LDA) reduces dimensionality and decorrelates features. We reconstruct the fused image by the weighted integration of the source images. The experimental results on benchmark datasets show that our approach outperforms state-of-the-art methods in both objective metrics and subjective visual assessments.
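
As a highly simplified stand-in for the fusion stage only (not AEFusion's ResNet152/LDA pipeline), the sketch below fuses a visible and an infrared frame with per-pixel weights derived from a crude saliency proxy; the arrays and the weighting rule are assumptions for illustration.

```python
# Hypothetical per-pixel weighted visible/infrared fusion sketch.
import numpy as np

def fuse(visible, infrared, eps=1e-8):
    # Saliency proxy: deviation from the mean intensity of each source
    # (a crude stand-in for the deep feature maps used in the paper).
    sal_v = np.abs(visible - visible.mean())
    sal_i = np.abs(infrared - infrared.mean())
    w_v = sal_v / (sal_v + sal_i + eps)        # per-pixel weight for the visible frame
    return w_v * visible + (1.0 - w_v) * infrared

rng = np.random.default_rng(1)
vis = rng.random((240, 320))                   # toy grayscale visible frame
ir = rng.random((240, 320))                    # toy infrared frame
fused = fuse(vis, ir)
print(fused.shape, float(fused.min()), float(fused.max()))
```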

40 pages, 2253 KB  
Systematic Review
Airborne and Spaceborne Hyperspectral Remote Sensing in Urban Areas: Methods, Applications, and Trends
by José Antonio Gámez García, Giacomo Lazzeri and Deodato Tapete
Remote Sens. 2025, 17(17), 3126; https://doi.org/10.3390/rs17173126 - 8 Sep 2025
Abstract
This study provides a comprehensive and systematic review of hyperspectral remote sensing in urban areas, with a focus on the evolving roles of airborne and spaceborne platforms. The main objective is to assess the state of the art and identify current trends, challenges, and opportunities arising from the scientific literature (the gray literature was intentionally not included). Despite the proven potential of hyperspectral imaging to discriminate between urban materials with high spectral similarity, its application in urban environments remains underexplored compared to natural settings. A systematic review of 1081 peer-reviewed articles published between 1993 and 2024 was conducted using the Scopus database, resulting in 113 selected publications. Articles were categorized by scope (application, method development, review), sensor type, image processing technique, and target application. Key methods include Spectral Unmixing, Machine Learning (ML) approaches such as Support Vector Machines and Random Forests, and Deep Learning (DL) models like Convolutional Neural Networks. The review reveals a historical reliance on airborne data due to their higher spatial resolution and the availability of benchmark datasets, while the use of spaceborne data has increased notably in recent years. Major urban applications identified include land cover classification, impervious surface detection, urban vegetation mapping, and Local Climate Zone analysis. However, limitations such as lack of training data and underutilization of data fusion techniques persist. ML methods currently dominate due to their robustness with small datasets, while DL adoption is growing but remains constrained by data and computational demands. This review highlights the growing maturity of hyperspectral remote sensing in urban studies and its potential for sustainable urban planning, environmental monitoring, and climate adaptation. Continued improvements in satellite missions and data accessibility will be key to transitioning from theoretical research to operational applications.
(This article belongs to the Special Issue Application of Photogrammetry and Remote Sensing in Urban Areas)

25 pages, 4406 KB  
Article
Multi-Scale Dual Discriminator Generative Adversarial Network for Gas Leakage Detection
by Saif H. A. Al-Khazraji, Hafsa Iqbal, Jesús Belmar Rubio, Fernando García and Abdulla Al-Kaff
Electronics 2025, 14(17), 3564; https://doi.org/10.3390/electronics14173564 - 8 Sep 2025
Abstract
Gas leakages pose significant safety risks in urban environments and industrial sectors like the Oil and Gas Industry (OGI), leading to accidents, fatalities, and economic losses. This paper introduces a novel generative AI framework, the Multi-Scale Dual Discriminator Generative Adversarial Network (MSDD-GAN), designed to detect and localize gas leaks by generating thermal images from RGB input images. The proposed method integrates three key innovations: (1) Attention-Guided Masking (AttMask) for precise gas leakage localization using saliency maps and a circular Region of Interest (ROI), enabling pixel-level validation; (2) multi-scale input processing to enhance feature learning with limited data; and (3) a dual discriminator to validate thermal image realism and leakage localization accuracy. A comprehensive dataset from laboratory and industrial environments was collected using a FLIR thermal camera. The MSDD-GAN demonstrated robust performance by generating thermal images with gas leakage indications at a mean accuracy of 81.6%, outperforming baseline cGANs by leveraging a multi-scale generator and dual adversarial losses. By correlating ice formation in RGB images with leakage indications in thermal images, the model addresses critical challenges of OGI applications, including data scarcity and validation reliability, offering a robust solution for continuous gas leak monitoring in pipelines.
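
The circular-ROI masking idea can be illustrated in a few lines: given a saliency map, take its peak and keep a disk around it. The sketch below is only a schematic of that step with a synthetic saliency map and an assumed radius, not the MSDD-GAN's AttMask module.

```python
# Hypothetical circular-ROI mask from a saliency map (schematic only).
import numpy as np

def circular_roi_mask(saliency, radius=20):
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)   # saliency peak
    yy, xx = np.ogrid[:saliency.shape[0], :saliency.shape[1]]
    return ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2          # boolean ROI

rng = np.random.default_rng(11)
saliency = rng.random((128, 160))
saliency[40:50, 60:70] += 2.0                  # synthetic "leak" hotspot
mask = circular_roi_mask(saliency)
print(int(mask.sum()), "pixels inside the candidate leak ROI")
```
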
23 pages, 2987 KB  
Article
Proteomic Profiling of EUS-FNA Samples Differentiates Pancreatic Adenocarcinoma from Mass-Forming Chronic Pancreatitis
by Casandra Teodorescu, Ioana-Ecaterina Pralea, Maria-Andreea Soporan, Rares Ilie Orzan, Maria Iacobescu, Andrada Seicean and Cristina-Adela Iuga
Biomedicines 2025, 13(9), 2199; https://doi.org/10.3390/biomedicines13092199 - 8 Sep 2025
Abstract
Background/Objectives: Mass-forming chronic pancreatitis (MFP) and pancreatic ductal adenocarcinoma (PDAC) can present with overlapping radiological, clinical, and serological features in patients with underlying chronic pancreatitis (CP), making differential diagnosis particularly challenging. Current diagnostic tools, including CA19-9 and endoscopic ultrasound (EUS) imaging, often lack the specificity needed to reliably distinguish between these conditions. The objective of this study was to investigate whether the proteomic profiling of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) samples could provide molecular-level discrimination between MFP and PDAC in patients with CP. Methods: Thirty CP patients with solid pancreatic lesions were prospectively enrolled: 15 with histologically confirmed PDAC and 15 with MFP. Traditional diagnostic parameters, including CA19-9 levels and EUS characteristics, were recorded but found insufficient for differentiation. EUS-FNA samples were analyzed using label-free mass spectrometry. A total of 928 proteins were identified in PDAC samples and 555 in MFP samples. Differential abundance analysis and pathway enrichment were performed. Results: Overall, 88 proteins showed significant differential abundance between PDAC and MFP samples, of which 26 met stringent statistical thresholds. Among these, Carboxylesterase 2 (CES2), Carcinoembryonic Antigen-Related Cell Adhesion Molecule 1 (CEACAM1), Lumican (LUM), Transmembrane Protein 205 (TMEM205), and NAD(P)H Quinone Dehydrogenase 1 (NQO1) emerged as key discriminatory proteins. Pathway enrichment analysis revealed distinct biological processes between the groups, including mitochondrial fatty acid β-oxidation, Rho GTPase signaling, and platelet degranulation. Conclusions: Proteomic signatures derived from EUS-FNA samples offer a promising molecular approach to distinguish inflammatory pseudotumoral lesions from malignant pancreatic tumors in CP patients. This minimally invasive strategy could enhance diagnostic accuracy where current methods fall short. Further validation in larger, multicenter cohorts is warranted to confirm these findings and evaluate their clinical applicability.
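
As a generic illustration of differential-abundance screening (not the authors' proteomics pipeline), the sketch below runs a per-protein Welch t-test on synthetic log2 intensities for two 15-sample groups and applies Benjamini–Hochberg FDR control; all data, thresholds, and sizes are placeholders.

```python
# Hypothetical differential-abundance sketch on synthetic log2 intensities.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_proteins = 500
pdac = rng.normal(20, 1, size=(15, n_proteins))   # 15 PDAC samples
mfp = rng.normal(20, 1, size=(15, n_proteins))    # 15 MFP samples
pdac[:, :30] += 1.5                               # spike-in: 30 "up" proteins

log2_fc = pdac.mean(axis=0) - mfp.mean(axis=0)
_, pvals = ttest_ind(pdac, mfp, axis=0, equal_var=False)      # Welch t-test per protein
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

hits = np.flatnonzero(reject & (np.abs(log2_fc) > 1.0))
print(f"{hits.size} proteins pass |log2FC| > 1 and FDR < 0.05")
```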
(This article belongs to the Special Issue Cellular and Molecular Mechanisms in Gastrointestinal Tract Disease)

32 pages, 5663 KB  
Article
Static and Dynamic Malware Analysis Using CycleGAN Data Augmentation and Deep Learning Techniques
by Moses Ashawa, Robert McGregor, Nsikak Pius Owoh, Jude Osamor and John Adejoh
Appl. Sci. 2025, 15(17), 9830; https://doi.org/10.3390/app15179830 (registering DOI) - 8 Sep 2025
Abstract
The increasing sophistication of malware and the use of evasive techniques such as obfuscation pose significant challenges to traditional detection methods. This paper presents a deep convolutional neural network (CNN) framework that integrates static and dynamic analysis for malware classification using RGB image representations. Binary and memory dump files are transformed into images to capture structural and behavioural patterns often missed in raw formats. The proposed system comprises two tailored CNN architectures: a static model with four convolutional blocks designed for binary-derived images and a dynamic model with three blocks optimised for noisy memory dump data. To enhance generalisation, we employed Cycle-Consistent Generative Adversarial Networks (CycleGANs) for cross-domain image augmentation, expanding the dataset to over 74,000 RGB images sourced from benchmark repositories (MaleVis and Dumpware10). The static model achieved 99.45% accuracy and perfect recall, demonstrating high sensitivity with minimal false positives. The dynamic model achieved 99.21% accuracy. Experimental results demonstrate that the fused approach effectively detects malware variants by learning discriminative visual patterns from both structural and runtime perspectives. This research contributes a scalable and robust solution for malware classification, in contrast to approaches based on a single analysis type.
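
The binary-to-image transformation can be illustrated independently of the CNN and CycleGAN stages. The sketch below pads or truncates a byte stream into a fixed-size RGB array; the 224x224 target size and the synthetic "binary" are assumptions, not the paper's exact preprocessing.

```python
# Hypothetical bytes-to-RGB-image conversion sketch.
import numpy as np

def bytes_to_rgb(raw: bytes, side: int = 224) -> np.ndarray:
    buf = np.frombuffer(raw, dtype=np.uint8)
    need = side * side * 3
    if buf.size < need:                         # zero-pad short files
        buf = np.pad(buf, (0, need - buf.size))
    return buf[:need].reshape(side, side, 3)    # truncate long files

# Usage with a synthetic "binary" instead of a real sample:
fake_binary = np.random.default_rng(3).integers(0, 256, 50_000, dtype=np.uint8).tobytes()
img = bytes_to_rgb(fake_binary)
print(img.shape, img.dtype)                     # (224, 224, 3) uint8, ready for a CNN
```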

14 pages, 2110 KB  
Article
NGBoost Classifier Using Deep Features for Pneumonia Chest X-Ray Classification
by Nagashree Satish Chandra, Shyla Raj and B. S. Mahanand
Appl. Sci. 2025, 15(17), 9821; https://doi.org/10.3390/app15179821 (registering DOI) - 8 Sep 2025
Abstract
Pneumonia remains a major global health concern, leading to significant mortality and morbidity. The identification of pneumonia from chest X-rays can be difficult due to its similarity to other lung disorders. In this paper, the Natural Gradient Boosting (NGBoost) classifier is employed on deep features obtained from the ResNet50 model to classify chest X-ray images as normal or pneumonia-affected. The NGBoost classifier, a probabilistic machine learning model, is used in this study to evaluate the discriminative power of handcrafted features such as Haar, shape, and texture descriptors against deep features obtained from convolutional neural network models such as ResNet50, DenseNet121, and VGG16. The dataset used in this study is obtained from the RSNA pneumonia challenge and consists of 26,684 chest X-ray images. The experimental results show that the NGBoost classifier obtained an accuracy of 0.98 using deep features extracted from the ResNet50 model. From this analysis, it is found that deep features play an important role in pneumonia chest X-ray classification.
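
A minimal sketch of the classification stage, assuming the ngboost package and features already extracted offline: here the "deep features" are random placeholders of an assumed dimensionality rather than real ResNet50 activations, so only the workflow (fit, probabilistic prediction, accuracy) is illustrated.

```python
# Hypothetical NGBoost-on-deep-features sketch; the feature matrix is synthetic.
import numpy as np
from ngboost import NGBClassifier
from ngboost.distns import Bernoulli
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 512))               # stand-in for pooled CNN features
y = rng.integers(0, 2, 600)                   # 0 = normal, 1 = pneumonia
X[y == 1, :10] += 1.0                         # make the toy task separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = NGBClassifier(Dist=Bernoulli, n_estimators=100, verbose=False)   # probabilistic boosting
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)               # per-class probabilities, not just labels
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```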
(This article belongs to the Section Computing and Artificial Intelligence)

17 pages, 1078 KB  
Article
Prototype-Based Two-Stage Few-Shot Instance Segmentation with Flexible Novel Class Adaptation
by Qinying Zhu, Yilin Zhang, Peng Xiao, Mengxi Ying, Lei Zhu and Chengyuan Zhang
Mathematics 2025, 13(17), 2889; https://doi.org/10.3390/math13172889 - 7 Sep 2025
Abstract
Few-shot instance segmentation (FSIS) is devised to address the intricate challenge of instance segmentation when labeled data for novel classes is scant. Nevertheless, existing methodologies encounter notable constraints in the agile expansion of novel classes and the management of memory overhead. The integration workflow for novel classes is inflexible, and given the necessity of retaining class exemplars during both training and inference stages, considerable memory consumption ensues. To surmount these challenges, this study introduces an innovative framework encompassing a two-stage “base training-novel class fine-tuning” paradigm. It acquires discriminative instance-level embedding representations. Concretely, instance embeddings are aggregated into class prototypes, and the storage of embedding vectors as opposed to images inherently mitigates the issue of memory overload. Via a Region of Interest (RoI)-level cosine similarity matching mechanism, the flexible augmentation of novel classes is realized, devoid of the requirement for supplementary training and independent of historical data. Experimental validations attest that this approach significantly outperforms state-of-the-art techniques in mainstream benchmark evaluations. More crucially, its memory-optimized attributes facilitate, for the first time, the conjoint assessment of FSIS performance across all classes within the COCO dataset. Visualized instances (incorporating colored masks and class annotations of objects across diverse scenarios) further substantiate the efficacy of the method in real-world complex contexts.
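
The prototype-matching idea (class prototypes from averaged embeddings, cosine-similarity matching, novel classes added by appending a prototype) can be sketched with plain NumPy; the embedding dimension, shot count, and random vectors below are illustrative assumptions, not the paper's model.

```python
# Hypothetical prototype-based cosine-similarity classification sketch.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def build_prototypes(embeddings, labels, n_classes):
    # Aggregate instance embeddings into one prototype per class (mean, then re-normalize).
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in range(n_classes)])
    return l2_normalize(protos)

def classify(query, prototypes):
    sims = l2_normalize(query) @ prototypes.T          # cosine-similarity matrix
    return sims.argmax(axis=1), sims

rng = np.random.default_rng(2)
dim, n_classes, shots = 128, 5, 5
support = l2_normalize(rng.normal(size=(shots * n_classes, dim)))
support_labels = np.repeat(np.arange(n_classes), shots)
prototypes = build_prototypes(support, support_labels, n_classes)

# Adding a novel class is just appending one more prototype row -- no retraining:
novel_proto = l2_normalize(rng.normal(size=(1, dim)))
prototypes = np.vstack([prototypes, novel_proto])

queries = l2_normalize(rng.normal(size=(10, dim)))
pred, _ = classify(queries, prototypes)
print(pred)
```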
(This article belongs to the Special Issue Structural Networks for Image Application)

19 pages, 2646 KB  
Article
A Comprehensive Study of MCS-TCL: Multi-Functional Sampling for Trustworthy Compressive Learning
by Fuma Kimishima, Jian Yang and Jinjia Zhou
Information 2025, 16(9), 777; https://doi.org/10.3390/info16090777 (registering DOI) - 7 Sep 2025
Abstract
Compressive Learning (CL) is an emerging paradigm that allows machine learning models to perform inference directly from compressed measurements, significantly reducing sensing and computational costs. While existing CL approaches have achieved competitive accuracy compared to traditional image-domain methods, they typically rely on reconstruction to address information loss and often neglect uncertainty arising from ambiguous or insufficient data. In this work, we propose MCS-TCL, a novel and trustworthy CL framework based on Multi-functional Compressive Sensing Sampling. Our approach unifies sampling, compression, and feature extraction into a single operation by leveraging the compatibility between compressive sensing and convolutional feature learning. This joint design enables efficient signal acquisition while preserving discriminative information, leading to feature representations that remain robust across varying sampling ratios. To enhance the model’s reliability, we incorporate evidential deep learning (EDL) during training. EDL estimates the distribution of evidence over output classes, enabling the model to quantify predictive uncertainty and assign higher confidence to well-supported predictions. Extensive experiments on image classification tasks show that MCS-TCL outperforms existing CL methods, achieving state-of-the-art accuracy at a low sampling rate of 6%. Additionally, our framework reduces model size by 85.76% while providing meaningful uncertainty estimates, demonstrating its effectiveness in resource-constrained learning scenarios.
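
To make the sampling-ratio notion concrete, the sketch below applies a random Gaussian measurement matrix to a toy image at roughly a 6% sampling ratio; MCS-TCL's learned sampling/feature operator and evidential head are not reproduced, and all sizes are assumptions.

```python
# Hypothetical compressive-sensing sampling sketch at a ~6% sampling ratio.
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((32, 32))                         # toy image
n = image.size
m = int(0.06 * n)                                    # 6% of 1024 pixels -> 61 measurements

phi = rng.normal(0, 1.0 / np.sqrt(m), size=(m, n))   # random Gaussian measurement matrix
measurements = phi @ image.ravel()                   # compressed measurements, length m
print(measurements.shape)                            # a downstream model infers from these
```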
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

22 pages, 25636 KB  
Article
SARFT-GAN: Semantic-Aware ARConv Fused Top-k Generative Adversarial Network for Remote Sensing Image Denoising
by Haotian Sun, Ruifeng Duan, Guodong Sun, Haiyan Zhang, Feixiang Chen, Feng Yang and Jia Cao
Remote Sens. 2025, 17(17), 3114; https://doi.org/10.3390/rs17173114 - 7 Sep 2025
Abstract
Optical remote sensing images play a pivotal role in numerous applications, notably feature recognition and scene semantic segmentation. Nevertheless, their efficacy is frequently compromised by various noise types, which detrimentally impact practical usage. We have meticulously crafted a novel attention module amalgamating Adaptive Rectangular Convolution (ARConv) with Top-k Sparse Attention. This design dynamically modifies feature receptive fields, effectively mitigating superfluous interference and enhancing multi-scale feature extraction. Concurrently, we introduce a Semantic-Aware Discriminator, leveraging visual-language prior knowledge derived from the Contrastive Language–Image Pretraining (CLIP) model, steering the generator towards a more realistic texture reconstruction. This research introduces an innovative image denoising model termed the Semantic-Aware ARConv Fused Top-k Generative Adversarial Network (SARFT-GAN). Addressing shortcomings in traditional convolution operations, attention mechanisms, and discriminator design, our approach facilitates a synergistic optimization between noise suppression and feature preservation. Extensive experiments on RRSSRD, SECOND, a private Jilin-1 set, and real-world NWPU-RESISC45 images demonstrate consistent gains. Across three noise levels and four scenarios, SARFT-GAN attains state-of-the-art perceptual quality, achieving the best FID in all 12 settings and strong LPIPS, while remaining competitive on PSNR/SSIM.
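
Top-k sparse attention in its generic form can be written in a few lines: compute scaled dot-product scores, keep only the k largest keys per query, and softmax over the survivors. The single-head NumPy sketch below illustrates only that generic mechanism, not the SARFT-GAN module; sizes and k are arbitrary.

```python
# Hypothetical single-head top-k sparse attention sketch.
import numpy as np

def topk_attention(q, k, v, topk=4):
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                           # (n_q, n_k) similarity
    # Keep only the top-k keys per query; mask the rest to -inf before softmax.
    kth = np.partition(scores, -topk, axis=-1)[:, -topk][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(6)
q, k, v = rng.normal(size=(8, 32)), rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
out = topk_attention(q, k, v, topk=4)
print(out.shape)                                            # (8, 32)
```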

27 pages, 3998 KB  
Article
Graph-Symmetry Cognitive Learning for Multi-Scale Cloud Imaging: An Uncertainty-Quantified Geometric Paradigm via Hierarchical Graph Networks
by Qing Xu, Zichen Zhang, Guanfang Wang and Yunjie Chen
Symmetry 2025, 17(9), 1477; https://doi.org/10.3390/sym17091477 - 7 Sep 2025
Abstract
Cloud imagery analysis from terrestrial observation points represents a fundamental capability within contemporary atmospheric monitoring infrastructure, serving essential functions in meteorological prediction, climatic surveillance, and hazard alert systems. However, traditional ground-based cloud image segmentation methods have fundamental limitations, particularly their inability to effectively model the graph structure and symmetry in cloud data. To address this, we propose G-CLIP, a ground-based cloud image segmentation method based on graph symmetry. G-CLIP synergistically integrates four innovative modules. First, the Prototype-Driven Asymmetric Attention (PDAA) module is designed to reduce complexity and enhance feature learning by leveraging permutation invariance and graph symmetry principles. Second, the Symmetry-Adaptive Graph Convolution Layer (SAGCL) is constructed, modeling pixels as graph nodes, using cosine similarity to build a sparse discriminative structure, and ensuring stability through symmetry and degree normalization. Third, the Multi-Scale Directional Edge Optimizer (MSDER) is developed to explicitly model complex symmetric relationships in cloud features from a graph theory perspective. Finally, the Uncertainty-Driven Loss Optimizer (UDLO) is proposed to dynamically adjust weights to address foreground–background imbalance and provide uncertainty quantification. Extensive experiments on four benchmark datasets demonstrate that our method achieves state-of-the-art performance across all evaluation metrics. Our work provides a novel theoretical framework and practical solution for applying graph neural networks (GNNs) to meteorology, particularly by integrating graph properties with uncertainty and leveraging symmetries from graph theory for complex spatial modeling.
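
A generic version of the graph-construction step (cosine-similarity adjacency, sparsification, enforced symmetry, and D^{-1/2} A D^{-1/2} normalization) is sketched below on random node features; it is a schematic of standard GCN-style propagation, not the SAGCL layer, and the threshold and sizes are assumptions.

```python
# Hypothetical symmetric, degree-normalized adjacency from node features.
import numpy as np

def normalized_adjacency(features, threshold=0.5):
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T                                    # cosine similarity
    adj = np.where(sim > threshold, sim, 0.0)        # sparsify weak edges
    adj = 0.5 * (adj + adj.T)                        # enforce symmetry
    np.fill_diagonal(adj, 1.0)                       # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return (adj * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]   # D^-1/2 A D^-1/2

rng = np.random.default_rng(8)
nodes = rng.normal(size=(100, 16))                   # 100 nodes, 16-D features
a_hat = normalized_adjacency(nodes)
propagated = a_hat @ nodes                           # one message-passing step
print(propagated.shape)
```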
(This article belongs to the Special Issue Symmetry and Asymmetry Study in Graph Theory)

16 pages, 12711 KB  
Article
Self-Learning-Based Fringe Domain Conversion for 3D Surface Measurement of Translucent Objects at the Mesoscopic Scale
by Wenqing Su, Tao Zou, Huankun Chen, Haipeng Niu, Zhaoshui He, Yumei Zhao, Zhuyun Chen and Ji Tan
Photonics 2025, 12(9), 898; https://doi.org/10.3390/photonics12090898 (registering DOI) - 7 Sep 2025
Abstract
Three-dimensional measurement of translucent objects using structured light techniques remains fundamentally challenging due to severe degradation of fringe patterns caused by subsurface scattering, which inevitably introduces phase errors and compromises measurement accuracy. Although deep learning has emerged as a powerful tool for fringe analysis, its practical implementation is hindered by the requirement for large-scale labeled datasets, which are particularly impractical to obtain in scattering-dominant measurement scenarios. To overcome these limitations, we developed a self-learning-based fringe domain conversion method inspired by image style transfer principles, where degraded and ideal fringe patterns are treated as distinct domains for cyclic translation. The proposed framework employs dual generators and discriminators to establish cycle-consistency constraints while incorporating both numerical intensity-based and physical phase-derived optimization targets, effectively suppressing phase errors and improving fringe modulation without requiring paired training data. Experimental validation demonstrated superior performance in reconstructing high-fidelity 3D morphology of translucent objects, establishing this approach as a robust solution for precision metrology of complex scattering media.
(This article belongs to the Special Issue Advancements in Optical Metrology and Imaging)

33 pages, 6850 KB  
Article
TWDTW-Based Maize Mapping Using Optimal Time Series Features of Sentinel-1 and Sentinel-2 Images
by Haoran Yan, Ruozhen Wang, Jiaqian Lian, Xinyue Duan, Liping Wan, Jiao Guo and Pengliang Wei
Remote Sens. 2025, 17(17), 3113; https://doi.org/10.3390/rs17173113 - 6 Sep 2025
Abstract
Time-Weighted Dynamic Time Warping (TWDTW), adapted from speech recognition, is used in agricultural remote sensing to model crop growth, particularly under limited ground sample conditions. However, most related studies rely on full-season or empirically selected features, overlooking the systematic optimization of features at each observation time to improve TWDTW’s performance. This often introduces a large amount of redundant information that is irrelevant to crop discrimination and increases computational complexity. Therefore, this study focused on maize as the target crop and systematically conducted mapping experiments using Sentinel-1/2 images to evaluate the potential of integrating TWDTW with optimally selected multi-source time series features. The optimal multi-source time series features for distinguishing maize from non-maize were determined using a two-step Jeffries–Matusita (JM) distance-based global search strategy (i.e., twelve spectral bands, the Normalized Difference Vegetation Index, the Enhanced Vegetation Index, and the two microwave backscatter coefficients collected during the maize jointing to tasseling stages). Then, based on the full-season and optimal multi-source time series features, we compared TWDTW with two widely used temporal machine learning models from the agricultural remote sensing community. The results showed that TWDTW outperformed traditional supervised temporal machine learning models. In particular, compared with TWDTW driven by the full-season multi-source features, TWDTW using the optimal multi-source time series features improved user accuracy by 0.43% and 2.30%, and producer accuracy by 7.51% and 2.99%, for the years 2020 and 2021, respectively. Additionally, it reduced computational costs to only 25% of those of the full-season scheme. Finally, maize maps of Yangling District from 2020 to 2023 were produced by TWDTW based on the optimal multi-source time series features. Their overall accuracies remained consistently above 90% across the four years, and the average relative error between the maize area extracted from remote sensing images and that reported in the statistical yearbook was only 6.61%. This study provides guidance for improving the performance of TWDTW in large-scale crop mapping tasks, which is particularly important under conditions of limited sample availability.
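
For context, the Jeffries–Matusita separability used for feature scoring has a closed form under a Gaussian class assumption, JM = 2(1 − e^(−B)) with B the Bhattacharyya distance. The sketch below computes it for two toy classes; it only illustrates the scoring criterion, not the paper's two-step global search.

```python
# Hypothetical Jeffries-Matusita separability sketch (Gaussian class assumption).
import numpy as np

def jm_distance(x1, x2):
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = 0.5 * (c1 + c2)
    diff = (m1 - m2)[:, None]
    # Bhattacharyya distance B, then JM = 2 (1 - exp(-B)); JM lies in [0, 2].
    b = 0.125 * (diff.T @ np.linalg.inv(c) @ diff).item() \
        + 0.5 * np.log(np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))

rng = np.random.default_rng(9)
maize = rng.normal(0.6, 0.05, size=(300, 4))        # toy 4-feature samples for "maize"
other = rng.normal(0.4, 0.05, size=(300, 4))        # toy samples for "non-maize"
print(f"JM = {jm_distance(maize, other):.3f}")      # close to 2 => well separable
```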

41 pages, 28333 KB  
Article
ACPOA: An Adaptive Cooperative Pelican Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation
by YuLong Zhang, Jianfeng Wang, Xiaoyan Zhang and Bin Wang
Biomimetics 2025, 10(9), 596; https://doi.org/10.3390/biomimetics10090596 - 6 Sep 2025
Abstract
Multi-threshold image segmentation plays an irreplaceable role in extracting discriminative structural information from complex images. It is one of the core technologies for achieving accurate target detection and regional analysis, and its segmentation accuracy directly affects the analysis quality and decision reliability in key fields such as medical imaging, remote sensing interpretation, and industrial inspection. However, most existing image segmentation algorithms suffer from slow convergence speeds and low solution accuracy. Therefore, this paper proposes an Adaptive Cooperative Pelican Optimization Algorithm (ACPOA), an improved version of the Pelican Optimization Algorithm (POA), and applies it to global optimization and multilevel threshold image segmentation tasks. ACPOA integrates three innovative strategies: the elite pool mutation strategy guides the population toward high-quality regions by constructing an elite pool composed of the three individuals with the best fitness, effectively preventing the premature loss of population diversity; the adaptive cooperative mechanism enhances search efficiency in high-dimensional spaces by dynamically allocating subgroups and dimensions and performing specialized updates to achieve division of labor and global information sharing; and the hybrid boundary handling technique adopts a probabilistic hybrid approach to deal with boundary violations, balancing exploitation, exploration, and diversity while retaining more useful search information. Comparative experiments with eight advanced algorithms on the CEC2017 and CEC2022 benchmark test suites validate the superior optimization performance of ACPOA. Moreover, when applied to multilevel threshold image segmentation tasks, ACPOA demonstrates better accuracy, stability, and efficiency in solving practical problems, providing an effective solution for complex optimization challenges.
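
Only the multilevel-thresholding objective is sketched below: Otsu's between-class variance evaluated for a candidate threshold vector on a synthetic trimodal histogram. Any optimizer (ACPOA in the paper, or a simpler search) would maximize this function; the image and thresholds here are placeholders.

```python
# Hypothetical multilevel Otsu objective (between-class variance) sketch.
import numpy as np

def between_class_variance(hist, thresholds):
    # hist: normalized 256-bin histogram; thresholds: sorted gray-level cut points.
    levels = np.arange(256)
    edges = [0, *sorted(thresholds), 256]
    total_mean = (hist * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                       # class probability (weight)
        if w > 0:
            mu = (hist[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - total_mean) ** 2
    return var

rng = np.random.default_rng(10)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(128, 10, 5000),
                              rng.normal(200, 10, 5000)]), 0, 255).astype(np.uint8)
hist = np.bincount(img, minlength=256) / img.size
print(between_class_variance(hist, [95, 165]))      # objective an optimizer would maximize
```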
