Search Results (5,518)

Search Parameters:
Keywords = hyperspectral

24 pages, 4067 KiB  
Article
A Hyperspectral Method for Detection of the Three-Dimensional Spatial Distribution of Aerosol in Urban Areas for Emission Source Identification and Health Risk Assessment
by Shun Xia, Qihua Li, Jian Chen, Zhiguo Zhang and Qihou Hu
Atmosphere 2025, 16(9), 999; https://doi.org/10.3390/atmos16090999 - 24 Aug 2025
Abstract
Studying the vertical and horizontal distribution of particulate matter at the hectometer scale in the atmosphere is essential for understanding its sources, transport, and transmission, as well as its impact on human health. In this study, a method was developed based on hyperspectral instrumentation to obtain both vertical and horizontal distributions of aerosol extinction by employing multiple azimuth angles, selecting optimized elevation angles, and reducing the acquisition time of individual spectra. This method employed observations from different azimuth angles to represent particulate matter concentrations in various directions. The correlation coefficient between the hyperspectral observations and in situ measurements was 0.627. Observations indicated that the aerosol extinction profile followed an exponential decay, with most aerosols confined below 1 km, implying a likely origin from local near-surface emissions. The horizontal distribution indicated that the northeastern urban areas and the eastern rural areas were the primary regions with high concentrations of particulate matter. The observational evidence suggests the presence of two potential emission sources within the study area. Moreover, the health risk results indicated that even within the same town, differences in particulate matter concentration and population density could lead to varying health exposure risks. For instance, in the 200° and 210° directions, which represent adjacent urban areas less than 1 km apart, the number of PM2.5-related illness cases in the 210° direction was 20.83% higher than in the 200° direction. Full article
(This article belongs to the Special Issue Application of Emerging Methods in Aerosol Research)
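The exponential decay of the extinction profile noted in the abstract can be summarized with a simple curve fit. Below is a minimal Python sketch using scipy on an entirely synthetic profile; the extinction values and scale height are placeholders, not the paper's data or retrieval method.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_profile(z_km, k0, scale_height_km):
    """Exponential aerosol extinction profile k(z) = k0 * exp(-z / H)."""
    return k0 * np.exp(-z_km / scale_height_km)

# Hypothetical vertical grid and noisy extinction retrievals (km^-1)
z = np.linspace(0.0, 3.0, 31)
rng = np.random.default_rng(0)
k_obs = exp_profile(z, 0.45, 0.8) + rng.normal(0, 0.01, z.size)

# Fit k0 and the scale height H; most aerosol sits below ~1 km when H is small
(k0_fit, H_fit), _ = curve_fit(exp_profile, z, k_obs, p0=(0.3, 1.0))
print(f"k0 = {k0_fit:.3f} km^-1, scale height H = {H_fit:.2f} km")
```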

28 pages, 7366 KiB  
Article
Deep Fuzzy Fusion Network for Joint Hyperspectral and LiDAR Data Classification
by Guangen Liu, Jiale Song, Yonghe Chu, Lianchong Zhang, Peng Li and Junshi Xia
Remote Sens. 2025, 17(17), 2923; https://doi.org/10.3390/rs17172923 - 22 Aug 2025
Abstract
Recently, Transformers have made significant progress in the joint classification task of HSI and LiDAR due to their efficient modeling of long-range dependencies and adaptive feature learning mechanisms. However, existing methods face two key challenges: first, the feature extraction stage does not explicitly model category ambiguity; second, the feature fusion stage lacks a dynamic perception mechanism for inter-modal differences and uncertainties. To this end, this paper proposes a Deep Fuzzy Fusion Network (DFNet) for the joint classification of hyperspectral and LiDAR data. DFNet adopts a dual-branch architecture, integrating CNN and Transformer structures, respectively, to extract multi-scale spatial–spectral features from hyperspectral and LiDAR data. To enhance the model’s discriminative robustness in ambiguous regions, both branches incorporate fuzzy learning modules that model class uncertainty through learnable Gaussian membership functions. In the modality fusion stage, a Fuzzy-Enhanced Cross-Modal Fusion (FECF) module is designed, which combines membership-aware attention mechanisms with fuzzy inference operators to achieve dynamic adjustment of modality feature weights and efficient integration of complementary information. DFNet, through a hierarchical design, realizes uncertainty representation within and fusion control between modalities. The proposed DFNet is evaluated on three public datasets, and the extensive experimental results indicate that the proposed DFNet considerably outperforms other state-of-the-art methods. Full article
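The learnable Gaussian membership functions mentioned in the abstract can be illustrated with a small PyTorch module. This is a generic sketch of the idea, not the authors' DFNet code; the layer shapes and the mean-aggregation step are assumptions.

```python
import torch
import torch.nn as nn

class GaussianMembership(nn.Module):
    """Fuzzy membership layer: maps each feature vector to class-wise memberships
    exp(-(x - mu)^2 / (2 * sigma^2)) with learnable mu and sigma.
    Hypothetical re-implementation of the idea described in the abstract."""
    def __init__(self, in_features: int, n_classes: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_classes, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(n_classes, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> memberships: (batch, n_classes)
        diff = x.unsqueeze(1) - self.mu            # (batch, n_classes, in_features)
        sigma = self.log_sigma.exp()
        m = torch.exp(-0.5 * (diff / sigma) ** 2)  # per-feature membership
        return m.mean(dim=-1)                      # aggregate over features

# Usage: score extracted features by class membership / uncertainty
feats = torch.randn(8, 64)
memberships = GaussianMembership(64, 6)(feats)
print(memberships.shape)  # torch.Size([8, 6])
```
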
33 pages, 5718 KiB  
Article
Progressive Water Deficit Impairs Soybean Growth, Alters Metabolic Profiles, and Decreases Photosynthetic Efficiency
by Renan Falcioni, Caio Almeida de Oliveira, Nicole Ghinzelli Vedana, Weslei Augusto Mendonça, João Vitor Ferreira Gonçalves, Daiane de Fatima da Silva Haubert, Dheynne Heyre Silva de Matos, Amanda Silveira Reis, Werner Camargos Antunes, Luis Guilherme Teixeira Crusiol, Rubson Natal Ribeiro Sibaldelli, Alexandre Lima Nepomuceno, Norman Neumaier, José Renato Bouças Farias, Renato Herrig Furlanetto, José Alexandre Melo Demattê and Marcos Rafael Nanni
Plants 2025, 14(17), 2615; https://doi.org/10.3390/plants14172615 - 22 Aug 2025
Abstract
Soybean (Glycine max (L.) Merrill) is highly sensitive to water deficit, particularly during the vegetative phase, when morphological and metabolic plasticity support continued growth and photosynthetic efficiency. We applied eleven water regimes, from full irrigation (W100) to total water withholding (W0), to plants grown under controlled conditions. After 14 days, we quantified morphophysiological, biochemical, leaf optical, gas exchange, and chlorophyll a fluorescence traits. Drought induces significant reductions in leaf area, biomass, pigment pools, and photosynthetic rates (A, gs, ΦPSII) while increasing the levels of oxidative stress markers (electrolyte leakage, ROS) and proline accumulation. OJIP transients and JIP test metrics revealed reduced electron-transport efficiency and increased energy dissipation for many parameters under severe stress. Principal component analysis (PCA) clearly separated those treatments. PC1 captured growth and water status variation, whereas PC2 reflected photoprotective adjustments. These data show that progressive drought limits carbon assimilation via coordinated diffusive and biochemical constraints and that the accumulation of proline, phenolics, and lignin is associated with osmotic adjustment, antioxidant buffering, and cell wall reinforcement under stress. The combined use of hyperspectral sensors, gas exchange, chlorophyll fluorescence, and multivariate analyses for phenotyping offers a rapid, nondestructive diagnostic tool for assessing drought severity and the possibility of selecting drought-resistant genotypes and phenotypes in a changing stress environment. Full article
(This article belongs to the Special Issue Plant Challenges in Response to Salt and Water Stress)
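A minimal sketch of the PCA step described above, using scikit-learn on a synthetic trait matrix; the trait values, sample count, and component interpretation are placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical trait matrix: rows = plants across the eleven water regimes,
# columns = morphophysiological and biochemical traits (synthetic values).
rng = np.random.default_rng(42)
traits = rng.normal(size=(66, 12))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(traits))
# In the study, PC1 captured growth/water status and PC2 photoprotective adjustment
print(scores.shape, pca.explained_variance_ratio_)
```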

19 pages, 441 KiB  
Review
Recent Advances and Applications of Nondestructive Testing in Agricultural Products: A Review
by Mian Li, Honglian Yin, Fei Gu, Yanjun Duan, Wenxu Zhuang, Kang Han and Xiaojun Jin
Processes 2025, 13(9), 2674; https://doi.org/10.3390/pr13092674 - 22 Aug 2025
Abstract
With the rapid development of agricultural intelligence, nondestructive testing (NDT) has shown considerable promise for agricultural product inspection. Compared with traditional methods—which often suffer from subjectivity, low efficiency, and sample damage—NDT offers rapid, accurate, and non-invasive solutions that enable precise inspection without harming the products. These inherent advantages have promoted the increasing adoption of NDT technologies in agriculture. Meanwhile, rising quality standards for agricultural products have intensified the demand for more efficient and reliable detection methods, accelerating the replacement of conventional techniques by advanced NDT approaches. Nevertheless, selecting the most appropriate NDT method for a given agricultural inspection task remains challenging, due to the wide diversity in product structures, compositions, and inspection requirements. To address this challenge, this paper presents a review of recent advancements and applications of several widely adopted NDT techniques, including computer vision, near-infrared spectroscopy, hyperspectral imaging, computed tomography, and electronic noses, focusing specifically on their application in agricultural product evaluation. Furthermore, the strengths and limitations of each technology are discussed comprehensively, quantitative performance indicators and adoption trends are summarized, and practical recommendations are provided for selecting suitable NDT techniques according to various agricultural inspection tasks. By highlighting both technical progress and persisting challenges, this review provides actionable theoretical and technical guidance, aiming to support researchers and practitioners in advancing the effective and sustainable application of cutting-edge NDT methods in agriculture. Full article

21 pages, 4917 KiB  
Article
A High-Capacity Reversible Data Hiding Scheme for Encrypted Hyperspectral Images Using Multi-Layer MSB Block Labeling and ERLE Compression
by Yijie Lin, Chia-Chen Lin, Zhe-Min Yeh, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2025, 17(8), 378; https://doi.org/10.3390/fi17080378 - 21 Aug 2025
Abstract
In the context of secure and efficient data transmission over the future Internet, particularly for remote sensing and geospatial applications, reversible data hiding (RDH) in encrypted hyperspectral images (HSIs) has emerged as a critical technology. This paper proposes a novel RDH scheme specifically designed for encrypted HSIs, offering enhanced embedding capacity without compromising data security or reversibility. The approach introduces a multi-layer block labeling mechanism that leverages the similarity of most significant bits (MSBs) to accurately locate embeddable regions. To minimize auxiliary information overhead, we incorporate an Extended Run-Length Encoding (ERLE) algorithm for effective label map compression. The proposed method achieves embedding rates of up to 3.79 bits per pixel per band (bpppb), while ensuring high-fidelity reconstruction, as validated by strong PSNR metrics. Comprehensive security evaluations using NPCR, UACI, and entropy confirm the robustness of the encryption. Extensive experiments across six standard hyperspectral datasets demonstrate the superiority of our method over existing RDH techniques in terms of capacity, embedding rate, and reconstruction quality. These results underline the method’s potential for secure data embedding in next-generation Internet-based geospatial and remote sensing systems. Full article
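The label-map compression idea can be illustrated with plain run-length encoding. The sketch below is a simplified stand-in for the paper's Extended RLE (ERLE); the example label map and the (value, run) encoding format are assumptions.

```python
import numpy as np

def rle_encode(bits: np.ndarray) -> list[tuple[int, int]]:
    """Run-length encode a flat 0/1 label map as (value, run_length) pairs.
    Plain RLE sketch; the paper's Extended RLE (ERLE) adds refinements
    not reproduced here."""
    runs = []
    current, count = int(bits[0]), 0
    for b in bits:
        if int(b) == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = int(b), 1
    runs.append((current, count))
    return runs

# Hypothetical label map: 1 marks blocks whose MSBs are similar (embeddable)
label_map = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1])
print(rle_encode(label_map))
# [(1, 4), (0, 2), (1, 3), (0, 1), (1, 5)]
```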

20 pages, 3795 KiB  
Article
Leaf Area Index Estimation of Grassland Based on UAV-Borne Hyperspectral Data and Multiple Machine Learning Models in Hulun Lake Basin
by Dazhou Wu, Saru Bao, Yi Tong, Yifan Fan, Lu Lu, Songtao Liu, Wenjing Li, Mengyong Xue, Bingshuai Cao, Quan Li, Muha Cha, Qian Zhang and Nan Shan
Remote Sens. 2025, 17(16), 2914; https://doi.org/10.3390/rs17162914 - 21 Aug 2025
Abstract
Leaf area index (LAI) is a crucial parameter reflecting the canopy structure of grassland. Accurately obtaining LAI is of great significance for estimating carbon sinks in grassland ecosystems. However, spectral noise interference and pronounced spatial heterogeneity within vegetation canopies constitute significant impediments to achieving high-precision LAI retrieval. This study used a hyperspectral sensor mounted on an unmanned aerial vehicle (UAV) to estimate LAI in a typical grassland of the Hulun Lake Basin. Multiple machine learning (ML) models were constructed to relate hyperspectral data to grassland LAI using two input datasets, namely spectral transformations and vegetation indices (VIs), while SHAP (SHapley Additive exPlanations) interpretability analysis was further employed to identify high-contribution features in the ML models. The analysis revealed that grassland LAI has good correlations with the original spectrum at 550 nm and 750–1000 nm, with the first and second derivatives at 506–574 nm and 649–784 nm, and with vegetation indices including the triangular vegetation index (TVI), enhanced vegetation index 2 (EVI2), and soil-adjusted vegetation index (SAVI). Among the models using spectral transformations and VIs, the random forest (RF) models outperformed the others (testing R2 = 0.89/0.88, RMSE = 0.20/0.21, and RRMSE = 27.34%/28.98%). The prediction error of the random forest model exhibited a positive correlation with measured LAI magnitude but an inverse relationship with quadrat-level species richness, quantified by Margalef's richness index (MRI). We also found that, at the quadrat level, the spectral response curve pattern is influenced by attributes within the quadrat, such as dominant species and vegetation cover, and that LAI has a positive relationship with quadrat vegetation cover. The LAI inversion results were also compared with major LAI products, showing a good correlation (r = 0.71). This study established a high-fidelity inversion framework for hyperspectral-derived LAI estimation in the mid-to-high-latitude grasslands of the Hulun Lake Basin, supporting the spatial refinement of continental-scale carbon sink models at a regional scale. Full article
(This article belongs to the Section Ecological Remote Sensing)
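A minimal sketch of the random-forest regression step on vegetation-index features, using scikit-learn with synthetic stand-in data; the feature columns and LAI values are hypothetical, and the SHAP analysis is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical design matrix: columns could be TVI, EVI2, SAVI and spectral
# transformations; values here are synthetic stand-ins for field data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
lai = 1.5 + 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f}")
print(rf.feature_importances_)  # rough feature-contribution view
```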

36 pages, 5771 KiB  
Article
Improving K-Means Clustering: A Comparative Study of Parallelized Version of Modified K-Means Algorithm for Clustering of Satellite Images
by Yuv Raj Pant, Larry Leigh and Juliana Fajardo Rueda
Algorithms 2025, 18(8), 532; https://doi.org/10.3390/a18080532 - 21 Aug 2025
Abstract
Efficient clustering of high-dimensional satellite image datasets remains a critical challenge, particularly due to the computational demands of spectral distance calculations, random centroid initialization, and sensitivity to outliers in conventional K-Means algorithms. This study presents a comprehensive comparative analysis of eight parallelized variants of the K-Means algorithm, designed to enhance clustering efficiency and reduce computational burden for large-scale satellite image analysis. The proposed parallelized implementations incorporate optimized centroid initialization for better starting-point selection, a dynamic K-Means sharp method to detect outliers and improve cluster robustness, and a Nearest-Neighbor Iteration Calculation Reduction method to minimize redundant computations. These enhancements were applied to a test set of 114 global land cover data cubes, each comprising high-dimensional satellite images of size 3712 × 3712 × 16, and were executed on a multi-core CPU architecture to leverage extensive parallel processing capabilities. Performance was evaluated across three criteria: convergence speed (iterations), computational efficiency (execution time), and clustering accuracy (RMSE). The Parallelized Enhanced K-Means (PEKM) method achieved the fastest convergence at 234 iterations and the lowest execution time of 4230 h, while maintaining consistent RMSE values (0.0136) across all algorithm variants. These results demonstrate that targeted algorithmic optimizations, combined with effective parallelization strategies, can improve the practicality of K-Means clustering for high-dimensional satellite image analysis. This work underscores the potential of improving K-Means clustering frameworks beyond hardware acceleration alone, offering scalable solutions for large-scale unsupervised image classification tasks. Full article
(This article belongs to the Special Issue Algorithms in Multi-Sensor Imaging and Fusion)
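A rough sketch of parallelizing only the assignment step of K-Means across CPU cores with joblib, on synthetic 16-band pixels. The paper's optimized initialization, K-Means sharp outlier handling, and nearest-neighbor iteration reduction are not reproduced here.

```python
import numpy as np
from joblib import Parallel, delayed

def assign_chunk(chunk: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Nearest-centroid labels for one chunk of pixels (spectral distance)."""
    d2 = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def parallel_kmeans(X, k=8, n_iter=20, n_jobs=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    chunks = np.array_split(X, n_jobs)
    for _ in range(n_iter):
        labels = np.concatenate(
            Parallel(n_jobs=n_jobs)(
                delayed(assign_chunk)(c, centroids) for c in chunks))
        for j in range(k):                      # centroid update step
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical "pixels": rows = spectra with 16 bands, as in the data cubes
pixels = np.random.default_rng(0).normal(size=(5000, 16))
labels, cents = parallel_kmeans(pixels, k=8)
print(np.bincount(labels))
```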

24 pages, 7251 KiB  
Article
WTCMC: A Hyperspectral Image Classification Network Based on Wavelet Transform Combining Mamba and Convolutional Neural Networks
by Guanchen Liu, Qiang Zhang, Xueying Sun and Yishuang Zhao
Electronics 2025, 14(16), 3301; https://doi.org/10.3390/electronics14163301 - 20 Aug 2025
Abstract
Hyperspectral images are rich in spectral and spatial information. However, their high dimensionality and complexity pose significant challenges for effective feature extraction. Specifically, the performance of existing models for hyperspectral image (HSI) classification remains constrained by spectral redundancy among adjacent bands, misclassification at object boundaries, and significant noise in hyperspectral data. To address these challenges, we propose WTCMC—a novel hyperspectral image classification network based on wavelet transform combining Mamba and convolutional neural networks. To establish robust shallow spatial–spectral relationships, we introduce a shallow feature extraction module (SFE) at the initial stage of the network. To enable the comprehensive and efficient capture of both spectral and spatial characteristics, our architecture incorporates a low-frequency spectral Mamba module (LFSM) and a high-frequency multi-scale convolution module (HFMC). The wavelet transform suppresses noise for LFSM and enhances fine spatial and contour features for HFMC. Furthermore, we devise a spectral–spatial complementary fusion module (SCF) that selectively preserves the most discriminative spectral and spatial features. Experimental results demonstrate that the proposed WTCMC network attains overall accuracies (OA) of 98.94%, 98.67%, and 97.50% on the Pavia University (PU), Botswana (BS), and Indian Pines (IP) datasets, respectively, outperforming the compared state-of-the-art methods. Full article
(This article belongs to the Section Artificial Intelligence)
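The wavelet split into a low-frequency (approximation) sub-band and high-frequency (detail) sub-bands can be sketched with PyWavelets. The band data, wavelet choice ("haar"), and soft-thresholding step below are illustrative assumptions, not the WTCMC implementation.

```python
import numpy as np
import pywt

# Hypothetical single band of a hyperspectral patch (e.g., 64 x 64 pixels)
band = np.random.default_rng(0).normal(size=(64, 64))

# One-level 2-D DWT: the approximation (low-frequency) sub-band would feed a
# spectral branch such as LFSM, the detail (high-frequency) sub-bands a
# multi-scale convolution branch such as HFMC.
cA, (cH, cV, cD) = pywt.dwt2(band, "haar")
print(cA.shape, cH.shape)  # (32, 32) (32, 32)

# Simple noise suppression on the detail coefficients before reconstruction
threshold = 0.5
cH, cV, cD = (pywt.threshold(c, threshold, mode="soft") for c in (cH, cV, cD))
denoised = pywt.idwt2((cA, (cH, cV, cD)), "haar")
```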

26 pages, 3620 KiB  
Article
Estimation Method of Leaf Nitrogen Content of Dominant Plants in Inner Mongolia Grassland Based on Machine Learning
by Lishan Jin, Xiumei Wang, Jianjun Dong, Ruochen Wang, Hefei Wen, Yuyan Sun, Wenbo Wu, Zhihang Zhang and Can Kang
Nitrogen 2025, 6(3), 70; https://doi.org/10.3390/nitrogen6030070 - 19 Aug 2025
Abstract
Accurate nitrogen (N) content estimation in grassland vegetation is essential for ecosystem health and optimizing pasture quality, as N supports plant photosynthesis and water uptake. Traditional lab methods are slow and unsuitable for large-scale monitoring, while remote sensing models often face accuracy challenges due to hyperspectral data complexity. This study improves N content estimation in the typical steppe of Inner Mongolia by integrating hyperspectral remote sensing with advanced machine learning. Hyperspectral reflectance from Leymus chinensis and Cleistogenes squarrosa was measured using an ASD FieldSpec-4 spectrometer, and leaf N content was measured with an elemental analyzer. To address high-dimensional data, four spectral transformations—band combination, first-order derivative transformation (FDT), continuous wavelet transformation (CWT), and continuum removal transformation (CRT)—were applied, with Least Absolute Shrinkage and Selection Operator (LASSO) used for feature selection. Four machine learning models—Extreme Gradient Boosting (XGBoost), Support Vector Machine (SVM), Artificial Neural Network (ANN), and K-Nearest Neighbors (KNN)—were evaluated via five-fold cross-validation. Wavelet transformation provided the most informative parameters. The SVM model achieved the highest accuracy for L. chinensis (R2 = 0.92), and the ANN model performed best for C. squarrosa (R2 = 0.72). This study demonstrates that integrating wavelet transform with machine learning offers a reliable, scalable approach for grassland N monitoring and management. Full article
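A minimal scikit-learn sketch of LASSO-based feature selection followed by SVM regression with five-fold cross-validation; the synthetic spectra, alpha value, and SVR settings are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical stand-ins for wavelet-transformed reflectance features and
# measured leaf N content; real inputs come from the ASD spectra.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 500))
y = 2.0 + 0.1 * X[:, :5].sum(axis=1) + rng.normal(0, 0.05, 120)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.01, max_iter=10000)),  # LASSO feature selection
    SVR(kernel="rbf", C=10.0),                           # SVM regression
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # five-fold CV
print(f"mean R2 = {scores.mean():.2f}")
```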

30 pages, 5453 KiB  
Review
Advances in Hyperspectral and Diffraction Imaging for Agricultural Applications
by Li Chen, Yu Wu, Ning Yang and Zongbao Sun
Agriculture 2025, 15(16), 1775; https://doi.org/10.3390/agriculture15161775 - 19 Aug 2025
Abstract
Hyperspectral imaging and diffraction imaging technologies, owing to their non-destructive nature, high efficiency, and superior resolution, have found widespread application in agricultural diagnostics. This review synthesizes recent advancements in the deployment of these two technologies across various agricultural domains, including the detection of plant diseases and pests, crop growth monitoring, and animal health diagnostics. Hyperspectral imaging utilizes multi-band spectral and image data to accurately identify diseases and nutritional status, while combining deep learning and other technologies to improve detection accuracy. Diffraction imaging, by exploiting the diffraction properties of light waves, facilitates the detection of pathogenic spores and the assessment of cellular vitality, making it particularly well-suited for microscopic structural analysis. The paper also critically examines prevailing challenges such as the complexity of data processing, environmental adaptability, and the cost of instrumentation. Finally, it envisions future directions wherein the integration of hyperspectral and diffraction imaging, through multisource data fusion and the optimization of intelligent algorithms, holds promise for constructing highly precise and efficient agricultural diagnostic systems, thereby advancing the development of smart agriculture. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

23 pages, 14694 KiB  
Article
PLCNet: A 3D-CNN-Based Plant-Level Classification Network Hyperspectral Framework for Sweetpotato Virus Disease Detection
by Qiaofeng Zhang, Wei Wang, Han Su, Gaoxiang Yang, Jiawen Xue, Hui Hou, Xiaoyue Geng, Qinghe Cao and Zhen Xu
Remote Sens. 2025, 17(16), 2882; https://doi.org/10.3390/rs17162882 - 19 Aug 2025
Abstract
Sweetpotato virus disease (SPVD) poses a significant threat to global sweetpotato production; therefore, early, accurate field-scale detection is necessary. To address the limitations of the currently utilized assays, we propose PLCNet (Plant-Level Classification Network), a rapid, non-destructive SPVD identification framework using UAV-acquired hyperspectral imagery. High-resolution data from early sweetpotato growth stages were processed via three feature selection methods—Random Forest (RF), Minimum Redundancy Maximum Relevance (mRMR), and Local Covariance Matrix (LCM)—in combination with 24 vegetation indices. Variance Inflation Factor (VIF) analysis reduced multicollinearity, yielding an optimized SPVD-sensitive feature set. First, using the RF-selected bands and vegetation indices, we benchmarked four classifiers—Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), Residual Network (ResNet), and 3D Convolutional Neural Network (3D-CNN). Under identical inputs, the 3D-CNN achieved superior performance (OA = 96.55%, Macro F1 = 95.36%, UA_mean = 0.9498, PA_mean = 0.9504), outperforming SVM, GBDT, and ResNet. Second, with the same spectral–spatial features and 3D-CNN backbone, we compared a pixel-level baseline (CropdocNet) against our plant-level PLCNet. CropdocNet exhibited spatial fragmentation and isolated errors, whereas PLCNet’s two-stage pipeline—deep feature extraction followed by connected-component analysis and majority voting—aggregated voxel predictions into coherent whole-plant labels, substantially reducing noise and enhancing biological interpretability. By integrating optimized feature selection, deep learning, and plant-level post-processing, PLCNet delivers a scalable, high-throughput solution for precise SPVD monitoring in agricultural fields. Full article
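The plant-level post-processing (connected-component analysis plus majority voting) can be sketched with scipy.ndimage. The toy prediction map and class codes below are hypothetical; this shows the general idea, not the PLCNet implementation.

```python
import numpy as np
from scipy import ndimage

def plant_level_labels(pixel_pred: np.ndarray, plant_mask: np.ndarray) -> np.ndarray:
    """Aggregate per-pixel class predictions to whole-plant labels:
    find connected plant regions, then majority-vote inside each region."""
    regions, n_regions = ndimage.label(plant_mask)
    out = np.zeros_like(pixel_pred)
    for r in range(1, n_regions + 1):
        region = regions == r
        votes = np.bincount(pixel_pred[region])
        out[region] = votes.argmax()           # majority class for the plant
    return out

# Hypothetical 8x8 prediction map (0 = background, 1 = healthy, 2 = SPVD)
pred = np.zeros((8, 8), dtype=int)
pred[1:4, 1:4] = 2
pred[2, 2] = 1                                # isolated error inside one plant
pred[5:7, 5:8] = 1                            # a second plant, class 1
mask = pred > 0
print(plant_level_labels(pred, mask))         # the isolated error is voted away
```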

18 pages, 3802 KiB  
Article
Short-Wavelength Infrared Hyperspectral Imaging and Spectral Unmixing Techniques for Detection and Distribution of Pesticide Residues on Edible Perilla Leaves
by Dennis Semyalo, Rahul Joshi, Yena Kim, Emmanuel Omia, Lorna Bridget Alal, Moon S. Kim, Insuck Baek and Byoung-Kwan Cho
Foods 2025, 14(16), 2864; https://doi.org/10.3390/foods14162864 - 18 Aug 2025
Abstract
Pesticide residue analysis of agricultural produce is vital because of associated health concerns, highlighting the need for effective non-destructive techniques. This study introduces a method that combines short-wavelength infrared hyperspectral imaging with spectral unmixing to detect chlorfenapyr and azoxystrobin residues on perilla leaves. Sixty-six leaves were treated with pesticides at concentrations between 0 and 0.06%. The study utilized multivariate curve resolution-alternating least squares (MCR-ALS), a spectral unmixing method, to identify and visualize the distribution of pesticide residues. This technique achieved lack-of-fit values of 1.03% and 1.78%, with an explained variance of 99% for both pesticides. Furthermore, a quantitative model was developed that integrates insights from MCR-ALS with Gaussian process regression to estimate chlorfenapyr residue concentrations, resulting in a root mean square error of double cross-validation (RMSEV) of 0.0012% and a double cross-validation coefficient of determination (R2v) of 0.99. Compared to other chemometric approaches, such as partial least squares regression and support vector regression, the proposed integrated method decreased RMSEV by 67.57% and improved R2v by 2.06%. The combination of hyperspectral imaging with spectral unmixing offers advancements in the real-time monitoring of agricultural product safety, supporting the delivery of high-quality fresh vegetables to consumers. Full article
(This article belongs to the Section Food Analytical Methods)
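A bare-bones illustration of MCR-ALS-style alternating least squares unmixing on synthetic mixed spectra; real MCR-ALS adds constraints, initial estimates from purest-variable detection, and convergence criteria that this sketch omits.

```python
import numpy as np

def mcr_als(D: np.ndarray, n_components: int, n_iter: int = 100, seed: int = 0):
    """Toy MCR-ALS: alternately solve D ~ C @ S for concentrations C and
    spectra S under non-negativity (enforced by clipping)."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = D.shape
    S = np.abs(rng.normal(size=(n_components, n_bands)))      # pure-spectra guess
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0, None)           # concentration update
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)           # spectra update
    lof = 100 * np.linalg.norm(D - C @ S) / np.linalg.norm(D)  # lack of fit (%)
    return C, S, lof

# Hypothetical mixed spectra: leaf background + pesticide residue signatures
rng = np.random.default_rng(1)
true_S = np.abs(rng.normal(size=(2, 200)))
true_C = np.abs(rng.normal(size=(400, 2)))
D = true_C @ true_S + rng.normal(0, 0.01, size=(400, 200))
C, S, lof = mcr_als(D, n_components=2)
print(f"lack of fit = {lof:.2f}%")
```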

26 pages, 36602 KiB  
Article
FE-MCFN: Fuzzy-Enhanced Multi-Scale Cross-Modal Fusion Network for Hyperspectral and LiDAR Joint Data Classification
by Shuting Wei, Mian Jia and Junyi Duan
Algorithms 2025, 18(8), 524; https://doi.org/10.3390/a18080524 - 18 Aug 2025
Abstract
With the rapid advancement of remote sensing technologies, the joint classification of hyperspectral image (HSI) and LiDAR data has become a key research focus in the field. Classification is hampered by inherent uncertainties in hyperspectral images, such as the “same spectrum, different materials” and “same material, different spectra” phenomena, and by the complexity of spectral features. Furthermore, existing multimodal fusion approaches often fail to fully leverage the complementary advantages of hyperspectral and LiDAR data. To address these issues, we propose a fuzzy-enhanced multi-scale cross-modal fusion network (FE-MCFN) designed to achieve joint classification of hyperspectral and LiDAR data. The FE-MCFN enhances convolutional neural networks through the application of fuzzy theory and effectively integrates global contextual information via a cross-modal attention mechanism. The fuzzy learning module utilizes a Gaussian membership function to assign weights to features, thereby adeptly capturing uncertainties and subtle distinctions within the data. To maximize the complementary advantages of multimodal data, a fuzzy fusion module is designed, which is grounded in fuzzy rules and integrates multimodal features across various scales while taking into account both local features and global information, ultimately enhancing the model's classification performance. Experimental results obtained from the Houston2013, Trento, and MUUFL datasets demonstrate that the proposed method outperforms current state-of-the-art classification techniques, thereby validating its effectiveness and applicability across diverse scenarios. Full article
(This article belongs to the Section Databases and Data Structures)

20 pages, 13547 KiB  
Article
Hyperspectral Image Denoising via Low-Rank Tucker Decomposition with Subspace Implicit Neural Representation
by Cheng Cheng, Dezhi Sun, Yaoyuan Yang, Zhoucheng Guo and Jiangjun Peng
Remote Sens. 2025, 17(16), 2867; https://doi.org/10.3390/rs17162867 - 18 Aug 2025
Abstract
Hyperspectral image (HSI) denoising is an important preprocessing step for downstream applications. Fully characterizing the spatial-spectral priors of HSI is crucial for denoising tasks. In recent years, denoising methods based on low-rank subspaces have garnered widespread attention. In the low-rank matrix factorization framework, the restoration of HSI can be formulated as a task of recovering two subspace factors. However, hyperspectral images are inherently three-dimensional tensors, and transforming the tensor into a matrix for operations inevitably disrupts the spatial structure of the data. To address this issue and better capture the spatial-spectral priors of HSI, this paper proposes a modeling approach named low-rank Tucker decomposition with subspace implicit neural representation (LRTSINR). This data-driven and model-driven joint modeling mechanism has the following two advantages: (1) Tucker decomposition allows for the characterization of the low-rank properties across multiple dimensions of the HSI, leading to a more accurate representation of spectral priors; (2) Implicit neural representation enables the adaptive and precise characterization of the subspace factor continuity under Tucker decomposition. Extensive experiments demonstrate that our method outperforms a series of competing methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
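A truncated higher-order SVD gives a simple low-rank Tucker approximation of an HSI cube and can stand in for the Tucker step described above; the implicit neural representation part is not sketched, and the cube size and ranks are arbitrary.

```python
import numpy as np

def hosvd_truncated(X: np.ndarray, ranks: tuple[int, int, int]):
    """Truncated higher-order SVD: low-rank Tucker approximation of a
    height x width x bands cube. Sketch of the decomposition idea only."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):   # project onto each factor subspace
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    X = core
    for mode, U in enumerate(factors):
        X = np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X

# Hypothetical noisy HSI cube and its low-rank reconstruction
rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 20))
core, factors = hosvd_truncated(cube, ranks=(8, 8, 4))
recon = tucker_reconstruct(core, factors)
print(recon.shape, np.linalg.norm(cube - recon) / np.linalg.norm(cube))
```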

26 pages, 7726 KiB  
Article
Multi-Branch Channel-Gated Swin Network for Wetland Hyperspectral Image Classification
by Ruopu Liu, Jie Zhao, Shufang Tian, Guohao Li and Jingshu Chen
Remote Sens. 2025, 17(16), 2862; https://doi.org/10.3390/rs17162862 - 17 Aug 2025
Abstract
Hyperspectral classification of wetland environments remains challenging due to high spectral similarity, class imbalance, and blurred boundaries. To address these issues, we propose a novel Multi-Branch Channel-Gated Swin Transformer network (MBCG-SwinNet). In contrast to previous CNN-based designs, our model introduces a Swin Transformer spectral branch to enhance global contextual modeling, enabling improved spectral discrimination. To effectively fuse spatial and spectral features, we design a residual feature interaction chain comprising a Residual Spatial Fusion (RSF) module, a channel-wise gating mechanism, and a multi-scale feature fusion (MFF) module, which together enhance spatial adaptivity and feature integration. Additionally, a DenseCRF-based post-processing step is employed to refine classification boundaries and suppress salt-and-pepper noise. Experimental results on three UAV-based hyperspectral wetland datasets from the Yellow River Delta (Shandong, China)—NC12, NC13, and NC16—demonstrate that MBCG-SwinNet achieves superior classification performance, with overall accuracies of 97.62%, 82.37%, and 97.32%, respectively—surpassing state-of-the-art methods. The proposed architecture offers a robust and scalable solution for hyperspectral image classification in complex ecological settings. Full article
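The channel-wise gating mechanism named in the abstract resembles a squeeze-and-excitation-style gate. Below is a generic PyTorch sketch with an arbitrarily chosen reduction ratio and shapes; it is not the authors' MBCG-SwinNet module.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Channel-wise gating: pool spatial dims, score each channel with a small
    MLP, and rescale the feature map by a sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        gate = self.mlp(x.mean(dim=(2, 3)))          # (batch, channels)
        return x * gate.unsqueeze(-1).unsqueeze(-1)  # rescale each channel

fused = torch.randn(2, 64, 32, 32)                   # fused spatial-spectral features
gated = ChannelGate(64)(fused)
print(gated.shape)  # torch.Size([2, 64, 32, 32])
```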
