Search Results (1,896)

Search Parameters:
Keywords = spatial–spectral feature

24 pages, 4385 KB  
Article
HTMNet: Hybrid Transformer–Mamba Network for Hyperspectral Target Detection
by Xiaosong Zheng, Yin Kuang, Yu Huo, Wenbo Zhu, Min Zhang and Hai Wang
Remote Sens. 2025, 17(17), 3015; https://doi.org/10.3390/rs17173015 (registering DOI) - 30 Aug 2025
Abstract
Hyperspectral target detection (HTD) aims to identify pixel-level targets within complex backgrounds, but existing HTD methods often fail to fully exploit multi-scale features and integrate global–local information, leading to suboptimal detection performance. To address these challenges, a novel hybrid Transformer–Mamba network (HTMNet) is proposed to reconstruct the high-fidelity background samples for HTD. HTMNet consists of the following two parallel modules: the multi-scale feature extraction (MSFE) module and the global–local feature extraction (GLFE) module. Specifically, in the MSFE module, we designed a multi-scale Transformer to extract and fuse multi-scale background features. In the GLFE module, a global feature extraction (GFE) module is devised to extract global background features by introducing a spectral–spatial attention module in the Transformer. Meanwhile, a local feature extraction (LFE) module is developed to capture local background features by incorporating the designed circular scanning strategy into the LocalMamba. Additionally, a feature interaction fusion (FIF) module is devised to integrate features from multiple perspectives, enhancing the model’s overall representation capability. Experiments show that our method achieves AUC(PF, PD) scores of 99.97%, 99.91%, 99.82%, and 99.64% on four public hyperspectral datasets. These results demonstrate that HTMNet consistently surpasses state-of-the-art HTD methods, delivering superior detection performance in terms of AUC(PF, PD). Full article
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
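The AUC(PF, PD) figures quoted here are areas under the curve of detection probability (PD) against false-alarm probability (PF), swept over detection thresholds. A minimal NumPy sketch of that metric (the function name and toy scores are illustrative, not from the paper):

```python
import numpy as np

def roc_auc_pf_pd(scores, target_mask):
    """Area under the (PF, PD) ROC curve for a pixel-level detection map.

    scores: detector outputs (higher = more target-like).
    target_mask: boolean array, True at target pixels.
    """
    scores = np.asarray(scores, dtype=float).ravel()
    t = np.asarray(target_mask, dtype=bool).ravel()
    thresholds = np.sort(np.unique(scores))[::-1]          # sweep high -> low
    pd = np.array([(scores[t] >= th).mean() for th in thresholds])
    pf = np.array([(scores[~t] >= th).mean() for th in thresholds])
    pf = np.concatenate(([0.0], pf, [1.0]))                # close the curve
    pd = np.concatenate(([0.0], pd, [1.0]))
    # Trapezoidal integration of PD over PF.
    return float(np.sum((pf[1:] - pf[:-1]) * (pd[1:] + pd[:-1]) / 2.0))

# A detector that separates targets from background perfectly scores AUC = 1.
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
mask = np.array([True, True, True, False, False])
print(roc_auc_pf_pd(scores, mask))  # 1.0
```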

18 pages, 2884 KB  
Article
Research on Multi-Path Feature Fusion Manchu Recognition Based on Swin Transformer
by Yu Zhou, Mingyan Li, Hang Yu, Jinchi Yu, Mingchen Sun and Dadong Wang
Symmetry 2025, 17(9), 1408; https://doi.org/10.3390/sym17091408 - 29 Aug 2025
Abstract
Recognizing Manchu words can be challenging due to their complex character variations, subtle differences between similar characters, and homographic polysemy. Most studies rely on character segmentation techniques for character recognition or use convolutional neural networks (CNNs) to encode word images for word recognition. However, these methods can lead to segmentation errors or a loss of semantic information, which reduces the accuracy of word recognition. To address the limitations in the long-range dependency modeling of CNNs and enhance semantic coherence, we propose a hybrid architecture to fuse the spatial features of original images and spectral features. Specifically, we first leverage the Short-Time Fourier Transform (STFT) to preprocess the raw input images and thereby obtain their multi-view spectral features. Then, we leverage a primary CNN block and a pair of symmetric CNN blocks to construct a symmetric spectral enhancement module, which is used to encode the raw input features and the multi-view spectral features. Subsequently, we design a feature fusion module via Swin Transformer to fuse the multi-view spectral embeddings and concatenate them with the raw input embedding. Finally, we leverage a Transformer decoder to obtain the target output. We conducted extensive experiments on Manchu word benchmark datasets to evaluate the effectiveness of our proposed framework. The experimental results demonstrated that our framework performs robustly in word recognition tasks and exhibits excellent generalization capabilities. Additionally, our model outperformed other baseline methods in multiple writing-style font-recognition tasks. Full article
(This article belongs to the Section Computer)
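The STFT preprocessing step can be pictured as taking a windowed magnitude spectrum along each image column. The sketch below is a self-contained stand-in (in practice scipy.signal.stft would likely be used); the window length, hop size, and per-column scan direction are assumptions, not details from the paper:

```python
import numpy as np

def stft_magnitude(signal, win_len=8, hop=4):
    """Magnitude spectrogram of a 1-D signal with a sliding Hann window
    (a self-contained stand-in for scipy.signal.stft)."""
    win = np.hanning(win_len)
    starts = range(0, len(signal) - win_len + 1, hop)
    frames = np.stack([signal[s:s + win_len] * win for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, win_len//2 + 1)

def spectral_views(image, win_len=8, hop=4):
    """Per-column spectrograms of a grayscale word image, stacked into one
    multi-view spectral feature tensor."""
    return np.stack([stft_magnitude(col, win_len, hop) for col in image.T])

img = np.random.default_rng(0).random((32, 16))   # toy 32x16 word image
feats = spectral_views(img)
print(feats.shape)  # (16, 7, 5): 16 columns x 7 frames x 5 frequency bins
```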

20 pages, 4789 KB  
Article
Towards Gas Plume Identification in Industrial and Livestock Farm Environments Using Infrared Hyperspectral Imaging: A Background Modeling and Suppression Method
by Zhiqiang Ning, Zhengang Li, Rong Qian and Yonghua Fang
Agriculture 2025, 15(17), 1835; https://doi.org/10.3390/agriculture15171835 - 29 Aug 2025
Abstract
Hyperspectral imaging for gas plume identification holds significant potential for applications in industrial emission control and environmental monitoring, including critical needs in livestock farm environments. Conventional pixel-by-pixel spectral identification methods primarily rely on spectral information, often overlooking the rich spatial distribution features inherent in hyperspectral images. This oversight can lead to challenges such as inaccurate identification or leakage along the edge regions of gas plumes and false positives from isolated pixels in non-gas areas. While infrared imaging for gas plumes offers a new perspective by leveraging multi-frame image variations to locate plumes, these methods typically lack spectral discriminability. To address these limitations, we draw inspiration from the multi-frame analysis framework of infrared imaging and propose a novel hyperspectral gas plume identification method based on image background modeling and suppression. Our approach begins by employing background modeling and suppression techniques to accurately detect the primary gas plume region. Subsequently, a representative spectrum is extracted from this identified plume region for precise gas identification. To further enhance the identification accuracy, especially in the challenging plume edge regions, a spatial-spectral combined judgment operator is applied as a post-processing step. The effectiveness of the method was validated through experiments using both simulated and real-world measured data from ammonia and methanol gas releases. We compare its performance against classical methods and an ablation version of our model. Results consistently demonstrate that our method effectively detects low-concentration, thin, and diffuse gas plumes, offering a more robust and accurate solution for hyperspectral gas plume identification with strong applicability to real-world industrial and livestock farm scenarios. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

30 pages, 8824 KB  
Article
Modeling Urban-Vegetation Aboveground Carbon by Integrating Spectral–Textural Features with Tree Height and Canopy Cover Ratio Using Machine Learning
by Yuhao Fang, Yuning Cheng and Yilun Cao
Forests 2025, 16(9), 1381; https://doi.org/10.3390/f16091381 - 28 Aug 2025
Abstract
Accurately estimating aboveground carbon storage (AGC) of urban vegetation remains a major challenge, due to the heterogeneity and vertical complexity of urban environments, where traditional forest-based remote sensing models often perform poorly. This study integrates multimodal remote sensing data and incorporates two three-dimensional structural features—mean tree height (Hmean) and canopy cover ratio (CCR)—in addition to conventional spectral and textural variables. To minimize redundancy, the Boruta algorithm was applied for feature selection, and four machine learning models (SVR, RF, XGBoost, and CatBoost) were evaluated. Results demonstrate that under multimodal data fusion, three-dimensional features emerge as the dominant predictors, with XGBoost using Boruta-selected variables achieving the highest accuracy (R2 = 0.701, RMSE = 0.894 tC/400 m2). Spatial mapping of AGC revealed a “high-aggregation, low-dispersion” pattern, with the model performing best in large, continuous green spaces, while accuracy declined in fragmented or small-scale vegetation patches. Overall, this study highlights the potential of machine learning with multi-source variable inputs for fine-scale urban AGC estimation, emphasizes the importance of three-dimensional vegetation indicators, and provides practical insights for urban carbon assessment and green infrastructure planning. Full article
(This article belongs to the Section Urban Forestry)
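Boruta screens features by comparing their importance against randomized "shadow" copies under a random forest. As a much-simplified illustration of the underlying idea — importance as the performance drop when a feature is shuffled — here is a linear-model permutation-importance sketch (not the authors' pipeline):

```python
import numpy as np

def r2_score(y, yhat):
    """Coefficient of determination."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def permutation_importance(X, y, n_repeats=20, seed=0):
    """Importance of each feature = mean drop in R^2 of a fitted linear
    model when that feature's column is shuffled."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([X, np.ones(len(X))])        # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    base = r2_score(y, Xb @ coef)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = Xb.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(base - r2_score(y, Xp @ coef))
        imp[j] = np.mean(drops)
    return imp

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)        # only feature 0 is informative
imp = permutation_importance(X, y)
print(imp.argmax())  # 0: shuffling the informative feature hurts most
```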

20 pages, 3459 KB  
Article
Diagnosis of Potassium Content in Rubber Leaves Based on Spatial–Spectral Feature Fusion at the Leaf Scale
by Xiaochuan Luo, Rongnian Tang, Chuang Li and Cheng Qian
Remote Sens. 2025, 17(17), 2977; https://doi.org/10.3390/rs17172977 - 27 Aug 2025
Abstract
Hyperspectral imaging (HSI) technology has attracted extensive attention in the field of nutrient diagnosis for rubber leaves. However, the mainstream method of extracting leaf average spectra ignores the leaf spatial information in hyperspectral imaging and dilutes the response characteristics exhibited by nutrient-sensitive local areas of leaves, thereby limiting the accuracy of modeling. This study proposes a spatial–spectral feature fusion method based on leaf-scale sub-region segmentation. It introduces a clustering algorithm to divide leaf pixel spectra into several subclasses, and segments sub-regions on the leaf surface based on clustering results. By optimizing the modeling contribution weights of leaf sub-regions, it improves the modeling and generalization accuracy of potassium diagnosis for rubber leaves. Experiments verified that the proposed spatial–spectral feature fusion method outperforms average spectral modeling. Specifically, after pixel-level MSC preprocessing, when the spectra of rubber leaf pixel regions were clustered into nine subsets, the diagnostic accuracy of potassium content in rubber leaves reached 0.97, which is better than the 0.87 achieved by average spectral modeling. Additionally, precision, macro-F1, and macro-recall all reached 0.97, which is superior to the results of average spectral modeling. Moreover, the proposed method is also superior to the spatial–spectral feature fusion method that integrates texture features. The visualization results of leaf sub-region weights showed that strengthening the modeling contribution of leaf edge regions is conducive to improving the diagnostic accuracy of potassium in rubber leaves, which is consistent with the response pattern of leaves to potassium. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
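The sub-region step can be sketched as clustering pixel spectra and then fusing sub-region mean spectra with per-subclass weights. The paper's clustering algorithm and weight-optimization procedure are not specified here, so plain k-means and fixed weights below are assumptions:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain k-means: groups leaf-pixel spectra into k spectral subclasses."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):                  # keep old centre if empty
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def weighted_leaf_spectrum(pixel_spectra, labels, weights):
    """Fuse sub-region mean spectra using per-subclass contribution weights
    (the paper optimizes these weights; here they are fixed)."""
    w = np.asarray(weights, dtype=float)
    means = np.stack([pixel_spectra[labels == j].mean(0) for j in range(len(w))])
    return (w / w.sum()) @ means

spectra = np.random.default_rng(2).random((500, 120))  # 500 pixels x 120 bands
labels, _ = kmeans(spectra, k=3)
fused = weighted_leaf_spectrum(spectra, labels, weights=[1.0, 2.0, 1.0])
print(fused.shape)  # (120,)
```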

30 pages, 10140 KB  
Article
High-Accuracy Cotton Field Mapping and Spatiotemporal Evolution Analysis of Continuous Cropping Using Multi-Source Remote Sensing Feature Fusion and Advanced Deep Learning
by Xiao Zhang, Zenglu Liu, Xuan Li, Hao Bao, Nannan Zhang and Tiecheng Bai
Agriculture 2025, 15(17), 1814; https://doi.org/10.3390/agriculture15171814 - 25 Aug 2025
Abstract
Cotton is a globally strategic crop that plays a crucial role in sustaining national economies and livelihoods. To address the challenges of accurate cotton field extraction in the complex planting environments of Xinjiang’s Alaer reclamation area, a cotton field identification model was developed that integrates multi-source satellite remote sensing data with machine learning methods. Using imagery from Sentinel-2, GF-1, and Landsat 8, we performed feature fusion using principal component, Gram–Schmidt (GS), and neural network techniques. Analyses of spectral, vegetation, and texture features revealed that the GS-fused blue bands of Sentinel-2 and Landsat 8 exhibited optimal performance, with a mean value of 16,725, a standard deviation of 2290, and an information entropy of 8.55. These metrics improved by 10,529, 168, and 0.28, respectively, compared with the original Landsat 8 data. In comparative classification experiments, the endmember-based random forest classifier (RFC) achieved the best traditional classification performance, with a kappa value of 0.963 and an overall accuracy (OA) of 97.22% based on 250 samples, resulting in a cotton-field extraction error of 38.58 km2. By enhancing the deep learning model, we proposed a U-Net architecture that incorporated a Convolutional Block Attention Module and Atrous Spatial Pyramid Pooling. Using the GS-fused blue band data, the model achieved significantly improved accuracy, with a kappa coefficient of 0.988 and an OA of 98.56%. This advancement reduced the area estimation error to 25.42 km2, representing a 34.1% decrease compared with that of the RFC. Based on the optimal model, we constructed a digital map of continuous cotton cropping from 2021 to 2023, which revealed a consistent decline in cotton acreage within the reclaimed areas. This finding underscores the effectiveness of crop rotation policies in mitigating the adverse effects of large-scale monoculture practices. This study confirms that the synergistic integration of multi-source satellite feature fusion and deep learning significantly improves crop identification accuracy, providing reliable technical support for agricultural policy formulation and sustainable farmland management. Full article
(This article belongs to the Special Issue Computers and IT Solutions for Agriculture and Their Application)
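The mean, standard deviation, and information entropy quoted for the fused blue band are standard image statistics. A small NumPy sketch of how they can be computed for one band (the bin count and value range are assumptions):

```python
import numpy as np

def band_statistics(band, n_bins=256, value_range=None):
    """Mean, standard deviation, and Shannon information entropy (bits) of a
    single image band — the three fusion-quality metrics quoted above."""
    band = np.asarray(band, dtype=float)
    hist, _ = np.histogram(band, bins=n_bins, range=value_range)
    p = hist[hist > 0] / hist.sum()
    entropy = -(p * np.log2(p)).sum()
    return band.mean(), band.std(), entropy

# A uniform 256-level band attains the maximum entropy log2(256) = 8 bits.
band = np.tile(np.arange(256), (256, 1))
mean, std, h = band_statistics(band, value_range=(0, 256))
print(h)  # 8.0
```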

19 pages, 4004 KB  
Article
Spectral-Spatial Fusion for Soybean Quality Evaluation Using Hyperspectral Imaging
by Md Bayazid Rahman, Ahmad Tulsi and Abdul Momin
AgriEngineering 2025, 7(9), 274; https://doi.org/10.3390/agriengineering7090274 - 25 Aug 2025
Abstract
Accurate postharvest quality evaluation of soybeans is essential for preserving product value and meeting industry standards. Traditional inspection methods are often inconsistent, labor-intensive, and unsuitable for high-throughput operations. This study presents a non-destructive soybean classification approach using a simplified reflectance-mode hyperspectral imaging system equipped with a single light source, eliminating the complexity and maintenance demands of dual-light configurations used in prior studies. A spectral–spatial data fusion strategy was developed to classify harvested soybeans into four categories: normal, split, diseased, and foreign materials such as stems and pods. The dataset consisted of 1140 soybean samples distributed across these four categories, with spectral reflectance features and spatial texture attributes extracted from each sample. These features were combined to form a unified feature representation for use in classification. Among multiple machine learning classifiers evaluated, Linear Discriminant Analysis (LDA) achieved the highest performance, with approximately 99% accuracy, 99.05% precision, 99.03% recall and 99.03% F1-score. When evaluated independently, spectral features alone resulted in 98.93% accuracy, while spatial features achieved 78.81%, highlighting the benefit of the fusion strategy. Overall, this study demonstrates that a single-illumination HSI system, combined with spectral–spatial fusion and machine learning, offers a practical and potentially scalable approach for non-destructive soybean quality evaluation, with applicability in automated industrial processing environments. Full article
(This article belongs to the Special Issue Latest Research on Post-Harvest Technology to Reduce Food Loss)
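A spectral–spatial fusion vector of the kind described can be sketched as the concatenation of a per-band mean spectrum with simple texture statistics. The gradient-based texture measures below are illustrative stand-ins, not the attributes used in the study; the fused vector would then feed a classifier such as LDA:

```python
import numpy as np

def fused_features(sample):
    """Concatenate per-band mean reflectance (spectral) with simple texture
    statistics (spatial) for one (rows, cols, bands) sample cube."""
    spectral = sample.mean(axis=(0, 1))           # mean reflectance spectrum
    gray = sample.mean(axis=2)                    # band-averaged grey image
    gx, gy = np.gradient(gray)                    # row- and column-direction gradients
    spatial = np.array([gray.std(),               # overall contrast
                        np.abs(gx).mean(),        # texture energy, axis 0
                        np.abs(gy).mean()])       # texture energy, axis 1
    return np.concatenate([spectral, spatial])

cube = np.random.default_rng(3).random((24, 24, 64))   # toy soybean sample
features = fused_features(cube)
print(features.shape)  # (67,) = 64 spectral + 3 spatial features
```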

22 pages, 7451 KB  
Article
Inversion of Grassland Aboveground Biomass in the Three Parallel Rivers Area Based on Genetic Programming Optimization Features and Machine Learning
by Rong Wei, Qingtai Shu, Zeyu Li, Lianjin Fu, Qin Xiang, Chaoguan Qin, Xin Rao and Jinfeng Liu
Remote Sens. 2025, 17(17), 2936; https://doi.org/10.3390/rs17172936 - 24 Aug 2025
Abstract
Aboveground biomass (AGB) in grasslands is a vital metric for assessing ecosystem functioning and health. Accurate and efficient AGB estimation is essential for the scientific management and sustainable use of grassland resources. However, achieving low-cost, high-efficiency AGB estimation via remote sensing remains a key challenge. This study integrates Sentinel-1 and Sentinel-2 imagery to derive 38 multi-source feature variables, including backscatter coefficients, texture, spectral reflectance, vegetation indices, and topographic factors. These features are combined with AGB data from 112 field plots in the Three Parallel Rivers area. Feature selection was performed using Pearson correlation, Random Forest (RF), and SHAP values to identify optimal variable sets. Genetic Programming (GP) was then applied for nonlinear optimization of the selected features. Three machine learning models—RF, GBRT, and KNN—were used to estimate AGB and generate spatial distribution maps. The results revealed notable differences in model accuracy, with RF performing best overall, outperforming GBRT and KNN. After GP optimization, all models showed improved performance, with the RF model based on RF-selected features achieving the highest accuracy (R2 = 0.90, RMSE = 0.31 t/ha, MAE = 0.23 t/ha), improving R2 by 0.03 and reducing RMSE and MAE by 0.05 and 0.03 t/ha, respectively. Spatial mapping showed the AGB ranged from 0.41 to 3.59 t/ha, with a mean of 1.39 t/ha, closely aligned with the actual distribution characteristics. This study demonstrates that the RF model, combined with multi-source features and GP optimization, provides an effective approach to grassland AGB estimation and supports ecological monitoring in complex areas. Full article
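The R2, RMSE, and MAE figures reported above follow their standard definitions, which can be computed directly:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, RMSE, and MAE — the accuracy metrics reported above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    r2 = 1.0 - (resid ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    rmse = np.sqrt((resid ** 2).mean())
    mae = np.abs(resid).mean()
    return r2, rmse, mae

y_true = np.array([1.0, 2.0, 3.0, 4.0])   # toy AGB values, t/ha
y_pred = np.array([1.1, 1.9, 3.1, 3.9])
r2, rmse, mae = regression_metrics(y_true, y_pred)
print(round(r2, 3), round(rmse, 3), round(mae, 3))  # 0.992 0.1 0.1
```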

28 pages, 7371 KB  
Article
Deep Fuzzy Fusion Network for Joint Hyperspectral and LiDAR Data Classification
by Guangen Liu, Jiale Song, Yonghe Chu, Lianchong Zhang, Peng Li and Junshi Xia
Remote Sens. 2025, 17(17), 2923; https://doi.org/10.3390/rs17172923 - 22 Aug 2025
Abstract
Recently, Transformers have made significant progress in the joint classification task of HSI and LiDAR due to their efficient modeling of long-range dependencies and adaptive feature learning mechanisms. However, existing methods face two key challenges: first, the feature extraction stage does not explicitly model category ambiguity; second, the feature fusion stage lacks a dynamic perception mechanism for inter-modal differences and uncertainties. To this end, this paper proposes a Deep Fuzzy Fusion Network (DFNet) for the joint classification of hyperspectral and LiDAR data. DFNet adopts a dual-branch architecture, integrating CNN and Transformer structures, respectively, to extract multi-scale spatial–spectral features from hyperspectral and LiDAR data. To enhance the model’s discriminative robustness in ambiguous regions, both branches incorporate fuzzy learning modules that model class uncertainty through learnable Gaussian membership functions. In the modality fusion stage, a Fuzzy-Enhanced Cross-Modal Fusion (FECF) module is designed, which combines membership-aware attention mechanisms with fuzzy inference operators to achieve dynamic adjustment of modality feature weights and efficient integration of complementary information. DFNet, through a hierarchical design, realizes uncertainty representation within and fusion control between modalities. The proposed DFNet is evaluated on three public datasets, and the extensive experimental results indicate that the proposed DFNet considerably outperforms other state-of-the-art methods. Full article
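A Gaussian membership function assigns each feature value a class membership in (0, 1] via exp(−(x − c)² / 2σ²). In DFNet the centers and widths would be learnable network parameters; the NumPy sketch below just evaluates the function for fixed values:

```python
import numpy as np

def gaussian_membership(x, centers, sigmas):
    """Fuzzy membership of feature values x to each class via per-class
    Gaussian membership functions. Returns (n_samples, n_classes)."""
    x = np.asarray(x, dtype=float)[:, None]
    c = np.asarray(centers, dtype=float)[None, :]
    s = np.asarray(sigmas, dtype=float)[None, :]
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

m = gaussian_membership([0.0, 1.0], centers=[0.0, 1.0], sigmas=[0.5, 0.5])
print(m.round(3))  # membership is 1 at a class centre, exp(-2) ~ 0.135 one unit away
```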

18 pages, 7248 KB  
Article
Comparative Performance of Machine Learning Classifiers for Photovoltaic Mapping in Arid Regions Using Google Earth Engine
by Le Zhang, Zhaoming Wang, Hengrui Zhang, Ning Zhang, Tianyu Zhang, Hailong Bao, Haokai Chen and Qing Zhang
Energies 2025, 18(17), 4464; https://doi.org/10.3390/en18174464 - 22 Aug 2025
Abstract
With increasing energy demand and advancing carbon neutrality goals, arid regions—key areas for centralized photovoltaic (PV) station development in China—urgently require efficient and accurate remote sensing techniques to support spatial distribution monitoring and ecological impact assessment. Although numerous studies have focused on PV station extraction, challenges remain in arid regions with complex surface features to develop extraction frameworks that balance efficiency and accuracy at a regional scale. This study focuses on the Inner Mongolia Yellow River Basin and develops a PV extraction framework on the Google Earth Engine platform by integrating spectral bands, spectral indices, and topographic features, systematically comparing the classification performance of support vector machine, classification and regression tree, and random forest (RF) classifiers. The results show that the RF classifier achieved a high Kappa coefficient (0.94) and F1 score (0.96 for PV areas) in PV extraction. Feature importance analysis revealed that the Normalized Difference Tillage Index, near-infrared band, and Land Surface Water Index made significant contributions to PV classification, accounting for 10.517%, 6.816%, and 6.625%, respectively. PV stations are mainly concentrated in the northern and southwestern parts of the study area, characterized by flat terrain and low vegetation cover, exhibiting a spatial pattern of “overall dispersion with local clustering”. Landscape pattern indices further reveal significant differences in patch size, patch density, and aggregation level of PV stations across different regions. This study employs Sentinel-2 imagery for regional-scale PV station extraction, providing scientific support for energy planning, land use optimization, and ecological management in the study area, with potential for application in other global arid regions. Full article
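The two spectral indices named as top contributors follow standard normalized-difference definitions; the band assignments in the comment are the usual Sentinel-2 choices, not confirmed by the paper:

```python
def ndti(swir1, swir2):
    """Normalized Difference Tillage Index (standard definition)."""
    return (swir1 - swir2) / (swir1 + swir2)

def lswi(nir, swir1):
    """Land Surface Water Index (standard definition)."""
    return (nir - swir1) / (nir + swir1)

# Toy reflectances; for Sentinel-2, NIR/SWIR1/SWIR2 are usually bands B8/B11/B12.
nir, swir1, swir2 = 0.30, 0.20, 0.10
print(round(ndti(swir1, swir2), 3), round(lswi(nir, swir1), 3))  # 0.333 0.2
```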

20 pages, 5304 KB  
Article
Deep Learning with UAV Imagery for Subtropical Sphagnum Peatland Vegetation Mapping
by Zhengshun Liu and Xianyu Huang
Remote Sens. 2025, 17(17), 2920; https://doi.org/10.3390/rs17172920 - 22 Aug 2025
Abstract
Peatlands are vital for global carbon cycling, and their ecological functions are influenced by vegetation composition. Accurate vegetation mapping is crucial for peatland management and conservation, but traditional methods face limitations such as low spatial resolution and labor-intensive fieldwork. We used ultra-high-resolution UAV imagery captured across seasonal and topographic gradients and assessed the impact of phenology and topography on classification accuracy. Additionally, this study evaluated the performance of four deep learning models (ResNet, Swin Transformer, ConvNeXt, and EfficientNet) for mapping vegetation in a subtropical Sphagnum peatland. ConvNeXt achieved peak accuracy at 87% during non-growing seasons through its large-kernel feature extraction capability, while ResNet served as the optimal efficient alternative for growing-season applications. Non-growing seasons facilitated superior identification of Sphagnum and monocotyledons, whereas growing seasons enhanced dicotyledon distinction through clearer morphological features. Overall accuracy in low-lying humid areas was 12–15% lower than in elevated terrain due to severe spectral confusion among vegetation. SHapley Additive exPlanations (SHAP) of the ConvNeXt model identified key vegetation indices, the digital surface model, and select textural features as primary performance drivers. This study concludes that the combination of deep learning and UAV imagery presents a powerful tool for peatland vegetation mapping, highlighting the importance of considering phenological and topographical factors. Full article

22 pages, 7877 KB  
Article
Large-Scale Individual Plastic Greenhouse Extraction Using Deep Learning and High-Resolution Remote Sensing Imagery
by Yuguang Chang, Xiaoyu Yu, Baipeng Li, Xiangyu Tian and Zhaoming Wu
Agronomy 2025, 15(8), 2014; https://doi.org/10.3390/agronomy15082014 - 21 Aug 2025
Abstract
Addressing the demands of agricultural resource digitization and facility crop monitoring, precise extraction of plastic greenhouses using high-resolution remote sensing imagery demonstrates pivotal significance for implementing refined farmland management. However, the complex spatial topological relationships among densely arranged greenhouses and the spectral confusion of ground objects within agricultural backgrounds limit the effectiveness of conventional methods in the large-scale, precise extraction of plastic greenhouses. This study constructs an Individual Plastic Greenhouse Extraction Network (IPGENet) by integrating a multi-scale feature fusion decoder with the Swin-UNet architecture to improve the accuracy of large-scale individual plastic greenhouse extraction. To ensure sample accuracy while reducing manual labor costs, an iterative sampling approach is proposed to rapidly expand a small sample set into a large-scale dataset. Using GF-2 satellite imagery data in Shandong Province, China, the model realized large-scale mapping of individual plastic greenhouse extraction results. In addition to large-scale sub-meter extraction and mapping, the study conducted quantitative and spatial statistical analyses of extraction results across cities in Shandong Province, revealing regional disparities in plastic greenhouse development and providing a novel technical approach for large-scale plastic greenhouse mapping. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)

20 pages, 3795 KB  
Article
Leaf Area Index Estimation of Grassland Based on UAV-Borne Hyperspectral Data and Multiple Machine Learning Models in Hulun Lake Basin
by Dazhou Wu, Saru Bao, Yi Tong, Yifan Fan, Lu Lu, Songtao Liu, Wenjing Li, Mengyong Xue, Bingshuai Cao, Quan Li, Muha Cha, Qian Zhang and Nan Shan
Remote Sens. 2025, 17(16), 2914; https://doi.org/10.3390/rs17162914 - 21 Aug 2025
Abstract
Leaf area index (LAI) is a crucial parameter reflecting the crown structure of the grassland. Accurately obtaining LAI is of great significance for estimating carbon sinks in grassland ecosystems. However, spectral noise interference and pronounced spatial heterogeneity within vegetation canopies constitute significant impediments to achieving high-precision LAI retrieval. This study used a hyperspectral sensor mounted on an unmanned aerial vehicle (UAV) to estimate LAI in a typical grassland of the Hulun Lake Basin. Multiple machine learning (ML) models were constructed to reveal a relationship between hyperspectral data and grassland LAI using two input datasets, namely spectral transformations and vegetation indices (VIs), while SHAP (SHapley Additive exPlanations) interpretability analysis was further employed to identify high-contribution features in the ML models. The analysis revealed that grassland LAI has good correlations with the original spectrum at 550 nm and 750 nm–1000 nm, first and second derivatives at 506 nm–574 nm and 649 nm–784 nm, and vegetation indices including the triangular vegetation index (TVI), enhanced vegetation index 2 (EVI2), and soil-adjusted vegetation index (SAVI). In the models using spectral transformations and VIs, the random forest (RF) models outperformed other models (testing R2 = 0.89/0.88, RMSE = 0.20/0.21, and RRMSE = 27.34%/28.98%). The prediction error of the random forest model exhibited a positive correlation with measured LAI magnitude but demonstrated an inverse relationship with quadrat-level species richness, quantified by Margalef’s richness index (MRI). We also found that at the quadrat level, the spectral response curve pattern is influenced by attributes within the quadrat, such as dominant species and vegetation cover, and that LAI has a positive relationship with quadrat vegetation cover. The LAI inversion results in this study were also compared to mainstream LAI products, showing a good correlation (r = 0.71). This study established a high-fidelity inversion framework for hyperspectral-derived LAI estimation in the mid-to-high-latitude grasslands of the Hulun Lake Basin, supporting the spatial refinement of continental-scale carbon sink models at a regional scale. Full article
(This article belongs to the Section Ecological Remote Sensing)
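The vegetation indices the SHAP analysis singles out (TVI, EVI2, SAVI) are simple functions of band reflectances. A minimal NumPy sketch using the standard published formulas; the reflectance values below are made up for illustration, not data from the study:

```python
import numpy as np

def savi(nir, red, L=0.5):
    # Soil-adjusted vegetation index (Huete, 1988), soil factor L = 0.5
    return (1 + L) * (nir - red) / (nir + red + L)

def evi2(nir, red):
    # Two-band enhanced vegetation index (Jiang et al., 2008)
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1)

def tvi(nir, green, red):
    # Triangular vegetation index (Broge & Leblanc, 2000)
    return 0.5 * (120 * (nir - green) - 200 * (red - green))

# Hypothetical canopy reflectances for one pixel
nir, green, red = 0.45, 0.12, 0.08
print(round(savi(nir, red), 4))   # -> 0.5388
print(round(evi2(nir, red), 4))   # -> 0.5633
print(round(tvi(nir, green, red), 1))  # -> 23.8
```

In a workflow like the one described, these per-pixel index values would form one of the two input feature sets fed to the RF model.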

19 pages, 1299 KB  
Article
Structured Emission and Entanglement Dynamics of a Giant Atom in a Photonic Creutz Ladder
by Vassilios Yannopapas
Photonics 2025, 12(8), 827; https://doi.org/10.3390/photonics12080827 - 20 Aug 2025
Viewed by 388
Abstract
We explore the spontaneous emission dynamics of a giant atom coupled to a photonic Creutz ladder, focusing on how flat-band frustration and synthetic gauge fields shape atom–photon interactions. The Creutz ladder exhibits perfectly flat bands, Aharonov–Bohm caging, and topological features arising from its nontrivial hopping structure. By embedding the giant atom at multiple spatially separated sites, we reveal interference-driven emission control and the formation of nonradiative bound states. Using both spectral and time-domain analyses, we uncover strong non-Markovian dynamics characterized by persistent oscillations, long-lived entanglement, and recoherence cycles. The emergence of bound-state poles in the spectral function is accompanied by spatially localized photonic profiles and directionally asymmetric emission, even in the absence of band dispersion. Calculations of von Neumann entropy and atomic purity confirm the formation of coherence-preserving dressed states in the flat-band regime. Furthermore, the spacetime structure of the emitted field displays robust zig-zag interference patterns and synthetic chirality, underscoring the role of geometry and topology in photon transport. Our results demonstrate how flat-band photonic lattices can be leveraged to engineer tunable atom–photon entanglement, suppress radiative losses, and create structured decoherence-free subspaces for quantum information applications. Full article
(This article belongs to the Special Issue Recent Progress in Optical Quantum Information and Communication)
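The entanglement diagnostics named above (von Neumann entropy and atomic purity) reduce to eigenvalue computations on the reduced atomic density matrix. A minimal illustration with NumPy, not tied to the paper's specific Creutz-ladder model:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho.
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]              # drop zero eigenvalues (0 ln 0 -> 0)
    return float(-np.sum(p * np.log(p)))

def purity(rho):
    # Tr(rho^2): 1 for a pure state, 1/d for a maximally mixed one.
    return float(np.trace(rho @ rho).real)

# Reduced atomic state of a maximally entangled atom-photon pair
rho_mixed = np.diag([0.5, 0.5])   # S = ln 2, purity = 1/2
# Fully excited, unentangled atomic state
rho_pure = np.diag([1.0, 0.0])    # S = 0, purity = 1
```

In a time-domain simulation, tracking these two quantities along the evolved state distinguishes the coherence-preserving dressed states of the flat-band regime from ordinary radiative decay.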

24 pages, 7251 KB  
Article
WTCMC: A Hyperspectral Image Classification Network Based on Wavelet Transform Combining Mamba and Convolutional Neural Networks
by Guanchen Liu, Qiang Zhang, Xueying Sun and Yishuang Zhao
Electronics 2025, 14(16), 3301; https://doi.org/10.3390/electronics14163301 - 20 Aug 2025
Viewed by 446
Abstract
Hyperspectral images are rich in spectral and spatial information. However, their high dimensionality and complexity pose significant challenges for effective feature extraction. Specifically, the performance of existing models for hyperspectral image (HSI) classification remains constrained by spectral redundancy among adjacent bands, misclassification at object boundaries, and significant noise in hyperspectral data. To address these challenges, we propose WTCMC—a novel hyperspectral image classification network based on wavelet transform combining Mamba and convolutional neural networks. To establish robust shallow spatial–spectral relationships, we introduce a shallow feature extraction module (SFE) at the initial stage of the network. To enable the comprehensive and efficient capture of both spectral and spatial characteristics, our architecture incorporates a low-frequency spectral Mamba module (LFSM) and a high-frequency multi-scale convolution module (HFMC). The wavelet transform suppresses noise for LFSM and enhances fine spatial and contour features for HFMC. Furthermore, we devise a spectral–spatial complementary fusion module (SCF) that selectively preserves the most discriminative spectral and spatial features. Experimental results demonstrate that the proposed WTCMC network attains overall accuracies (OA) of 98.94%, 98.67%, and 97.50% on the Pavia University (PU), Botswana (BS), and Indian Pines (IP) datasets, respectively, outperforming the compared state-of-the-art methods. Full article
(This article belongs to the Section Artificial Intelligence)
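The low-/high-frequency split that feeds LFSM and HFMC can be illustrated with a single-level Haar wavelet transform (a stand-in for whatever wavelet basis the paper actually uses): the smooth half would go to the spectral Mamba branch, the detail half to the multi-scale convolutions. A self-contained sketch on a toy spectral vector:

```python
import numpy as np

def haar_dwt_1d(x):
    # Single-level Haar wavelet transform along the last axis:
    # returns the low-pass (approximation) and high-pass (detail) halves.
    x = np.asarray(x, dtype=float)
    even, odd = x[..., ::2], x[..., 1::2]
    low = (even + odd) / np.sqrt(2)    # smooth component -> spectral branch
    high = (even - odd) / np.sqrt(2)   # detail component -> spatial/contour branch
    return low, high

# Toy 8-band "spectral vector" for one hyperspectral pixel (made-up values)
pixel = np.array([0.10, 0.12, 0.11, 0.13, 0.40, 0.42, 0.41, 0.43])
low, high = haar_dwt_1d(pixel)
```

Because the transform is orthonormal, the two halves partition the signal's energy exactly, so no information is discarded before the branches are fused.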
