Search Results (1,298)

Search Parameters:
Keywords = multispectral sensors

16 pages, 6701 KB  
Article
Novel Fabry-Pérot Filter Structures for High-Performance Multispectral Imaging with a Broadband from the Visible to the Near-Infrared
by Bo Gao, Tianxin Wang, Lu Chen, Shuai Wang, Chenxi Li, Fajun Xiao, Yanyan Liu and Weixing Yu
Sensors 2025, 25(19), 6123; https://doi.org/10.3390/s25196123 - 3 Oct 2025
Abstract
The integration of a pixelated Fabry–Pérot filter array onto the image sensor enables on-chip snapshot multispectral imaging, significantly reducing the size and weight of conventional spectral imaging equipment. However, a traditional Fabry–Pérot cavity, based on metallic or dielectric layers, exhibits a narrow bandwidth, which restricts its utility in broader applications. In this work, we propose novel Fabry–Pérot filter structures that employ dielectric thin films for phase modulation, enabling single-peak filtering across a broad operational wavelength range from 400 nm to 1100 nm. The proposed structures are easy to fabricate and compatible with complementary metal-oxide-semiconductor (CMOS) image sensors. Moreover, the structures show low sensitivity to oblique incident angles of up to 30° with minimal wavelength shifts. This advanced Fabry–Pérot filter design provides a promising pathway for expanding the operational wavelength of snapshot spectral imaging systems, thereby potentially extending their application across numerous related fields. Full article
(This article belongs to the Section Sensing and Imaging)
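The bandwidth limitation this abstract describes follows from the Fabry–Pérot resonance condition, m·λ = 2·n·L·cos θ: over a broad band, several interference orders transmit at once. A generic sketch of that calculation (the cavity index and thickness below are hypothetical illustrations, not values from the paper):

```python
import math

def fp_resonances(n: float, length_nm: float, theta_deg: float = 0.0,
                  band=(400.0, 1100.0)):
    """Return (order, wavelength_nm) pairs whose resonance falls inside band."""
    # Constructive interference: m * lambda = 2 * n * L * cos(theta)
    opl = 2.0 * n * length_nm * math.cos(math.radians(theta_deg))
    lo, hi = band
    peaks = []
    m = 1
    while opl / m >= lo:
        lam = opl / m
        if lam <= hi:
            peaks.append((m, round(lam, 1)))
        m += 1
    return peaks

# Hypothetical 300 nm cavity with n ~ 1.46 at normal incidence:
print(fp_resonances(1.46, 300.0))  # [(1, 876.0), (2, 438.0)]
```

For this toy cavity, orders 1 and 2 both land inside 400–1100 nm, illustrating why a conventional cavity cannot deliver single-peak filtering across the whole visible-to-NIR range.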

27 pages, 3776 KB  
Article
An Efficient Method for Retrieving Citrus Orchard Evapotranspiration Based on Multi-Source Remote Sensing Data Fusion from Unmanned Aerial Vehicles
by Zhiwei Zhang, Weiqi Zhang, Chenfei Duan, Shijiang Zhu and Hu Li
Agriculture 2025, 15(19), 2058; https://doi.org/10.3390/agriculture15192058 - 30 Sep 2025
Abstract
Severe water scarcity has become a critical constraint to global agricultural development. Enhancing both the timeliness and accuracy of crop evapotranspiration (ETc) retrieval is essential for optimizing irrigation scheduling. Addressing the limitations of conventional ground-based point-source measurements in rapidly acquiring two-dimensional ETc information at the field scale, this study employed unmanned aerial vehicle (UAV) remote sensing equipped with multispectral and thermal infrared sensors to obtain high spatiotemporal resolution imagery of a representative citrus orchard (Citrus reticulata Blanco cv. ‘Yichangmiju’) in western Hubei at different phenological stages. In conjunction with meteorological data (air temperature, daily net radiation, etc.), ETc was retrieved using two established approaches: the thermal-infrared-driven Seguin-Itier (S-I) model, which relates canopy–air temperature differences to ETc, and the multispectral-driven single crop coefficient method, which estimates ETc by combining vegetation indices with reference evapotranspiration.
The findings indicate that: (1) both the S-I model and the single crop coefficient method achieved satisfactory ETc estimation accuracy, with the latter performing slightly better (accuracy of 80% and 85%, respectively); (2) the proposed multi-source fusion model consistently demonstrated high accuracy and stability across all phenological stages (R2 = 0.9104, 0.9851, and 0.9313 for the fruit-setting, fruit-enlargement, and coloration–sugar-accumulation stages, respectively; all significant at p < 0.01), significantly enhancing the precision and timeliness of ETc retrieval; and (3) the model was successfully applied to ETc retrieval during the main growth stages in the Cangwubang citrus-producing area of Yichang, providing practical support for irrigation scheduling and water resource management at the regional scale. This multi-source fusion approach offers effective technical support for precision irrigation control in agriculture and holds broad application prospects. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
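The single crop coefficient method named in this abstract follows the standard FAO-56 form ETc = Kc × ET0. A minimal sketch, assuming a hypothetical linear NDVI-to-Kc calibration (the coefficients and clamp range are illustrative, not the study's):

```python
# FAO-56 single crop coefficient approach: ETc = Kc * ET0.
# The NDVI -> Kc mapping is a hypothetical linear calibration,
# not the one fitted in the study.
def kc_from_ndvi(ndvi: float, a: float = 1.25, b: float = -0.1) -> float:
    """Linear Kc ~ a * NDVI + b, clamped to a plausible [0.2, 1.2] range."""
    return max(0.2, min(1.2, a * ndvi + b))

def etc_mm_per_day(ndvi: float, et0: float) -> float:
    """Crop evapotranspiration from NDVI and reference evapotranspiration ET0."""
    return kc_from_ndvi(ndvi) * et0

print(etc_mm_per_day(ndvi=0.72, et0=5.0))  # ~4.0 mm/day
```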

31 pages, 1983 KB  
Review
Integrating Remote Sensing and Autonomous Robotics in Precision Agriculture: Current Applications and Workflow Challenges
by Magdalena Łągiewska and Ewa Panek-Chwastyk
Agronomy 2025, 15(10), 2314; https://doi.org/10.3390/agronomy15102314 - 30 Sep 2025
Abstract
Remote sensing technologies are increasingly integrated with autonomous robotic platforms to enhance data-driven decision-making in precision agriculture. Rather than replacing conventional platforms such as satellites or UAVs, autonomous ground robots complement them by enabling high-resolution, site-specific observations in real time, especially at the plant level. This review analyzes how remote sensing sensors—including multispectral, hyperspectral, LiDAR, and thermal—are deployed via robotic systems for specific agricultural tasks such as canopy mapping, weed identification, soil moisture monitoring, and precision spraying. Key benefits include higher spatial and temporal resolution, improved monitoring of under-canopy conditions, and enhanced task automation. However, the practical deployment of such systems is constrained by terrain complexity, power demands, and sensor calibration. The integration of artificial intelligence and IoT connectivity emerges as a critical enabler for responsive, scalable solutions. By focusing on how autonomous robots function as mobile sensor platforms, this article contributes to the understanding of their role within modern precision agriculture workflows. The findings support future development pathways aimed at increasing operational efficiency and sustainability across diverse crop systems. Full article

28 pages, 1926 KB  
Systematic Review
Drone Imaging and Sensors for Situational Awareness in Hazardous Environments: A Systematic Review
by Siripan Rattanaamporn, Asanka Perera, Andy Nguyen, Thanh Binh Ngo and Javaan Chahl
J. Sens. Actuator Netw. 2025, 14(5), 98; https://doi.org/10.3390/jsan14050098 - 29 Sep 2025
Abstract
Situation awareness is essential for ensuring safety in hazardous environments, where timely and accurate information is critical for decision-making. Unmanned Aerial Vehicles (UAVs) have emerged as valuable tools in enhancing situation awareness by providing real-time data and monitoring capabilities in high-risk areas. This study explores the integration of advanced technologies, focusing on imaging and sensor technologies such as thermal, spectral, and multispectral cameras, deployed in critical zones. By merging these technologies into UAV platforms, responders gain access to essential real-time information while reducing human exposure to hazardous conditions. This study presents case studies and practical applications, highlighting the effectiveness of these technologies in a range of hazardous situations. Full article
(This article belongs to the Special Issue AI-Assisted Machine-Environment Interaction)

37 pages, 2297 KB  
Systematic Review
Search, Detect, Recover: A Systematic Review of UAV-Based Remote Sensing Approaches for the Location of Human Remains and Clandestine Graves
by Cherene de Bruyn, Komang Ralebitso-Senior, Kirstie Scott, Heather Panter and Frederic Bezombes
Drones 2025, 9(10), 674; https://doi.org/10.3390/drones9100674 - 26 Sep 2025
Abstract
Several approaches are currently being used by law enforcement to locate the remains of victims. Yet, traditional methods are invasive and time-consuming. Unmanned Aerial Vehicle (UAV)-based remote sensing has emerged as a potential tool to support the location of human remains and clandestine graves. While offering a non-invasive and low-cost alternative, UAV-based remote sensing needs to be tested and validated for forensic case work. To assess current knowledge, a systematic review of 19 peer-reviewed articles from four databases was conducted, focusing specifically on UAV-based remote sensing for human remains and clandestine grave location. The findings indicate that different sensors (colour, thermal, and multispectral cameras), were tested across a range of burial conditions and models (human and mammalian). While UAVs with imaging sensors can locate graves and decomposition-related anomalies, experimental designs from the reviewed studies lacked robustness in terms of replication and consistency across models. Trends also highlight the potential of automated detection of anomalies over manual inspection, potentially leading to improved predictive modelling. Overall, UAV-based remote sensing shows considerable promise for enhancing the efficiency of human remains and clandestine grave location, but methodological limitations must be addressed to ensure findings are relevant to real-world forensic cases. Full article

22 pages, 4736 KB  
Article
Radiometric Cross-Calibration and Validation of KOMPSAT-3/AEISS Using Sentinel-2A/MSI
by Jin-Hyeok Choi, Kyoung-Wook Jin, Dong-Hwan Cha, Kyung-Bae Choi, Yong-Han Jo, Kwang-Nyun Kim, Gwui-Bong Kang, Ho-Yeon Shin, Ji-Yun Lee, Eunyeong Kim, Hojong Chang and Yun Gon Lee
Remote Sens. 2025, 17(19), 3280; https://doi.org/10.3390/rs17193280 - 24 Sep 2025
Abstract
The successful launch of Korea Multipurpose Satellite-3/Advanced Earth Imaging Sensor System (KOMPSAT-3/AEISS) on 18 May 2012 allowed the Republic of Korea to meet the growing demand for high-resolution satellite imagery. However, like all satellite sensors, KOMPSAT-3/AEISS experienced temporal changes post-launch and thus requires ongoing evaluation and calibration. Although more than a decade has passed since launch, the KOMPSAT-3/AEISS mission and its multi-year data archive remain widely used. This study focused on the cross-calibration of KOMPSAT-3/AEISS with Sentinel-2A/Multispectral Instrument (MSI) by comparing the radiometric responses of the two satellite sensors under similar observation conditions, leveraging the linear relationship between Digital Numbers (DN) and top-of-atmosphere (TOA) radiance. Cross-calibration was performed using near-simultaneous satellite images of the same region, and the Spectral Band Adjustment Factor (SBAF) was calculated and applied to account for differences in spectral response functions (SRF). Additionally, Bidirectional Reflectance Distribution Function (BRDF) correction was applied using MODIS-based kernel models to minimize angular reflectance effects caused by differences in viewing and illumination geometry. This study aims to evaluate the radiometric consistency of KOMPSAT-3/AEISS relative to Sentinel-2A/MSI over Baotou scenes acquired in 2022–2023, derive band-specific calibration coefficients and compare them with prior results, and conduct a side-by-side comparison of cross-calibration and vicarious calibration. Furthermore, the cross-calibration yielded band-specific gains of 0.0196 (Blue), 0.0237 (Green), 0.0214 (Red), and 0.0136 (NIR). These findings offer valuable implications for Earth observation, environmental monitoring, and the planning and execution of future satellite missions. Full article
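The cross-calibration rests on the linear DN-to-TOA-radiance relationship mentioned in the abstract. A sketch using the band gains it reports (a zero offset is assumed, and the SBAF value in the usage line is a placeholder, not a number from the study):

```python
# Linear DN -> TOA radiance model with the band-specific gains
# reported in the abstract (zero-offset form assumed).
GAINS = {"blue": 0.0196, "green": 0.0237, "red": 0.0214, "nir": 0.0136}

def toa_radiance(dn: int, band: str, sbaf: float = 1.0) -> float:
    """L = gain * DN, optionally scaled by a spectral band adjustment factor."""
    return GAINS[band] * dn * sbaf

print(toa_radiance(1000, "red"))             # 21.4
print(toa_radiance(1000, "red", sbaf=0.98))  # placeholder SBAF, not from the study
```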

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive crossmodal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

25 pages, 8787 KB  
Article
Non-Destructive Drone-Based Multispectral and RGB Image Analyses for Regression Modeling to Assess Waterlogging Stress in Pseudolysimachion linariifolium
by TaekJin Yoon, TaeWan Kim and SungYung Yoo
Horticulturae 2025, 11(9), 1139; https://doi.org/10.3390/horticulturae11091139 - 18 Sep 2025
Abstract
Urban gardens play a vital role in enhancing the quality of the environment and biodiversity. However, irregular rainfall and poor soil drainage due to climate change have increased the exposure of garden plants to waterlogging stress. Pseudolysimachion linariifolium (Pall. ex Link) Holub, a perennial herbaceous plant native to Northeast Asia, is widely used for its ornamental value in urban landscaping. However, its physiological responses to excess moisture conditions remain understudied. In our study, we evaluated the stress responses of P. linariifolium to waterlogging by using non-destructive analysis with drone-based multispectral imagery. We used R (ver. 4.3.2) and the Quantum Geographical Information System (QGIS ver. 3.42.1) to calculate vegetation indices, including the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Green Leaf Index (GLI), Normalized Green Red Difference Index (NGRDI), Blue Green Pigment Index (BGI), and Visible Atmospherically Resistant Index (VARI). We analyzed the indices combined with the Cumulative volumetric Soil Moisture content (SM_Cum) measured by sensors. With waterlogging treatment, NDVI decreased by 21% and GNDVI by over 34% to indicate reduced photosynthetic activity and chlorophyll content. Correlation analysis, principal component analysis, and hierarchical clustering clearly distinguished stress responses over time. Regression models using NDVI and GNDVI explained 89.7% of the variance in SM_Cum. Our results demonstrate that drone-based vegetation index analysis can effectively quantify waterlogging stress in garden plants and can contribute to improved moisture management and growth monitoring in urban gardens. Full article
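The band-ratio indices this study computes have standard definitions. A minimal sketch over single-pixel reflectances in the 0–1 range (GLI and BGI follow the same band-ratio pattern and are omitted for brevity):

```python
# Standard vegetation indices from per-band reflectances (0-1).
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    return (nir - green) / (nir + green)

def ngrdi(green, red):
    return (green - red) / (green + red)

def vari(green, red, blue):
    return (green - red) / (green + red - blue)

# A healthy canopy reflects strongly in the NIR, so NDVI approaches 1:
print(round(ndvi(nir=0.45, red=0.05), 3))    # 0.8
print(round(gndvi(nir=0.45, green=0.09), 3)) # 0.667
```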

24 pages, 11665 KB  
Article
Response of Nearby Sensors to Variable Doses of Nitrogen Fertilization in Winter Fodder Crops Under Mediterranean Climate
by Luís Silva, Caroline Brunelli, Raphael Moreira, Sofia Barbosa, Manuela Fernandes, Andreia Miguel, Benvindo Maçãs, Constantino Valero, Manuel Patanita, Fernando Cebola Lidon and Luís Alcino Conceição
Sensors 2025, 25(18), 5811; https://doi.org/10.3390/s25185811 - 17 Sep 2025
Abstract
The sustainable intensification of forage production in Mediterranean climates requires technological solutions that optimize the use of agricultural inputs. This study aimed to evaluate the performance of proximal optical sensors in recommending and monitoring variable rate nitrogen fertilization in winter forage crops cultivated under Mediterranean conditions. A handheld multispectral active sensor (HMA), a multispectral camera on an unmanned aerial vehicle (UAV), and one passive on-the-go sensor (OTG) were used to generate real-time nitrogen (N) application prescriptions. The sensors were assessed for their correlation with agronomic parameters such as plant fresh matter (PFM), plant dry matter (PDM), plant N content (PNC), crude protein (CP) in %, crude protein yield (CPyield) per unit of area, and N uptake (NUp). The real-time N fertilization stood out by promoting a 15.23% reduction in the total N fertilizer applied compared to a usual farmer-fixed dose of 150 kg ha−1, saving 22.90 kg ha−1 without compromising crop productivity. Additionally, NDVI_OTG showed moderate simple linear correlation with PFM (R2 = 0.52), confirming its effectiveness in prescription based on vegetative vigor. UAV_II (NDVI after fertilization) showed even stronger correlations with CP (R2 = 0.58), CPyield (R2 = 0.53), and NUp (R2 = 0.53), highlighting its sensitivity to physiological responses induced by N fertilization. Although the HMA sensor operates via point readings, it also proved effective, with significant correlations to NUp (R2 = 0.55) and CPyield (R2 = 0.53). It is concluded that integrating sensors enables both precise input prescription and efficient monitoring of plant physiological responses, fostering cost-effectiveness, sustainability, and improved agronomic efficiency. Full article
(This article belongs to the Section Smart Agriculture)

29 pages, 19475 KB  
Article
Fine-Scale Grassland Classification Using UAV-Based Multi-Sensor Image Fusion and Deep Learning
by Zhongquan Cai, Changji Wen, Lun Bao, Hongyuan Ma, Zhuoran Yan, Jiaxuan Li, Xiaohong Gao and Lingxue Yu
Remote Sens. 2025, 17(18), 3190; https://doi.org/10.3390/rs17183190 - 15 Sep 2025
Abstract
Grassland classification via remote sensing is essential for ecosystem monitoring and precision management, yet conventional satellite-based approaches are fundamentally constrained by coarse spatial resolution. To overcome this limitation, we harness high-resolution UAV multi-sensor data, integrating multi-scale image fusion with deep learning to achieve fine-scale grassland classification that satellites cannot provide. First, four categories of UAV data, including RGB, multispectral, thermal infrared, and LiDAR point cloud, were collected, and a fused image tensor consisting of 10 channels (NDVI, VCI, CHM, etc.) was constructed through orthorectification and resampling. For feature-level fusion, four deep fusion networks were designed. Among them, the MultiScale Pyramid Fusion Network, utilizing a pyramid pooling module, effectively integrated spectral and structural features, achieving optimal performance in all six image fusion evaluation metrics, including information entropy (6.84), spatial frequency (15.56), and mean gradient (12.54). Subsequently, training and validation datasets were constructed by integrating visual interpretation samples. Four backbone networks, including UNet++, DeepLabV3+, PSPNet, and FPN, were employed, and attention modules (SE, ECA, and CBAM) were introduced separately to form 12 model combinations. Results indicated that the UNet++ network combined with the SE attention module achieved the best segmentation performance on the validation set, with a mean Intersection over Union (mIoU) of 77.68%, overall accuracy (OA) of 86.98%, F1-score of 81.48%, and Kappa coefficient of 0.82. In the categories of Leymus chinensis and Puccinellia distans, producer’s accuracy (PA)/user’s accuracy (UA) reached 86.46%/82.30% and 82.40%/77.68%, respectively. Whole-image prediction validated the model’s coherent identification capability for patch boundaries. In conclusion, this study provides a systematic approach for integrating multi-source UAV remote sensing data and intelligent grassland interpretation, offering technical support for grassland ecological monitoring and resource assessment. Full article
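Information entropy, one of the six fusion-quality metrics reported here, is the Shannon entropy of the image intensity histogram (higher values indicate more information content). A stdlib sketch on a synthetic test pattern:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of a flat iterable of 0-255 intensities."""
    counts = Counter(pixels)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-level test pattern has entropy log2(4) = 2 bits:
print(image_entropy([0, 64, 128, 255] * 100))  # 2.0
```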

20 pages, 4263 KB  
Article
Comparative Assessment of Remote and Proximal NDVI Sensing for Predicting Wheat Agronomic Traits
by Marko M. Kostić, Vladimir Aćin, Milan Mirosavljević, Zoran Stamenković, Krstan Kešelj, Nataša Ljubičić, Antonio Scarfone, Nikola Stanković and Danijela Bursać Kovačević
Drones 2025, 9(9), 641; https://doi.org/10.3390/drones9090641 - 13 Sep 2025
Abstract
Monitoring wheat traits across diverse environments requires reliable sensing tools that balance accuracy, cost, and scalability. This study compares the performance of proximal and UAV-derived NDVI sensing for predicting the key agronomic traits in winter wheat. The research was conducted at a long-term NPK field experiment on Haplic Chernozem soils in Rimski Šančevi, Serbia, using UAV multispectral imagery and a handheld proximal sensor to collect NDVI data across 400 micro-plots and six phenological stages. The UAV-derived NDVI achieved a higher mean value (0.71 vs. 0.60), lower coefficient of variation (29.2% vs. 33.0%), and stronger correlation with the POM readings (R2 = 0.92). For trait prediction, the UAV-based NDVI reached R2 values up to 0.95 for grain yield and 0.84 for plant height, outperforming the POM (maximum R2 = 0.94 and 0.83, respectively), and it showed superior temporal consistency (average R2 = 0.74 vs. 0.64). Although the POM performed comparably during mid-season under controlled conditions, its sensitivity to operator handling and limited spatial resolution reduced robustness in more variable field scenarios. A cost–benefit analysis revealed that the POM offers advantages in affordability, ease of use, and deployment in small-scale settings, while UAV systems are better suited for large-scale monitoring due to their higher spatial resolution and data richness. The findings highlight the importance of selecting sensing technologies based on biological context, operational goals, and resource constraints, and suggest that combining methods through stratified sampling may improve the efficiency and accuracy of crop monitoring in precision agriculture. Full article
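Trait prediction in this study is assessed via the least-squares R2 between NDVI and the measured trait. A self-contained sketch of that fit (the NDVI and yield values are made-up toy data, not the study's measurements):

```python
# Simple least-squares fit of a trait on NDVI, reporting slope,
# intercept, and the coefficient of determination R^2.
def fit_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

ndvi = [0.52, 0.61, 0.68, 0.74, 0.81]       # toy NDVI readings
yield_t_ha = [4.1, 5.0, 5.8, 6.3, 7.2]      # toy grain yields, t/ha
slope, intercept, r2 = fit_r2(ndvi, yield_t_ha)
print(round(r2, 3))  # near 1 for this almost-linear toy data
```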

19 pages, 7290 KB  
Article
Assessing Pacific Madrone Blight with UAS Remote Sensing Under Different Skylight Conditions
by Michael C. Winfield, Michael G. Wing, Julia H. Wood, Savannah Graham, Anika M. Anderson, Dustin C. Hawks and Adam H. Miller
Remote Sens. 2025, 17(18), 3141; https://doi.org/10.3390/rs17183141 - 10 Sep 2025
Abstract
We investigated the relationship between foliar blight, tree structure, and spectral signatures in a Pacific Madrone (Arbutus menziesii) orchard in Oregon using unoccupied aerial system (UAS) multispectral imagery and ground surveying. Aerial data were collected under both cloudy and sunny conditions using a six-band sensor (red, green, blue, near-infrared, red edge, and longwave infrared), and ground surveying recorded foliar blight and tree height for 29 trees. We observed band- and index-dependent spectral variation within crowns and between lighting conditions. The Normalized Difference Vegetation Index (NDVI), Modified Simple Ratio Index Red Edge (MSRE), and Red Edge Chlorophyll Index (RECI) showed higher consistency across lighting changes (adjusted R2 ≈ 0.95), while the Green Chlorophyll Index (GCI), Modified Simple Ratio Index (MSR), and Green Normalized Difference Vegetation Index (GNDVI) showed slightly lower consistency (adjusted R2 ≈ 0.92) but greater sensitivity to blight under cloudy skies. Diffuse skylight increased blue and near-infrared reflectance, reduced red, and enhanced blight detection using GCI, MSR, and GNDVI. Tree height was inversely related to blight presence (p < 0.005), and spectral variation within crowns was significant (p < 0.01), suggesting a role for canopy architecture. The support vector machine classification of tree crowns achieved 92.5% accuracy (kappa = 0.87). Full article
(This article belongs to the Special Issue Plant Disease Detection and Recognition Using Remotely Sensed Data)

22 pages, 15219 KB  
Article
Integrating UAS Remote Sensing and Edge Detection for Accurate Coal Stockpile Volume Estimation
by Sandeep Dhakal, Ashish Manandhar, Ajay Shah and Sami Khanal
Remote Sens. 2025, 17(18), 3136; https://doi.org/10.3390/rs17183136 - 10 Sep 2025
Abstract
Accurate stockpile volume estimation is essential for industries that manage bulk materials across various stages of production. Conventional ground-based methods such as walking wheels, total stations, Global Navigation Satellite Systems (GNSSs), and Terrestrial Laser Scanners (TLSs) have been widely used, but often involve significant safety risks, particularly when accessing hard-to-reach or hazardous areas. Unmanned Aerial Systems (UASs) provide a safer and more efficient alternative for surveying irregularly shaped stockpiles. This study evaluates UAS-based methods for estimating the volume of coal stockpiles at a storage facility near Cadiz, Ohio. Two sensor platforms were deployed: a Freefly Alta X quadcopter equipped with a Real-Time Kinematic (RTK) Light Detection and Ranging (LiDAR, active sensor) and a WingtraOne UAS with Post-Processed Kinematic (PPK) multispectral imaging (optical, passive sensor). Three approaches were compared: (1) LiDAR; (2) Structure-from-Motion (SfM) photogrammetry with a Digital Surface Model (DSM) and Digital Terrain Model (DTM) (SfM–DTM); and (3) an SfM-derived DSM combined with a kriging-interpolated DTM (SfM–intDTM). An automated boundary detection workflow was developed, integrating slope thresholding, Near-Infrared (NIR) spectral filtering, and Canny edge detection. Volume estimates from SfM–DTM and SfM–intDTM closely matched LiDAR-based reference estimates, with Root Mean Square Error (RMSE) values of 147.51 m3 and 146.18 m3, respectively. The SfM–intDTM approach achieved a Mean Absolute Percentage Error (MAPE) of ~2%, indicating strong agreement with LiDAR and improved accuracy compared to prior studies. A sensitivity analysis further highlighted the role of spatial resolution in volume estimation. While RMSE values remained consistent (141–162 m3) and the MAPE below 2.5% for resolutions between 0.06 m and 5 m, accuracy declined at coarser resolutions, with the MAPE rising to 11.76% at 10 m. 
These results emphasize the need to balance resolution against study objectives, geographic extent, and computational cost when selecting elevation data for volume estimation. Overall, UAS-based SfM photogrammetry combined with interpolated DTMs and automated boundary extraction offers a scalable, cost-effective, and accurate approach for stockpile volume estimation. The methodology is well suited both to high-precision monitoring of individual stockpiles and to broader regional-scale assessments, and it can be readily adapted to other domains such as quarrying, agricultural storage, and forestry operations. Full article
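The core step shared by all three approaches is cut-fill differencing of a surface model against a terrain model. A minimal sketch of that step, assuming the DSM and DTM are already co-registered rasters held as NumPy arrays (real inputs would be GeoTIFFs exported by the SfM or LiDAR pipeline; the toy grids below are illustrative only):

```python
import numpy as np

def stockpile_volume(dsm, dtm, cell_size):
    """Volume (m^3) of material between a DSM and a DTM on a square grid."""
    # Clip negative heights so cells where the DSM dips below the terrain
    # model (noise, vegetation removal artifacts) do not subtract volume.
    heights = np.clip(dsm - dtm, 0.0, None)
    return float(heights.sum() * cell_size ** 2)

# Toy 4x4 grids at 0.5 m resolution: a uniform 2 m-high slab of material.
dtm = np.zeros((4, 4))
dsm = np.full((4, 4), 2.0)
volume = stockpile_volume(dsm, dtm, cell_size=0.5)  # 16 cells * 2 m * 0.25 m^2
```

Finer cell sizes capture more surface detail at a higher computational cost, which mirrors the resolution sensitivity reported above.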
20 pages, 4045 KB  
Article
Sugarcane (Saccharum officinarum) Productivity Estimation Using Multispectral Sensors in RPAs, Biometric Variables, and Vegetation Indices
by Marta Laura de Souza Alexandre, Izabelle de Lima e Lima, Matheus Sterzo Nilsson, Rodnei Rizzo, Carlos Augusto Alves Cardoso Silva and Peterson Ricardo Fiorio
Agronomy 2025, 15(9), 2149; https://doi.org/10.3390/agronomy15092149 - 8 Sep 2025
Abstract
The sugarcane crop is of great economic relevance to Brazil, and precise productivity estimation is a major challenge in production. Therefore, the aim of this study was to estimate the productivity of sugarcane cultivars in different regions, using multispectral sensors mounted on [...] Read more.
The sugarcane crop is of great economic relevance to Brazil, and precise productivity estimation is a major challenge in production. Therefore, the aim of this study was to estimate the productivity of sugarcane cultivars in different regions, using multispectral sensors mounted on RPAs and biometric variables sampled in the field. The study was conducted in two experimental areas, located in the municipalities of Itirapina-SP and Iracemápolis-SP, with 16 cultivars in a randomized block design. The images were acquired using the MicaSense Altum multispectral sensor, allowing the extraction of spectral bands and vegetation indices. In parallel, biometric variables were collected at 149 and 295 days after planting (DAP). The machine learning models Random Forest (RF) and Extreme Gradient Boosting (XGBoost) were calibrated with different sets of variables; although their performance was similar, the XGBoost model was chosen for the analyses because it handles overfitting more effectively. The results indicated good model performance (R² = 0.83 and 0.66; RMSE = 18.7 t ha⁻¹ and 25.3 t ha⁻¹; MAE = 15.7 and 20.2; RPIQ = 3.22 and 2.61) under K-fold and leave-one-out cross-validation (LOOCV). The correlations between biometric variables, spectral bands, and vegetation indices varied with crop development stage. The leaf insertion angle showed a strong correlation with near-infrared (NIR) reflectance (r = 0.76) and with the ExG and VARI indices (r = 0.70 and r = 0.69, respectively). The present work demonstrated that integrating multispectral and biometric data is a promising approach for estimating sugarcane productivity. Full article
30 pages, 19154 KB  
Article
Mapping of Leaf Pigments in Lettuce via Hyperspectral Imaging and Machine Learning
by João Vitor Ferreira Gonçalves, Renan Falcioni, Thiago Rutz, Andre Luiz Biscaia Ribeiro da Silva, Renato Herrig Furlanetto, Luís Guilherme Teixeira Crusiol, Karym Mayara de Oliveira, Caio Almeida de Oliveira, Nicole Ghinzelli Vedana, José Alexandre Melo Demattê and Marcos Rafael Nanni
Horticulturae 2025, 11(9), 1077; https://doi.org/10.3390/horticulturae11091077 - 5 Sep 2025
Abstract
The nutritional and commercial value of lettuce (Lactuca sativa L.) is determined by its foliar pigment and phenolic composition, which varies among cultivars. This study aimed to assess the capacity of hyperspectral and multispectral imaging, combined with machine learning algorithms, to [...] Read more.
The nutritional and commercial value of lettuce (Lactuca sativa L.) is determined by its foliar pigment and phenolic composition, which varies among cultivars. This study aimed to assess the capacity of hyperspectral and multispectral imaging, combined with machine learning algorithms, to predict and map key biochemical traits, such as chloroplastidic pigments (chlorophylls and carotenoids) and extrachloroplastidic pigments (anthocyanins, flavonoids, and phenolic compounds). Eleven cultivars exhibiting contrasting pigmentation profiles were grown under controlled greenhouse conditions, and their chlorophyll a and b, carotenoid, anthocyanin, flavonoid, and total phenolic contents were evaluated. Spectral reflectance data were acquired via a Headwall hyperspectral sensor and a MicaSense multispectral sensor, and the pigment contents were quantified via solvent extraction and a UV microplate reader. We developed predictive models via seven machine learning approaches, with partial least squares regression (PLSR) and random forest (RF) emerging as the most robust algorithms for pigment estimation. Chlorophyll a and b contents were strongly and positively correlated (r > 0.9), consistent with the hyperspectral reflectance imaging results. The hyperspectral data consistently outperformed the multispectral data in predictive accuracy (e.g., R² = 0.91 and 0.76 for anthocyanins and flavonoids via RF, and R² = 0.79 for phenolic compounds), capturing subtle spectral features linked to biochemical variation. Spatial maps revealed strong genotype-dependent heterogeneity in pigment and phenolic distributions, supporting the potential of this approach for cultivar discrimination and pigment phenotyping. These findings demonstrate that hyperspectral imaging integrated with data-driven modelling offers a powerful, nondestructive framework for the biochemical monitoring of leafy vegetables, supporting breeding, precision agriculture, and food quality assessment. Full article
(This article belongs to the Section Vegetable Production Systems)
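Index-based pigment maps of the kind described in this abstract are computed per pixel from a hyperspectral cube. A minimal sketch of an anthocyanin map using the Gitelson-style Anthocyanin Reflectance Index, ARI = 1/R550 − 1/R700 — the band positions below are hypothetical and would depend on the actual sensor's wavelength calibration, and the study's regression models go well beyond a single index:

```python
import numpy as np

# Hypothetical band indices for 550 nm and 700 nm in a (H, W, bands) cube;
# a real workflow would look these up from the sensor's wavelength table.
BAND_550, BAND_700 = 30, 60

def anthocyanin_reflectance_index(cube):
    """Per-pixel ARI = 1/R550 - 1/R700 from a reflectance cube in [0, 1]."""
    r550 = cube[..., BAND_550].astype(float)
    r700 = cube[..., BAND_700].astype(float)
    return 1.0 / r550 - 1.0 / r700

# Toy cube: uniform reflectance of 0.5 at 550 nm and 0.25 at 700 nm.
cube = np.full((2, 2, 100), 0.4)
cube[..., BAND_550] = 0.5
cube[..., BAND_700] = 0.25
ari_map = anthocyanin_reflectance_index(cube)
```

Because anthocyanins absorb strongly near 550 nm, pixels with more pigment depress R550 and raise the index, which is what makes such maps useful for the genotype-level heterogeneity the study reports.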