1. Introduction
Advances in unmanned aerial vehicles (UAVs) and the miniaturization of multispectral instruments have led to their widespread application in precision agriculture (PA) [
1,
2,
3]. Notable applications of multispectral UAV imagery in PA involve foliar pigment content estimation [
4], plant disease detection [
5], and plant growth dynamics monitoring [
6]. A crucial step in conducting a reliable analysis of plant traits is to recover the surface reflectance from raw image data through radiometric calibration [
7]. However, external factors, particularly variable illumination during the flight [
8,
9], significantly affect the accuracy of reflectance conversion, posing an inevitable challenge for UAV fieldwork in natural environments.
The empirical line method (ELM), as a predominant radiometric calibration technique for multispectral UAV imagery, assumes a linear relationship between image-derived radiance and invariant ground object-derived reflectance [
10]. Typically, the ELM calibration is performed once, either before or after a flight campaign, using reference panels, and the derived linear model is consistently applied to all subsequently collected images. This operation assumes that all images are acquired under the same illumination as during the capture of the reference panels, neglecting in-flight lighting variations. This assumption explains why UAV-based field monitoring with optical cameras is recommended under a clear sky or uniformly cloudy conditions. However, it is not always possible to wait for ideal conditions for a drone flight. This is limiting, particularly in high-latitude regions such as the Netherlands, Ireland, and Denmark, where overcast skies and variable solar irradiance are frequent [
11]. Additionally, plant growth is a dynamic process with notable physiological and phenotypic variations often observed within just a week [
12]. This necessitates collecting UAV data during specific growth stages, which may inevitably coincide with cloud cover conditions. Moreover, advancements in battery technology have also enhanced UAV capabilities for long-duration flight, enabling the coverage of larger areas [
13]. This also increases the likelihood of encountering varying lighting conditions, complicating the maintenance of consistent illumination levels during flights. Consequently, expanding the UAV data collection window requires developing methods to alleviate the impact of dynamic illumination on multispectral images and the resulting derived plant reflectance.
To address the challenges of variable illumination in spectral UAV imagery, recently developed methods can be mainly divided into two categories: sensor-based and image-based approaches [
5,
11,
14,
15,
16]. In terms of sensor-based methods, the main idea is to measure the solar irradiance simultaneously with image capture, either onboard the UAV or on the ground, to compensate for variable illumination effects [
16,
17]. For instance, manufacturers such as AgEagle (AgEagle Aerial Systems, Wichita, KS, USA) and Parrot (Parrot Drone SAS, Paris, France) have developed specialized onboard irradiance sensors, such as the downwelling light sensor (DLS-2) and the Sequoia sunshine sensor, to help mitigate the influence of poor lighting conditions in real time. The strength of onboard lighting sensors is their ability to provide real-time illumination measurements, enabling the dynamic adjustment of the radiometric properties of UAV images for accurate data capture under varying light conditions [
18]. Additionally, onboard sensors automate the correction process, reducing post-processing and manual intervention, thereby increasing the efficiency of data collection and the handling of large datasets. Despite these advantages, sensor-based methods have certain drawbacks that must be considered. For instance, Cao et al. (2019) [
19] evaluated the performance of the DLS-corrected method under varying illumination conditions and suggested that its radiometric accuracy needs further improvement. Additionally, research [
18] indicates that the measured irradiance is sensitive to the earth–sensor and sensor–solar angles caused by the vibration and tilting of drones, potentially introducing systematic errors into the compensation. In addition to using onboard sensors, another solution is to place lighting sensors at ground level. Xue et al. (2023) [
16] corrected the influence of illumination by placing an additional camera on the ground to capture panel images simultaneously with the drone flight. Honkavaara et al. (2013) conducted similar measurements, continuously recording in situ irradiance using an ASD FieldSpec Pro with irradiance optics to compensate for lighting variation [
20]. However, ground-based corrections have an intrinsic shortcoming: they become unreliable when the illumination at the UAV location differs from that at the ground sensor location. Additionally, for sensor-based methods, the associated technology increases the overall cost of the UAV system, which can be a limiting factor for budget-constrained projects or organizations.
Compared to sensor-based approaches, image-based methods rely solely on the captured imagery, eliminating the need for extra sensors or equipment. This reduces the overall weight and power consumption of the UAV, allowing for longer flight times and greater payload capacity [
21]. Therefore, image-based adjustment methods have increasingly gained attention as an efficient approach to correct the images under different illumination conditions [
14,
15,
22]. For example, Qin et al. (2022) developed a novel illumination estimation model based on illumination consistency within single images and reflectance consistency for the same tie points across images [
15]. Subsequently, they proposed an illumination compensation model based on physical imaging principles to mitigate the effects of varying lighting conditions. Honkavaara et al. (2012) introduced the radiometric block adjustment (RBA) method, which estimates linear regression coefficients for image pairs based on the observed values for the same tie points within the overlapping area of consecutive images, where a tie point is the same point in the field detected in two images [
23]. In addition, linear regression coefficients are included for images that contain the observation of a reference panel. Jointly, the regression coefficients of the tie points and the reference panels are optimized to deal with variable illumination. Honkavaara and Khoramshahi (2018) further optimized the RBA method by integrating bidirectional reflectance distribution functions (BRDFs) and assigning different weights to observations of tie points and reference panels [
14]. However, the method is unsuitable for processing large image sets due to error accumulation [
24]. Subsequently, Kizel et al. (2018) proposed a generalized empirical line method (GELM) based on a similar concept, which uses statistical information from tie points to homogenize images [
22]. By performing a linear regression over the tie points, this method reduces the number of tie point equations, thereby reducing computational complexity. The GELM algorithm was designed for imaging spectroscopy data containing 385 spectral channels; however, it has only been validated on four hyperspectral images and requires further optimization and testing on broader datasets. Another limitation of these image-based approaches is that they do not consider the content of the tie points or the specific monitoring task, resulting in general corrections that may not be optimal for specific applications such as field monitoring. In summary, current RBA-based algorithms are unsuitable for processing numerous multispectral images and require accuracy enhancement and validation with specific crop monitoring applications in mind.
In this study, to achieve accurate radiometric calibration of UAV-collected multispectral images under varying lighting conditions for crop monitoring, we optimized the RBA-based method and validated the performance of these methods in terms of accuracy and homogeneity. The specific objectives of this article were the following: (1) to optimize the RBA-based method to make it suitable for processing numerous multispectral images and (2) to improve the performance of the RBA-based method for crop monitoring. To deal with the first objective, we reduced the number of tie points referring to the strategy proposed by Kizel et al. (2018) [
22] and investigated the balance between the weights assigned to the tie points versus those assigned to the observations of the reference panels. The second objective was targeted by considering only the tie points that are relevant to the crop monitoring task. To evaluate the effectiveness of the proposed methods, the resulting orthomosaic was evaluated comprehensively on a UAV image dataset collected under fluctuating solar irradiance conditions. Additionally, the contributions of this paper are the following:
We optimized the RBA method by reducing the number of tie point observations and assigning higher weights to the reference point observations, thereby successfully achieving robust reflectance image conversion from radiance images under changing lighting conditions.
We proposed the RBA-Plant method to enhance the radiometric accuracy and uniformity of the generated reflectance orthomosaic. We demonstrated that the strategy of excluding non-vegetation tie points from the RBA equation system helps improve the performance of the generated orthomosaics.
The remainder of the paper is organized as follows.
Section 2 details the experimental settings, principles, and framework of the optimized radiometric block adjustment method.
Section 3 presents the performance evaluation of that method. Lastly,
Section 4 gives the discussion and conclusions.
2. Materials and Methods
2.1. Study Area
The field experiment was conducted at Unifarm, the agricultural experimental farm in Wageningen, Gelderland province, the Netherlands (51°59′28.8″N, 5°39′50.3″E). The objective of this experiment was to compare two planting systems: monoculture and stripcropping, shown as white and blue rectangles, respectively, in
Figure 1. The main cultivated crop was potato, whereas in the stripcropping field, potatoes and grass were grown in alternating strips. By the time of UAV data collection in June, the potato had reached the stolon initiation stage and begun to form a canopy. The ground objects in the study area included potato plants, bare soil, grass, and roads.
Eight ground control points (GCPs), depicted as red dots in
Figure 1, were strategically placed within the field. The locations of these GCPs were accurately measured by a real-time kinematic (RTK)-enabled rover with accuracies of 2 cm in the horizontal direction and 5 cm in the vertical direction. Additionally, four sets of custom reference panels, shown as five-pointed stars in
Figure 1, were placed along the UAV flight path. Their properties are detailed in
Section 2.2.
2.2. Instrumentation and Data Acquisition
In this study, a Matrice M210 RTK UAV (DJI, Shenzhen, Guangdong, China) equipped with an Altum five-band multispectral camera (MicaSense, AgEagle Aerial Systems, Wichita, KS, USA) and a DLS-2 downwelling light sensor was employed for aerial image dataset collection. The Altum camera is one of the most popular cameras for agricultural applications such as plant counting and advanced vegetation research. The Altum camera has five separate detectors, each characterized by a specific center wavelength and full-width at half-maximum (FWHM) bandwidth: 475 nm and 20 nm (blue), 560 nm and 20 nm (green), 668 nm and 10 nm (red), 717 nm and 10 nm (red edge), and 840 nm and 40 nm (near-infrared, NIR). The onboard DLS sensor recorded the in-flight ambient irradiance at each image capture moment. Moreover, the manufacturer provided a small Calibrated Reference Panel (CRP), a 10 cm × 10 cm homogeneous scattering panel with an approximate reflectance of 0.5.
Additionally, four sets of self-made reference panels, each consisting of four 60 cm × 60 cm wooden panels individually painted with matte black, light gray, dark gray, and white paints, were employed in this experiment. The custom reference panels were designed with a larger size, allowing them to be captured at flying altitude and then used for calibration. Reflectance measurements for these panels were taken using the Altum camera and the CRP. This process entailed placing the CRP next to a panel under the same lighting, capturing an overhead image, and converting the raw image to reflectance data following MicaSense’s recommended protocol [
25]. The reflectance values of each custom panel were calculated from a central 50 × 50-pixel region. In this study, it was observed that the white panels, with a reflectance of approximately 0.9, were prone to overexposure, which negatively affected the performance of the subsequently established conversion models. Thus, only the light gray, dark gray, and black panels were employed as ground targets for calibration.
Figure 2 displays the average reflectance properties across all bands for the four sets of reference panels.
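As a concrete illustration of this panel characterization step, the minimal sketch below assumes the overhead capture has already been converted to a per-band radiance array following the MicaSense protocol; the function name panel_reflectance, the window size argument, and the region coordinates are hypothetical placeholders rather than part of the released workflow.

```python
import numpy as np

def panel_reflectance(radiance_band: np.ndarray,
                      crp_center: tuple, panel_center: tuple,
                      crp_reflectance: float = 0.5, win: int = 50) -> float:
    """Estimate the reflectance of a custom panel from one radiance band.

    radiance_band   -- 2D radiance image (one band) containing CRP and custom panel
    crp_center      -- (row, col) centre of the calibrated reference panel (CRP)
    panel_center    -- (row, col) centre of the custom panel
    crp_reflectance -- known CRP reflectance (approximately 0.5)
    win             -- side length of the central sampling window in pixels
    """
    def mean_window(img, center, size):
        r, c = center
        h = size // 2
        return float(img[r - h:r + h, c - h:c + h].mean())

    # Radiance-to-reflectance factor derived from the CRP observed in the same image.
    factor = crp_reflectance / mean_window(radiance_band, crp_center, win)
    # Apply the factor to the mean radiance of the custom panel's central region.
    return factor * mean_window(radiance_band, panel_center, win)
```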
The UAV flight was conducted on 14 June 2022, from 11:20 to 12:00 under variable illumination conditions.
Figure 3 illustrates the trend of solar irradiance changes at the 560 nm wavelength recorded by the DLS sensor during the entire UAV data collection period. The irradiance fluctuated, showing several sharp peaks and valleys, indicating that the sun was intermittently obscured by moving clouds. The UAV operated at a flight altitude of 50 m, maintaining an 85% image overlap in both the forward and side directions. Additionally, other imaging parameters, including exposure time, ISO, and shutter speed, were set automatically.
After the UAV data collection, the reflectance of several potato plants was measured at close range using the same Altum camera to provide ground truth reflectance measurements of the plants for evaluation of the method. The operator positioned the camera directly above the crop at a distance of around 1 m at each predetermined sampling point, where the CRP was placed adjacent to the crop for calibration. Images of 133 potato plants in the stripcropping field and 130 plants in the monoculture field were captured. The coordinates of these sampling points were recorded prior to the experiment, and the points were evenly distributed across the field. Each captured image was converted into a reflectance image using the CRP. The reflectance values were derived by averaging the readings from a central -pixel region of the potato canopy in each image.
2.3. Radiometric Block Adjustment Method
2.3.1. Methodology Overview
Typically, UAVs capture consecutive images with a certain spatial overlap, effectively imaging the same area from multiple viewpoints. These overlapping regions in the UAV imagery were utilized to homogenize the radiometric differences between sequential images, thereby correcting for illumination differences in the resulting 2D maps or 3D models. This is done by detecting tie points in neighboring images and comparing the radiance values of these tie points in both images to measure the differences. In addition, placing multiple sets of reference panels with known reflectance within the experimental field provides baselines for accurate radiometric calibration. However, these reference panels are not detectable in every image and are visible in only a few images across the dataset. To achieve accurate radiometric correction of the entire dataset under varying lighting conditions, a strategy was developed that combines occasionally observed reference panels with tie points from the overlapping regions between adjacent images.
An overview of the workflow for the optimized radiometric block adjustment method developed in this study is illustrated in
Figure 4.
Generally, the input for this algorithm consists of a collection of multispectral radiance images, which contains a small number of images capturing reference panels and numerous consecutive images with overlapping regions. The preliminary processing step primarily aims to extract and select appropriate points and their values. For pairs of spatially adjacent images, tie points are extracted using the Metashape Python package (Agisoft LLC, St. Petersburg, Russia). Subsequently, tie points situated within vegetation regions are selected for further analysis. For images capturing reference panels, a -pixel rectangle centered on the panel is selected to extract average pixel values. These derived values are subsequently imported into the matrix formulation stage to construct the equation systems. This stage aims to eliminate outlier values and minimize the number of linear equations derived from tie points. The resulting equation system is then treated as a quadratic programming optimization problem, which is solved using the interior-point method implemented in the MindOpt solver developed by the Alibaba Cloud team. Ultimately, the algorithm produces calibration coefficients for each image in the dataset, converting the input radiance images into well-calibrated reflectance images.
In addition to this overview, detailed descriptions of each stage are provided in the following sections.
2.3.2. Mathematical Principle
In constructing the reference point equations and the tie point equations, we follow the notation in Kizel et al. (2018) [22]. Let $I_i \in \mathbb{R}^{W \times H \times P}$ represent a multispectral aerial radiance image, where $W$, $H$, and $P$ denote the image's width, height, and number of spectral bands, respectively. The multispectral dataset containing $S$ images is denoted by $\mathcal{D} = \{I_1, I_2, \ldots, I_S\}$. To mitigate the impact of variable illumination, two relative parameters $a_{i,\lambda}$ and $b_{i,\lambda}$ were introduced into the conventional ELM-based reflectance conversion equation for each individual image. The adjusted equation is given as follows:

$$R_{\lambda}(x, y) = a_{i,\lambda}\, L_{i,\lambda}(x, y) + b_{i,\lambda} \quad (1)$$

where $L_{i,\lambda}(x, y)$ is the radiance value at pixel $(x, y)$ of image $I_i$ at wavelength $\lambda$, and $R_{\lambda}(x, y)$ is the corresponding reflectance.
The first type of equation is constructed from the points extracted from the reference panels with known reflectance. Because both the reflectance and the extracted radiance values of those panels are known, the corresponding equations can be written directly. In this study, a total of 12 panels were used, and each panel provides one equation according to Equation (1), as follows:

$$\rho_{i,\lambda} = a_{j,\lambda}\, L_{j,\lambda}(x_i, y_i) + b_{j,\lambda} \quad (2)$$
where $\rho_{i,\lambda}$ is the reflectance value of the points on the $i$th panel at wavelength $\lambda$, and $(x_i, y_i)$ is the pixel location of the panel in image $I_j$. Let $\rho_i$ represent $\rho_{i,\lambda}$, and let $L_i$ denote $L_{j,\lambda}(x_i, y_i)$. Equation (2) can be simplified to the following:

$$\rho_i = a_{j,\lambda}\, L_i + b_{j,\lambda} \quad (3)$$

The parameters $a_{j,\lambda}$ and $b_{j,\lambda}$ are estimated based on the panels, where $I_j$ denotes an aerial radiance image that captures reference panels. To give a general form, let $n$ represent the total number of points extracted from images that capture reference panels for the dataset $\mathcal{D}$. The equation system $F_{\mathrm{ref}}$ is thus given by the following:

$$\rho_i = a_{j(i),\lambda}\, L_i + b_{j(i),\lambda}, \quad i = 1, 2, \ldots, n \quad (4)$$

where $j(i)$ is the index of the image in which the $i$th reference point is observed.
The second type of equation is constructed based on the assumption that the reflectance of the same ground objects remains constant across all images. According to this assumption, the reflectance difference of the same tie point in different images is expected to be close to zero. Let $I_j$ and $I_k$ denote two overlapping radiance images $j$ and $k$, respectively. Let $L_{j,\lambda}(x_l, y_l)$ and $L_{k,\lambda}(x'_l, y'_l)$ represent the radiance values of the $l$th tie point, located in the overlapping area between radiance images $j$ and $k$, respectively, at wavelength $\lambda$. The reflectance of each pair of tie points is supposed to be the same, thus providing the following equation:

$$R_{j,\lambda}(x_l, y_l) = R_{k,\lambda}(x'_l, y'_l) \quad (5)$$

where $R_{j,\lambda}(x_l, y_l)$ represents the reflectance value of tie point $l$ in image $j$, and $R_{k,\lambda}(x'_l, y'_l)$ denotes the reflectance value of the same tie point in image $k$.
Then, by substituting Equation (1) into Equation (5), the second type of equation is formulated as follows:

$$a_{j,\lambda}\, L_{j,\lambda}(x_l, y_l) + b_{j,\lambda} - a_{k,\lambda}\, L_{k,\lambda}(x'_l, y'_l) - b_{k,\lambda} = 0 \quad (6)$$
Let $m$ denote the number of tie points extracted from overlapping images for the entire dataset $\mathcal{D}$. The linear equation system $F_{\mathrm{tie}}$ is subsequently established and is formulated as follows:

$$a_{j_t,\lambda}\, L_{j_t,\lambda}(x_t, y_t) + b_{j_t,\lambda} - a_{k_t,\lambda}\, L_{k_t,\lambda}(x'_t, y'_t) - b_{k_t,\lambda} = 0, \quad t = 1, 2, \ldots, m \quad (7)$$

where the subscripts $j_t$ and $k_t$ represent the indices of the overlapping images for the $t$th pair of tie points, and $(x_t, y_t)$ and $(x'_t, y'_t)$ denote the pixel locations of the $t$th pair of tie points in the two images.
Let $\mathbf{c}$ represent the vector of calibration coefficients of all the images in dataset $\mathcal{D}$, which is given by the following:

$$\mathbf{c} = \left[a_{1,\lambda},\, b_{1,\lambda},\, a_{2,\lambda},\, b_{2,\lambda},\, \ldots,\, a_{S,\lambda},\, b_{S,\lambda}\right]^{T} \quad (8)$$

The matrix forms of $F_{\mathrm{ref}}$ and $F_{\mathrm{tie}}$ can be formulated as follows:

$$A_{\mathrm{ref}}\, \mathbf{c} = \mathbf{y}_{\mathrm{ref}} \quad (9)$$

$$A_{\mathrm{tie}}\, \mathbf{c} = \mathbf{0} \quad (10)$$

where $A_{\mathrm{ref}}$ and $A_{\mathrm{tie}}$ denote the coefficient matrices of $\mathbf{c}$ in $F_{\mathrm{ref}}$ and $F_{\mathrm{tie}}$, respectively, while $\mathbf{y}_{\mathrm{ref}}$ and $\mathbf{0}$ correspond to the right-hand side vectors of $F_{\mathrm{ref}}$ and $F_{\mathrm{tie}}$. To better illustrate the specific form of the above matrices, an example is provided as follows.
Assume that a reference panel can be identified in the second image; the corresponding row in $A_{\mathrm{ref}}$ is

$$\left[0,\; 0,\; L_i,\; 1,\; 0,\; 0,\; \ldots,\; 0\right] \quad (11)$$

Assume that a pair of tie points can be recognized in the first and third images; the corresponding row in $A_{\mathrm{tie}}$ is

$$\left[L_{1,\lambda}(x_t, y_t),\; 1,\; 0,\; 0,\; -L_{3,\lambda}(x'_t, y'_t),\; -1,\; 0,\; \ldots,\; 0\right] \quad (12)$$
The calibration parameters are estimated by solving an optimization problem, and the formulation is given as follows:

$$\hat{\mathbf{c}} = \arg\min_{\mathbf{c}} \; w\, \left\| A_{\mathrm{ref}}\, \mathbf{c} - \mathbf{y}_{\mathrm{ref}} \right\|^{2} + \left\| A_{\mathrm{tie}}\, \mathbf{c} \right\|^{2} \quad (13)$$

The variable $w$ represents the weight assigned to the reference panel points. In practice, the number of tie point equations is significantly larger than the number of reference point equations, introducing bias into the obtained result. To keep the balance between the two types of equations, we assign higher weights to the reference point equations to partially mitigate this issue. In addition, the actual reflectance of plants is expected to lie between zero and one. To ensure that the final obtained reflectance falls within the correct range, we added a constraint to the optimization problem, referring to [22]:

$$0 \leq a_{i,\lambda}\, L_{i,\lambda}(x, y) + b_{i,\lambda} \leq 1 \quad (14)$$

Let $\mathbf{L}_{i,\lambda}$ denote the radiance values of all the pixels in each image within the dataset. In summary, by applying the constraint to Equation (13), the complete mathematical form of the radiometric block adjustment method can be described as follows:

$$\hat{\mathbf{c}} = \arg\min_{\mathbf{c}} \; w\, \left\| A_{\mathrm{ref}}\, \mathbf{c} - \mathbf{y}_{\mathrm{ref}} \right\|^{2} + \left\| A_{\mathrm{tie}}\, \mathbf{c} \right\|^{2}, \quad \text{subject to } 0 \leq a_{i,\lambda}\, \mathbf{L}_{i,\lambda} + b_{i,\lambda} \leq 1 \;\; \text{for all } i \quad (15)$$
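To make the structure of Equations (13)–(15) concrete, the following sketch assembles the two equation systems for a single band and solves the weighted least-squares problem with NumPy; the reflectance-range constraint of Equation (14) is omitted here and is instead handled by the quadratic programming solver described in Section 2.3.5. All function and variable names are illustrative.

```python
import numpy as np

def solve_rba_unconstrained(ref_obs, tie_obs, n_images, w=10.0):
    """Weighted least-squares sketch of the radiometric block adjustment.

    ref_obs  -- list of (img_idx, radiance, reflectance) for reference panel points
    tie_obs  -- list of (img_j, radiance_j, img_k, radiance_k) for tie point pairs
    n_images -- number of images S; unknowns are [a_1, b_1, ..., a_S, b_S]
    w        -- weight assigned to the reference point equations
    """
    rows, rhs, weights = [], [], []

    # Reference point equations: a_j * L + b_j = rho  (Equation 4)
    for j, radiance, reflectance in ref_obs:
        row = np.zeros(2 * n_images)
        row[2 * j], row[2 * j + 1] = radiance, 1.0
        rows.append(row); rhs.append(reflectance); weights.append(w)

    # Tie point equations: a_j * L_j + b_j - a_k * L_k - b_k = 0  (Equation 7)
    for j, L_j, k, L_k in tie_obs:
        row = np.zeros(2 * n_images)
        row[2 * j], row[2 * j + 1] = L_j, 1.0
        row[2 * k], row[2 * k + 1] = -L_k, -1.0
        rows.append(row); rhs.append(0.0); weights.append(1.0)

    A = np.asarray(rows)
    y = np.asarray(rhs)
    sw = np.sqrt(np.asarray(weights))          # apply weights via row scaling
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coeffs.reshape(n_images, 2)         # one (a_i, b_i) pair per image
```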
2.3.3. Preliminary Processing
The preliminary processing step involves selecting reference points, extracting tie points, and filtering out non-vegetation points:
Reference points selection: The identification of reference points is performed on images capturing reference panels. In this study’s flight campaign, reference panels were captured in 12 images. For each reference image, -pixel rectangles centered on light gray, dark gray, and black panels were utilized to obtain average values, which served as the observed reference point values. The white panels were excluded, as they are prone to overexposure that results in the loss of valid information. The above operation was performed manually.
Tie points extraction and filtering out non-vegetation points: This step is performed on each pair of images with overlapping regions. Tie points are extracted automatically via the Metashape Python package, and their corresponding coordinates are recorded. The operation is executed on a band-by-band basis. Subsequently, tie points located in vegetation areas are selected, since the focus of this study is plant analysis and emphasizing crop-related information helps enhance the quality of the analysis.
Figure 5 shows the workflow of selecting proper tie points located in the vegetation area. The normalized difference vegetation index (NDVI) is utilized to help effectively distinguish between vegetation and non-vegetation areas due to its sensitivity to chlorophyll content [
26]; the NDVI formula is as follows:

$$\mathrm{NDVI} = \frac{R_{\mathrm{NIR}} - R_{\mathrm{red}}}{R_{\mathrm{NIR}} + R_{\mathrm{red}}}$$

where $R_{\mathrm{NIR}}$ and $R_{\mathrm{red}}$ represent the near-infrared and red bands, respectively.
The first step in calculating the NDVI image is to align one image from the image pair, following the tutorial provided by the manufacturer MicaSense [
25]. The alignment process involves three key steps: unwarping the images via built-in lens calibration, calculating affine transformation matrices to align each band with a reference band, and aligning and cropping the images to exclude non-overlapping pixels across all bands. Notably, once the alignment transformation matrices are determined, they can be applied to additional images from the same flight, significantly simplifying the calculation. Once the NDVI image is acquired, vegetation segmentation can be performed. In this study, the observed potato plants were at the stolon initiation stage, and the sensor used was an Altum multispectral camera, resulting in NDVI values for the potatoes generally above 0.45. Distinguishing between potato and non-vegetation areas on the NDVI map was therefore relatively straightforward. We set the threshold to 0.6 to exclude non-vegetation points more strictly, with the segmentation results illustrated in
Figure 6c. All extracted tie points can then be projected into the NDVI image using the calculated affine transformation matrices. Afterward, tie points with NDVI values above 0.6 were selected, thereby filtering out non-vegetation areas. Ultimately, the selected tie points were projected back into the original single-channel images.
Figure 6 illustrates the result of excluding soil and other non-vegetation areas using an image as an example.
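A minimal sketch of this NDVI-based filtering is given below, assuming the bands of each image have already been aligned into a common frame so that tie point coordinates can be used to index the NDVI array directly; the function names are illustrative.

```python
import numpy as np

def compute_ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with zero where the denominator vanishes."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

def ndvi_filter_tie_points(tie_points, nir, red, threshold=0.6):
    """Keep only tie points that fall on vegetation pixels.

    tie_points -- iterable of (row, col) coordinates in the aligned image frame
    threshold  -- NDVI value above which a pixel is treated as vegetation
    """
    ndvi = compute_ndvi(nir, red)
    return [(r, c) for r, c in tie_points if ndvi[int(r), int(c)] > threshold]
```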
2.3.4. Matrix Formulation
In certain instances, outlier values may appear within the image due to pixels that are either saturated or damaged. In particular, this issue becomes more common with dramatic changes in illumination during the flight. Consequently, removing outliers is crucial to ensure the reliability of the results. Let $\mu_{i,\lambda}$ and $\sigma_{i,\lambda}$ denote the mean value and standard deviation, respectively, of the radiance values of all tie points in band $\lambda$ of image $I_i$. The global outlier values can be removed using the following criterion:

$$\left| L_{i,\lambda}(x, y) - \mu_{i,\lambda} \right| > k\, \sigma_{i,\lambda}$$

The parameter $k$ is used to quantify the deviation of outliers from the mean value, and we set the value of $k$ empirically to 3.
After global outlier detection, some outliers may not be apparent or may be located in locally anomalous regions. The Local Outlier Factor (LOF) method can more precisely detect outliers by analyzing the behavior of data points in their local context. This approach improves the accuracy and comprehensiveness of outlier detection. Consequently, it complements the shortcomings of global outlier detection, ensuring cleaner and more reliable data. In this study, the scikit-learn package was used for LOF detection, with the number of neighbors set to 20 and the contamination parameter set to 0.05.
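The two filtering stages can be sketched as follows, combining the global criterion above with scikit-learn's LocalOutlierFactor (20 neighbors, 5% contamination); the array layout and function name are illustrative.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def remove_outliers(radiances: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Filter tie point radiance pairs for one band of one image pair.

    radiances -- array of shape (n_points, 2): radiance in image j and image k
    k         -- number of standard deviations for the global criterion
    """
    # Global criterion: discard points deviating more than k*sigma from the mean.
    mean, std = radiances.mean(axis=0), radiances.std(axis=0)
    keep = np.all(np.abs(radiances - mean) <= k * std, axis=1)
    radiances = radiances[keep]

    # Local criterion: Local Outlier Factor on the remaining points.
    lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
    labels = lof.fit_predict(radiances)      # -1 marks local outliers
    return radiances[labels == 1]
```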
It is worth noting that the quantity of extracted tie points for the dataset $\mathcal{D}$ substantially exceeds the number of reference points. This imbalance in the number of tie point equations relative to reference point equations can induce bias in the derived solution. In a large system of equations, an excessive number of tie points can introduce significant constraints, potentially rendering the system unsolvable. Furthermore, an imbalance between the tie point and reference point equations may result in the reference points having minimal impact on the overall solution. However, since the reference points provide ground truth data, their influence in the system should be increased to obtain more accurate and reliable solutions. To mitigate this issue, we adopted a twofold strategy: (1) selecting only two feature points fitted from the tie points, as suggested by Kizel et al. (2018) [
22], and (2) assigning higher weights to the reference point equations.
Figure 7 illustrates the methodology for selecting two feature points to represent all tie points across paired images. Initially, a regression line is fitted using the radiance values from all tie points between the image pair. Subsequently, as illustrated in
Figure 7b, the maximum and minimum points on this regression line are identified and selected as feature points. These two feature points are then utilized to construct the tie point equations. This strategy effectively reduces the number of tie point equations, aiming to achieve a balance between the two equation types, and significantly enhances the solution efficiency. Furthermore, the settings and analysis of the weight $w$ in Equation (15) are detailed in
Section 3.4.
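For illustration, the feature-point selection for one image pair can be sketched as follows, where a first-degree polynomial is fitted to the (image j, image k) radiance pairs and the extreme points of the fitted line are returned; the names are illustrative.

```python
import numpy as np

def select_feature_points(radiances_j: np.ndarray, radiances_k: np.ndarray):
    """Reduce all tie points of an image pair to two feature points.

    radiances_j, radiances_k -- radiance values of the same tie points in
                                images j and k (1D arrays of equal length)
    Returns two (L_j, L_k) pairs lying on the fitted regression line.
    """
    # Fit L_k = slope * L_j + intercept over all tie points of the pair.
    slope, intercept = np.polyfit(radiances_j, radiances_k, deg=1)

    # Take the extreme points of the fitted line over the observed radiance range.
    lj_min, lj_max = radiances_j.min(), radiances_j.max()
    return [(lj_min, slope * lj_min + intercept),
            (lj_max, slope * lj_max + intercept)]
```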
2.3.5. Solution and Optimization
In this section, we introduce the optimization method used to solve the equation system and the corresponding software package. Equation (15) can be converted into a constrained quadratic programming formulation.
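For illustration, expanding the squared norms of Equation (15) and dropping the constant term gives one possible standard quadratic programming form, where the constraint matrix $D$ stacks the per-pixel radiance rows implied by Equation (14):

$$\min_{\mathbf{c}} \;\; \tfrac{1}{2}\,\mathbf{c}^{T} P\, \mathbf{c} + \mathbf{q}^{T}\mathbf{c}, \qquad P = 2\left(w\, A_{\mathrm{ref}}^{T} A_{\mathrm{ref}} + A_{\mathrm{tie}}^{T} A_{\mathrm{tie}}\right), \qquad \mathbf{q} = -2\, w\, A_{\mathrm{ref}}^{T}\, \mathbf{y}_{\mathrm{ref}}, \qquad \text{subject to } \mathbf{0} \le D\,\mathbf{c} \le \mathbf{1}$$

In this form, $P$ is positive semidefinite, so the problem is a convex quadratic program that interior-point solvers handle efficiently.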
The MindOpt optimization solver developed by Alibaba Cloud Team (2021) [
27] was used to help find the optimal solution. The method was run with Python 3.10 on Windows 10. The solution yielded a pair of calibration coefficients for each radiance image within the multispectral dataset.
2.4. Reference Methods
2.4.1. Empirical Line Method
The ELM [
10] is the most common way to achieve surface reflectance conversion from raw data. By establishing a linear relationship between the radiance values of reference panels as measured by the camera and their corresponding reflectance, the ELM model is described for each wavelength using Equation (19):

$$R_{\lambda}(x, y) = a_{\lambda}\, L_{\lambda}(x, y) + b_{\lambda} \quad (19)$$

where $R_{\lambda}(x, y)$ is the reflectance value at pixel $(x, y)$ within wavelength $\lambda$, $L_{\lambda}$ denotes the radiance image, and $a_{\lambda}$ and $b_{\lambda}$ are the absolute calibration coefficients.
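A minimal sketch of fitting and applying the ELM for one band with NumPy is given below, assuming the mean panel radiances and their known reflectance values are available; the names are illustrative.

```python
import numpy as np

def fit_elm(panel_radiance: np.ndarray, panel_reflectance: np.ndarray):
    """Fit the per-band ELM coefficients from reference panel observations.

    panel_radiance    -- mean radiance of each panel in one band
    panel_reflectance -- corresponding known reflectance of each panel
    """
    a, b = np.polyfit(panel_radiance, panel_reflectance, deg=1)
    return a, b

def apply_elm(radiance_band: np.ndarray, a: float, b: float) -> np.ndarray:
    """Convert a radiance band to reflectance with the fitted coefficients."""
    return a * radiance_band + b
```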
2.4.2. DLS-CRP Method
The manufacturer MicaSense offers a sensor-corrected solution (DLS-CRP) to reduce the impact of variable illumination. A brief introduction of this method is given as follows; for more details, refer to [
28]. An image of the CRP is captured before and after the flight to provide a baseline reflectance value. The correction coefficient $C_{\lambda}$ is then calculated as follows:

$$C_{\lambda} = \frac{\rho_{\mathrm{CRP},\lambda}\; E_{\mathrm{CRP},\lambda}}{\overline{L}_{\mathrm{CRP},\lambda}}$$

where $\overline{L}_{\mathrm{CRP},\lambda}$ represents the mean radiance value of the CRP at the $\lambda$ waveband, $\rho_{\mathrm{CRP},\lambda}$ denotes the reflectance of the CRP, and $E_{\mathrm{CRP},\lambda}$ refers to the irradiance value recorded by the DLS at the specific time the CRP was captured.
For the subsequently collected images, the DLS records the irradiance value at the moment of capture and embeds it into the image metadata. The DLS-corrected reflectance image is then derived as follows:

$$R_{\lambda}(x, y) = \frac{C_{\lambda}\; L_{\lambda}(x, y)}{E_{\lambda}}$$

where $L_{\lambda}$ represents the UAV-collected radiance image, $E_{\lambda}$ denotes the corresponding DLS-recorded irradiance, and $R_{\lambda}$ represents the DLS-corrected converted reflectance image.
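Assuming the DLS irradiance values have already been read from the image metadata, the correction sketched from the two equations above could look as follows; the names are illustrative.

```python
import numpy as np

def dls_crp_coefficient(crp_radiance_mean: float, crp_reflectance: float,
                        irradiance_at_crp: float) -> float:
    """Correction coefficient C_lambda derived from the CRP capture."""
    return crp_reflectance * irradiance_at_crp / crp_radiance_mean

def dls_crp_correct(radiance_band: np.ndarray, c_lambda: float,
                    irradiance_at_capture: float) -> np.ndarray:
    """DLS-corrected reflectance for one radiance band."""
    return c_lambda * radiance_band / irradiance_at_capture
```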
2.4.3. DLS-LCRP Method
For the DLS-CRP method, the image of the CRP is captured on the ground, meaning that the camera is positioned at a close distance from the panel. Under this configuration, the primary component of radiance reaching the camera is radiance reflected directly from the panel, without accounting for the effects of atmospheric scattering. In practice, UAVs usually fly at altitudes of 30 to 120 m, and atmospheric scattering does affect the accuracy of the radiometrically calibrated outputs. In this study, at the beginning of the flight, an image of the custom light gray panel was captured at a flying altitude of 50 m to calculate the initial correction coefficient $C_{\lambda}$. The subsequent reflectance image conversion workflow was the same as in
Section 2.4.2.
2.5. Evaluation
Evaluation 1: The optimized radiometric block adjustment method generates a pair of conversion coefficients for each image, representing the slope and intercept, with the slope being the dominant factor in the conversion equation. We first illustrate the trend between the slope of the transformation parameters for each image and the corresponding DLS-measured irradiance.
Evaluation 2: The performance of the optimized radiometric block adjustment method is visually evaluated by examining the generated orthomosaics. Visual evaluation offers an intuitive understanding of the differences between the various methods, focusing primarily on the homogeneity of the reflectance orthomosaics across different bands. Additionally, the visual assessment can provide a preliminary indication of the variations in reflectance values. To better analyze the performance of the optimized method, a total of five methods are compared: (1) the ELM-converted orthomosaic; (2) the DLS-CRP corrected orthomosaic; (3) the DLS-LCRP corrected orthomosaic; (4) the optimized radiometric block adjustment (RBA) corrected orthomosaic, produced using the above-described method without filtering out non-vegetation points; and (5) the RBA-Plant generated orthomosaic. For brevity, the optimized method involving the selection of tie points in vegetation regions is referred to as RBA-Plant in the subsequent content.
Evaluation 3: Quantitative assessment.
Evaluation 3.1: The first perspective of the quantitative assessment is to evaluate the performance of the correction for image pairs. After correction, the values of tie points located in overlapping regions between image pairs are expected to be similar. Image pairs with side overlap were used because their irradiance changes are more noticeable, allowing for better observation of the correction performance. The normalized root mean squared deviation (NRMSD) was used to evaluate the difference between image pairs, and its definition is as follows:

$$\mathrm{NRMSD} = \frac{\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left(R_{i}^{j} - R_{i}^{k}\right)^{2}}}{R_{\max} - R_{\min}}$$

where $R_{i}^{j}$ and $R_{i}^{k}$ denote the reflectance of tie point $i$ in images $j$ and $k$, respectively, $n$ represents the number of tie points between the image pair, and $R_{\max}$ and $R_{\min}$ represent the maximum and minimum reflectance values from all images, respectively.
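A direct implementation of this indicator is sketched below with illustrative names:

```python
import numpy as np

def nrmsd(refl_j: np.ndarray, refl_k: np.ndarray,
          refl_min: float, refl_max: float) -> float:
    """Normalized RMS deviation between the reflectance of shared tie points.

    refl_j, refl_k     -- reflectance of the same tie points in images j and k
    refl_min, refl_max -- minimum and maximum reflectance over all images
    """
    rmsd = np.sqrt(np.mean((refl_j - refl_k) ** 2))
    return float(rmsd / (refl_max - refl_min))
```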
Evaluation 3.2: The second aspect of quantitative analysis focuses on radiometric accuracy. As mentioned in
Section 2.2, the reflectance of a total of 263 plants was measured on the ground and used as ground truth reflectance. For ease of comparison, the GPS locations of those plants were recorded so that they can be located on the orthomosaic. Let $\hat{R}_{i,\lambda}$ denote the reflectance value at the $\lambda$ band for plant $i$, as extracted from the corrected orthomosaics, and let $R_{i,\lambda}$ represent the ground truth reflectance of plant $i$ at the $\lambda$ waveband. The root mean squared error (RMSE) was used to evaluate the difference between $\hat{R}_{i,\lambda}$ and $R_{i,\lambda}$, representing radiometric accuracy. Its formula is given as follows:

$$\mathrm{RMSE}_{\lambda} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{R}_{i,\lambda} - R_{i,\lambda}\right)^{2}}$$

where $N$ is the number of sampled plants.
Evaluation 3.3: The last aspect of the quantitative evaluation is the uniformity of the generated orthomosaic. To this end, the Coefficient of Variation (CV) was utilized to assess the consistency of the reflectance values among the sampled plants. The formula is given as follows:

$$\mathrm{CV} = \frac{\sigma}{\mu}$$

where $\sigma$ signifies the standard deviation of the reflectance values extracted from the orthomosaic across all $N$ sampled plants in the field, and $\mu$ denotes the mean reflectance of all sampled plants. Consequently, lower values of CV indicate a higher level of uniformity for the corrected orthomosaic.
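Both remaining indicators can be computed directly, as sketched below with illustrative names:

```python
import numpy as np

def rmse(orthomosaic_refl: np.ndarray, ground_truth_refl: np.ndarray) -> float:
    """Radiometric accuracy: RMSE between orthomosaic-derived and ground truth reflectance."""
    return float(np.sqrt(np.mean((orthomosaic_refl - ground_truth_refl) ** 2)))

def coefficient_of_variation(sampled_refl: np.ndarray) -> float:
    """Uniformity: standard deviation of the sampled plant reflectance divided by its mean."""
    return float(np.std(sampled_refl) / np.mean(sampled_refl))
```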
It is critical to acknowledge that our analysis is based on the green, red, red edge, and NIR bands. The exclusion of the blue channel is justified by two main factors: first, plants predominantly absorb blue light for photosynthesis, resulting in a significant decrease in the signal captured from vegetation areas in the blue channel. Second, under cloudy conditions, the proportion of scattered light increases, with blue light constituting a significant portion of this scattered light, which introduces additional noise into the blue channel [
8] under such conditions. Given the consequent low signal-to-noise ratio in the blue waveband, excluding the blue band from our analysis was deemed necessary.