*Article* **Dynamic Influence Elimination and Chlorophyll Content Diagnosis of Maize Using UAV Spectral Imagery**

#### **Lang Qiao <sup>1</sup> , Dehua Gao <sup>1</sup> , Junyi Zhang <sup>2</sup> , Minzan Li <sup>1</sup> , Hong Sun <sup>1,\*</sup> and Junyong Ma <sup>3</sup>**


Received: 28 June 2020; Accepted: 13 August 2020; Published: 17 August 2020

**Abstract:** To improve the diagnostic accuracy of chlorophyll content in the maize canopy, remote sensing images of the maize canopy at multiple growth stages were acquired using an unmanned aerial vehicle (UAV) equipped with a spectral camera. Dynamic influencing factors in the canopy multispectral images were removed using different image segmentation methods, and the chlorophyll content of maize in the field was diagnosed. Crop canopy spectral reflectance, coverage, and texture information were combined to compare the segmentation methods, and a chlorophyll content diagnostic model covering the whole growth period was constructed on the basis of each method. Results showed that the segmentation methods differ in their extraction of maize canopy parameters. The wavelet segmentation method outperformed the threshold and ExG index segmentation methods: it segments the soil background, reduces the texture complexity of the image, and achieves satisfactory results. Maize canopy multispectral band reflectance and vegetation indices were extracted on the basis of the different segmentation methods, and a partial least squares regression algorithm was used to construct the whole-growth-period diagnostic model. Model accuracy was low when the image background was not removed (Rc<sup>2</sup> (the determination coefficient of the calibration set) = 0.5431, RMSEF (the root mean squared error of forecast) = 4.2184, MAE (the mean absolute error) = 3.24; Rv<sup>2</sup> (the determination coefficient of the validation set) = 0.5894, RMSEP (the root mean squared error of prediction) = 4.6947, and MAE = 3.36). The diagnostic accuracy of the chlorophyll content improved when the maize canopy was extracted by segmentation; the model based on the wavelet segmentation method achieved the highest accuracy (Rc<sup>2</sup> = 0.6638, RMSEF = 3.6211, MAE = 2.89; Rv<sup>2</sup> = 0.6923, RMSEP = 3.9067, and MAE = 3.19). This research provides a feasible method for crop growth and nutrition monitoring on a UAV platform and has guiding significance for crop cultivation management.

**Keywords:** UAV; crop canopy; multispectral image; chlorophyll content; remote sensing technique

### **1. Introduction**

Chlorophyll content is one of the important indicators reflecting the photosynthetic ability and nutrient status of maize plants [1–3]. Traditional crop chlorophyll diagnosis is mainly carried out by chemical analysis, which requires destructive sampling, takes a long time, and is costly. These conditions do not satisfy the requirements of rapid chlorophyll monitoring of field crops for making management decisions. Exploiting the principles of light absorption and reflectance, spectral analysis, imaging spectroscopy, and other nondestructive technologies have been widely used in crop monitoring [4–8]. Combined with the development of airborne and unmanned aerial vehicle (UAV) platforms [9], imaging spectroscopy with high spatial and temporal resolution has become a preferred method and research topic in farmland estimation owing to its efficiency and noninvasiveness [10–12]. Thus, this article uses a multispectral sensor carried by a UAV to collect maize canopy spectral data in the field and conducts a rapid diagnosis of the chlorophyll content to estimate the growth status and guide field management.

Most current studies on spectral imaging focus on the diagnosis of chlorophyll content [11–13]. The three directions of these studies are the analysis of spectral response [14–16], the quantification and selection of sensitive parameters [17,18], and the optimization of models [19–22] on the basis of visible and near-infrared images. Yu et al. [23] found that the ratio of reflection difference indexes (RRDIs) can eliminate the influence of the crop canopy structure on the spectral reflection characteristics and thereby improve the estimation of chlorophyll content. Gauray et al. [24] collected spectral data of the maize canopy with a multispectral camera and constructed a diagnostic model of chlorophyll content by machine learning; the optimal model determination coefficient (R<sup>2</sup>) was 0.904. The results showed that the combination of an airborne multispectral sensor and machine learning can effectively improve the detection accuracy of chlorophyll content. The above-mentioned studies outline the capability of estimating chlorophyll content with vegetation indices, especially those collected from UAVs in the visible, red edge, and near-infrared bands, which respond sensitively to the physiology and biochemistry of vegetation [14].

Many studies have attempted to improve vegetation indices for crop monitoring. The soil-adjusted vegetation index (SAVI), which reduces the influence of the soil background, has been shown to improve estimation accuracy [25]. Wu et al. [26] found that integrated indexes, such as the transformed chlorophyll absorption reflectance index (TCARI), the modified chlorophyll absorption ratio index (MCARI), and the optimized soil-adjusted vegetation index (OSAVI), are more suitable for chlorophyll estimation than traditional vegetation indexes because they account for many interference factors, such as shadow, soil background, and nonphotosynthetic materials. At the same time, Liu et al. [3] found that, as the maize growth period advances, the crop spectrum exhibits characteristic shifts across growth periods, but vegetation indices based on fixed wavelengths cannot effectively reflect this dynamic shift, which reduces their applicability across growth periods. However, there is no unified mathematical expression that defines all vegetation indices, owing to the complexity of spectral band combinations, instruments, and so on. For UAV applications, Xue et al. [27] surveyed more than 100 vegetation indices and classified their specific applicability into basic vegetation indices, soil-adjusted vegetation indices, vegetation indices sensing in particular spectral regions, and so on. The survey showed that many vegetation indices combining visible and near-infrared bands significantly improved the sensitivity of green vegetation detection.

The influencing factors in crop estimation via spectroscopy and the methods for reducing them have been explored in many studies. During the processing of canopy spectral images of field crops, Qian et al. [22] found that the canopy structure, soil background, and weeds in the spectral image interfere with the spectral reflection characteristics of the crop canopy; thus, methods for chlorophyll content estimation via precise image segmentation and multispectral calibration were proposed. Fei et al. [28] used the canopy spectrum to detect the nitrogen nutrition of maize and found that the ability of the NDRE index to diagnose maize nitrogen gradually increased as soil coverage decreased, indicating that the soil background interferes with the diagnostic ability of this vegetation index. Roosjen et al. [29] found that the crop canopy structure and other external information affect the crop spectral reflectance; the diagnostic ability for the leaf area index and chlorophyll content improved when the PROSAIL model was used to retrieve multiple-angle reflectance data of the crop canopy and reduce external interference. Several factors, such as soil coverage, vegetation canopy geometry, and weeds, interfere with the spectral characteristics of the plant canopy, and these influencing factors are difficult to reduce on the basis of spectral imaging in the field.

We address the primary concern, eliminating the influence of the soil background by image segmentation, to tackle the problems that greatly limit the reliability of spectral imaging technology for crop nutrition diagnosis. Alternative methods of spectral image segmentation include spectral segmentation using vegetation indices [21,22], threshold segmentation based on the spatial gray value distribution [30,31], and learning-based segmentation [32]. Methods based on vegetation indices mainly distinguish green plants from the background according to the reflectance differences between the crop and the background spectra; they are strongly affected by outdoor ambient light, and overly strong or weak ambient light reduces the segmentation quality [17]. Threshold-based segmentation classifies crops and background according to the grayscale characteristics of the image and is computationally simple; however, it is sensitive to noise [33]. Learning-based segmentation mainly includes supervised and unsupervised machine learning classification. These methods have high accuracy and adapt well to changes in the light environment; however, the computation is complex, and the results rely on a large amount of training data [34,35].

Dynamic crop monitoring across growth stages with UAV-based spectral images is more complex than static data collection by manual or farm machinery devices. As crops develop, the canopy is difficult to separate from the soil background because of the changes in the crops and the environment, such as light spots and shadows. Noise removal methods, such as median filtering [36], Gaussian filtering [37], and wavelet transform filtering [38], are therefore used to eliminate the influences of the light environment. Wavelets are well suited to UAV image processing because of their multiresolution, multiscale analytical properties. Fang et al. [39] used the wavelet transform to process a UAV image and decompose the image signals at different scales to eliminate noise, proving that the wavelet transform can remove the noise generated during image acquisition and transmission [39]. A wavelet segmentation algorithm combining the wavelet transform with threshold segmentation has recently been proposed; its mechanism combines the advantages of noise filtering and object classification [40].

These studies highlight the critical need to eliminate dynamic influences across crop growth stages in UAV spectral imaging. Several methods could be selected to improve UAV-based imaging spectroscopy; however, much uncertainty remains about their effects and their influence on the chlorophyll content estimation of the maize canopy across growth stages. We aim to clarify several aspects of chlorophyll content diagnosis of maize at different stages by using UAV spectral imagery, to eliminate the dynamic influences, and to explore their effects on the results. The proposed methods could improve the accuracy of chlorophyll content diagnosis of field maize.

To better remove the dynamic influencing factors of the canopy multispectral images of maize obtained by UAV and to diagnose the chlorophyll content of maize in the field, our specific objectives are as follows: (1) to study the effect of soil background noise on the spectral reflectance of the maize canopy; (2) to evaluate the performance of different segmentation methods in eliminating noise in UAV images; and (3) to establish a diagnostic model of the maize canopy chlorophyll content on the basis of the different segmentation methods. These tasks are conducted to improve the diagnostic accuracy of the maize canopy chlorophyll content and provide a basis for dynamic crop growth monitoring with UAV multispectral images.

#### **2. Materials and Methods**

#### *2.1. Spectral Data Collection*

The experiment was conducted from June to August 2019, at the Dry Farming and Water-saving Agricultural Experimental Station in Hengshui, Hebei Academy of Agricultural Sciences, China. Figure 1 shows the location of the experimental station. The experimental station of Dry Farming and Water-saving Agriculture is located in Central China (latitude 37.9035011244, east longitude 115.7083898640), 22 m above sea level, and covers an area of 853 acres. The region belongs to the continental monsoon climate zone and is warm and semi-arid. The climate is characterized by four distinct seasons with large differences between warm, dry, and wet. The region has an average annual rainfall of 642.1 mm and dry winters.

Figure 1 shows the following six experimental gradients in this study: A1—(nitrogen 0 kg; phosphorus 0 kg)/mu, A2—(nitrogen 6 kg; phosphorus 4 kg)/mu, A3—(nitrogen 12 kg; phosphorus 8 kg)/mu, A4—(nitrogen 24 kg; phosphorus 16 kg)/mu, A5—(nitrogen 36 kg; phosphorus 24 kg)/mu, and A6—(nitrogen 48 kg; phosphorus 32 kg)/mu. Each gradient includes four straw levels: B1—no straw, B2—150 kg/mu, B3—300 kg/mu, and B4—600 kg/mu. The nitrogen test was combined with the straw test, giving 72 experimental plots in total: 24 treatment combinations, each replicated three times. The UAV remote sensing data acquisition was carried out simultaneously with the field data acquisition and sampling. The experiment was divided into the seedling (July 15, 2019), jointing (July 25, 2019), and ear stages (August 1, 2019), according to the growing season of maize.

**Figure 1.** Locations and treatments of the experiments in this study.

#### *2.2. Data Collection*

#### 2.2.1. UAV Image Acquisition

The DJI M600 Pro UAV was used as the loading platform (10 kg UAV weight, 5 kg maximum load weight, 18 min flight time, and 30 m flight height) and equipped with a Red Edge MX multispectral camera for remote sensing data acquisition. The Red Edge MX multispectral camera weighs 232 g, and its resolution is 1280 × 960 pixels (the sensor size is 8.7 cm × 5.9 cm × 4.54 cm). The camera collects spectral images in five bands, namely, blue (B), green (G), red (R), red edge (RE), and near infrared (NIR). The central wavelengths are 475, 560, 668, 717, and 840 nm, with bandwidths of 32, 27, 14, 12, and 57 nm, respectively. The specific parameters are shown in Table 1. The period from 12:00 to 13:00 was selected for UAV data collection, when the solar radiation intensity is stable and the sky is clear and cloudless, to minimize noise in the remote sensing data. The flying height of the UAV was 30 m, the flying speed was 4 m/s, and the image overlap was set to 80%.

**Table 1.** Multispectral camera parameters.

| Parameter | Value |
|---|---|
| Model | Red Edge MX |
| Field of view | 47.2° |
| Weight | 232 g |
| Trigger method | Overlap mode |

#### 2.2.2. Ground Data Acquisition

After the UAV data collection was completed, the ground data collection was performed, which mainly included sampling point selection and GPS position recording. A maize plant was selected as a sample point at the center of each sample area; part of its leaves was placed into a sealed bag and refrigerated, then taken back to the laboratory to determine the chlorophyll content by stoichiometry with a spectrophotometer. First, a 4 cm × 4 cm piece of leaf tissue (excluding veins) was cut from the middle of each leaf. Second, the chopped leaves were submerged in 25 mL of a mixture of acetone and ethanol and soaked in the dark for 24 h. After chlorophyll extraction, the absorbance was measured with a 752 UV spectrophotometer, which was preheated for 30 min to reach internal thermal equilibrium. The sample solution was poured into three cuvettes, and the absorbance was measured at the wavelengths of 645 and 663 nm [2]. The formulas for calculating the total chlorophyll content are as follows:

$$C_a = 12.72A_{663} - 2.59A_{645} \tag{1}$$

$$C_b = 22.88A_{645} - 4.67A_{663} \tag{2}$$

$$C_t = C_a + C_b \tag{3}$$

where *A*<sup>645</sup> and *A*<sup>663</sup> are the absorbance at 645 and 663 nm, respectively; C*<sup>a</sup>* is the chlorophyll a content (mg/L); C*<sup>b</sup>* is the chlorophyll b content (mg/L); and *C<sup>t</sup>* is the total chlorophyll content (mg/L).
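Equations (1)–(3) translate directly into code; the sketch below (function and variable names are illustrative, and the absorbance readings in the example are hypothetical) computes the chlorophyll a, b, and total content from the two absorbance measurements.

```python
def chlorophyll_content(a663, a645):
    """Chlorophyll a, b, and total content (mg/L) from absorbance
    at 663 nm and 645 nm, following Equations (1)-(3)."""
    ca = 12.72 * a663 - 2.59 * a645   # chlorophyll a, Eq. (1)
    cb = 22.88 * a645 - 4.67 * a663   # chlorophyll b, Eq. (2)
    ct = ca + cb                      # total chlorophyll, Eq. (3)
    return ca, cb, ct

# Hypothetical readings from one cuvette
ca, cb, ct = chlorophyll_content(a663=0.50, a645=0.30)
```

In practice, the three cuvette readings per sample would be averaged before applying the conversion.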

#### *2.3. Data Processing*

#### 2.3.1. Data Processing Flow

After the UAV flight, the acquired remote sensing and ground data were processed. The processing flow is shown in Figure 2. Preprocessing mainly includes image stitching and geometric correction, which are completed in the Pix4dmapper software. The Red Edge MX sensor embeds GPS data and light intensity values in the spectral image, and Pix4dmapper imports the spectral images to complete the stitching. During stitching, a vector file of ground control points determined in advance is imported to geometrically correct the stitched image, ensuring that its geographic coordinates are consistent with the real coordinates. The stitched band images are fused, and the regions of interest are extracted using the ENVI 5.2 software.

Radiation calibration of the spectral images was conducted using black and white calibration boards. Radiation calibration converts the gray values of the maize multispectral image, which record light intensity information, into reflectance. The calibration boards were placed in a flat, shadow-free part of the study area before the UAV performed data collection. Equation (4) was used to complete the radiation calibration of the spectral image:

$$\frac{\rho - \rho\_1}{\rho\_2 - \rho\_1} = \frac{DN - DN\_1}{DN\_2 - DN\_1} \tag{4}$$

where *DN*, *DN*1, and *DN*<sup>2</sup> are the digital numbers to show the light intensity values of the crop, black, and white calibration boards, respectively; ρ, ρ1, and ρ<sup>2</sup> are the calculated reflectance values of the crop, black, and white calibration boards, respectively.
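Solving Equation (4) for ρ gives a per-band linear mapping from digital numbers to reflectance. The numpy sketch below illustrates this; the board DN values and reflectances in the example are hypothetical, not the values used in the experiment.

```python
import numpy as np

def empirical_line(dn, dn_black, dn_white, rho_black, rho_white):
    """Map digital numbers to reflectance via Equation (4):
    (rho - rho1)/(rho2 - rho1) = (DN - DN1)/(DN2 - DN1)."""
    scale = (rho_white - rho_black) / (dn_white - dn_black)
    return rho_black + (dn - dn_black) * scale

# Hypothetical board readings for one band of a 2x2 image
band = np.array([[520.0, 4100.0], [1800.0, 3000.0]])
refl = empirical_line(band, dn_black=500.0, dn_white=4200.0,
                      rho_black=0.03, rho_white=0.95)
```

Each of the five bands would be calibrated separately with its own board readings.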

**Figure 2.** Data processing flowchart.

#### 2.3.2. Extraction of Multispectral Vegetation Index

The multispectral camera on the UAV provides spectral images in five bands with wavelengths of 475, 560, 668, 717, and 840 nm. After radiation calibration, the original image with DN values was converted into a reflectance image, and the corresponding vegetation index images were calculated. Twenty types of vegetation indices were selected for the construction of the chlorophyll content diagnostic model on the basis of existing vegetation indices, combined with the characteristics of multispectral images. The selected vegetation indices are shown in Table 2.

#### 2.3.3. Texture Information Extraction

Texture is an expression of image features. In a multispectral image of a crop canopy, texture can represent the structural characteristics of the maize canopy. In this study, the gray distribution statistical method was used to extract texture features from the UAV multispectral image. The standard deviation (σ), smoothness, and entropy were selected to characterize the texture features; they were calculated for the input image following Equations (5)–(7), respectively. The standard deviation measures the average contrast of the image: the smaller the value, the more uniform the values of adjacent pixels. Smoothness is a relative measure of the smoothness of the image brightness, ranging from zero to one; the closer the value is to 0, the smoother the image. Entropy measures the randomness of the image: the smaller the value, the lower the randomness of the pixels, and the more uniform the image.

$$\sigma = \sqrt{\mu_2(Z)} \tag{5}$$

$$\text{Smoothness} = \frac{\sigma^2}{1+\sigma^2} \tag{6}$$

$$\text{Entropy} = -\sum_{i=0}^{L-1} p(Z_i)\log_2 p(Z_i) \tag{7}$$

where *Z* is the gray-level random variable, µ2(*Z*) is its second central moment (variance), *p*(*Z<sup>i</sup>*) is the histogram probability of gray level *Z<sup>i</sup>*, and *L* is the number of gray levels.
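The three texture statistics of Equations (5)–(7) can be computed from a gray-level histogram; a minimal numpy sketch (function name illustrative, 256 gray levels assumed) follows.

```python
import numpy as np

def texture_features(img, levels=256):
    """Standard deviation, smoothness, and entropy of a gray image,
    following Equations (5)-(7)."""
    z = img.astype(float).ravel()
    sigma = z.std()                          # Eq. (5): sqrt of the 2nd central moment
    smooth = sigma**2 / (1.0 + sigma**2)     # Eq. (6)
    p, _ = np.histogram(z, bins=levels, range=(0, levels))
    p = p / p.sum()                          # gray-level probabilities
    p = p[p > 0]                             # drop empty bins (0*log 0 := 0)
    entropy = -np.sum(p * np.log2(p))        # Eq. (7)
    return sigma, smooth, entropy
```

For a perfectly uniform image all three features are zero, matching the interpretation given above.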



#### 2.3.4. Coverage Extraction

The soil background in the crop canopy multispectral image interferes with the spectral characteristics of the crop canopy, and segmenting the crop canopy with different methods yields different results. Coverage indicates the proportion of the crop canopy in the multispectral image and can be used to characterize the segmentation results of the different methods. The calculation is shown in Equation (8), where *V<sup>c</sup>* is the maize canopy coverage, *S<sup>c</sup>* is the number of pixels in the maize canopy vector file area, and *S<sup>a</sup>* is the total number of pixels in the multispectral image. For convenience of calculation, the coverage value is normalized to the range of 0 to 1.

$$V_c = \frac{S_c}{S_a} \tag{8}$$

#### *2.4. Segmentation Method*

#### 2.4.1. Threshold Segmentation

The premise of the threshold segmentation method is to obtain the segmentation interval of the grayscale image. In the multispectral image, the reflectance difference between the maize plant and the soil background reaches its maximum in the near-infrared band, where the grayscale intervals of the two classes are clearly separated; the near-infrared image is therefore used as the image to be segmented. The segmentation threshold extracted by the maximum between-class variance method (Otsu) is used to binarize the near-infrared image and finally obtain the maize canopy vector file.
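The Otsu step can be sketched in a few lines of numpy: build the gray-level histogram, scan all candidate thresholds, and keep the one maximizing the between-class variance. The helper names are illustrative, and 256 gray levels are assumed.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: the gray level maximizing between-class variance."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                      # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(levels))  # cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0   # mask degenerate splits
    return int(np.argmax(sigma_b))

def segment_canopy(nir):
    """Binarize the NIR band: canopy (bright) = True, soil (dark) = False."""
    return nir > otsu_threshold(nir)
```

The resulting boolean mask corresponds to the maize canopy vector file described above.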

#### 2.4.2. ExG Index Segmentation

Green plants have distinctive spectral reflection characteristics in the visible bands: reflectance is high in the green (*g*) band, chlorophyll absorbs strongly in the red (*r*) band, and the blue (*b*) band is sensitive to the chlorophyll concentration. The soil does not share these reflection characteristics, so the plant can be effectively separated from the soil background by exploiting this spectral difference. The excess green vegetation index (ExG), a typical visible-light vegetation index, has been widely used for this purpose:

$$\text{ExG} = 2g - r - b \tag{9}$$
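A minimal sketch of ExG segmentation follows, assuming the standard formulation in which *r*, *g*, and *b* are chromatic coordinates (each band divided by the brightness sum) and the mask threshold (0.1 here) is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def exg_mask(red, green, blue, threshold=0.1):
    """Excess green segmentation: normalize each band by the brightness
    sum, compute ExG = 2g - r - b (Eq. 9), and threshold it."""
    total = red + green + blue
    total[total == 0] = 1e-9          # guard against division by zero
    r, g, b = red / total, green / total, blue / total
    exg = 2.0 * g - r - b
    return exg > threshold            # True = vegetation, False = background

# Toy 1x2 reflectance image: one green (plant) pixel, one gray (soil) pixel
red   = np.array([[0.10, 0.30]])
green = np.array([[0.45, 0.32]])
blue  = np.array([[0.08, 0.28]])
mask = exg_mask(red, green, blue)
```

With chromatic coordinates, ExG simplifies to 3*g* − 1, so pixels whose green share exceeds about one third are classified as vegetation.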

#### 2.4.3. Wavelet Segmentation

During the acquisition of multispectral images by UAV, the acquired images contain a certain degree of noise due to the external environment and equipment differences. Wavelet transform, as a signal processing tool, decomposes the spatial-domain image signal into low-frequency and high-frequency components through scaling and translation operations; noise signals are suppressed, and useful ones are retained. In this study, the near-infrared image is first denoised by wavelet thresholding and then segmented on the basis of the threshold segmentation method to obtain the maize canopy.
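The paper does not specify the wavelet basis or the thresholding rule, so the sketch below uses a single-level 2-D Haar transform with soft thresholding of the three detail subbands purely to illustrate the denoise-then-segment idea; all function names are illustrative.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (image sides must be even)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0        # low-frequency approximation
    lh = (a - b + c - d) / 2.0        # horizontal detail
    hl = (a + b - c - d) / 2.0        # vertical detail
    hh = (a - b - c + d) / 2.0        # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img

def soft(x, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(img, t=10.0):
    """Suppress high-frequency noise; keep the low-frequency canopy signal."""
    ll, lh, hl, hh = haar2(img.astype(float))
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

The denoised near-infrared image would then be passed to the Otsu binarization step of Section 2.4.1.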

#### *2.5. Model Establishment and Accuracy Evaluation*

Partial least squares regression (PLS) is a multivariate statistical analysis method that integrates the advantages of principal component analysis, canonical correlation analysis, and linear regression analysis. It handles multiple correlations between independent variables well on small samples. In this study, the experimental data of July 15, July 25, and August 1, 2019 were used to establish a PLS model of the relationship between the maize canopy chlorophyll content and the multispectral remote sensing images; fifty-four samples were reserved as the validation set.

The root mean square error (RMSE), determination coefficient (R<sup>2</sup>), and mean absolute error (MAE) between the actual and predicted values are used to evaluate the diagnostic capability of the model. RMSE measures the degree of dispersion of the experimental results; the model performs well when the value is small. R<sup>2</sup> represents the goodness of fit of the model; the diagnostic accuracy is high when the value is close to one. MAE reflects the actual magnitude of the prediction errors; the model performs well when the value is small. To distinguish the precision of the modeling and validation sets, Rc<sup>2</sup> (the determination coefficient of the calibration set) and RMSEF (the root mean squared error of forecast) represent the precision of the modeling set, while Rv<sup>2</sup> (the determination coefficient of the validation set) and RMSEP (the root mean squared error of prediction) represent the precision of the validation set.
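The modeling and evaluation steps can be sketched as follows; this is a compact NIPALS PLS1 implementation in numpy together with the three evaluation metrics, offered as an illustration since the paper does not state which software or PLS variant the authors used.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: returns regression coefficients and intercept."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)        # weight vector
        t = Xc @ w                    # score vector
        tt = t @ t
        p = Xc.T @ t / tt             # X loading
        q = (yc @ t) / tt             # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - q * t               # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, ym - Xm @ B

def evaluate(y_true, y_pred):
    """R^2, RMSE, and MAE as used for model evaluation."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mae = np.mean(np.abs(y_true - y_pred))
    return r2, rmse, mae
```

Applied to the calibration set, `evaluate` yields Rc<sup>2</sup> and RMSEF; applied to the validation set, Rv<sup>2</sup> and RMSEP.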

#### **3. Results**

#### *3.1. Ground Statistics*

A total of three experiments were performed in this study. Seventy-two maize leaves were collected at each of the seedling, jointing, and ear stages for chlorophyll content extraction, giving 216 samples over the three growth stages. Box plots of the chlorophyll content in maize leaves at each growth stage are shown in Figure 3. The Kennard–Stone (KS) algorithm was used to divide the whole-growth-period samples according to the different segmentation methods; the results are shown in Table 3. Table 3 illustrates that the training and validation sets cover a wide range of the chlorophyll content values in the total sample data.



**Figure 3.** Box plots of chlorophyll content in maize leaves at three growth stages.



#### *3.2. Results of Spectral Image Segmentation by Different Segmentation Methods*

The ENVI 5.2 software was used to draw the regions of interest of the maize canopy and the soil background in the multispectral image for exploring the effect of the soil background on the reflectance of the maize canopy. The average gray value of each region of interest was extracted; the results are shown in Figure 4. The average gray values of the maize canopy and soil differ in all five bands. The maize canopy gray values in the blue and red bands are lower than those of soil, whereas the gray values in the green, red edge, and near-infrared bands are higher than those of soil, with the difference reaching its maximum in the near-infrared band. Therefore, accurate maize canopy spectral information can be obtained by removing the soil background from the multispectral images.

The ExG index segmentation, threshold segmentation, and wavelet segmentation methods were used to remove the soil background from the multispectral image to obtain an accurate maize canopy spectrum. The segmentation results are shown in Figure 5. The maize canopy images show that the ExG index segmentation retains several noise points when removing the soil background, and its segmentation effect is poor. The reflectance of the maize canopy and the soil differs greatly in the near-infrared band; on this basis, the threshold segmentation method was adopted because it can efficiently remove the soil background, although background close to the maize leaves is retained. The wavelet segmentation method first removes the edges and noise points of the near-infrared image and then segments according to the spectral difference characteristics between the maize canopy and the soil; it therefore achieves better results than the ExG index and threshold segmentation methods.
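As a rough illustration of the two simpler baselines, the ExG rule and a histogram (Otsu) threshold on the NIR band can be sketched as follows. This is a sketch under stated assumptions: the ExG cut-off of zero is a common convention, not the paper's calibrated value, and the wavelet-based pipeline is omitted:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu threshold for an 8-bit grayscale image (maximizes between-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean gray value
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan               # guard against division by zero
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

def segment_canopy(nir, th=None):
    """Keep pixels whose NIR gray value exceeds the (Otsu) threshold."""
    if th is None:
        th = otsu_threshold(nir)
    return nir > th

def exg_mask(r, g, b):
    """Excess-green rule on normalized channels: ExG = 2G - R - B > 0."""
    return (2.0 * g - r - b) > 0.0
```

Because canopy NIR reflectance is much higher than that of soil (Figure 4), thresholding the NIR band separates most soil pixels, matching the behaviour described above.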


*Remote Sens.* **2020**, *12*, x FOR PEER REVIEW 10 of 20




**Figure 4.** Grayscale distribution of crops and soil in multispectral images.

**Figure 5.** Maize canopy multispectral image segmentation.
