**3. Materials and Methods**

The overall workflow is shown in Figure 3. UAS data were collected as RGB imagery and multispectral data. These data were used to produce a LULC map of the study area and to compute the fractions of the LULC classes. The multispectral data were further used to compute multiple VIs. Gridded polygons at several scales were generated, and within each grid cell the fractional coverage and VI values were analyzed for their relationships. The data were then divided into training and validation sets, and FVC estimation was implemented for different spatial resolutions.

**Figure 3.** Overall flowchart of the methodology.

### *3.1. Coordinate Observation for Ground Control Points*

#### 3.1.1. GCP Collection Using Low Cost GNSS

To improve the 3D modeling and georeferencing, ground control points (GCPs) were placed and their coordinates collected before the flight campaign. The Reach (Emlid, Hong Kong, China) global navigation satellite system (GNSS) receiver was utilized for collecting the XYZ geographical coordinates (Figure 4). A total of two stations were used. One was set on the tip of a long pipe, fixed 4 m off the ground so that surrounding obstacles would not block the GNSS signals, and used as a base station. The other receiver was placed on a tripod and used as a rover station during the collection of coordinates at each GCP target. A total of six GCPs were observed, and the GNSS signals at each GCP were recorded for 5 min. The signals were further processed with a post-processing kinematic (PPK) method to enhance the XYZ coordinate precision. Depending on the satellite signals and processing, the PPK method can achieve centimeter-level precision, whereas non-processed GNSS data can show errors of about 10 m (when only using GPS [32]). The Reach GNSS was set to observe GNSS signals from GPS, QZSS, Galileo and BeiDou at a logging frequency of 5 Hz.

*Remote Sens.* **2019**, *11*, x FOR PEER REVIEW 5 of 17


**Figure 4.** Small and light-weight global navigation satellite system (GNSS) equipment for post-processing kinematic (PPK).

#### 3.1.2. PPK Processing for Precise GCP Coordinates

For the post-processing of the GNSS signals, the open-source RTKLIB software [33] was used. RTKLIB is a program package for standard and precise positioning with GNSS and can perform a PPK analysis using Receiver Independent Exchange Format (RINEX) files. First, using RTKLIB, the log files for both the base and rover stations were opened and examined for GNSS signal quality. The signals were checked for each satellite; only the satellites that received continuous L1 frequency signals during the recording were included for further processing, and the other satellites were omitted (Figure 5). Each observation of the GCP targets was post-processed with the "static" positioning mode, the "combined" filter type, "Fix and Hold" integer ambiguity resolution, a 15-degree elevation mask and 35 dB SNR (signal-to-noise ratio) filtering. After the coordinate of each GCP was computed, the coordinate logs were averaged from the fixed-solution epochs (Q1) with an ambiguity validation ratio factor ≥500. However, if there were no Q1 logs, the float solution (Q2) was used.
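The Q1/Q2 averaging step can be sketched as follows. This is a minimal illustration (not the authors' code) that assumes RTKLIB's default solution output format, where comment lines start with `%` and each epoch line carries latitude, longitude, height, the quality flag `Q` (1 = fixed, 2 = float) and the ambiguity ratio factor among its columns:

```python
# Sketch: average a GCP coordinate from an RTKLIB solution (.pos) log.
# Assumes the default LLH format with columns:
# date time lat lon height Q ns sdn sde sdu sdne sdeu sdun age ratio
def average_gcp(pos_lines, min_ratio=500.0):
    fixed, floats = [], []
    for line in pos_lines:
        if line.startswith('%') or not line.strip():
            continue                         # skip header/comment lines
        cols = line.split()
        lat, lon, hgt = map(float, cols[2:5])
        q = int(cols[5])                     # solution quality flag
        ratio = float(cols[-1])              # ambiguity validation ratio factor
        if q == 1 and ratio >= min_ratio:    # fixed solution (Q1)
            fixed.append((lat, lon, hgt))
        elif q == 2:                         # float solution (Q2), fallback only
            floats.append((lat, lon, hgt))
    logs = fixed if fixed else floats        # prefer Q1 epochs; else use Q2
    if not logs:
        raise ValueError("no usable solution epochs")
    n = len(logs)
    return tuple(sum(v) / n for v in zip(*logs))
```

Fixed epochs with a ratio factor ≥500 are preferred; float epochs are averaged only when no fixed epoch passes the filter, mirroring the procedure described above.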


**Figure 5.** Example of the GNSS signal status from each satellite and the selection of the omitted satellites.

### *3.2. UAS Flight Data Collection*

#### 3.2.1. Fixed-Wing RGB Acquisition and Processing

On 17 October 2018, a flight campaign was conducted at one of the compartment areas in the plantation site. A Firefly6 Pro (BirdsEyeView Aerobotics, New Hampshire, USA) fixed-wing VTOL (vertical takeoff and landing) UAS was utilized to collect aerial photography of the area. For the photo shoot, a SONY α6000 camera was mounted in a gimbal beneath the fixed-wing UAS, and images were collected throughout the flight path. The flight path was set in a south-north direction with the camera time-lapse set to two seconds, corresponding to a forward (side) overlap of 75% (70%), at a flying altitude of 140 m with an approximate 3 cm ground sampling distance (GSD). The flight course did not consider the wind direction. The collected aerial images were further processed with the Photoscan software, Version 1.4.5 (Agisoft, St. Petersburg, Russia), for 3D modeling and mosaicking of the whole study area. Photoscan was configured with "High" alignment accuracy using the default tie-point and key-point limits, and a "High" dense point cloud with "Mild" depth filtering. A digital surface model (DSM) was generated using the dense point cloud, and an orthoimage was computed using the DSM. The first routine was processed without adding the GCP information. After the first orthoimage was generated, the GCPs were added and the model was reconstructed by adjusting the point clouds with the GCP information. The DSM and the orthoimage were then reconstructed based on the GCP-adjusted point clouds. This processing method makes placing manual GCPs in the software easier, especially when a large quantity of images is used.

#### 3.2.2. Multispectral Data Collection and Processing

On 17 October 2018, the flight campaign was conducted at the compartment area in the plantation site, the same area as for the RGB acquisition. Right after the RGB collection flight, another flight was conducted to collect multispectral images of the area. The SlantRange 3p (SlantRange, San Diego, CA, USA) sensor was embedded in the gimbal of the VTOL UAS and images were collected throughout the flight path. The path was the same as the RGB flight; however, the forward (side) overlap was set to 40% (40%) and the flying altitude to 150 m, corresponding to an approximate 6 cm GSD. The SlantRange 3p collects four bands, namely green, red, red-edge and near infrared. An ambient illumination sensor is placed on top of the VTOL UAS during the shooting (Figure 6), and this information is later used for calibrating each image for solar illumination. The metadata of each image was processed with the SlantView software, which calibrates each image for solar illumination and includes coordinate information. The preprocessed data were exported and further processed via Photoscan to generate the overall view of the study area. The parameters differ slightly from the RGB processing: the alignment is "Very High" and the dense point cloud is "Ultra High", while the rest of the process stays the same. The multispectral data were again processed first without GCPs, and GCP calibration was added in the second run.

**Figure 6.** Firefly Pro 6 fixed-wing vertical takeoff and landing (VTOL) unmanned aerial system (UAS) carrying the multispectral sensor and the illumination sensor.

### *3.3. Image Classification*

The study area is a small section of the compartment area of the *Acacia* plantation. However, there is a variety of LULC types in the area: not only acacias but also water bodies, bare soils and grassy shrubs can be observed. Using the RGB and multispectral imagery, a conventional supervised image classification was performed to develop a categorical map showing the distribution of the general LULC types of the site. The Multilayer-Perceptron (MLP) classification [31], a classifier that can handle even nonlinear trends between variables, was implemented to classify each LULC class. A total of four classes were classified: *Acacia* trees, bare soil, water bodies and grass/shrubs. For the validation of the classified map, an accuracy assessment was additionally performed. Using the developed RGB orthoimage and ground information for the study area, 600 evenly distributed sample points per class were manually collected throughout the scene, and the error matrix was computed. The LULC map was further used to compute the fraction of LULC classes within each gridded location. Other authors indicate that UAS information is highly correlated with traditional ground-based hemispherical photography, which can be used as a reference for ground-truth information [34]. Therefore, the UAS-based classification result is likewise used here as ground truth.
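The accuracy assessment can be illustrated with a short sketch. The class codes and labels below are hypothetical stand-ins for *Acacia*, bare soil, water body and grass/shrub; only the metric definitions (overall accuracy, and user's/producer's accuracies, whose complements are the errors of commission and omission) follow the procedure described above:

```python
import numpy as np

# Sketch: build an error (confusion) matrix from reference and classified
# labels, then derive overall, user's and producer's accuracies.
def error_matrix(reference, classified, n_classes=4):
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        m[c, r] += 1                        # rows: classified, columns: reference
    overall = np.trace(m) / m.sum()         # fraction of correctly labeled samples
    users = np.diag(m) / m.sum(axis=1)      # per row; 1 - user's = commission error
    producers = np.diag(m) / m.sum(axis=0)  # per column; 1 - producer's = omission error
    return m, overall, users, producers
```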

### *3.4. Relationship Analysis of VIs and Fraction of LULC*

#### 3.4.1. Various Vegetation Indices

Three different VIs were computed from the multispectral bands collected with the SlantRange sensor: the Normalized Difference Vegetation Index (NDVI), Green NDVI (GNDVI) and Red-edge NDVI (ReNDVI). Each index is computed using the following formula:

$$\text{NDVI} = \frac{\text{IR} - \text{R}}{\text{IR} + \text{R}}, \tag{1}$$

$$\text{GNDVI} = \frac{\text{IR} - \text{G}}{\text{IR} + \text{G}}, \tag{2}$$

$$\text{ReNDVI} = \frac{\text{IR} - \text{Re}}{\text{IR} + \text{Re}}, \tag{3}$$

where IR is the infrared band, R is the red band, G is the green band and Re is the red-edge band. The indices all have similar functions but use slightly different parts of the electromagnetic spectrum. IR and R are used most often and characterize vegetation function, as leaf chlorophyll absorbs red light while the canopy reflects infrared light [35]; this information can be used to sense the vegetation activity (greenness) of an area. GNDVI utilizes the green band instead of the red band and is considered an alternative to NDVI, as it has a wider dynamic range and a higher sensitivity to chlorophyll [36]. ReNDVI uses the red-edge band, a slightly longer wavelength than the red channel, which corresponds to vegetation status information, especially water stress [37]. These three indices were examined against the fractions of the LULC classes to understand the sensitivity of the VIs.
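Equations (1)–(3) translate directly into array operations. A minimal sketch, assuming `ir`, `r`, `g` and `re` are reflectance arrays from the corresponding SlantRange bands:

```python
import numpy as np

# Normalized Difference Vegetation Index, Eq. (1)
def ndvi(ir, r):
    return (ir - r) / (ir + r)

# Green NDVI, Eq. (2): green band substituted for red
def gndvi(ir, g):
    return (ir - g) / (ir + g)

# Red-edge NDVI, Eq. (3): red-edge band substituted for red
def rendvi(ir, re):
    return (ir - re) / (ir + re)
```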

### 3.4.2. Various Grid Scaling for Relationship Analysis

Four differently scaled square gridded polygons were constructed continuously across the study area with 5, 10, 30 and 60 m grid sizes. The 5 m grid corresponds to satellites such as RapidEye, the 10 m grid to Sentinel-2, and the 30 m grid to the satellites of the Landsat series. The 60 m grid is an additional size used to examine the behavior when the resolution is upscaled to a coarser grid, and is almost the maximum size that allows a meaningful analysis within the current study area. Within each grid cell, the percentage of the LULC classes and the average value of each VI were extracted and examined in the relationship analysis.
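The per-grid extraction can be sketched as below. This is a simplified pixel-based illustration rather than the actual polygon workflow; the cell size in pixels (`cell`) would be chosen to match the 5, 10, 30 or 60 m grids given the raster GSD (e.g. a 5 m grid over a 5 cm GSD map would use `cell=100`):

```python
import numpy as np

# Sketch: aggregate a classified map and a VI raster into square grid cells,
# returning the fractional coverage of one class and the mean VI per cell.
def grid_stats(lulc, vi, cell, target_class):
    rows, cols = lulc.shape
    frac, mean_vi = [], []
    # iterate over full cells only, dropping any partial edge cells
    for i in range(0, rows - rows % cell, cell):
        for j in range(0, cols - cols % cell, cell):
            block = lulc[i:i + cell, j:j + cell]
            frac.append(np.mean(block == target_class))   # fractional coverage
            mean_vi.append(vi[i:i + cell, j:j + cell].mean())
    return np.array(frac), np.array(mean_vi)
```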

### 3.4.3. Estimating FVC Using UAS Data and Comparison with Satellite-Based FVC

The sample dataset used for the relationship analysis was divided into training and validation sets. For the 5, 10, 30 and 60 m grid data, the samples were randomly split into 50% for training and the remaining 50% for validating the estimated UAS-derived FVC. The models were evaluated by computing the R<sup>2</sup>, the root mean square error (RMSE) and the relative RMSE for each grid-scale model [38].
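A minimal sketch of the random 50/50 split and the evaluation metrics follows; the relative RMSE is assumed here to be the RMSE normalized by the mean observed value, which is one common definition and may differ from the exact formulation in [38]:

```python
import numpy as np

# Random 50/50 split of n samples into training and validation indices
def split_half(n, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return idx[: n // 2], idx[n // 2:]

# Root mean square error between observed and estimated values
def rmse(obs, est):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(est)) ** 2)))

# Relative RMSE: RMSE normalized by the mean of the observations (assumption)
def relative_rmse(obs, est):
    return rmse(obs, est) / float(np.mean(obs))
```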

Furthermore, Sentinel-2 imagery was downloaded via the web and processed with the Sentinel Application Platform (SNAP) software, ver. 6.0.5, to compute the satellite-based FVC. The imagery was acquired on 14 September 2018, about one month before the UAS observation. The biophysical processor module within the SNAP software was used for computing the biophysical parameters. While the UAS-based FVC uses a simple empirical model, the SNAP FVC is computed via a neural network method [39]. The satellite-based FVC is later compared with the UAS-based FVC.

#### **4. Results**

### *4.1. UAS Photogrammetry and GCP*

#### 4.1.1. PPK Processing for Coordinate Precision

Using the base-station and rover GNSS logs collected with the small GNSS units, post-processing of the GNSS signals was performed using RTKLIB. A total of six points were processed. Most of the targets achieved a fixed solution (<1 cm precision in the XYZ directions), while some GCPs show slightly larger errors where the float solution was used. However, even though not all observations gave a clear fixed solution, the PPK method nonetheless provides much higher precision than a single GNSS observation, even with an extremely lightweight, small GNSS unit. The geolocation (XYZ coordinates) computed from the PPK process was used for each GCP target location.

#### 4.1.2. Orthoimagery from UAS Flights for RGB and Multispectral Data

From the flights using the VTOL UAS, totals of 567 and 1204 images were obtained for the RGB and multispectral data, respectively. Figure 7 shows the same location in the mosaicked images for the RGB and multispectral data. Due to the insufficient overlap of the multispectral sensor, there are some gaps in the southern part of the compartment site. However, the procedure overall succeeded in developing an orthorectified image. Gap areas are omitted from further analysis.


**Figure 7.** Ortho imagery of the test site for (**a**) the true color composite (RGB) image from the digital camera and (**b**) the false color composite (RGB: NIR, Red and Green) imagery from the multispectral sensor. The resolution is reduced to 0.2 m from the original resolution for visual purposes.

### *4.2. LULC Map of the Study Area and Its Errors*


Utilizing the RGB imagery and multispectral data, image classification was performed to develop a categorical map. A total of four classes were generated, covering the general landscape of the plantation site. Figure 8 shows the result of the mapping. For the validation of the map, an accuracy assessment was performed, and the error matrix is given in Table 1. The result indicates an overall accuracy of the categorical map of 90.07%. The user's accuracies (producer's accuracies in parentheses) for the *Acacia* forest, bare land, water bodies and grass/shrubs were 83% (96%), 96% (83%), 91% (95%) and 94% (89%), respectively.

**Figure 8.** Land use/land cover (LULC) map of the compartment area, developed using RGB and multispectral data observed from the fixed-wing UAS using the multilayer-perceptron (MLP) method.



**Table 1.** Accuracy assessment and the error matrix of the generated LULC map. Error C is the error of commission, and Error O is the error of omission. The overall accuracy of the categorical map results is 90.7%. **Classified** | *Acacia* | **Bare Soil** | **Water Body** | **Grass/Shrub** | **Total** | **Error C**


### *4.3. Sensitivity of VI to the Fraction of Vegetation Cover (FVC)*

Figure 9 shows the relationship analysis between the VIs and the fractions of the LULC covers at different scaling resolutions. Each VI is examined for three different LULC types: *Acacia*, grass/shrubs and non-vegetation (i.e., bare soil + water bodies). The degree of each correlation can be seen in the figure. In general, for the *Acacia* class, a clear exponential relationship occurs between the FVC and the VIs. At the 5 and 10 m grid scales, ReNDVI shows the highest correlation, with GNDVI and NDVI showing lower R<sup>2</sup> values. As the resolution becomes coarser, the errors in the sensitivity of the VIs to FVC decrease for all VIs, and at the 60 m grid all VIs show a strong correlation (R<sup>2</sup> > 0.9). The grass/shrub class, which is classified as another vegetation type, does not show as clear a trend as the *Acacia* trees. The trend line for the grass/shrub type is shown with a second-order polynomial for reference. The polynomial does not appear to represent a meaningful relationship by itself, but it does seem to reflect the combined trend of areas with different land conditions. One segment shows a positive correlation between FVC and VIs, for example on the west side of the study site, along the track of bare soils and the water body that crosses in a north-south direction; this area shows a simple positive correlation between vegetation and the VIs. Another segment, found within the compartment of the plantation area, shows a negative correlation between the FVC and the VIs. In such areas, larger fractions of acacias are found where the grass/shrub class decreases, resulting in higher VIs. Besides the vegetative classes, the non-vegetative areas were also examined for the sensitivity of the VIs. Interestingly, at all grid scales, a clear negative correlation is observed between the fraction of non-vegetation cover (denoted hereafter as FnVC) and the VIs. Similar to the *Acacia* class, the errors clearly decrease with coarser grids. However, whereas ReNDVI is more sensitive for acacias, NDVI shows the highest correlation among the VIs at all grid scales for FnVC.
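The exponential FVC–VI relationship reported above can be fitted, for illustration, by linear least squares on the log-transformed FVC. This log-linear shortcut is an assumption for the sketch (a nonlinear solver could equally be used) and requires FVC > 0:

```python
import numpy as np

# Sketch: fit FVC ~ a * exp(b * VI) via least squares on log(FVC),
# since log(FVC) = log(a) + b * VI is linear in VI.
def fit_exponential(vi, fvc):
    vi, fvc = np.asarray(vi, float), np.asarray(fvc, float)
    b, log_a = np.polyfit(vi, np.log(fvc), 1)   # slope b, intercept log(a)
    return np.exp(log_a), b

# Coefficient of determination for observed vs. estimated values
def r_squared(obs, est):
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```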


**Figure 9.** Relationship analysis between the different vegetation indices and the fraction of LULC types: *Acacia*, grass/shrub and non-vegetation (bare soil + water) at different resolution scales.

### *4.4. FVC Estimation and Satellite-Derived FVC vs. UAS-Derived FVC*

Figure 10 shows both the estimated FVC from the model training and the evaluation of the model for each grid scale, together with the comparison of satellite-derived to UAS-derived FVC at the 10 m scale. The RMSE values of the model for 5, 10, 30 and 60 m were 0.107, 0.112, 0.085 and 0.05, respectively. The visual interpretation shows that the higher-resolution image gives more detail of the area. However, the 30 m resolution data can characterize the overall trend of the site, as the bare soils on the west side can be seen clearly, and the difference between the northern compartment with the lower FVC and the southern compartment with the higher FVC is apparent. The 60 m resolution can also characterize the area to some degree; however, within the spatial extent of the study area, the information is much more aggregated compared to the other, finer resolutions.
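The scale-wise model evaluation above rests on a standard RMSE computation between estimated and reference FVC per grid cell; a minimal sketch with hypothetical paired values (not the study's data):

```python
import numpy as np

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference FVC values."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

# Hypothetical paired FVC values for one grid scale (illustrative only).
est = [0.20, 0.55, 0.80, 0.95]
ref = [0.25, 0.50, 0.85, 0.90]
print(round(rmse(est, ref), 3))  # 0.05
```

The same function applied per grid scale yields the scale-dependent RMSE values reported above.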

The FVC computed from Sentinel-2 is shown in Figure 10e, and its evaluation against the UAS FVC (Figure 10b) is also presented. From the visual interpretation, the Sentinel-2 FVC shows a smoother result than the sharper UAS FVC. Since the Sentinel-2 FVC uses the neural network method across all bands, the resampled information of the bands appears in the image (the RGB bands are 10 m, whereas the IR and SWIR bands are 20–30 m and the aerosol/water vapor bands are 60 m). The model validation shows overestimations for lower FVCs and underestimations for higher FVCs for the Sentinel-2 data. Two potential explanations can be offered for this FVC difference: one is error from the neural network approach, and the other is the temporal difference between the satellite and UAS acquisitions. This subject is discussed in the next section.
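Because the Sentinel-2 bands have mixed native resolutions, coarser bands must be resampled onto a common fine grid before per-pixel use; a minimal sketch of nearest-neighbor upsampling of a 20 m band to a 10 m grid (an illustration of the resampling effect, not the actual Sentinel-2 processing chain):

```python
import numpy as np

def upsample_nearest(band, factor):
    """Nearest-neighbor upsampling: each coarse pixel becomes factor x factor fine pixels."""
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

# Toy 2x2 "20 m" band; upsampled to a 4x4 "10 m" grid.
band_20m = np.array([[0.3, 0.5],
                     [0.7, 0.9]])
band_10m = upsample_nearest(band_20m, 2)
print(band_10m.shape)  # (4, 4) -- the blocky 20 m structure remains visible at 10 m
```

This blockiness of upsampled coarse bands is one reason the Sentinel-2 product looks smoother than the native-resolution UAS product.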


*Remote Sens.* **2019**, *11*, x FOR PEER REVIEW 13 of 17

**Figure 10.** Estimated FVC from the UAS-based trained model for (**a**) 5 m, (**b**) 10 m, (**c**) 30 m and (**d**) 60 m. (**e**) The estimated FVC from Sentinel-2 data using the neural network method. The validation of the Sentinel-based FVC is compared with (**b**).

#### **5. Discussion**

We will first discuss the sensitivity of the FVC trends to the VIs and the resolution. The resolution (grid scale) shows a straightforward trend, where a coarser resolution yields a stronger correlation. This coincides with the findings of Riihimaki et al. (2019), who explain that when finer data are aggregated to a larger unit, the variation in the data decreases, resulting in a higher correlation [29,40]. From the 5 m up to the 60 m resolution, the R<sup>2</sup> improves in explanatory power, and the estimated FVC shows a much lower RMSE at coarser resolutions. This indicates that utilizing UAS data can improve the efficiency and quality of the ground-truth information collected for validating FVCs from coarser imagery. This approach may outperform those relying on high-resolution Google Earth scenes [30] or direct observations in the field [28], which remain subject to issues such as the temporal difference between scenes, coarse-resolution images that make delineating fractional covers difficult, and sampling-scale/registration differences between field plots and imagery [29,30]. Since these issues can be controlled for UAS data, new possibilities in bridging to larger-scale earth observations may emerge [41].
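The scale-aggregation effect described above can be illustrated numerically; the synthetic cover field, noise level and block sizes below are illustrative assumptions, not the study's actual data or processing:

```python
import numpy as np

def block_mean(arr, k):
    """Aggregate a 2-D array into k-by-k blocks by averaging (i.e., a coarser grid)."""
    h, w = arr.shape
    return arr[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 120)
fvc = 0.5 + 0.4 * np.sin(2 * np.pi * np.outer(x, x))   # smooth synthetic cover field
vi = 0.8 * fvc + rng.normal(0.0, 0.2, size=fvc.shape)  # correlated VI plus per-pixel noise

results = {}
for k in (1, 6, 12):  # fine to coarse grid scales
    f, v = block_mean(fvc, k).ravel(), block_mean(vi, k).ravel()
    results[k] = np.corrcoef(f, v)[0, 1] ** 2
    print(f"block {k}x{k}: R^2 = {results[k]:.3f}")
# Averaging suppresses the pixel-level noise while the smooth field survives,
# so R^2 rises as the grid coarsens -- the aggregation effect noted by [29,40].
```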

The sensitivity of the VIs to FVC and FnVC was also clear. For the *Acacia* trees, an exponential increase of FVC occurred with increasing VIs, while for the FnVC, a decreasing trend with increasing VIs was seen throughout the different grid scales. The grass/shrub class showed almost no trend when an overall trend was fitted for the whole study area. However, its characteristics can be seen in two different segments, depending on whether the pure grass/shrub cover is dominant or it is mixed within the *Acacia* plantation area. Many authors have indicated the issues caused by the mixing of reflectance from vegetation or the background response with the spectral variability observed within the grid [42–44]. The background soil reflectance introduces errors in observing correct vegetation signatures [42,44], and the mixture of vegetation types can lead to errors due to spectral differences, especially between woody vegetation and green grass [43]. In our study, *Acacia* and grass/shrubs were mixed within the compartment area. When the FVC was separated clearly by species, it gave a strong correlation between woody vegetation and the VIs taken from the UAS. As expressed by Xiao and Moody [43], the mixture of vegetation types results in uncertainties in the estimation. When both the *Acacia* and grass/shrub classes were aggregated, the R<sup>2</sup> of the exponential model was reduced by a minimum of 0.008 up to a maximum of 0.086 depending on the grid scale (for ReNDVI). On the other hand, the correlation between the aggregated FVC (*Acacia* + grass/shrub) and the VIs was strongest for NDVI compared to the other VIs for all grid scales. Estimating single-species fractions via satellite data also seems to show stronger correlations when using ReNDVI. However, when considering all vegetation types, NDVI is preferred, although the overall correlation would decrease if all vegetation types were considered.
However, depending on the biome, this dynamic may also change [17]. Therefore, the relationship found here may be suitable mainly for IFP sites within Indonesia for the *Acacia* species. Notably, estimating the FnVC also seems possible, as it has superior explanatory power compared to directly estimating the vegetation cover. The prospect of estimating the vegetation cover by inversely estimating the FnVC looks promising, although a large-scale analysis would be needed for further confirmation.
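The exponential VI–FVC relationship discussed above can be fitted by linearizing the model with a logarithm; the (VI, FVC) pairs and resulting coefficients below are synthetic and illustrative, not those estimated in the study:

```python
import numpy as np

# Synthetic (VI, FVC) pairs loosely following an exponential trend (illustrative only).
vi = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
fvc = np.array([0.08, 0.12, 0.19, 0.28, 0.42, 0.63, 0.95])

# Fit FVC = a * exp(b * VI) via linear regression on log(FVC).
b, log_a = np.polyfit(vi, np.log(fvc), 1)
a = np.exp(log_a)

# Goodness of fit (R^2) of the back-transformed model.
pred = a * np.exp(b * vi)
ss_res = np.sum((fvc - pred) ** 2)
ss_tot = np.sum((fvc - fvc.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"a={a:.3f}, b={b:.3f}, R^2={r2:.3f}")
```

A positive `b` reproduces the exponential increase of FVC with VI seen for the *Acacia* class; fitting the same form to a mixed-class aggregate would lower the R<sup>2</sup>, as discussed above.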

The comparison between the satellite FVC and UAS FVC gave interesting results. The satellite FVC was based on the neural network approach, which generally tends to show a high accuracy [21,45]. However, from our observation, the lower boundary of the FVC was overestimated and the higher FVC was underestimated. The neural network approach needs to be treated with care during the training phase, as it can produce poor results if poor information is used during the training [46]. Therefore, the neural network approach might show strong results in other areas, but this was not the case here. Another potential issue is the temporal difference between the satellite data and the UAS. The intended observation periods fell in similar months; however, due to clouds, those data were unusable, which forced us to use only September's satellite data. Although it is only a one-month difference, the fast-growing acacias may show differences in the FVC result; hence, the validation between the UAS FVC and satellite FVC could have shown such a trend. This is a conventional issue of satellite data, where clouds and haze limit data collection on preferable observation dates. We would like to emphasize that this result does not conclude that FVC estimation via regression is better than the neural network, but rather that conventional platforms can limit the desired information of the environment in various scenes. UAS data can overcome this issue and may be collected at any occasion, time and resolution [31,47]. Thus, the UAS FVC may become a potential method for future FVC developments. The potential obstacles posed to future studies would be aviation law and engineering issues [47]. For example, the available flight time per UAS would limit the spatial extent, and the aviation law that limits the flight altitude would also limit the spatial extent. Even collecting land information at a 5 m resolution can provide sufficient green cover information, but expanding the observation area with a limited flight time is constrained by the flying altitude, as it is mainly regulated by law. Further investigation will be performed in the future for a more precise comparison between the satellite and UAS data by matching the acquisition dates of both data sets.
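The over/underestimation pattern in the satellite–UAS comparison can be quantified by examining residuals in the low and high FVC ranges; a minimal sketch with hypothetical paired values (not the study's data, and the 0.5 split point is an assumption):

```python
import numpy as np

# Hypothetical paired FVC values (satellite estimate vs. UAS reference), illustrative only.
uas = np.array([0.05, 0.15, 0.25, 0.45, 0.55, 0.75, 0.85, 0.95])
sat = np.array([0.15, 0.22, 0.30, 0.46, 0.52, 0.68, 0.77, 0.84])

residual = sat - uas  # positive = satellite overestimates relative to UAS

low_bias = residual[uas < 0.5].mean()    # mean bias in the low-FVC range
high_bias = residual[uas >= 0.5].mean()  # mean bias in the high-FVC range
print(f"low-FVC bias: {low_bias:+.3f}, high-FVC bias: {high_bias:+.3f}")
```

A positive low-range bias with a negative high-range bias reproduces the pattern observed for the Sentinel-2 FVC: overestimation at low cover and underestimation at high cover.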

### **6. Conclusions**

This work demonstrated the use of a UAS to collect RGB and multispectral data for analyzing the LULC of the plantation area, and further examined the relationship between the fraction of each LULC class and different VIs at multiple spatial resolutions. The UAS observation was successful for both RGB and multispectral data. An LULC map of the area was developed using the orthorectified RGB and multispectral data, and the fraction of each LULC type was extracted within grids of different resolutions. A comparison of the fractional vegetation cover (FVC) to NDVI, GNDVI and ReNDVI was performed. The results indicate the possibility of computing the vegetation cover of terrestrial environments through UAS-derived data at IFP sites in Indonesia. The differences between the FVC and the grid scale and VIs were clear: ReNDVI showed a stronger correlation at a finer resolution for the *Acacia* class, and NDVI was superior for estimating non-vegetation classes. Methods to delineate or monitor the land environment through UAS tend to show promising results, and the possibility exists of expanding their applications to various fields and approaches. The trend of UAS remote sensing is growing globally. While airborne and spaceborne sensing are still developing, we hope that the new technology of UAS can be integrated with conventional sensing technology for new findings and developments in the near future. We believe our methods can also be useful for related restoration programs currently in progress in Indonesia. We will next continue with this approach to generalize the UAS results and refine the relationships using, for example, radiative transfer models.

**Author Contributions:** K.I. took the lead in designing and conducting the research and writing the manuscript; T.K. arranged the field surveys and research grants and revised the paper; S.S. and A.Y.S. contributed to field surveying, data collection, data processing and revising/editing the manuscript; O.K. contributed to field surveys, research grants and revision of the paper; all authors have substantially contributed to the work.

**Funding:** This research was funded by the Research Institute for Humanity and Nature (RIHN: a constituent member of NIHU) Project No.14200117 and by IDH The Sustainable Trade Initiative.

**Acknowledgments:** We thank PT. Wana Subur Lestari (WSL) and PT. Mayangkara Tanaman Industri (MTI) for all the support during the field work.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Remote Sensing* Editorial Office E-mail: remotesensing@mdpi.com www.mdpi.com/journal/remotesensing


ISBN 978-3-0365-2191-6