#### *2.2. Image Processing*

Agisoft Photoscan Professional software, version 1.1.0 (Agisoft LLC, St. Petersburg, Russia), was used to generate ortho-mosaicked images for both experiments. The software has built-in structure-from-motion (SfM) and multiview stereo algorithms. The general workflow involves aligning photos, placing ground control points, building the dense point cloud, building textures, and generating the orthomosaic and the digital elevation model. All of the photos underwent image quality estimation in Photoscan, and those with quality values under 0.5 were rejected from processing. The vignetting effect can be avoided to some degree by the mosaic mode of texture generation that we chose in Photoscan [32]. Instead of calculating the average value of all pixels from individual photos that overlap at the same point, the mosaic mode uses only the value from the photo in which the pixel of interest lies closest to the image center [32].

The Environment for Visualizing Images software (ENVI, version 5.1, Harris Corporation, Melbourne, FL, USA) was used to classify the mosaicked images into vegetation and non-vegetation pixel types. The supervised classification scheme was maximum likelihood for the Brooksvale Park and the spectral angle mapper for the Yale Playground. The Yale Playground was severely affected by shadows of trees and buildings because the solar elevation angle was low at the time of the experiment. The spectral angle mapper compares image spectra directly to known reference spectra, treating each spectrum as a vector and calculating the spectral angle between them [33]; this classifier is therefore insensitive to illumination conditions. The mosaicked images (Figure 2) were then used for the determination of landscape albedo.
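The angle-based comparison that makes the spectral angle mapper insensitive to illumination can be sketched as follows. This is a minimal illustration, not the ENVI implementation; the band values are hypothetical, and a real classifier would compare every pixel against reference spectra for each class and assign the class with the smallest angle.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.

    Both inputs are 1-D arrays of band values (here: three visible bands).
    """
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# A shaded and a sunlit grass pixel point in nearly the same spectral
# direction, so both lie at a small angle to the grass reference even
# though their brightness differs by a factor of two.
grass_ref = np.array([60.0, 110.0, 50.0])   # hypothetical reference spectrum
sunlit    = np.array([62.0, 112.0, 51.0])
shaded    = sunlit * 0.5                    # uniform dimming by shadow
print(spectral_angle(sunlit, grass_ref))    # small angle
print(spectral_angle(shaded, grass_ref))    # same angle: scaling cancels
```

Because a uniform brightness change rescales the spectral vector without rotating it, shadowed pixels keep the spectral angle of their sunlit counterparts.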

**Figure 2.** Mosaicked images of Brooksvale Park (**a**) and Yale Playground (**b**). The mosaicked images have a resolution of 4 cm per pixel.

#### *2.3. Spectrometer Measurement of Ground Targets*

A high-resolution spectrometer (FieldSpec Pro FR, Malvern Panalytical Ltd., Malvern, UK) was used to record the reflectance spectra of ground targets, including non-vegetative features and ground vegetation. Before the measurement of each ground target, the spectrometer was calibrated using a white reference disc (Spectralon, Labsphere). The spectrometer field experiments were carried out under both overcast and clear sky conditions at each field site (Table 2). The UAV and spectrometer field experiments were not conducted on the same day (the former were done in the autumn of 2015 and the latter in the spring of 2016), and the vegetation conditions were therefore different. However, the solar elevation angle did not differ much in the case of the Yale Playground. A pistol grip was used to measure a single point on each ground target five times. The typical standard deviation of the reflectance in the visible bands for each ground target was less than 0.02. The non-vegetation ground targets spanned a wide range of brightness, from black dustbins to white-paint markings (Figure S1). The vegetation ground target was grass at both sites. The Lambertian assumption was adopted, so that the spectral reflectance measured by the spectrometer was taken as the spectral albedo. The wavelength ranges for the red, green, and blue bands in this study were defined as 620–670 nm, 540–560 nm, and 460–480 nm, respectively, coinciding with the three color wavebands of the cameras.



We used the spectrometer measurements for three purposes. The first purpose was to calibrate the mosaicked image. A calibration curve for each of the three wavebands was established by comparing the measured spectral albedo with the DN value of the same ground target identified in the mosaicked image. These curves were then used to convert the DN values of all the image pixels to a spectral albedo.
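A minimal sketch of this per-band calibration, assuming a linear DN-to-albedo relationship; the paired samples below are hypothetical stand-ins for the matched ground-target observations.

```python
import numpy as np

# Hypothetical paired samples for one waveband: mosaic DN of each ground
# target vs. the spectral albedo of the same target from the spectrometer.
dn     = np.array([ 30.,  75., 120., 180., 230.])
albedo = np.array([0.05, 0.14, 0.25, 0.40, 0.55])

# Least-squares linear calibration curve: albedo = gain * DN + offset.
gain, offset = np.polyfit(dn, albedo, 1)

def dn_to_albedo(dn_band):
    """Apply the calibration curve to a DN array for this waveband."""
    return gain * dn_band + offset
```

One such curve per waveband (red, green, blue) converts every pixel of the mosaic to spectral albedo.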

The second purpose was to determine the broadband (visible and shortwave) albedos of these ground targets. We used the Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS, version 2.9.5), developed by the National Renewable Energy Laboratory of the U.S. Department of Energy [34], to simulate the spectral irradiance of solar radiation. In the SMARTS simulation, the U.S. Standard Atmosphere was chosen as the reference atmosphere, the aerosol model was set to the urban type, and the sky condition was either clear or overcast. The parameters used for the SMARTS calculations are given in Supplementary Table S1. Both the SMARTS output and the spectrometer data have a 1 nm spectral resolution. The broadband albedo (visible or shortwave) was computed as:

$$\alpha_* = \frac{\sum_{\lambda=a}^{b} \rho(\lambda)\, I(\lambda)}{\sum_{\lambda=a}^{b} I(\lambda)} \tag{1}$$

where *α*∗ is the broadband albedo of the ground target, *I*(*λ*) is the solar spectral irradiance, *λ* is wavelength, *ρ*(*λ*) is the spectral reflectance recorded by the spectrometer at wavelength *λ*, and *a* and *b* denote the range of the waveband. For the visible band, *a* and *b* are 400 nm and 760 nm, respectively, and for the shortwave band, they are 400 nm and 1750 nm, respectively. These albedo values were then compared with the albedo values estimated with the satellite algorithm. It should be noted that our shortwave band (400–1750 nm), which represents the effective range of the spectrometer measurement, is narrower than the typical shortwave definition of 400–2500 nm; the shortwave albedo derived here may therefore miss a small contribution from the energy at 1750–2500 nm.
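On the 1 nm grids, Equation (1) reduces to an irradiance-weighted mean of the reflectance. A sketch, with crude placeholder spectra standing in for the measured reflectance and simulated irradiance:

```python
import numpy as np

def broadband_albedo(wavelength, reflectance, irradiance, a, b):
    """Equation (1): irradiance-weighted mean reflectance over [a, b] nm."""
    mask = (wavelength >= a) & (wavelength <= b)
    return (np.sum(reflectance[mask] * irradiance[mask])
            / np.sum(irradiance[mask]))

# Illustrative 1 nm grids (placeholders for the real measured spectra).
wl  = np.arange(400, 1751)                 # 400-1750 nm, 1 nm resolution
rho = np.where(wl < 700, 0.05, 0.45)       # crude vegetation-like step
irr = np.ones_like(wl, dtype=float)        # flat irradiance placeholder

vis = broadband_albedo(wl, rho, irr, 400, 760)    # visible band
sw  = broadband_albedo(wl, rho, irr, 400, 1750)   # shortwave band
```

With a real irradiance spectrum, wavelengths carrying more solar energy contribute proportionally more to the broadband value.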

The third purpose was to determine a factor for converting the visible band albedo to the shortwave band albedo. This conversion factor is needed in order to obtain an estimate of the landscape shortwave albedo from the drone mosaicked image, because the drone image consisted of only three visible bands. From the visible and shortwave band albedos for the ground targets, we determined a mean ratio of shortwave to visible band albedo for non-vegetation features, and a mean ratio for the vegetation features.
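As an illustration with hypothetical (visible, shortwave) albedo pairs, the two mean conversion ratios reduce to:

```python
import numpy as np

# Hypothetical (visible albedo, shortwave albedo) pairs for ground targets.
veg_pairs    = np.array([[0.06, 0.19], [0.05, 0.17], [0.07, 0.21]])
nonveg_pairs = np.array([[0.12, 0.14], [0.30, 0.33], [0.08, 0.10]])

# Mean shortwave-to-visible ratio for each class.
ratio_veg    = np.mean(veg_pairs[:, 1] / veg_pairs[:, 0])
ratio_nonveg = np.mean(nonveg_pairs[:, 1] / nonveg_pairs[:, 0])
```

The vegetation ratio is expected to be much larger than the non-vegetation ratio, since healthy vegetation is dark in the visible band but highly reflective in the near-infrared; the numbers above are placeholders chosen only to reflect that contrast.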

#### *2.4. Landscape Albedo Estimation*

Figure 3 depicts the workflow of albedo estimation at the landscape scale. (i) A mosaicked image of the landscape was produced from the drone photographs using the Agisoft Photoscan software. (ii) The calibration functions based on the spectrometer measurement were used to convert the DN value of each pixel in the mosaicked image to spectral albedo in the three wavebands (red, green, and blue). (iii) The Landsat 8 visible band albedo algorithm (Equation (2) below) was validated against the visible band albedo of the ground targets. The validated Landsat 8 conversion algorithm was then used to determine the visible band albedo of each pixel in the whole image. The Landsat 8 algorithm is given as [30]:

$$
\alpha_{vis/Landsat8} = 0.5621 \alpha_2 + 0.1479 \alpha_3 + 0.2512 \alpha_4 - 0.0015 \tag{2}
$$

Here, *α*2, *α*3, and *α*4 represent the blue, green, and red spectral albedos calculated in (ii), respectively. (iv) Pixels in the mosaicked image were classified as vegetation and non-vegetation types. (v) The shortwave albedo of the vegetation and non-vegetation pixels was obtained by multiplying their visible band albedo by the ratio of shortwave to visible band albedo obtained with the spectrometer for the vegetation ground targets and for the non-vegetation targets, respectively. (vi) The landscape shortwave albedo was calculated as the mean value of the pixels in the drone image mosaic. The visible and shortwave band albedo values were reported as the mean ± 1 standard deviation of all the pixel albedo values in the mosaic.
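Steps (iii) through (vi) can be condensed into a short sketch. The array names, the classification mask, and the ratio values are hypothetical; the coefficients are those of Equation (2).

```python
import numpy as np

def landscape_shortwave_albedo(blue, green, red, veg_mask,
                               ratio_veg, ratio_nonveg):
    """Per-pixel visible albedo via Equation (2), a class-dependent
    visible-to-shortwave conversion, then the landscape mean and spread.

    blue/green/red: spectral-albedo arrays from the calibrated mosaic.
    veg_mask: boolean array flagging vegetation pixels (classification).
    """
    # Equation (2): Landsat 8 visible band albedo from the three bands.
    alpha_vis = 0.5621 * blue + 0.1479 * green + 0.2512 * red - 0.0015
    # Step (v): scale by the class-specific shortwave/visible ratio.
    alpha_sw = alpha_vis * np.where(veg_mask, ratio_veg, ratio_nonveg)
    # Step (vi): landscape mean +/- 1 standard deviation.
    return alpha_sw.mean(), alpha_sw.std()
```

In practice the three inputs are the full-resolution calibrated mosaic bands and the mask comes from the supervised classification of Section 2.2.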

**Figure 3.** Workflow for estimating landscape visible and shortwave band albedo.

#### *2.5. Retrieval of Landscape Albedo from the Landsat Satellite*

Landsat 8 Operational Land Imager (OLI) surface reflectance products were used as a reference for evaluating the landscape visible and shortwave band albedo obtained with the drone images. These products are atmospherically corrected from the Landsat 8 top-of-atmosphere reflectance using the Second Simulation of the Satellite Signal in the Solar Spectrum, Vectorial (6SV) model [35]. They perform better than the Landsat 5/7 products by taking advantage of the new OLI coastal aerosol band (0.433–0.450 μm), which is beneficial for detecting aerosol properties [35]. The Landsat image was acquired on 6 October 2015 under clear-sky conditions, at WRS-2 path 13 and row 31. The corresponding surface reflectance product can be ordered from https://earthexplorer.usgs.gov/. We used 72 pixels of the image that corresponded roughly to the drone image of the Brooksvale Park, and 16 pixels for the Yale Playground. The Landsat sensor has a relatively small field of view (15°), and therefore a BRDF correction is not applied in its surface reflectance product [36]. We used the Landsat 8 snow-free visible (Equation (2)) and shortwave band albedo coefficients (Equation (3)) to obtain the Landsat 8 validation values [30].

$$\alpha_{SW/Landsat8} = 0.2453 \alpha_2 + 0.0508 \alpha_3 + 0.1804 \alpha_4 + 0.3081 \alpha_5 + 0.1332 \alpha_6 + 0.0521 \alpha_7 + 0.0011 \tag{3}$$

Here, *α*2, *α*3, *α*4, *α*5, *α*6, and *α*7 represent the spectral surface reflectances of six bands of Landsat 8 (450–510 nm, 530–590 nm, 640–670 nm, 850–880 nm, 1570–1650 nm, and 2110–2290 nm, respectively) [37]. Although it is possible to use the information retrieved from MODIS to make BRDF corrections to the Landsat albedo [29,30], this correction was not performed here, to be consistent with the drone methodology, which does not account for BRDF behaviors either.
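Equation (3) can be transcribed directly for checking the band weighting; the function below is a plain restatement of the published coefficients [30], with the band reflectances passed in order.

```python
def landsat8_shortwave_albedo(a2, a3, a4, a5, a6, a7):
    """Equation (3): Landsat 8 snow-free shortwave albedo from the six
    band surface reflectances (blue, green, red, NIR, SWIR1, SWIR2)."""
    return (0.2453 * a2 + 0.0508 * a3 + 0.1804 * a4
            + 0.3081 * a5 + 0.1332 * a6 + 0.0521 * a7 + 0.0011)
```

Note that the six coefficients sum to roughly 0.97, so a spectrally flat surface maps to a shortwave albedo slightly below its band reflectance plus the 0.0011 offset.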
