1. Introduction
The use of hyperspectral remote sensing in vegetation studies is well established for a wide array of ecosystems (e.g., [1,2,3,4,5]). Hundreds of contiguous spectral bands, most commonly within the visible, near-infrared, and shortwave infrared regions of the electromagnetic spectrum (0.4–2.5 µm), provide detailed information on plant chemical and structural characteristics that are useful for species richness assessments (e.g., [2,6]), invasive species studies (e.g., [7,8]), and plant health (e.g., [9,10,11]). Absorption features related to photosynthetic pigments (e.g., chlorophyll) and other constituents (e.g., water and nitrogen) provide a unique fingerprint of plant condition that is expressed in the vegetation spectral signature. Therefore, variations in the shape, and sometimes amplitude, of the spectral signature can be used, for example, to identify individual species or vegetation traits [3,4,5]. Both whiskbroom (e.g., the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [12]) and pushbroom (or line imager) hyperspectral sensors such as HyMAP™ [13,14], the Compact Airborne Spectrographic Imager (CASI), and the Shortwave Airborne Spectrographic Imager (SASI) [15,16] have been thoroughly tested (e.g., sensor characterization) and extensively used in airborne hyperspectral research campaigns over the last 25 years [12]. Whiskbroom systems use a mirror scanning side to side to collect measurements from one pixel at a time, whereas pushbroom systems collect an entire row of pixels simultaneously in the forward motion of the sensor. The spatial extent covered by airborne hyperspectral missions is suitable for ecological studies covering tens to a few hundred km² (e.g., [14,16]). Due to its relatively high cost, airborne hyperspectral remote sensing has, in general, low temporal resolution compared to satellite platforms, which limits its utility as a monitoring tool (e.g., assessing short- or long-term changes). As such, airborne hyperspectral data are better suited for contributing to integrated approaches for biodiversity assessments and ecological monitoring.
In an ideal monitoring program, a bottom-up approach that takes into account different spatial, spectral, and temporal scales is suggested for consistent Earth observations [1]. Such an approach should provide high-fidelity in-situ measurements (i.e., field spectroscopy) that represent the true spatial and spectral variability of the ecosystem under study (e.g., [17,18]). These in-situ measurements must complement the wealth of information provided by the airborne hyperspectral data. For instance, field spectroscopy measurements have been shown to achieve high classification accuracy [3,19] and improve field-based models (e.g., leaf to canopy) [20]. For large regions or ecosystems (hundreds to thousands of km²), the rich archive of freely available satellite data (and derived products) at medium spatial (<30 m) and spectral resolutions (7–12 bands), with a moderate temporal resolution (e.g., Landsat, Sentinel-2), is essential for biodiversity assessments and ecological monitoring. Nevertheless, due to the high cost of implementing a representative and adequate sample (i.e., sufficient spectral plots and well-calibrated targets), for instance in areas with poor accessibility (e.g., peatlands), field spectroscopy data are still limited to minimal sampling efforts (e.g., [21]). The rapid evolution in the development and implementation of relatively inexpensive and “easy to use” UAV platforms for agriculture, forestry, and ecological applications over the last decade [22,23,24,25] has the potential to bridge the gap between in-situ and airborne observations [26]. For instance, Structure from Motion derived orthomosaics and 3D surfaces at ultra-high ground sampling distances (e.g., 1 cm) [27] can provide extremely detailed biophysical parameters (e.g., vegetation structure) [28], in some cases with higher accuracy than the more expensive Light Detection and Ranging (LiDAR) systems [24].
While RGB photogrammetry is becoming standard practice in the aforementioned fields, UAV hyperspectral remote sensing is still in its early stages of development and is yet to become fully operational. Noteworthy steps have been made over the last five years in testing UAV hyperspectral pushbroom imagers [26,29]. However, such systems are still expensive, and many challenges remain, including battery performance (i.e., UAV flight duration) and the geocorrection of the imagery derived from the pushbroom systems. Generally, a “fully operational” UAV hyperspectral system costs >US$100k, as it requires a UAV platform that can support a total takeoff weight (including payload) of approximately 10–20 kg, the hyperspectral sensor, a differential Global Positioning System (GPS), and an Inertial Measurement Unit (IMU). Moreover, as pushbroom scanners record hundreds of adjacent lines in the forward direction of travel, the system is highly sensitive to the sensor’s motion in the three axes of roll, pitch, and yaw. This motion is exaggerated in UAV systems due to the ultra-high resolution at which the data are recorded [26,30] and the motion of the relatively lightweight rotorcraft. Therefore, in order to produce usable geocorrected imagery (e.g., minimal no-data pixels and distortions), these systems require, besides the airframe and hyperspectral sensor, a differential GPS and an IMU capable of capturing this motion at very high temporal intervals (e.g., 100 Hz attitude data) [26]. Finally, as with airborne hyperspectral systems, UAV hyperspectral systems require ground targets to improve the georeferenced products (centimeter accuracy), as well as known reflectance targets to produce (or validate) radiometrically corrected data (i.e., radiance or reflectance) [31], especially when a real-time downwelling irradiance sensor is not available.
In this paper, we introduce a novel UAV hyperspectral system composed of a heavy-lift hexacopter, the Matrice 600 Pro (DJI, Shenzhen, China), the micro Compact Airborne Spectrographic Imager (μCASI; 288 spectral bands from 401–996 nm) (ITRES Research Ltd., Calgary, AB, Canada), and an IMU/GNSS unit developed for the μCASI by the Flight Research Laboratory (FRL) of the National Research Council of Canada (NRC). To the best of our knowledge, this system is the first integration of the μCASI on a low-altitude UAV platform. The μCASI is also usable on an airborne platform (e.g., potential for upscaling comparative studies), providing a new alternative to other commercial systems previously described in the literature. Innovative UAV hyperspectral systems are important to diversify the currently limited availability of options on the market. The UAV-μCASI system is the product of a collaborative effort between the Hyperspectral & Aeromagnetics Group at NRC and the Applied Remote Sensing Laboratory at McGill University. The objective of this system is to complement airborne hyperspectral research for environmental applications as well as to advance the understanding of UAV hyperspectral systems (e.g., geometric and radiometric calibration), as described here. In addition to the description of the UAV-μCASI system, we present mission planning aspects important for optimizing hyperspectral image quality. We also show results from three case studies addressing peatland research, invasive species monitoring, and endangered tree species mapping as part of the Canadian Airborne Biodiversity Observatory (CABO) project. Our study adds novel aspects related to UAV hyperspectral system implementation and highlights its significant potential for environmental monitoring. The challenges that must be overcome for this type of system to become fully operational (e.g., turnkey) are addressed in the discussion section.
3. Results
Our UAV-μCASI system’s payload weighs a total of 7895 g (Table 1). Allowing for a remaining 30% battery charge at landing as a safety measure resulted in flight times of 10–12 min in low wind conditions with high-performance 5700 mAh batteries. At a flight speed of 2.7 m/s, it is possible to acquire a flight line of up to approximately 1200 m. At an altitude of 45 m AGL, the image swath is 27 m wide with a 4 cm geocorrected pixel size.
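These acquisition limits can be sketched numerically. In the sketch below, the ~3 min overhead for takeoff, landing, and the calibration hover is an assumed figure, and the across-track field of view is inferred from the reported swath and altitude rather than taken from the sensor specification:

```python
import math

# Values reported in the text
flight_time_min = 11.0   # usable flight time (10-12 min reported; midpoint used)
speed_mps = 2.7          # optimal flight speed
altitude_agl_m = 45.0    # flight altitude above ground level
swath_m = 27.0           # image swath at 45 m AGL

# Maximum flight-line length, leaving assumed ~3 min of overhead for
# takeoff/landing and the hover needed for sensor calibration.
overhead_min = 3.0
line_length_m = (flight_time_min - overhead_min) * 60 * speed_mps
print(f"max line length ~{line_length_m:.0f} m")  # ~1300 m, close to the ~1200 m reported

# Across-track field of view inferred from swath and altitude
fov_deg = 2 * math.degrees(math.atan(swath_m / 2 / altitude_agl_m))
print(f"inferred across-track FOV ~{fov_deg:.1f} deg")
```

The inferred FOV is only a geometric back-calculation; the true optical FOV would come from the μCASI specification sheet.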
Figure 8 shows an example of the uncorrected GPS (Figure 8A) and RTK (Figure 8B) logs for a flight line from Ile Grosbois. Values in Figure 8A,B,D represent orthometric height, whereas Figure 8C illustrates relative height (AGL) as recorded by the M600P’s barometer. Differences in height recorded by the three GPS receivers are 0.18 m between GPS0 (μ = 56.60 m) and GPS1 (μ = 56.78 m), and 1.37 m and 1.19 m compared to GPS2 (μ = 57.97 m), respectively. The RTK corrected data in Figure 8B reveal no difference in the mean orthometric height (μ = 54.82 m) between the three receivers. Height measurements above ground provided by the barometer (Figure 8C) indicate a maximum variation of 10 cm during flight, while the IGDR (Figure 8D) recorded an average orthometric height of 55.62 m (range 0.27 m).
The heading recorded by the A3 Pro’s IMUs from the same flight indicates a larger variation between ATTI0 (mean 254.4°) and ATTI1 (mean 259.5°) and ATTI2 (mean 259.1°). ATTI0 recorded a mean heading similar to that of the IGDR (mean 254.3°) (Figure 9).
Attitude data for the M600P from the same Ile Grosbois flight line illustrate congruent measurements, with minimal differences in roll (Figure 10A) and pitch (Figure 10B) from the three IMUs. The magnitude of the roll recorded for the airframe is larger than for the payload (Figure 10C) due to the stabilization provided by the gimbal. Corrections by the airframe in response to wind gusts can be seen in Figure 10A between 50 and 90 s into the flight. The large negative pitch at the beginning of the flight line is characteristic of the manner in which rotorcraft UAVs transition from a stationary position to forward motion; the nose of the airframe initially pitches down and then levels off as the flight speed is attained. The reverse can be seen when the airframe slows to a stationary position at the end of the flight line. It is important to note that the airframe maintains a slight forward pitch during flight (~3.5°–4°). In the IGDR data, the large negative pitch recorded as the airframe begins to pick up speed lasts less than 2 s before the gimbal compensates for the motion. As expected, due to the stabilization provided by the gimbal during the flight, the average roll (0.4° ± 1.1°) and pitch (−2.7° ± 0.5°) recorded by the IGDR are lower than those of the airframe (roll μ = 0.74° ± 1.2°; pitch μ = −3.9° ± 0.96°). The maximum pitch recorded during flight (outside of the acceleration and deceleration regions) is −7.0° for the airframe and −4.7° for the gimbal.
Our speed test results from the Mer Bleue site indicate that the optimal flight speed to maximize μCASI image quality is approximately 2.7 m/s (Figure 11, Table 2). Pitch and roll recorded by the IGDR at speeds of 8.2–10.9 m/s revealed a proportional increase in the distance traveled before the gimbal began to compensate, resulting in longer lead-in distances before quality imagery could be acquired. At 13.6 m/s, the gimbal remained at a pitch in excess of −20°, while at 10.9 m/s the pitch ranged from −8° to −7° for the majority of the flight. For speeds of 1.8 to 5.4 m/s, the roll and pitch were in the same range as seen in Figure 10C. Flight speed further affected the number of GNSS observations in each flight line (Figure 11). As expected, at lower speeds, less interpolation with the Kalman filter between actual observations of the spatial position of the μCASI is necessary; with a constant frequency of the GNSS observations following differential correction, faster speeds result in larger spatial gaps. At speeds of 5.4–13.6 m/s, the distance required for the M600P to achieve full speed, followed by deceleration at the end of the flight line, can be seen in the variable distances between observations.
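The widening gap between successive fixes follows directly from the fixed observation rate. A minimal sketch, assuming a placeholder 1 Hz post-differential rate (the actual IGDR rate is not stated here):

```python
# Spacing between successive GNSS fixes at a constant observation rate.
# The 1 Hz rate is an assumed placeholder; the actual IGDR rate may differ.
gnss_rate_hz = 1.0
for speed in (1.8, 2.7, 5.4, 8.2, 10.9, 13.6):   # tested flight speeds (m/s)
    spacing_m = speed / gnss_rate_hz
    print(f"{speed:>4.1f} m/s -> {spacing_m:.1f} m between fixes")
```

Whatever the true rate, the spatial gap the Kalman filter must interpolate across scales linearly with flight speed.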
Flight speed also affected the along-track resolution, coverage, and overall pixel aspect ratio (Table 2). As speed increases, given a constant altitude, integration time, and frame time, the pixel aspect ratio (ATR/XTR) increases. At 13.6 m/s, for example, each pixel is nearly 13 times longer than it is wide. The along-track coverage, expressed as the percent covered at the Full-Width-Half-Maximum (FWHM) of the point spread function in the along-track direction, decreases with increased speed. At faster flight speeds, more pixels would need to be summed in the across-track direction, increasing the overall geocorrected pixel size. At 2.7 m/s, we maximize the along-track coverage (98.8%) without oversampling, and minimal summation is required in the across-track direction for 3–4 cm geocorrected pixels.
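The linear dependence of the aspect ratio on flight speed can be sketched as follows; the frame time and across-track resolution used here are illustrative placeholders, not the actual acquisition settings:

```python
# Along-track pixel length = flight speed x frame time; the aspect ratio
# ATR/XTR therefore grows linearly with speed at fixed altitude and frame
# time. Frame time and XTR below are illustrative assumptions only.
frame_time_s = 0.009     # assumed ~9 ms frame time
xtr_m = 0.0162           # assumed across-track resolution (m)
for speed in (1.8, 2.7, 5.4, 8.2, 10.9, 13.6):   # tested flight speeds (m/s)
    atr_m = speed * frame_time_s
    print(f"{speed:>4.1f} m/s: ATR = {atr_m*100:.2f} cm, aspect ratio = {atr_m/xtr_m:.1f}")
```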
Based on the bundling adjustment process carried out at the Mer Bleue site, our results indicate a decrease of 40.9 cm in the magnitude of the spatial position error relative to the location of the GCPs, compared to the geocorrected image (4 cm pixel size) without the bundling adjustment applied (Table 3). The spread around the mean also decreases, from 0.298 m to 0.185 m in the Easting direction (ΔE = 11.3 cm) and from 0.237 m to 0.129 m in the Northing direction (ΔN = 10.8 cm) (Table 3).
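The per-axis improvement can also be expressed in geocorrected pixels as a quick check against the reported values:

```python
# Reduction in the spread of GCP position residuals after the bundling
# adjustment, using the values reported in the text (in metres).
pixel_m = 0.04                 # geocorrected pixel size
de = 0.298 - 0.185             # Easting spread reduction (m)
dn = 0.237 - 0.129             # Northing spread reduction (m)
print(f"dE = {de*100:.1f} cm (~{de/pixel_m:.1f} px)")
print(f"dN = {dn*100:.1f} cm (~{dn/pixel_m:.1f} px)")
```

Both reductions correspond to roughly 2–3 geocorrected pixels per axis, consistent with the improvement quoted in the Discussion.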
Figure 12 illustrates an example of the geocorrected images in radiance (μW/cm² sr nm) for the three study areas derived using the bundling adjustment process. Table 4 indicates that, for all lines shown in Figure 12, the percentage of rejected pixels following geocorrection ranges from 0.1% at Mer Bleue to 0.79% at CGOP. The use of a DSM in the geocorrection process at Mer Bleue (Table 5) resulted in small differences between the Easting and Northing coordinates of the GCP locations with and without the DSM (ΔE = 2.7 cm, ΔN = 0.3 cm). The magnitude of the error for the difference is 2.7 cm.
An example of the spectral profile of a Garry Oak canopy pixel is shown in both radiance (Figure 13A) and reflectance following atmospheric correction with ATCOR® 4 (Figure 13B). Figure 13C illustrates the similarities between the ASD in-situ field spectral measurements (Section 2.3.2) of the Flexispec™ and Permaflect™ panels and the corresponding pixels from the atmospherically corrected μCASI image. Across the 400–900 nm wavelength range, the absolute difference between the ASD and μCASI is 1.5 ± 0.8% for the 18% Flexispec™ panel and 1.0 ± 0.5% for the 10% Permaflect™ panel. Due to this high degree of similarity between the in-situ measurements and the imagery, the scene-based calibration from ATCOR® 4 was not applied, because it would have introduced additional uncertainties rather than refinements to the atmospheric correction.
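The panel comparison metric is a mean absolute reflectance difference over 400–900 nm. A sketch with synthetic placeholder spectra standing in for the ASD and μCASI measurements:

```python
import numpy as np

# Mean absolute reflectance difference between two spectra over 400-900 nm,
# as used to compare ASD field spectra with atmospherically corrected
# uCASI pixels. The spectra below are synthetic placeholders, NOT real data.
wl = np.linspace(400, 900, 288)                  # 288-band wavelength grid
asd = 0.18 + 0.005 * np.sin(wl / 50)             # assumed ~18% panel spectrum
ucasi = asd + np.random.default_rng(0).normal(0, 0.01, wl.size)  # + noise

mask = (wl >= 400) & (wl <= 900)
diff = np.abs(asd[mask] - ucasi[mask]) * 100     # percent reflectance
print(f"absolute difference: {diff.mean():.1f} +/- {diff.std():.1f}%")
```

With real spectra the two instruments' wavelength grids would first need to be resampled to a common grid before differencing.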
As seen in Figure 13D, due to the proximity of the trees to the location of the panels, in-scattering can be seen in the diffuse illumination measurement of the 99% Spectralon™ panel. This contamination is not seen in the measurement from the open field. As seen in Figure 14, panels in the CGOP Meadow flight line were placed in the largest open area. Nevertheless, due to the height of the trees on either side, in-scattering was unavoidable. Under these circumstances, the panels were used solely for verification of the quality of the atmospheric correction.
4. Discussion
Our study brings to the attention of the remote sensing community a new micro hyperspectral sensor, the μCASI, which had not been previously discussed in the literature (e.g., [43]). One of the most notable advances made by our study is the demonstration of the radiometric and geometric data quality of the UAV-μCASI system. For example, the agreement between the in-situ field spectral measurements of the laboratory-calibrated reference panels and the atmospherically corrected imagery suggests the imagery from our system can be used with a high degree of confidence for research questions requiring spectral fidelity. Our study further provides insights into aspects of data collection that influence the quality of the imagery and showcases the importance of the collection and rigorous processing of supplementary in-situ data (e.g., SPN1, field spectra, GCPs) for generating and assessing the quality of the imagery. Because of the ultra-high spatial resolution of the geocorrected imagery, another fundamental contribution of our study is the detailed investigation of the performance of the airframe, gimbal, navigation system, and IGDR. These aspects, while often overlooked, provide valuable insights into optimizing flight planning for a given system or configuration, as discussed below.
We demonstrate that there are a few fundamental aspects to consider when implementing a UAV-based pushbroom hyperspectral system, given the high spectral resolution (e.g., >200 bands) and ultra-high spatial resolution (<5 cm) at which the imagery is acquired. Ultimately, these high-end and expensive systems (~US$100,000–350,000) have shown great potential for mapping and ecosystem characterization in different applications [26,29,44]. However, they are not yet 100% ‘out-of-the-box’ operational (i.e., “turnkey”), still requiring considerable effort from expert users. Including ancillary data collection equipment (e.g., the RTK GNSS receivers, pyranometer, and field spectrometer), our system is still within the price range of other expensive hyperspectral systems and requires a crew of 5–7 people for full deployment. Further research is needed to fully understand the integration of the different components and to decrease system costs. For instance, our research group initially debated whether the hyperspectral sensor should be hard mounted (e.g., [26]) or mounted on a gimbal (e.g., [29]). The GNSS positioning and attitude information from the airframe’s IMU(s) are important (but not available for all UAV platforms) for assessing the potential errors introduced by the inaccuracies of each component. Moreover, the UAV’s flight logs provide data on the precision of the flight lines and whether there are notable changes to planned flight paths caused by external forces (e.g., wind) or system failure. These logs are also used for identifying geolocation errors to within a few centimeters of accuracy [45], and, with the right configuration, they could be used for direct georeferencing when no GCPs are available [46]. For instance, our results indicated important differences between the three GNSS receivers of the A3 Pro flight controller and the integrated RTK corrected altitudes (Figure 8A,B). As expected, variations in the GNSS receivers are minimized by the RTK-enabled flight controller. Height variations from the corrected position data are very similar to those of the μCASI’s IGDR (Figure 8B,D). A direct comparison between the absolute values can only be made after standardization of the horizontal and vertical coordinate systems. For example, the IGDR records the 3D position information in latitude/longitude WGS84 (horizontal datum) with the EGM96 geoid, whereas the A3 Pro uses the same horizontal reference frame but, at the time of data collection, calculated the orthometric height based on the EGM2008 geoid. A further complication is that the base station data from Smartnet are provided in NAD83CSRS for the horizontal reference frame (with the CGVD2013 geoid). Furthermore, the geocorrection module requires input data in WGS84 with EGM96, providing an output in UTM (WGS84). As such, care must be taken with all spatial coordinates to ensure no error is introduced through reprojection.
Heading data obtained from the UAV platform and the μCASI’s IGDR showed better congruence with the primary IMU (ATTI0) (mean difference 0.3°) than with the redundant systems (ATTI1, ATTI2) (Figure 9A,B). Furthermore, extracting the attitude logs from the platform allowed us to assess the utility of the gimbal. Roll and pitch results revealed that, once the system is stable (i.e., flight speed has been achieved), lower roll and pitch are recorded by the IGDR than by the M600P airframe, as expected. In order to travel in a forward motion, a greater force from the rear rotors causes the nose to tilt down; at a speed of 2.7 m/s, the M600P maintained a forward pitch of 3.5–4°. Commercial off-the-shelf systems such as the M600P are subject to ongoing mandatory firmware and software updates by the manufacturer. These updates may include “minor” changes that could have profound impacts on data quality when the airframe is used for hyperspectral image collection. For example, the geoid used by the M600P at the time of data acquisition may change in a future update (DJI Technical Support, personal communication, 2018).
A thorough technical understanding of the hyperspectral sensor and mission planning is also essential for collecting geometrically and radiometrically accurate data. For example, because the μCASI is a pushbroom sensor, the flight speed, integration time, and frame time play a role in determining the pixel size. At a given altitude, integration time, and frame time, faster speeds result in pixels that are longer in the along-track (forward) direction than in the across-track direction (Table 2). Moreover, flight speed also affects the number of GNSS positions recorded by the IGDR (Figure 11) that are necessary for image post-processing. Our ideal speed of 2.7 m/s is comparable to that used by Reference [29] in two study areas in France (i.e., 3 m/s and 4 m/s), and by Reference [26] (i.e., 2.5 m/s) using a different hyperspectral pushbroom sensor. Also, given the configuration of the UAV-μCASI system, to account for the boresight offset (i.e., the offset between the INS and the sensor) [29,36], the bundling adjustment results (Table 3) showed an improvement of approximately 2–3 geocorrected image pixels in the Easting and Northing directions (ΔE = 11.3 cm; ΔN = 10.8 cm). Mission planning also depends on the sensor characteristics. For example, we currently hover the system within the geofence at the beginning of each line in order for the μCASI to complete its internal calibration. The UAV must then exit the geofence at the end of each flight line in order to complete writing the file to disk.
Because mission planning takes into account wind speed, solar illumination, ambient temperature, topography, and current UAV regulations in Canada, deployments with the UAV-μCASI are still an intricate task. Wind (sustained wind speed and gusts) is a primary factor that affects the system stability and the quality of the data. We limit data collection to sustained wind speeds below 9.7 knots (5 m/s) with maximum wind gusts less than 19.4 knots (10 m/s). Not only is this important for the system itself, but in high wind conditions the movement of the vegetation increases, resulting in increased motion blur and an overall decrease in image quality both for the μCASI and for the SfM-MVS data. The requirement for clear sunny days with only minimal homogeneous cloud cover limits the number of days on which the system can be deployed. Because in many instances we deploy in areas with variable sky conditions, the use of ultra-wide-angle photographs and the pyranometer for incident solar radiation improves our understanding of the radiance products (Figure 7A,B). To date, we have operated the UAV-μCASI in air temperatures from 10 °C at Mer Bleue in the spring to 34 °C at CGOP in the summer, where a cooling system was necessary for the batteries before recharging them.
Topography is a variable that can greatly affect the geocorrected imagery, and therefore implementing a DSM is recommended [35]. We consider vegetation to be a considerable component of the topography. For example, at CGOP, neighboring pixels on the border of a tree crown could have 20 m of vertical separation between them (i.e., tree crown vs. ground), which leads to changes in viewing geometry. For pixels <5 cm, parallax errors induced by this difference require additional research. Currently, the geocorrection module cannot correct for such large parallax errors with the very small pixels, but improvements to the module are under development and will be tested in the future. While we are continuing to improve the geocorrected output for the three sites, the number of rejected pixels is minimal (Table 4) and the spatial errors are within 2–3 pixels. As an initial assessment, the comparison between the atmospherically corrected μCASI pixels and the ASD in-situ field spectra indicates minimal differences (Figure 13). Additional improvements are necessary to remove the residual atmospheric features in the NIR wavelengths. The spot size for the ASD in-situ field spectral measurements is 13.9 cm in diameter. With the along-track resolution of 3.24 cm (XTR = 1.62 cm, ATS = 2.43 cm) for this flight line, and taking into consideration the theoretical point spread function of the μCASI, the area contributing to the signal from both instruments is similar. Further work also needs to consider characterizing and minimizing BRDF effects. Finally, regulations play a significant role, and they evolve rapidly. Airspace and proximity to aerodromes, for instance, are factors affecting the maximum altitude at which data can be recorded.
Most airborne hyperspectral image acquisition systems and the associated processing theory and methods have been developed over the last two decades for the larger pixels obtained from aircraft platforms (e.g., 1–5 m) [1,4]. In these cases, positional errors of tens of centimeters (or even several meters) were considered excellent because they were subpixel. With the ultra-high spatial resolution obtained from UAV platforms, these relatively ‘small’ errors become increasingly important. An error of 10 cm is no longer subpixel; it is on the order of 3–4 pixels. Furthermore, the parallax errors that were readily dealt with for most vegetated terrain (even forest) from an aircraft acquiring imagery from 1000–3000 ft and above are magnified in low-altitude UAV flights with ultra-fine pixels, resulting in complications analogous to extremely mountainous terrain. These new challenges will require the development or refinement of algorithms specifically targeted at low-altitude UAV-based hyperspectral imagery, as well as more precise hardware, such as GNSS systems designed with ultra-high spatial resolution data in mind. To expand the operational envelope of UAV-based hyperspectral data collection, advances are needed in battery power for longer flight times. However, this is a trade-off, because illumination conditions and the ability of the system to record and store the data onboard must also be considered. An extremely long flight time (e.g., several hours) is not necessarily useful if the window of appropriate illumination conditions for acquiring the data is narrow, or if the system cannot store such a large volume of data onboard. A single 600 m long geocorrected flight line (4 cm geocorrected pixel size) from the UAV-μCASI system is >50 GB (raw file size ~22 GB). Computational resources, not only for writing the file to disk upon acquisition (i.e., I/O throughput, storage media write speed) but also for preprocessing, analysis, and long-term storage, need to be kept in mind when planning UAV hyperspectral data acquisition campaigns.
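The raw volume scales linearly with line length, frame rate, band count, and sample depth. A rough sketch, in which the across-track sample count, frame rate, and 16-bit sample depth are assumptions chosen to be broadly consistent with the ~22 GB raw size reported for a 600 m line:

```python
# Rough raw data-volume estimate for a pushbroom imager. The across-track
# pixel count, frame rate, and bytes-per-sample are assumed placeholders,
# not published uCASI specifications.
bands = 288                   # uCASI spectral bands (from the text)
xtrack_pixels = 1600          # assumed across-track samples per frame
bytes_per_sample = 2          # assumed 16-bit samples
frame_rate_hz = 110           # assumed frames per second
speed_mps = 2.7               # optimal flight speed (from the text)
line_length_m = 600           # flight-line length (from the text)

frames = line_length_m / speed_mps * frame_rate_hz
raw_gb = frames * bands * xtrack_pixels * bytes_per_sample / 1e9
print(f"~{raw_gb:.0f} GB raw for a {line_length_m} m line")
```

Under these assumptions the estimate lands near the reported ~22 GB, and it makes clear why doubling the flight time or the frame rate doubles the onboard storage and I/O requirements.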
5. Conclusions
Our hyperspectral system, in combination with a custom-built INS and the associated processing routines, produced geocorrected lines with minimal distortion and rejected pixels for three sites with different structural and ecological characteristics: an abandoned agricultural herbaceous field, a Garry Oak forest, and a peatland. The comparison with GCPs indicates positional errors of a few centimeters (Table 3 and Table 5). Given the ultra-high spatial resolution, high geo-positional accuracy is needed for ecosystem characterization. The minimal differences between the atmospherically corrected imagery and the in-situ field spectra of the panels indicate that a coincident irradiance sensor (currently under development) may not be necessary for producing high-quality reflectance images. Ancillary data are a further important component in the implementation of a hyperspectral system, as they allow for characterizing potential environmental changes. Characterization of the UAV platform is important for operating a pushbroom system. Although we tested the Matrice 600 Pro airframe, we believe that any platform that can safely carry the payload, minimize vibration, roll, pitch, and yaw of the payload, and provide detailed flight records could be used. Optimal environmental conditions for deployments will lead to less noise (radiometric and geometric) in the raw data. Therefore, wind speed, illumination angles, vegetation height, and temperature are all important considerations for producing the best imagery possible. Many commercial systems are marketed as full or near turnkey, but as our study illustrates, there are many steps and considerations involved in data collection (including ancillary data) and image processing before the generation of final data products. Users must be aware of all these components in order to maximize data quality.
Our system is now implemented in the CABO project, which is a Canadian effort aiming to understand and forecast how Canadian terrestrial ecosystems adapt to global change drivers (e.g., invasive species). The ultrafine spatial resolution of UAV hyperspectral imagery offers a unique opportunity to quantify the subpixel components of larger airborne or satellite pixels. As such, it can be used to complement field spectroscopy in order to increase the spectral sampling size and account for canopy architecture and structural influences on reflectance. Because each ecosystem is unique in its environmental (e.g., climate) and physical (e.g., topography, vegetation) characteristics, UAV deployments must be customized, taking into consideration the characteristics of each site. There is no single operational model that is appropriate in all cases. However, with careful consideration and planning, high-quality UAV-based hyperspectral imagery can be acquired for different ecosystems to allow for investigations of vegetation species and functional traits. It can thus begin to answer questions related to spectral and taxonomic diversity at a scale that was not possible before.