Article

The DOM Generation and Precise Radiometric Calibration of a UAV-Mounted Miniature Snapshot Hyperspectral Imager

1 Key Laboratory of Quantitative Remote Sensing in Agriculture, Ministry of Agriculture, Beijing Research Center for Information Technology in Agriculture, Beijing 100097, China
2 National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
3 School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454000, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work and should be considered co-first authors.
Remote Sens. 2017, 9(7), 642; https://doi.org/10.3390/rs9070642
Submission received: 5 April 2017 / Revised: 1 June 2017 / Accepted: 16 June 2017 / Published: 22 June 2017
(This article belongs to the Special Issue Earth Observations for Precision Farming in China (EO4PFiC))

Abstract
Hyperspectral remote sensing is used in precision agriculture to remotely and quickly acquire crop phenotype information. This paper describes the generation of a digital orthophoto map (DOM) and radiometric calibration for images taken by a miniaturized snapshot hyperspectral camera mounted on a lightweight unmanned aerial vehicle (UAV). The snapshot camera is a relatively new type of hyperspectral sensor that can acquire an image cube with one spectral and two spatial dimensions in one exposure. The images acquired by the hyperspectral snapshot camera need to be mosaicked together to produce a DOM and radiometrically calibrated before analysis. However, the spatial resolution of the hyperspectral cubes is too low to mosaic the images together. Furthermore, there are no systematic radiometric calibration methods or procedures for snapshot hyperspectral images acquired from low-altitude carrier platforms. In this study, we obtained hyperspectral imagery using a snapshot hyperspectral sensor mounted on a UAV. We quantitatively evaluated the radiometric response linearity (RRL) and radiometric response variation (RRV) and proposed a method to correct the RRV effect. We then introduced a method to interpolate position and orientation system (POS) information and generate a DOM with low spatial resolution and a digital elevation model (DEM) using a 3D mesh model built from panchromatic images with high spatial resolution. The relative horizontal geometric precision of the DOM was validated by comparison with a DOM generated from a digital RGB camera. A crop surface model (CSM) was produced from the DEM, and the crop height for 48 sampling plots was extracted and compared with the corresponding field-measured crop height to verify the relative precision of the DEM. Finally, we applied two absolute radiometric calibration methods to the generated DOM and verified their accuracy via comparison with spectra measured with an ASD Field Spec Pro spectrometer (Analytical Spectral Devices, Boulder, CO, USA). The DOM had high relative horizontal accuracy; compared with the digital camera-derived DOM, spatial differences were below 0.05 m (RMSE = 0.035). The determination coefficient for a regression between the DEM-derived and field-measured crop heights was 0.680. The radiometric precision was 5% for bands between 500 and 945 nm, and the reflectance curve in the infrared spectral region did not decrease as in previous research. The pixel and data sizes for the DOM corresponding to a field area of approximately 85 m × 34 m were small (0.67 m and approximately 13.1 megabytes, respectively), which is convenient for data transmission, preprocessing and analysis. The proposed method for radiometric calibration and DOM generation from hyperspectral cubes can be used to yield hyperspectral imagery products for various applications, particularly precision agriculture.

Graphical Abstract

1. Introduction

Hyperspectral remote sensing can be used to analyze the biophysical and biochemical characteristics of crops [1]. Such information is crucial for improving crop management (e.g., optimizing fertilizers, pesticides, seeds, etc.), achieving high yield, maximizing profits and avoiding needless waste of resources [2,3,4]. Traditionally, field spectrometers and aerial- or satellite-based sensors have been used to acquire hyperspectral data [5,6,7]. Aerial and satellite hyperspectral imaging technologies have shown potential for commercial agricultural applications, but expectations have not been fully realized due to the low spatial resolution of satellite imagery and the high cost of aerial platforms. Techniques for collecting remote sensing data are becoming more feasible due to the emergence of new carrier platforms and imaging sensors such as unmanned aerial vehicles (UAVs) and lightweight hyperspectral cameras. UAVs are being increasingly employed in fields such as agriculture, environmental monitoring, topographic cartography, military reconnaissance, geological prospecting, grassland monitoring and urban planning [8,9,10,11,12,13]. UAVs are becoming a competitive platform for remote sensing-based precision agriculture because they can be easily and cost-effectively deployed to acquire geospatial data with high temporal and spatial resolutions [14]. UAVs equipped with global navigation satellite systems (GNSS) and inertial navigation systems (INS) can derive geo-referenced images for applications in various geographic locations [15].
There are two main types of lightweight hyperspectral imaging sensors for UAVs: pushbroom and snapshot hyperspectral imagers. Typical pushbroom imagers are the Standard Micro-Hyperspec (Headwall Photonics, Fitchburg, MA, USA) and the OCI-1000 (Bayspec, San Jose, CA, USA), while typical snapshot (frame-based) imagers are the UHD 185 and UHD 199 (Cubert GmbH, Ulm, Baden-Württemberg, Germany) and the OCI-2000 (Bayspec, San Jose, CA, USA). Pushbroom hyperspectral imagers use a 2D detector array to capture one spatial and one spectral dimension simultaneously, whereas a snapshot sensor captures two spatial dimensions and one spectral dimension in a single exposure. The geometric quality of images scanned by pushbroom hyperspectral scanners is sensitive to position and orientation, which are established through a direct geo-referencing unit onboard the imaging platform [15]. Snapshot imaging sensors capture the whole frame at one instant to form an image with two spatial dimensions and tens to hundreds of bands; therefore, only the exterior orientation of the entire 3D cube needs to be determined, which avoids this geometric instability. In this study, we adopted the UHD 185 imaging spectrometer, which simultaneously captures a low spatial resolution hyperspectral cube and a high spatial resolution panchromatic (PAN) image in one shot.
Snapshot hyperspectral sensors show potential for further applications in precision agriculture. Studies utilizing hyperspectral imaging sensors remain sparse compared with those utilizing non-imaging hyperspectral sensors. ASD field spectroradiometers have long been used in agricultural applications [16], such as the inversion of crop parameters [17,18,19] and crop disease and pest monitoring [20,21,22]. Pushbroom hyperspectral sensors have also been developed and deployed for agricultural applications; in many cases, they are mounted on a rotatable platform that is placed on a tripod and remotely controlled by a computer [23,24]. Compared with field spectrometers, pushbroom hyperspectral imaging sensors are harder to deploy and require more people to operate, making them inconvenient and less efficient for proximal measurements. In contrast, snapshot hyperspectral devices produce smaller data products, require less computing and human resources, are easier to integrate with UAVs, avoid difficulties with image mosaicking and geometric distortion, and improve data acquisition efficiency.
Snapshot hyperspectral cubes need to be mosaicked together to obtain a panoramic hyperspectral DOM of an entire area. This is typically accomplished with structure-from-motion (SfM) photogrammetry [25,26], a process for recovering geometric structure from camera images taken at different camera stations. The general steps of the 3D reconstruction are as follows: (1) feature point extraction using the scale-invariant feature transform (SIFT) algorithm [27]; (2) feature matching [28]; (3) applying the SfM algorithm [29] and a bundle block adjustment [30] to recover the image poses [31] and build sparse 3D feature points; (4) building dense 3D point clouds from the camera poses estimated in Step (3) and the sparse 3D feature points using the multi-view stereo (MVS) algorithm [31]; and (5) building a 3D polygonal mesh of the object surface based on the dense cloud. Good results from the SfM algorithm rely significantly on the quality (proper image texture, image block geometry, divergence, etc.) and resolution of the input images, and it is beneficial to use high-quality images [32]. Many studies have proven the capacity of SfM photogrammetry for reconstructing 3D models from snapshot images [33,34,35,36]. However, due to technological limitations, the image resolution of snapshot hyperspectral cameras is not high enough for feature point extraction, so conventional SfM photogrammetry is not suited for direct mosaicking of snapshot hyperspectral cubes.
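As an illustration of Steps (1) and (2), the sketch below extracts SIFT keypoints from two overlapping PAN images and matches them with Lowe's ratio test using OpenCV; the file names are hypothetical, and a full pipeline such as PhotoScan adds bundle adjustment, MVS densification and meshing on top of such correspondences.

```python
# Minimal sketch of SfM Steps (1)-(2) under assumed file names.
import cv2

img1 = cv2.imread("pan_001.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical PAN frames
img2 = cv2.imread("pan_002.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # Step (1): SIFT keypoints and descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)          # Step (2): brute-force descriptor matching
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test
print(f"{len(good)} putative correspondences feed the SfM and bundle adjustment stages")
```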
The aim of radiometric calibration is to correctly convert raw data into physically-meaningful reflectance values, while removing deviations caused by the hyperspectral sensor itself and the atmosphere. Raw data from hyperspectral imaging sensors contain information about the interaction of solar radiation with the atmosphere, the Earth’s surface and the imaging sensor [37], and they are significantly affected by these environmental conditions and sensor characteristics [38]. Accurate radiometric calibration of hyperspectral imaging data is necessary to improve the consistency of datasets from different sources and to perform quantitative remote sensing analyses by reducing spatiotemporal variability in environmental and instrumental effects [39]. Radiometric calibration can be achieved by physically- or empirically-based methods [40]. In low-altitude applications, a linear approximation is feasible for generating reflectance products [41].
The objectives of this study were to generate a geometrically-registered DOM from hyperspectral cubes captured by a lightweight imaging spectrometer mounted on a UAV platform equipped with a position and orientation system (POS) and an inertial measurement unit (IMU), and to perform a precise radiometric calibration. The paper is structured as follows: Section 2 presents the hyperspectral data collection system and data acquisition processes, as well as the methods used to evaluate the radiometric response linearity (RRL), evaluate and correct the radiometric response variation (RRV), interpolate POS information, generate the DOM and perform the radiometric calibration. Section 3 presents the results and precision of the interpolated POS information and of the spectral and radiometric calibrations. The advantages, disadvantages and possible improvements of the snapshot hyperspectral imager are discussed in Section 4, where the methods employed in this study are also analyzed and compared with those used in previous studies. We also discuss other factors that may influence the geometric precision and radiometric calibration of the DOM (e.g., the accuracy of POS information recorded with the global positioning system (GPS) and IMU, and changes in environmental temperature). Finally, we discuss potential applications of the hyperspectral camera in agriculture.

2. Materials and Methods

2.1. Experimental Equipment

2.1.1. Hyperspectral Imaging System

The UAV-based hyperspectral imaging system includes four main components: a hyperspectral camera, a miniaturized computer, a multi-rotor UAV, and GPS and INS modules. The UHD 185 imaging spectrometer offers a reasonable balance between spatial and spectral resolution; its main performance parameters are listed in Table 1. The UHD 185 is compact and lightweight (approximately 470 g). It captures 138 spectral bands with a sampling interval of 4 nm; the manufacturer recommends using the 125 bands from 454 to 950 nm for better imaging quality. The light passing through the lens of the UHD 185 is split into two beams: the first, containing 20% of the total light power, enters the panchromatic camera to create high-resolution PAN images, while the other, containing 80% of the total light power, enters the hyperspectral camera to create a hyperspectral cube. The image cube is created by projecting different bands onto different parts of a CCD [39]. A 50 × 50 pixel hyperspectral cube with a 12-bit dynamic range and a PAN image with a resolution of 1000 × 1000 pixels are created in one shot. A microcomputer connected to the camera controls the hyperspectral sensor, receives the data and stores them on a flash drive.
A DJI Spreading Wings S1000+ multi-rotor UAV (DJI, Shenzhen, China) was employed as the carrier platform. It has a total weight of 4.2 kg and a takeoff weight of up to 11 kg, with a flight endurance of approximately 15 min (for a 9.5 kg takeoff weight and 15,000 mAh battery capacity). Its eight rotors provide stability in windy conditions, and it can remain airborne even if it loses a rotor.
The GPS and IMU modules were integrated onto the UAV; they have horizontal and vertical position errors of approximately 2.0 m and 5.0 m, respectively, and an orientation precision of approximately one degree. The position and orientation information was logged to an onboard memory card during the UAV flight. A Sony DSC-QX100 lens-style digital camera, which is small (4.5 × 4 × 4.5 inches) and captures images of up to 5472 × 3648 pixels, was deployed alongside the hyperspectral sensor.

2.1.2. Illumination Sources and Spectrometers

We used a wavelength calibration source for UV-Vis shortwave NIR spectrophotometric systems (HG-1 Mercury Argon Calibration Source, Ocean Optics, Dunedin, FL, USA) as the light source to implement the spectral calibration. The calibration source features about thirty emission lines (at 404.656, 435.833, 546.074, 696.543, 706.722, 727.294, 738.393, 763.511, 772.4, 811.531, 826.452, 842.465, 912.297 nm, etc.); however, not all are evident, depending on the grating configuration of the spectrometer.
A certified optical integrating sphere (USR-SR 2000, LabSphere, North Sutton, NH, USA) was used to evaluate the RRL of the imaging sensor. It was equipped with a 150-Watt argon lamp and a 175-Watt xenon lamp to produce calibrated irradiance levels with traceable uncertainties, and with an SC6000 spectrometer to measure the radiance intensity of the light.
An ASD Field Spec Pro spectrometer was used to measure the ground reflectance curves of the field samples. The ASD spectrometer consists of a spectrometer unit, a computer and a fiber optic probe. The instrument features a measurable spectral range from 350–2500 nm with 2151 continuous bands. It operates with a 5° full-field-of-view fore-optics. A white Spectralon® panel (LabSphere, North Sutton, NH, USA) was used as a standard reference target to obtain reflectance from the spectrometer.

2.2. Experimental Field and Data Acquisition

The flight was conducted at the National Research and Demonstration Station for Precision Agriculture (40°10′31′′N, 116°26′38′′E) located in Xiaotangshan, Changping District, Beijing, China (Figure 1), on 26 April 2015. The average altitude of the site is 35 m above sea level. The day was sunny, and the sky was clear. The flight campaign was conducted between 10:00 and 13:00 so that data were collected under prevailing direct solar irradiation rather than prevailing diffuse irradiation. Five artificial, directionally isotropic (near-Lambertian) tarps were placed on flat ground. The reflectance curves of these tarps were measured with the ASD Field Spec Pro spectrometer before the flight operation and can be deemed stable as long as the tarps remain uncontaminated. Before the flight, we also collected a hyperspectral cube of the Spectralon® panel, which can be considered a Lambertian reflector, for correction and calibration of the hyperspectral images. We flew the UAV at an altitude of 50 m above the ground, at which the estimated ground sampling distances (GSD) were 0.017 m and 0.34 m for the PAN and hyperspectral images, respectively; the speed was fixed at 5 m/s; and the forward and side image overlaps were set to 70% and 60%, respectively. The Sony DSC-QX100 digital camera, whose shutter was synchronized with the POS recording (Figure 2), was also used to acquire images.

2.3. Radiometric Response Linearity and Radiometric Response Variation

2.3.1. Radiometric Response Linearity

Spectral and radiometric calibrations of the UHD 185 imaging spectrometer were carried out before operation to reveal sensor-specific spectral characteristics and to facilitate the transformation of digital number (DN) counts into physically-meaningful radiance units (mW cm−2 sr−1 µm−1) [42]. A linear radiometric response is ideal for obtaining quantitative measurements of radiance or reflectance [43]; a non-linear radiometric response may result in inaccurate reflectance values if a linear calibration model is used.
The linear relationship between the DN counts and incident radiation intensity (W sr−1 m−2 nm−1) was determined using Equations (1) and (2). Eighteen groups of DN counts and radiance intensity values were measured, and the linear model (Equation (1)) was obtained with the least squares method (LSM). Raw images from the UHD 185 were digitized in the 12-bit sensor range, and the maximal signal strength was 4096 DNs. We subtracted the dark current (DC) cube from the hyperspectral cubes before calculating the RRL.
$$\hat{Y}_i = gain \times DN_i + offset \quad (i = 1, 2, 3, \ldots, n) \qquad (1)$$

$$RRL = 1 - \frac{\sum_{i=1}^{n} \left( Y_i - \hat{Y}_i \right)^2}{\sum_{i=1}^{n} Y_i^2 - \frac{1}{n} \left( \sum_{i=1}^{n} Y_i \right)^2} \qquad (2)$$

where $DN_i$ is the mean DN value of a specific band of the $i$-th measured hyperspectral cube; $gain$ and $offset$ are the coefficients of the linear model; $n$ denotes the number of samples; and $Y_i$ and $\hat{Y}_i$ represent the measured and estimated radiance values, respectively. The RRL is band dependent, and the RRL values of all bands were calculated.
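As a worked illustration of Equations (1) and (2), the following sketch fits the linear model for each band with ordinary least squares and computes the RRL statistic; the dn and radiance arrays are assumed placeholders for the eighteen dark-current-corrected measurement groups.

```python
# Per-band linear fit (Equation (1)) and RRL statistic (Equation (2)); a
# sketch assuming dn and radiance are (n_levels, n_bands) NumPy arrays.
import numpy as np

def rrl_per_band(dn, radiance):
    n_levels, n_bands = dn.shape
    rrl = np.empty(n_bands)
    for b in range(n_bands):
        gain, offset = np.polyfit(dn[:, b], radiance[:, b], 1)  # Equation (1), LSM
        y, y_hat = radiance[:, b], gain * dn[:, b] + offset
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum(y ** 2) - np.sum(y) ** 2 / n_levels
        rrl[b] = 1.0 - ss_res / ss_tot                          # Equation (2)
    return rrl
```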

2.3.2. Radiometric Response Variation

The radiometric response patterns in a single band of the hyperspectral cube vary from pixel to pixel due to the influence of sensor-based factors such as noise and vignetting. The vignetting effect (i.e., a drop in intensity from the image center to its edges) is a common artifact in photography due to the foreshortening of rays at oblique angles to the optical axis and the obstruction of light by the stop or lens rim [44]. The prominent cause of the vignetting effect is off-axis illumination falloff, following the cosine-fourth law [45]. The RRV effect is not desirable in hyperspectral applications, where precise spectral information is needed, and will cause errors in ground object identification and image classification.
Accurate quantitative evaluations and measurements are needed to correct the vignetting effect. A parametric model can be built to simplify the estimation and minimize the influence of image noise. Aasen [46] built a look-up table containing correction coefficients for each band of the UHD 185 image and applied it to the hyperspectral cube. However, this method is time consuming, and the RRV effect may still be evident after the correction.
To eliminate the RRV effect, we employed the ratio of two bands with the same wavelength, but captured under different illumination intensities (Equation (3)), and the corresponding bands of the DC cube were subtracted from both bands. This method is based on the hypothesis that the radiometric response for every pixel is linear. One band was chosen as a reference band, as shown in the denominator of Equation (3). The hyperspectral cube of the Spectralon® panel was chosen as the reference cube.
$$MR_{i,j,\lambda} = \frac{DN_{i,j,\lambda} - DC_{i,j,\lambda}}{DN_{i,j,\lambda}^{c} - DC_{i,j,\lambda}} \qquad (3)$$

where $DN_{i,j,\lambda}$, $DN_{i,j,\lambda}^{c}$ and $DC_{i,j,\lambda}$ denote the DN counts of the hyperspectral cube, the reference hyperspectral cube and the DC cube, respectively; $i$, $j$ and $\lambda$ represent the pixel row number, column number and wavelength, respectively; and $MR_{i,j,\lambda}$ is the result after correction.
The radiometric response variation was quantified using the formula below:

$$RRV_{\lambda} = v_{\lambda} / m_{\lambda} \qquad (4)$$

where $RRV_{\lambda}$ represents the radiometric response variation of the band at wavelength $\lambda$, and $v_{\lambda}$ and $m_{\lambda}$ are its variance and mean DN value, respectively.
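A minimal NumPy sketch of the band-ratio correction in Equation (3) and the RRV metric in Equation (4) follows; cube, ref_cube (the Spectralon® panel acquisition) and dc_cube are assumed 50 × 50 × n_bands arrays.

```python
# Band-ratio RRV correction (Equation (3)) and RRV metric (Equation (4));
# a sketch under the assumption that all cubes share one shape.
import numpy as np

def correct_rrv(cube, ref_cube, dc_cube):
    # Equation (3): subtract the dark current, then ratio to the reference cube
    return (cube - dc_cube) / (ref_cube - dc_cube)

def rrv(band):
    # Equation (4): variance-to-mean ratio of a single band
    return np.var(band) / np.mean(band)
```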

2.4. Generation of the DOM

2.4.1. Preliminary Processing of POS Data and Hyperspectral Images

POS information for every cube is needed to obtain a georeferenced DOM from the hyperspectral cubes [15]. The POS information includes parameters for estimating the exterior orientation of the aerial images, which are necessary as approximate initial values for the bundle adjustment [47]. One disadvantage of the UHD 185 is the lack of a compatible hardware interface through which other devices can control the image acquisition. The UHD 185 shoots continuously at a pre-set time interval once it is configured and triggered by the control program installed on the miniature PC. Image shooting and POS recording are therefore independent and asynchronous, which causes discrepancies in recording time and frequency.
We used two processing procedures to obtain POS information for the hyperspectral images. First, the hyperspectral images shot while the UAV was taking off, adjusting itself, switching between flight strips and landing were discarded. Second, we modeled the POS information and applied the models to calculate the POS information of the hyperspectral images. Useless images were easy to identify because the UAV flew in a specific direction, and the direction of ground features in the images changed when the UAV adjusted itself. Linear and polynomial models were used to describe changes in UAV positions and orientations. POS items for each hyperspectral image were obtained according to the fitted model in Equation (5), which was applied in Equations (6) and (7):
$$y = f(x) \quad (x = 1, 2, 3, \ldots, n) \qquad (5)$$

$$y' = f(X) \quad (x' = 1, 2, 3, \ldots, n') \qquad (6)$$

$$X = n x' / n' \qquad (7)$$

where $n$ and $n'$ are the numbers of POS items and hyperspectral cubes for a flight strip; $x$ and $x'$ are the serial numbers from 1 to $n$ and from 1 to $n'$, respectively; and $y$ and $y'$ are the corresponding POS parameters. Different models were employed for different parameters: a linear model for longitude and latitude (Equation (8)), a cubic polynomial model for pitch and roll angle (Equation (9)) and a mean value for flight altitude and yaw angle (Equation (10)), where $a_0$, $a_1$, $a_2$ and $a_3$ are polynomial coefficients. We used the IDL platform (Version 8.0, Harris Corporation, Melbourne, FL, USA) to perform the interpolation.
$$y = a_0 + a_1 X \qquad (8)$$

$$y = a_0 + a_1 X + a_2 X^2 + a_3 X^3 \qquad (9)$$

$$y = (y_1 + y_2 + \cdots + y_n)/n \qquad (10)$$
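The interpolation was performed in IDL; a minimal Python equivalent of Equations (5)–(10) is sketched below, assuming pos_values holds one POS parameter for the n records of a flight strip and n_cubes is the number of retained hyperspectral cubes.

```python
# POS interpolation sketch following Equations (5)-(10); variable names
# mirror the equations, and the inputs are assumptions.
import numpy as np

def interpolate_pos(pos_values, n_cubes, kind="linear"):
    n = len(pos_values)                  # POS records in the strip
    x = np.arange(1, n + 1)              # Equation (5): POS serial numbers
    x_prime = np.arange(1, n_cubes + 1)  # cube serial numbers
    X = n * x_prime / n_cubes            # Equation (7): rescale to the POS axis
    if kind == "linear":                 # Equation (8): longitude and latitude
        return np.polyval(np.polyfit(x, pos_values, 1), X)
    if kind == "cubic":                  # Equation (9): pitch and roll
        return np.polyval(np.polyfit(x, pos_values, 3), X)
    return np.full(n_cubes, np.mean(pos_values))  # Equation (10): altitude and yaw
```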

2.4.2. Generation of the DOM and CSM

Generating a DOM from hyperspectral images involves 3D reconstruction and texture building. We adopted Agisoft PhotoScan Professional (Version 1.1.6, referred to as PhotoScan in this paper), which can reconstruct fine detail with high accuracy and computational efficiency [35]. A general 3D reconstruction workflow includes aligning photos and building a dense point cloud, a mesh and texture. High-quality input images with no divergence, high overlaps, proper image block geometry and so on are needed to successfully align photos. PhotoScan requires the sharpness of every input image to exceed 0.5. The sharpness of a hyperspectral cube estimated by PhotoScan is 0 because of its low resolution, whereas the average sharpness of the panchromatic (PAN) images is 0.5–0.55. The sharpness of a hyperspectral cube can be improved by fusing it with a PAN image; however, fusion introduces spectral distortion [48]. To preserve the original spectral information, we first used the PAN images and the interpolated POS information to align the photos, build a dense point cloud and build the 3D mesh model (see Figure 3), and then exported a digital elevation model (DEM). Instead of building the texture from the PAN images, we used the 3D mesh model plus the hyperspectral cubes to build the texture and export the DOM. To do so, the hyperspectral cubes first had to be converted to TIF format, resized to the 1000 × 1000 pixel size of the PAN images and renamed to match the corresponding PAN images. We then substituted the hyperspectral cubes for the PAN images in PhotoScan, so that PhotoScan built the texture from the pixel values of the hyperspectral cubes instead of those of the PAN images. Because the original pixel size of a hyperspectral cube was twenty times larger than that of a PAN image, while the default output pixel size was estimated from the PAN images, the output pixel size should be set to 20 times the value estimated by PhotoScan to avoid data redundancy. The detailed process for generating the hyperspectral DOM, including the steps to interpolate the POS information of the hyperspectral cubes, is described in Figure 3. The images from the DSC-QX100 were also processed in PhotoScan to generate a DOM and DEM following the standard processing steps: aligning photos, building the dense point cloud, building the mesh, building the texture and exporting the DOM and DEM.
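The cube preparation step can be sketched as follows, assuming each cube is available as a NumPy array and the tifffile package is used for writing; the 20× nearest-neighbour upsampling preserves the original spectra, and the output must be saved under the file name of the corresponding PAN image.

```python
# Upsample a 50 x 50 hyperspectral cube to the 1000 x 1000 PAN geometry and
# save it under the matching PAN name; tifffile and the names are assumptions.
import numpy as np
import tifffile

def cube_to_pan_sized_tif(cube, pan_name):
    # 1000 / 50 = 20: repeat each pixel 20 times along both spatial axes
    # (nearest neighbour, so no spectral mixing is introduced).
    up = cube.repeat(20, axis=0).repeat(20, axis=1)  # -> (1000, 1000, n_bands)
    tifffile.imwrite(pan_name, up)
```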
The DEM exported from PhotoScan was used to generate a CSM, which represents the spatial variation of crop height. A digital terrain model (DTM) representing the soil surface is needed to derive the CSM. Approximately 156 discrete points evenly distributed over the research area were used to indicate the altitude of the soil surface and interpolate the DTM; the x-y coordinates and corresponding altitudes of the selected points were extracted from the DEM. We interpolated the DTM using ArcGIS software (Version 9.3, ESRI Inc., Redlands, CA, USA) with linear kriging as the interpolation method. The CSM was then obtained by subtracting the DTM from the DEM. Crop heights of the sampling plots were extracted from the CSM with the zonal statistics tool in ArcGIS, using the shapefile of the sampling plots and taking the average value per plot.
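A minimal sketch of the CSM derivation and the plot-level height extraction follows, assuming dem, dtm and a plot_ids label raster are aligned NumPy arrays (with 0 marking non-plot pixels); the actual processing used the ArcGIS zonal statistics tool.

```python
# CSM = DEM - DTM, then the mean crop height per sampling plot; a sketch
# of the zonal statistics step under assumed, aligned input rasters.
import numpy as np

def plot_heights(dem, dtm, plot_ids):
    csm = dem - dtm  # crop surface model: canopy height above the soil surface
    return {pid: float(csm[plot_ids == pid].mean())  # zonal mean per plot
            for pid in np.unique(plot_ids) if pid != 0}
```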

2.5. Spectral Calibration and Radiometric Calibration

2.5.1. Spectral Calibration

Changes in spectral calibration parameters can strongly affect the quality of hyperspectral data. Many studies have suggested that small shifts in spectral calibration parameters can lead to noticeable deviations in the corresponding reflectance and other derived physical variables, especially in spectral regions affected by sharp gaseous absorption [37]. Accurate spectral calibration is therefore essential for further application of the raw hyperspectral data; in precision agriculture, it contributes significantly to the discovery and quantitative analysis of the absorption and reflectance features of vegetation. Spectral calibration is employed to determine the wavelength corresponding to every image band and can also be used to calculate the full width at half maximum (FWHM) of every band, which determines the spectral resolution of the hyperspectral sensor. The spectral calibration of the UHD 185 imaging spectrometer was conducted in a factory test; however, wavelengths may shift over time, and recalibration is needed.
The UHD 185 was exposed to the HG-1 light source to collect a hyperspectral cube. The wavelength of each band was calibrated based on a linear function representing the relationship between wavelengths of the light source and the corresponding spectral bands of the hyperspectral cube, as described in Equation (11):
$$\lambda_i = \lambda_1 + a_1 (i - 1) \quad (i = 1, 2, 3, \ldots) \qquad (11)$$

where $i$ denotes the serial number of each band, $\lambda_i$ is the wavelength of band $i$, $\lambda_1$ is the wavelength of the first band and $a_1$ is a coefficient. The least squares adjustment method was adopted because the number of recognized spectral wavelengths exceeded the number of function parameters.
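The least-squares fit of Equation (11) can be sketched as follows; the band indices at which six Hg-Ar peaks were observed are illustrative assumptions, not the measured values.

```python
# Linear wavelength model fitted to observed Hg-Ar peak positions
# (Equation (11)); the band indices are hypothetical examples.
import numpy as np

lines_nm = np.array([546.074, 706.722, 738.393, 763.511, 811.531, 912.297])
band_idx = np.array([24, 64, 72, 78, 90, 115])  # assumed peak band positions

a1, lambda1 = np.polyfit(band_idx - 1, lines_nm, 1)  # lambda_i = lambda_1 + a1*(i - 1)
print(f"lambda_i = {lambda1:.1f} + {a1:.2f} * (i - 1) nm")
```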

2.5.2. Radiometric Calibration and Verification

We introduced two radiometric calibration methods (Equations (12) and (13)). In Equation (3), $MR_{i,j,\lambda}$ represents relative reflectance, whose reference is the standard calibration panel; however, absolute reflectance is needed for comparison with reflectance products acquired from other sensors. To obtain absolute reflectance, we multiplied the DOM by the reflectance of the standard calibration panel, $Ref_{\lambda}^{panel}$ (Equation (12)). To eliminate other sensor-related or atmospheric effects, we designed another radiometric calibration method using an additional ground reference object, $MR_{\lambda}^{w}$ (Equation (13)).
$$Ref_{i,j,\lambda} = MR_{i,j,\lambda} \times Ref_{\lambda}^{panel} \qquad (12)$$

$$Ref'_{i,j,\lambda} = \frac{MR_{i,j,\lambda}}{mean(MR_{\lambda}^{w})} \times Ref_{\lambda}^{w} \qquad (13)$$

where $Ref_{i,j,\lambda}$ and $Ref'_{i,j,\lambda}$ represent the reflectance images after radiometric calibration, $MR_{i,j,\lambda}$ is the DOM after correcting the RRV of the pixels, $mean(MR_{\lambda}^{w})$ is the mean DN value of the pixels of the reference calibration target and $Ref_{\lambda}^{w}$ denotes the reflectance of the reference object. $Ref_{\lambda}^{panel}$ and $Ref_{\lambda}^{w}$ were resampled from the reflectance curves of the Spectralon® panel provided by the manufacturer and of a white tarp measured in the field, respectively. Spectral response functions approximated by Gaussian functions [49] were adopted for the spectral resampling. The radiometric calibration was performed using IDL and ENVI (Version 4.8, Harris Corporation, Melbourne, FL, USA) functions.
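A NumPy sketch of the two calibration schemes follows, assuming mr_dom is the RRV-corrected DOM with shape (rows, cols, bands), panel_ref and tarp_ref are the resampled per-band reference reflectances and tarp_mask is a boolean mask of the white tarp pixels; all names are assumptions.

```python
# Absolute radiometric calibration following Equations (12) and (13);
# a sketch under assumed inputs, not the authors' IDL/ENVI code.
import numpy as np

def calibrate_eq12(mr_dom, panel_ref):
    # Equation (12): scale the MR image by the Spectralon panel reflectance
    return mr_dom * panel_ref  # panel_ref broadcasts over the band axis

def calibrate_eq13(mr_dom, tarp_mask, tarp_ref):
    # Equation (13): renormalize band by band with the in-scene white tarp
    mr_w = mr_dom[tarp_mask].mean(axis=0)  # mean MR over the tarp pixels
    return mr_dom / mr_w * tarp_ref
```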

3. Results

3.1. Linearity of Pixel Responses Test and RRV Correction

Figure 4a,b shows the results of linear regressions for the 650-nm and 946-nm bands, respectively. Near perfect linear fits between DNs and radiance values were obtained for all spectral channels of the UHD 185 (Figure 5). The coefficients of determination for all regressions were >0.998. The RRL values varied from band to band; the values for the spectral ranges from 470–610 nm and from 694–892 nm were higher than those for other spectral regions, whereas the values were significantly lower at approximately 454 nm, 638 nm and 882 nm. The RRL values in Figure 5 were calculated from regressions of the band-averaged DN values versus the corresponding radiance intensities for all bands; the linearity for a single pixel may vary within a specific band.
The radiometric response variation for the different bands was significantly lower (<0.01) after the correction (Figure 6). Visible vignetting effects and image strips were evident in the uncorrected hyperspectral cubes (Figure 7(a1)). In the corrected cubes (Figure 7(a2)), the radiometric response variation was eliminated, and no apparent brightness gradients or image strips remained. The DN values were more randomly distributed after correction, and the range of the radiometric response variation narrowed from 0.70–1.15 to 0.95–1.04. This suggests that the radiometric response of each pixel was approximately linear and that $MR_{i,j,\lambda}$ was close to a constant for each pixel.

3.2. Generated DOM and CSM

3.2.1. Interpolated POS, Generated DOM and CSM

POS items recorded by the INS displayed regular patterns for the different flight strips and could be simulated by specific mathematical models (Figure 8(a1,b1,c1,d1,e1,e2)), except for altitude, whose values fluctuated irregularly around the mean value of 116.5 m (Figure 8(e1)). Thus, only the linear or non-linear fits of the POS items for the first flight strip are presented. Because the UAV flew in an approximately east-west direction, longitude varied significantly within a single flight strip; the R² of the fit for the first flight strip was 0.999 (Figure 8(a2)). Latitude within a single flight strip did not change as significantly as longitude, and accordingly, the R² of its linear fit for the first flight strip was not as high as that for longitude (Figure 8(b2)). Pitch was represented by a cubic polynomial model, which resulted in a high R² (Figure 8(c2)). Roll was also modeled with a cubic polynomial curve, but the R² for the first strip was only 0.4572 (Figure 8(d2)). Yaw angle was relatively stable for all flight strips and varied around a fixed value.
No evident cavities or geometric distortions appeared in the UHD 185 DOM (Figure 9a). The pixel size of the orthomosaic is about 0.25 m, the storage space occupied is approximately 13.1 MB, and the field range is 85.8 m × 34.1 m. Boundaries around the research units (approximately 7 m × 5 m) are clear and easy to distinguish. A DOM derived from the PAN images, with a pixel size of about 0.01 m, was also exported. Figure 9b shows the DOM generated from the digital RGB images, whose pixel size is also about 0.01 m. Figure 10 shows the generated CSM, in which there were significant differences in crop height among the sampling plots.

3.2.2. Precision Verification

The accuracy of the interpolated POS information for the hyperspectral images was assessed by comparing the geometric differences between the DOMs generated from the digital RGB images and from the hyperspectral images. The greyscale PAN images covered the same spatial extent as the UHD 185 hyperspectral cubes but had higher spatial resolution, so we performed the comparison using the DOM generated from the PAN images rather than from the hyperspectral cubes.
Because we did not use ground control points (GCPs), the absolute horizontal and vertical accuracies of the generated DOM and DEM were significantly lower than the relative accuracy [50]. We evaluated the relative geometric accuracy by comparing the lengths of eight ground features (e.g., tarps, equipment boxes and solar panels) measured from the DOMs generated from the PAN images and from the digital RGB images. Figure 11a shows the differences between the eight pairs of lengths measured from the digital camera and PAN DOMs. The maximum discrepancy was less than 0.05 m, and the RMSE was 0.0035 m. The DEM-derived and field-measured crop heights of the 48 sampling plots were strongly correlated, with a determination coefficient of 0.680 and an RMSE of 0.069 m (Figure 11b). The maximum discrepancy was 18.02 cm, while the minimum was below 1 cm. However, the DEM-derived crop heights were generally lower than the field-measured values.

3.3. Spectral Calibration

Peaks in the spectral curve of the pre-calibrated UHD 185 slightly lagged behind the corresponding emission lines of the Hg-Ar calibration source (Figure 12). The FWHM of the UHD 185 showed an increasing trend (from 4 nm to 28 nm): the longer the wavelength, the larger the FWHM and the lower the spectral resolution. We chose six evident radiometric peaks of the Hg-Ar calibration source and employed a linear model for the calibration. The wavelengths of the bands (in nm) were calculated using the linear model, and the result is shown in Equation (14), where $B_i$ represents the wavelength of band $i$ (R² = 0.999). There were only slight changes (about 1 nm) compared with the previous wavelengths. Considering the relatively large FWHMs, these changes in wavelength could be neglected, and the wavelength calibration of the UHD 185 was considered stable.
$$B_i = 4 \times i + 451 \ \mathrm{nm} \quad (i = 1, 2, 3, \ldots, 125) \qquad (14)$$

3.4. Radiometric Calibration Evaluation

The calibration based on Equation (13) performed better than the other method, especially for the near-infrared bands (Figure 13). The reflectance curves calibrated using Equation (12) were significantly lower than those measured by the ASD Field Spec Pro spectrometer within the infrared region from 722 nm to 950 nm and dropped quickly after 882 nm. The reflectance curves calibrated based on Equation (13) were robust: for the green tarp, discrepancies between the reflectance values derived from the calibrated images and those from the ASD Field Spec Pro spectrometer were less than 4% from 500 to 950 nm. For celery, discrepancies were less than 3% between 458 nm and 910 nm, except in the 758–766-nm region, where the atmospheric absorption feature at approximately 760 nm was visible in the field-measured reflectance curve but not significant in the curve derived from the calibrated image; within 910–950 nm, the discrepancies reached 3–5%.

4. Discussion

We described a method for generating a DOM from hyperspectral images taken with a snapshot camera and accurately converting DN counts to meaningful reflectance values. The proposed method resolves the contradiction between the low spatial resolution of hyperspectral cubes and the need to mosaic them together to form a panoramic DOM with 125 spectral bands. Large data size is a main obstacle restricting the application of hyperspectral imaging data. The low spatial resolution DOM generated in this research is suited for further analyses and applications. Precise radiometric calibration is necessary to obtain reliable reflectance data from hyperspectral images. To perform precise radiometric calibration, we considered and evaluated spectral deviation, RRL, RRV and other factors that may result in radiometric deviations. Deviation of ground feature reflectance curves is caused by diverse factors, and it is difficult to correct them all. However, the primary causes of deviations were considered and removed to obtain reliable data.
We also introduced a method for interpolating POS information of hyperspectral images. The POS information of hyperspectral data was used to obtain a geometrically-registered DOM. The generated DOM and DEM from PAN and digital camera images were compared to determine their relative geometric deviations, which indicated the accuracy of interpolated POS data. Compared with GCPs, the POS information acquired from the GPS and IMU modules is easier to access and use. PAN images were used to construct a 3D mesh model to mosaic together low spatial resolution hyperspectral cubes, providing an effective way to generate DOMs from snapshot hyperspectral cubes. This method establishes a foundation for the application of miniaturized snapshot hyperspectral cameras.

4.1. Spectral and Radiometric Calibration

The aim of spectral calibration is to evaluate and eliminate spectral deviations of the UHD 185, which can cause errors when searching for ground absorption features such as the chlorophyll absorption peaks in the blue and red regions and the red-edge position of green vegetation. We employed a linear model instead of a polynomial fitting model because only a few sample points (evident emission lines) could be identified and the FWHM values were high, especially in the near-infrared spectral range.
RRL is a fundamental performance factor for hyperspectral images; the higher the RRL, the more stable the radiometric response. High RRL for a hyperspectral image is necessary to achieve robust calibration results. The RRL values for all bands of the UHD 185 exceeded 0.998. The RRV within hyperspectral bands is also a key factor for obtaining a robust calibration result. We found significant variations for UHD 185 bands; after the correction, variations were almost zero, indicating that the proposed method was effective for correcting these variations. The calibration method is easy to implement and requires the hyperspectral cube of a reference object with relatively homogeneous reflective characteristics. This suggests that the radiometric response is linear for a single pixel, and the variation can be removed after determining the band ratio.
Two radiometric calibration methods were adopted to produce hyperspectral data with absolute reflectance. The first method (Equation (12)) did not generate ideal results. The second method (Equation (13)) eliminated sensor-derived and atmospheric effects to some extent. However, other factors may influence the precision of the radiometric calibration. We did not consider sensor temperature, which may be relevant to the DC [51]. Aasen [46] evaluated the impact of temperature changes after initiation of the UHD 185 and found that DC values increased over time, although the band-to-band differences in the DC did not change significantly. To avoid the impact of temperature, we preheated the sensor for more than 10 min before collecting data. Furthermore, there were some obvious fluctuations in the near-infrared spectral region after radiometric calibration, which were caused by the sensor (Figure 14); this effect can be reduced by using the mean spectrum of a region of interest (ROI) extracted from the DOM.

4.2. Generated Hyperspectral DOM and DEM

Because the UHD 185 hyperspectral cubes contain only 50 × 50 pixels, they were not sharp enough for DOM generation; therefore, PAN images were used to help mosaic the cubes together. PAN images coupled with POS information were needed to generate a georeferenced 3D mesh model. Relative position and orientation can be inferred from the images using existing algorithms, but the orientation information recorded by the INS deployed on the UAV has limited accuracy. The input position and orientation information was treated as preliminary POS information, and these parameters were optimized to high accuracy during the geometric correction of the PAN images. The relative image positions indicated by the POS information are useful for the alignment of the PAN images when building the 3D mesh model. Furthermore, the longitude, latitude and altitude values of the images are also needed to generate the DEM, which is the source data for the CSM. The CSM was generated by subtracting the DTM, which was produced by interpolating discrete ground altitude values, from the DEM. Hence, both the accuracy of the DEM and the interpolation process influenced the accuracy of the crop heights.
The calculated crop heights were lower than the field-measured values, probably because they were computed as the mean value over each sampling plot in the image, so the heights of soil pixels were also included.

4.3. Potential Advantages of Snapshot Hyperspectral Cameras Coupled with Other Imaging Sensors

A miniature hyperspectral sensor can be complemented by other sensors suited for UAV deployment. Chance et al. [52] combined hyperspectral imagery and LiDAR attributes for invasive shrub mapping. Rischbeck et al. [53] fused hyperspectral, thermal and canopy height parameters to predict the yield of spring barley. Nasrabadi fused SAR and hyperspectral data for the detection of mines [54]. However, few studies have focused on the integration of hyperspectral sensors with other types of sensors on a UAV platform due to the difficulty of obtaining geometrically and radiometrically reliable hyperspectral data from UAVs. Potential approaches include combining hyperspectral information with structural information derived from other sensors, such as commercial digital cameras mounted on UAVs. Commercial digital cameras have been widely used in low-altitude photogrammetry due to their low cost and easy deployment, and commercial photogrammetric software such as PhotoScan, Pix4D Mapper and ENVI One Button is available to build 3D models with high fidelity and spatial resolution. Compared with hyperspectral imaging sensors, digital cameras have fewer bands and a wider FOV and can obtain photos with a greater degree of overlap along and across the flight direction. The DOM and DEM [55] derived from camera photos store information about crop structure such as texture, height and density, which is useful for crop biomass and coverage inversions and for determining the spatial distribution of crop variation. After accurate image geo-registration, hyperspectral- and digital camera-derived DOMs exhibit spatial consistency, enabling point-to-point analysis. This will improve the accuracy of crop physiological, biochemical and structural parameter estimations when spectral information is combined with structural information derived from digital cameras.

5. Conclusions

We presented preprocessing steps for hyperspectral images acquired with a snapshot hyperspectral sensor. We evaluated the spectral deviation, radiometric response linearity and radiometric response variation of the sensor's CCD. The results indicated that the CCD exhibited a noticeable vignetting effect and strips; measures were taken to correct the spectral deviation and the radiometric response variation, and the latter was largely eliminated after the correction. The UHD 185 hyperspectral sensor has advantages regarding imaging efficiency. We implemented two radiometric calibration methods and evaluated the results by comparing spectra of ground features measured with the ASD spectrometer with spectra derived from the calibrated images. Robust calibration results were achieved after the spectral and radiometric calibration, and there was no significant bias in the calibrated images; discrepancies were below 5% for all bands. By employing the PAN images with higher spatial resolution, hyperspectral cubes with lower spatial resolution could be mosaicked together without apparent image distortions. A method to calculate the POS information of the hyperspectral images was introduced. The geometric accuracy was evaluated by comparing the DOM and DEM generated by conventional UAV photogrammetry with those from a commercial digital camera; the relative horizontal difference was less than 0.05 m. The absolute vertical difference reached up to 2.0 m, while the relative vertical difference was low, and the correlation was 0.986 for the 48 sampling plots. Crop heights for the 48 sampling plots were extracted from the DEM generated from the PAN images, and the determination coefficient for the regression of these values against the field-measured data was 0.680.
The preprocessing portfolio proposed in this paper was proven to be robust, reliable, time-effective and resource-efficient. The advantages of snapshot hyperspectral imagers include a significant increase in light collection efficiency, a lack of scanning artifacts and increased robustness and compactness due to the absence of moving components. This type of hyperspectral sensor, which captures both PAN images and hyperspectral cubes, will perform even better in future applications if its hardware compatibility and spatial-spectral resolutions continue to improve.

Acknowledgments

This study was supported by the Natural Science Foundation of China (61661136003, 41471285, 41471351), the National Key Research and Development Program (2016YFD0300602), the Special Funds for Technology Innovation Capacity Building sponsored by the Beijing Academy of Agriculture and Forestry Sciences (KJCX20170423) and the U.K. Science and Technology Facilities Council through the Precision Agriculture for Family-farms in China (PAFiC) project (ST/N006801/1). We thank Bo Xu, Zhenhai Li and others for the image data and field sample collection, and we are grateful to the anonymous reviewers for their valuable comments and recommendations.

Author Contributions

Guijun Yang and Yanjie Wang analyzed the data and wrote the manuscript; both authors contributed equally to this work and should be considered co-first authors. Changchun Li and Huanhuan Yuan provided comments and suggestions for the manuscript and helped to check the writing. Xiaodong Yang, Haikuan Feng and Bo Xu provided data and data acquisition capacity.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  2. Schima, R.; Mollenhauer, H.; Grenzdörffer, G.; Merbach, I.; Lausch, A.; Dietrich, P.; Bumberger, J. Imagine all the plants: Evaluation of a light-field camera for on-site crop growth monitoring. Remote Sens. 2016, 8, 823. [Google Scholar] [CrossRef]
  3. Feng, W.; Zhang, H.-Y.; Zhang, Y.-S.; Qi, S.-L.; Heng, Y.-R.; Guo, B.-B.; Ma, D.-Y.; Guo, T.-C. Remote detection of canopy leaf nitrogen concentration in winter wheat by using water resistance vegetation indices from in-situ hyperspectral data. Field Crop. Res. 2016, 198, 238–246. [Google Scholar] [CrossRef]
  4. Conţiu, Ş.; Groza, A. Improving remote sensing crop classification by argumentation-based conflict resolution in ensemble learning. Expert Syst. Appl. 2016, 64, 269–286. [Google Scholar] [CrossRef]
  5. Rickard, L.; Basedow, R. HYDICE: An Airborne System for Hyperspectral Imaging; SPIE: Bellingham, WA, USA, 1993; pp. 173–179. [Google Scholar]
  6. Cocks, T.; Jenssen, R.; Stewart, A.; Wilson, I. The HyMapTM airborne hyperspectral sensor: The system, calibration and performance. In Proceedings of the 1st EARSeL Workshop on Imaging Spectroscopy, Zurich, Switzerland, 6–8 October 1998; pp. 37–42. [Google Scholar]
  7. Pengra, B.; Johnston, C.; Loveland, T. Mapping an invasive plant, Phragmites australis, in coastal wetlands using the EO-1 Hyperion hyperspectral sensor. Remote Sens. Environ. 2007, 108, 74–81. [Google Scholar] [CrossRef]
  8. Ezequiel, C.A.F.; Cua, M.; Libatique, N.C.; Tangonan, G.L.; Alampay, R.; Labuguen, R.T.; Favila, C.M.; Honrado, J.L.E.; Canos, V.; Devaney, C.; et al. UAV aerial imaging applications for post-disaster assessment, environmental management and infrastructure development. In Proceedings of the International Conference on Unmanned Aircraft Systems, Orlando, FL, USA, 27–30 May 2014; pp. 274–283. [Google Scholar]
  9. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  10. Candiago, S.; Remondino, F.; De Giglio, M.; Dubbini, M.; Gattelli, M. Evaluating multispectral images and vegetation indices for precision farming applications from UAV images. Remote Sens. 2015, 7, 4026–4047. [Google Scholar] [CrossRef]
  11. Pajares, G. Overview and current status of remote sensing applications based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329. [Google Scholar] [CrossRef]
  12. Achille, C.; Adami, A.; Chiarini, S.; Cremonesi, S.; Fassi, F.; Fregonese, L.; Taffurelli, L. UAV-based photogrammetry and integrated technologies for architectural applications—Methodological strategies for the after-quake survey of vertical structures in mantua (Italy). Sensors 2015, 15, 15520–15539. [Google Scholar] [CrossRef] [PubMed]
  13. Erdelj, M.; Natalizio, E. UAV-assisted disaster management: Applications and open issues. In Proceedings of the 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, USA, 15–18 February 2016; pp. 1–5. [Google Scholar]
  14. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  15. Habib, A.; Han, Y.; Xiong, W.; He, F.; Zhang, Z.; Crawford, M. Automated ortho-rectification of UAV-based hyperspectral data over an agricultural field using frame RGB imagery. Remote Sens. 2016, 8, 796. [Google Scholar] [CrossRef]
  16. Atzberger, C. Advances in remote sensing of agriculture: Context description, existing operational monitoring systems and major information needs. Remote Sens. 2013, 5, 949–981. [Google Scholar] [CrossRef]
  17. Li, L.; Ren, T.; Ma, Y.; Wei, Q.; Wang, S.; Li, X.; Cong, R.; Liu, S.; Lu, J. Evaluating chlorophyll density in winter oilseed rape (Brassica napus L.) using canopy hyperspectral red-edge parameters. Comput. Electron. Agric. 2016, 126, 21–31. [Google Scholar] [CrossRef]
  18. Liu, X.-D.; Sun, Q.-H. Early assessment of the yield loss in rice due to the brown planthopper using a hyperspectral remote sensing method. Int. J. Pest Manag. 2016, 62, 205–213. [Google Scholar] [CrossRef]
  19. Chi, G.; Huang, B.; Shi, Y.; Chen, X.; Li, Q.; Zhu, J. Detecting ozone effects in four wheat cultivars using hyperspectral measurements under fully open-air field conditions. Remote Sens. Environ. 2016, 184, 329–336. [Google Scholar] [CrossRef]
  20. Wu, X.; Zhang, W.; Qiu, Z.; Cen, H.; He, Y. A novel method for detection of pieris rapae larvae on cabbage leaves using NIR hyperspectral imaging. Appl. Eng. Agric. 2016, 32, 311–316. [Google Scholar]
  21. Li, J.; Huang, W.; Tian, X.; Wang, C.; Fan, S.; Zhao, C. Fast detection and visualization of early decay in citrus using Vis-NIR hyperspectral imaging. Comput. Electron. Agric. 2016, 127, 582–592. [Google Scholar] [CrossRef]
  22. Senthilkumar, T.; Jayas, D.S.; White, N.D.G.; Fields, P.G.; Gräfenhan, T. Detection of fungal infection and Ochratoxin A contamination in stored barley using near-infrared hyperspectral imaging. Biosyst. Eng. 2016, 147, 162–173. [Google Scholar] [CrossRef]
  23. Kang, J.; Ryu, C.; Kim, S.; Kang, Y.; Sarkar, T.K. Estimating moisture content of cucumber seedling using hyperspectral imagery. J. Biosyst. Eng. 2016, 41, 273–280. [Google Scholar] [CrossRef]
Figure 1. The experimental site.
Figure 2. UHD 185, UAV with POS, reference panel and digital camera.
Figure 3. Flowchart for DOM and DEM generation.
Figure 4. Linear regression between the (DN − DC) values and the corresponding radiance measured by the optical integration system at (a) 650 nm and (b) 946 nm.
Figure 5. Linearity of pixel responses for all bands.
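To make the linearity analysis behind Figures 4 and 5 concrete, the sketch below fits a least-squares line between dark-current-corrected digital numbers and the radiance measured with the optical integration system, and reports R² per band as the linearity measure. This is an illustrative reconstruction, not the authors' code; array names such as `dn_minus_dc` are hypothetical.

```python
import numpy as np

def band_linearity(dn_minus_dc, radiance):
    """Fit radiance = a * (DN - DC) + b for one band.

    dn_minus_dc, radiance : 1-D arrays over the illumination levels of
    the integrating sphere (hypothetical names). Returns (a, b, R^2).
    """
    dn_minus_dc = np.asarray(dn_minus_dc, dtype=float)
    radiance = np.asarray(radiance, dtype=float)
    a, b = np.polyfit(dn_minus_dc, radiance, 1)        # least-squares line
    residuals = radiance - (a * dn_minus_dc + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((radiance - radiance.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot                 # R^2 measures linearity

# Repeating over all 125 bands yields a linearity curve like Figure 5:
# r2 = [band_linearity(dn[:, k], rad[:, k])[2] for k in range(dn.shape[1])]
```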
Figure 6. The RRV values in all bands for the uncorrected hyperspectral cube (l1), reference cube (l2) and corrected cube ((l1-DC)/(l2-DC)).
Figure 7. Radiometric response variation maps and corresponding frequency histograms for uncorrected (a1,b1) and corrected (a2,b2) images (550 nm).
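The corrected cube in Figures 6 and 7 is formed by the per-pixel ratio (l1 − DC)/(l2 − DC). A minimal sketch of that correction, assuming the scene cube, reference cube and dark-current cube are co-registered numpy arrays of identical shape (names illustrative):

```python
import numpy as np

def correct_rrv(scene_cube, ref_cube, dark_cube, eps=1e-6):
    """Per-pixel RRV correction: (l1 - DC) / (l2 - DC), cf. Figure 6.

    scene_cube (l1), ref_cube (l2) and dark_cube (DC) are co-registered
    (rows, cols, bands) arrays; variable names are illustrative.
    """
    num = scene_cube.astype(np.float64) - dark_cube
    den = ref_cube.astype(np.float64) - dark_cube
    den = np.where(np.abs(den) < eps, eps, den)  # guard against division by zero
    return num / den
```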
Figure 8. (a1) Longitude of all flight strips; (a2) linear fit of longitude for first flight strip; (b1) latitude of all flight strips; (b2) linear fit of latitude for first flight strip; (c1) pitch of all flight strips; (c2) linear fit of pitch for first flight strip; (d1) roll of all flight strips; (d2) linear fit of roll for first flight strip; (e1) trajectory of flight altitude; (e2) trajectory of flight yaw (the serial number refers to the numerical order of the photos).
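Figure 8 shows that longitude, latitude, pitch and roll vary almost linearly with photo serial number within a flight strip, which is what makes linear interpolation of the POS records workable. A hedged sketch of that per-strip interpolation (function and variable names are assumptions, not from the paper):

```python
import numpy as np

def interpolate_pos(photo_serials, pos_values, query_serials):
    """Per-strip linear interpolation of one POS variable.

    Fits pos_value = slope * serial + intercept within a single flight
    strip (cf. the linear fits in Figure 8) and evaluates the fit at the
    serial numbers of the hyperspectral frames. Names are illustrative.
    """
    slope, intercept = np.polyfit(photo_serials, pos_values, 1)
    return slope * np.asarray(query_serials, dtype=float) + intercept

# e.g., longitude of frames exposed between photos 10 and 12 of one strip:
# lon = interpolate_pos([10, 11, 12], [116.301, 116.303, 116.305], [10.5, 11.5])
```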
Figure 9. Spectral cube of the hyperspectral DOM (a) (RGB display, R: 670 nm, G: 550 nm, B: 480 nm) and DOM derived from digital RGB images (b).
Figure 10. Generated crop surface model (CSM) and shapefile for the sampling plots (red rectangles).
Figure 11. (a) Comparison between PAN image DOM-derived and digital RGB image DOM-derived ground feature lengths; (b) crop height derived from the hyperspectral DEM (Height 1) and field-measured data (Height 2).
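Figures 10 and 11b rest on two simple raster operations: differencing the canopy surface against the bare-ground DEM to obtain the CSM, and averaging the CSM inside each sampling plot. A minimal sketch under the assumption that plots are supplied as boolean mask rasters (names illustrative, not the authors' implementation):

```python
import numpy as np

def crop_surface_model(canopy_dsm, ground_dem):
    """CSM as canopy surface minus bare-ground DEM; assumes the two
    rasters are co-registered 2-D numpy arrays of equal shape."""
    return canopy_dsm - ground_dem

def plot_mean_height(csm, plot_mask):
    """Mean crop height within one sampling plot, given a boolean mask
    raster marking the plot (cf. the red rectangles in Figure 10)."""
    return float(np.nanmean(np.where(plot_mask, csm, np.nan)))
```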
Figure 12. Spectral signature of the mercury-argon (Hg-Ar) calibration source HG-1 (Ocean Optics, Dunedin, FL, USA) recorded by the UHD 185, with the FWHM provided by the manufacturer; the vertical lines mark a subset of the Hg-Ar calibration source emission peaks.
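The spectral check in Figure 12 amounts to detecting peaks in the recorded HG-1 spectrum and comparing their positions with known Hg-Ar emission lines. The sketch below uses a handful of standard HG-1 lines within 450–950 nm; the detection threshold and all names are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

# A few strong HG-1 (Hg-Ar) emission lines inside 450-950 nm, in nm.
HG_AR_LINES_NM = [546.07, 696.54, 763.51, 811.53, 912.30]

def peak_offsets(wavelengths, spectrum):
    """Pair each reference line with the nearest detected peak in a
    recorded HG-1 spectrum (cf. Figure 12). `wavelengths` and `spectrum`
    are 1-D arrays; the prominence threshold is an assumption."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    idx, _ = find_peaks(spectrum, prominence=0.1 * spectrum.max())
    if idx.size == 0:
        return []
    peaks = wavelengths[idx]
    return [(ref, float(peaks[np.argmin(np.abs(peaks - ref))]))
            for ref in HG_AR_LINES_NM]
```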
Figure 13. Reflectance of a green tarp (a) and celery (b), representative of green vegetation; C1 denotes reflectance curves calibrated according to Equation (12); C2 denotes reflectance curves calibrated according to Equation (13).
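Equations (12) and (13) referenced in Figure 13 are defined in an earlier section of the paper and are not reproduced here. As a generic illustration only, a common single-reference-panel formulation of absolute reflectance calibration can be sketched as follows; this is a textbook variant, not necessarily the paper's Equation (12) or (13), and all names are assumptions.

```python
import numpy as np

def single_panel_reflectance(dn_target, dn_panel, panel_reflectance, dark=0.0):
    """Generic single-reference-panel calibration:
    R_target = (DN_target - DC) / (DN_panel - DC) * R_panel.

    dn_target / dn_panel are dark-current-uncorrected digital numbers of
    the target and the reference panel; panel_reflectance is the panel's
    known reflectance. Shown for illustration only.
    """
    dn_target = np.asarray(dn_target, dtype=float)
    dn_panel = np.asarray(dn_panel, dtype=float)
    return (dn_target - dark) / (dn_panel - dark) * panel_reflectance
```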
Figure 14. Spectra of vegetation. “Single” denotes the spectrum of a single pixel; “ROI” denotes the mean spectrum over the region of interest (ROI).
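For comparisons like Figure 14, the ROI spectrum is simply the per-band mean over a pixel window of the DOM cube. A small sketch, assuming the cube is a (rows, cols, bands) numpy array (names illustrative):

```python
import numpy as np

def roi_mean_spectrum(cube, rows, cols):
    """Mean spectrum over a rectangular ROI of a (rows, cols, bands)
    DOM cube, for comparison with a single-pixel spectrum (Figure 14).
    `rows` and `cols` are slice objects."""
    roi = cube[rows, cols, :]
    return roi.reshape(-1, cube.shape[-1]).mean(axis=0)

# single_pixel = cube[r, c, :]
# roi_mean = roi_mean_spectrum(cube, slice(r0, r1), slice(c0, c1))
```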
Table 1. Main parameters of the UHD 185 snapshot hyperspectral sensor (provided by the manufacturer).

Specification          Value               Specification          Value
Wavelength range       450–950 nm          Housing                28 cm × 6.5 cm × 7 cm
Sampling interval      4 nm                Digitization           12 bit
Spectral resolution    8 nm at 532 nm      Field angle            19°
Channels               125                 Cube resolution        1 megapixel
Detector               Si CCD              Spectral throughput    2500 spectra/cube
Weight                 470 g               Power                  DC 12 V, 15 W
