Article

Hyperspectral Image Transects during Transient Events in Rivers (HITTER): Framework Development and Application to a Tracer Experiment on the Missouri River, USA

by Carl J. Legleiter 1,*, Victoria M. Scholl 2, Brandon J. Sansom 3 and Matthew A. Burgess 2

1 U.S. Geological Survey Observing Systems Division, Golden, CO 80403, USA
2 U.S. Geological Survey National Uncrewed Systems Office, Geosciences and Environmental Change Science Center, Lakewood, CO 80225, USA
3 U.S. Geological Survey Columbia Environmental Research Center, Columbia, MO 65201, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(19), 3743; https://doi.org/10.3390/rs16193743
Submission received: 1 September 2024 / Revised: 4 October 2024 / Accepted: 6 October 2024 / Published: 9 October 2024
(This article belongs to the Section Environmental Remote Sensing)

Abstract:
Rivers convey a broad range of materials, such as sediment, nutrients, and contaminants. Much of this transport can occur during or immediately after an episodic, pulsed event like a flood or an oil spill. Understanding the flow processes that influence the motion of these substances is important for managing water resources and conserving aquatic ecosystems. This study introduces a new remote sensing framework for characterizing dynamic phenomena at the scale of a channel cross-section: Hyperspectral Image Transects during Transient Events in Rivers (HITTER). We present a workflow that uses repeated hyperspectral scan lines acquired from a hovering uncrewed aircraft system (UAS) to quantify how a water attribute of interest varies laterally across the river and evolves over time. Data from a tracer experiment on the Missouri River are used to illustrate the components of the end-to-end processing chain we used to quantify the passage of a visible dye. The framework is intended to be flexible and could be applied in a number of different contexts. The results of this initial proof-of-concept investigation suggest that HITTER could potentially provide insight regarding the dispersion of a range of materials in rivers, which would facilitate ecological and geomorphic studies and help inform management.

1. Introduction

Rivers are vital features of the Earth system that convey water, sediment, and nutrients across the landscape. The movement of these materials is critical to the function of not only aquatic ecosystems but also human societies, which rely on rivers for water supply, transportation, and recreation. Understanding how substances such as soil, carbon, and contaminants travel along rivers and are redistributed by the flow is thus essential to water resource management and habitat conservation. In addition, the ability to predict the fate and transport of materials in rivers is important for mitigating the adverse effects associated with a range of environmental hazards, including oil spills, point source or diffuse pollution, harmful algal blooms, and the spread of invasive species. Various instruments can be used to directly measure variables like turbidity and chlorophyll concentration, and numerical modeling can yield some insight regarding dispersion processes, but remote sensing is uniquely capable of providing the spatially distributed information needed to fully characterize the complex patterns of flow and transport associated with these processes. Motivated to further advance this capacity, our primary objectives herein are to (1) introduce a new, general framework for quantitatively observing dynamic phenomena occurring along channel cross-sections that we call Hyperspectral Image Transects during Transient Events in Rivers, or HITTER for short, and (2) demonstrate the application of this workflow in the context of a tracer experiment on the Missouri River.
The adverse environmental effects of accidental introductions of hazardous materials into rivers create a compelling need for improved techniques for detecting these substances and tracking their movement. For example, Ji et al. [1] identified surrogate methods for detecting submerged oil and reviewed deterministic and probabilistic models for predicting its motion. As discussed in [2], this approach can help guide response efforts by providing estimates of how a pollutant will spread along a river, the travel time involved, and where the material might be deposited. In a biological context, an enhanced understanding of the growth, movement, and decay of potentially toxic algal blooms could facilitate efforts to mitigate their impact on ecosystems and human health [3]. Another ecological application motivated our development of the HITTER framework and provided an initial opportunity for testing the workflow: conserving endangered pallid sturgeon (Scaphirhynchus albus) along the Missouri River [2,4,5,6]. The objective of these ongoing efforts is twofold: (1) gain insight into how the drift of larval sturgeon is influenced by various mechanisms of dispersion and (2) address pragmatic questions regarding the performance of structures engineered to improve rearing habitat for young sturgeon by creating low-velocity refugia along channel margins.
A common, proven means of examining dispersion processes in aquatic settings, including open channel flows, is to perform a tracer experiment. In a riverine context, these studies typically involve introducing some kind of distinctive substance into the flow and then monitoring changes in its concentration over time as the material spreads along and across the channel. Visible dyes are popular and effective tracers and have been widely used for this purpose, especially Rhodamine Water Tracer (RWT) [7,8]. Whereas traditionally RWT concentrations have been measured at a fixed number of discrete points using instruments placed directly into the water (i.e., in situ), research over the past two decades has demonstrated the potential utility of remote sensing techniques for mapping visible dye concentrations. Building upon a foundation established in coastal environments [9,10], much of this work has focused on rivers, initially in outdoor experimental flumes and clear-flowing natural channels [11,12]. More recently, an experimental investigation evaluated the feasibility of inferring RWT concentrations in far more turbid water [13], followed by a field test on the Missouri River [2]. These two studies indicated that the remote sensing approach was robust to the presence of abundant suspended sediment that could obscure the reflectance signal related to the dye. Other researchers have explored the use of uncrewed aircraft systems (UAS) to acquire images for examining the dispersion of visible tracers in lakes [14], managed watercourses [15], and coastal settings [16,17]. Two studies of particular relevance in the present context are those by Köppl et al. [18], who used a hyperspectral imaging system deployed from a UAS to map dye concentrations in a small stream, and Ámbar Pérez-García et al. [19], who sought to develop a generalizable method for inferring RWT concentrations that does not require site-specific in situ calibration data. In addition to mapping visible dye, hyperspectral images have also been used to estimate suspended sediment concentrations [20,21,22] and distinguish among various bottom types [23,24]. Similarly, Nayeem et al. [25] and Cai et al. [26] showed how detailed spectral information can support urban water quality monitoring. Remote sensing has thus emerged as a powerful tool for characterizing dispersion and other dynamic phenomena in rivers that could be combined with in situ monitoring and numerical modeling to provide a more comprehensive understanding of these systems.
In this paper, our goal is to build upon and enhance these capabilities by providing an overview of the HITTER framework, which is designed to address some important shortcomings of existing approaches. For example, conventional in situ devices, also known as sondes, can be used to record high-frequency time series of key water quality parameters, such as turbidity, chlorophyll fluorescence, and/or the concentration of a visible dye used as a tracer, but typically only at a single, fixed location or along the finite path of a boat or uncrewed vehicle. As a result, this approach yields very limited information on the spatial patterns associated with the dispersion process. Conversely, standard remote sensing instruments yield images that provide spatially distributed information but with a much lower temporal resolution, often only a single instantaneous snapshot. If conditions are changing rapidly, as might be the case during a flood, a contamination event, or a tracer experiment, the inability to capture in detail how a plume of sediment, pollution, or dye evolves over time is a significant limitation. In addition, most imaging systems, such as RGB (red, green, blue) video cameras deployed from UAS, lack the detailed spectral information that might be necessary to even detect certain constituents of interest, let alone infer their concentrations with a high level of accuracy [18,19,20,25,26]. Hyperspectral image transects (HITs), in contrast, represent a distinctive, new type of data that can address these issues by providing the combination of high spatial, spectral, and temporal resolution needed to more fully characterize dynamic phenomena in rivers.
In the following sections, we (1) describe a tracer experiment along the Missouri River that served as an initial case study for demonstrating an application of the HITTER framework; (2) provide information on the field measurements and remotely sensed data collected during the experiment; (3) outline the various components of the HITTER workflow; (4) present results from the tracer experiment that illustrate how HITTER can help characterize spatial and temporal variations in RWT concentration; (5) discuss the limitations of this approach as well as ways in which it could be refined, extended, and applied in other contexts; and (6) conclude with a summary of our key findings.

2. Materials and Methods

The primary objective of this study is to introduce the HITTER framework as a general approach for characterizing dynamic phenomena in rivers via remote sensing. We illustrate the application of this workflow in the context of a specific scientific investigation: a tracer experiment conducted along the Missouri River on 11 May 2024. In the following subsections, we first provide information about the experiment, including a description of the study area, field data collection, and image acquisition. We then outline the various components of the HITTER framework, using the retrieval of visible dye concentration as a specific example. All the field measurements and remotely sensed data used in this study are available through a data release from the U.S. Geological Survey (USGS) [27].

2.1. Study Area

As a follow-up to a similar tracer experiment performed in 2021 farther downstream along the Missouri River [2], this study focused on the Sheepnose Bend reach of the Missouri River near Lexington, MO, USA (Figure 1), located between river miles 308 and 315, as measured upstream from the confluence with the Mississippi River [28]. Herein, we focus on a 6 km subset at the upper end of the overall study area; a more comprehensive analysis of the entire 10 km reach is planned. The mean daily streamflow recorded at a nearby USGS gauging station (Waverly, Site No. 06895500) at the start of the experiment was 2735 m³/s [29], a relatively high value that has been exceeded only 10% of the time based on 75 years of record at this gauge [30]. Due to heavy precipitation throughout the region in the days and weeks leading up to the study, the river was flowing near bankfull level, with the wetted channel width ranging between approximately 300 and 400 m. The Sheepnose Bend reach, like most of the lower Missouri River, is channelized to support navigation, but the wing dikes and other control structures present within the study area were submerged during the experiment due to the elevated streamflow. These conditions also led to high levels of turbidity, with a mean value of 399.74 ± 11.89 NTU recorded by a water quality sonde attached to a buoy anchored within the field of view of the hyperspectral imaging system. This value was an order of magnitude greater than that reported for the previous tracer experiment on the Missouri River [2], which in turn was about 30 times greater than that reported in a similar study on the much more clear-flowing Kootenai River [11]. The high turbidity in the Sheepnose Bend reach at the time of our study complicated the estimation of tracer dye concentrations from remotely sensed data and thus posed a challenging initial test for the HITTER framework.
Much like the Searcy’s Bend area examined in [2], we identified the Sheepnose Bend reach of the Missouri River as an appropriate location for this tracer experiment because the site is typical of the heavily engineered channel below Kansas City, Missouri, and yet continues to provide habitat for endangered pallid sturgeon. For example, annual sampling of the Sheepnose Bend reach by the U.S. Army Corps of Engineers has resulted in consistently high catch rates for age-0 (i.e., zero years old) sturgeon within specific dike fields; the factors driving the relatively large numbers of fish observed in this area remain unknown. The objective of this tracer experiment was to gain insight into why particular dike fields within Sheepnose Bend have had high catch rates of age-0 sturgeon year after year [31]. In addition, we sought to better understand how navigation structures influence the flow field, affect the interception of dispersing larvae, and create low-velocity areas along the channel margins that can serve as rearing habitat for early-life-stage sturgeon. These processes are also being investigated with particle-tracking models [6,32,33], and the tracer experiment provides a means of validating the output from these numerical simulations.

2.2. Tracer Experiment and In Situ Observations of Dye Concentration and Flow Velocity

The Sheepnose Bend tracer experiment began at 09:10:00 CDT, when 285 L of 20% solution RWT, diluted by approximately one-third with river water, was gravity-released into the Missouri River from two 208 L barrels. The RWT was simultaneously released from each barrel through a 0.1 m diameter pipe and discharged into the river at the water surface. Releasing all of the dye via this process required a total time of approximately 45 s. We did not attempt to mix the dye vertically during the injection because only the dye concentration at the water surface was relevant for remote sensing, particularly given the high turbidity at the time of our study. Moreover, we assumed that the RWT would be vertically well mixed by the time the dye pulse reached the HITs located 5.95 km downstream from the release point. In contrast to our previous work [2], in which dye was released across the full channel width from two moving boats, we employed a simpler injection strategy for this experiment: all of the dye was released from one boat positioned at a single location in the channel thalweg near the left bank (when facing downstream). This approach more closely approximated an instantaneous, point source injection of dye than the transect-based release described in [2]. The remainder of the experiment involved tracking the dispersion of the dye along and across the channel using a combination of in situ observations and remotely sensed data.
Direct measurements of RWT concentration and turbidity were made using a pair of Turner C3 submersible fluorometers (Turner Designs, San Jose, CA, USA) [34], which we refer to as sondes. These instruments measured dye concentration and turbidity as functions of fluorescence and were calibrated and tested prior to deployment, as described in [2], with additional detail provided in the metadata associated with [27]. One of the sondes was mounted on the boat from which the field spectra described in Section 2.3 were collected and had a logging interval of 1 s. The other sonde was attached to a fixed buoy located farther downstream within the field of view of the HITs (Figure 1) and recorded dye concentrations once every 5 s. The buoy was anchored to the riverbed, and its position was established using a Trimble R2 real-time kinematic (RTK) integrated global navigation satellite system (GNSS) receiver [35]. All of the instruments used in this study were synchronized to allow the in situ and remotely sensed data sets to be linked by time stamps. The RWT concentrations measured by the sondes were post-processed and smoothed by removing background fluorescence, de-spiking the data, and applying a five-point moving mean. Data from both the sonde deployed from the boat and that attached to the buoy were used to explore relationships between the dye concentration and spectral reflectance. In addition, the buoy sonde provided temporally detailed information on the passage of the dye pulse over the course of the experiment and relative to the timing of the hyperspectral image acquisitions. The concentration curve shown in Figure 2 exhibits two peaks because the movement of the dye was interrupted by wing dikes along the channel margins.
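To make this post-processing sequence concrete, the following MATLAB sketch applies the same three operations to a generic concentration record; the file name, the use of pre-release samples to define the background, and the de-spiking window are assumptions rather than details of our actual processing.

% Hypothetical post-processing of a sonde RWT record; specific values are illustrative
C_raw = readmatrix('sonde_rwt_ppb.csv');          % assumed single-column file of raw concentrations (ppb)
background = median(C_raw(1:60));                 % assumed: samples logged before the release define the background
C = max(C_raw - background, 0);                   % remove background fluorescence; negative values are not physical
C = filloutliers(C, 'linear', 'movmedian', 9);    % de-spike via a moving-median outlier test (window size assumed)
C = movmean(C, 5);                                % five-point moving mean, as described above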
In addition, we made field measurements of flow velocity so that we could account for the spatial offset, and thus the time lag, between the buoy where dye concentrations were measured and the HIT. As part of a broader effort to characterize flow conditions during the tracer experiment, velocity data were collected at a large number of cross-sections (XSs) spaced evenly throughout the entire Sheepnose Bend reach. For the purposes of this study, we used data from two of these XSs, one located approximately 90 m upstream of the HIT and the other approximately 85 m downstream. A Teledyne RDI Workhorse RioGrande (Teledyne RD Instruments, Poway, CA, USA) acoustic Doppler current profiler (ADCP) [36] deployed from the bow of a motorboat was used to measure velocities throughout the water column. The position of each profile was established using a Trimble R7 RTK-GNSS receiver (Trimble, Inc., Westminster, CO, USA). At each XS, velocities were recorded during four passes back and forth across the channel, with data transmitted to a laptop onboard the boat in real time using the WinRiver II software (Version 2.26) package [37]. This program was also used to configure the instrument prior to data collection and then save the raw data in a format that could be imported into other software for further analysis. Although the complete ADCP data set includes many other variables, we used only the spatial coordinates and depth-averaged velocity magnitude from each profile for this study. Additional details on ADCP data collection and processing are provided in the metadata associated with [27].

2.3. Field Spectra

Remote sensing methods can play an important role in tracer experiments like the one we conducted along the Missouri River because a spectrally based approach has a sound physical foundation: a relationship between the concentration of a visible dye and the amount of solar energy reflected from the water. Because our study occurred during daylight hours and did not involve deploying a separate light source at the known excitation wavelength of RWT, any radiant energy due to fluorescence was assumed to be negligible relative to that from reflectance. To calibrate a site-specific, empirical relation between spectral reflectance R ( λ ) , where λ denotes wavelength, and RWT concentration C, we made direct, field-based measurements of both quantities from a boat along an XS 4.35 km downstream from the dye release location (Figure 1). Although the sonde recorded concentrations approximately 30 cm below the water surface, we assumed that these observations were representative of the concentrations at the surface. Field spectra were acquired using an Analytical Spectral Devices HandHeld2 Pro spectroradiometer (Malvern Panalytical Ltd, Malvern, UK) [38], which we refer to as the ASD. Our deployment of this instrument for the Sheepnose Bend tracer experiment was essentially the same as that during our earlier work at Searcy’s Bend, and only a brief summary is provided here. For additional details about the ASD and its use, please refer to [2,27]. Field spectra spanning the wavelength range from 400 to 900 nm with a 1 nm sampling interval were collected in reflectance mode, based on measurements of a white reference panel at the beginning of the channel traverse. For data collection from the moving boat, the ASD saved a spectrum to disk once every 3 s, while the sonde logged an RWT concentration every 1 s. Each spectrum recorded by the ASD was the average of five nearly instantaneous measurements separated only by the instrument’s integration time, which remained consistent at 136 ms. The time stamp for each ASD file was used to link the spectrum to the RWT concentrations measured by the sonde. All 130 field spectra we obtained in this manner were smoothed by applying a third-order Savitzky–Golay filter [39] with a 15 nm window two times, consistent with our previous work [2,13]. Example spectra are shown in Figure 3, with the color of each line representing the RWT concentration recorded by the boat-based sonde at the time of the reflectance measurement.
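As a minimal illustration of the two-pass Savitzky–Golay smoothing described above, the MATLAB snippet below filters a single spectrum; the file name and the assumption that the spectrum has been exported to a plain-text column are placeholders.

% Smooth one field spectrum as described above; input format is an assumption
wl = 400:900;                            % wavelengths (nm) at the ASD's 1 nm sampling interval
R  = readmatrix('asd_spectrum.txt');     % assumed single-column export of one reflectance spectrum
for k = 1:2                              % two passes of the filter, consistent with [2,13]
    R = sgolayfilt(R, 3, 15);            % polynomial order 3, frame length 15 samples (= 15 nm)
end
plot(wl, R)                              % inspect the smoothed spectrum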

2.4. Acquisition of Remotely Sensed Data

We conducted seven consecutive flights to collect remotely sensed data in coordination with the dye release and field measurement activities described above. The first three flights occurred before the dye pulse reached the area encompassed by the images, but a range of dye concentrations was observed during the latter four flights (Figure 2); we only used data from flights 4 through 7. Whereas conventional camera systems capture full two-dimensional (2D) spatial images, we used a line scanner to acquire one-dimensional (1D) image transects oriented perpendicular to the river channel. More specifically, we used a Headwall Nano-Hyperspec hyperspectral imaging system [40], which we refer to as the Nano, mounted on a DJI Matrice 600 Pro UAS with approved government edition firmware [41]. The Nano’s detector array consists of 640 pixels in the cross-track direction, but rather than a second spatial dimension, the columns of the detector array record highly detailed spectral information for each pixel across the current scan line, with 274 bands from 398 to 1002 nm. The Nano is considered a pushbroom imaging system and is typically operated so as to assemble a full 2D image as a sequence of cross-track scan lines acquired as the aircraft moves forward along the flight path. For this study, we deployed the Nano in a stationary mode by directing the UAS to hover in place above the channel. This approach allowed us to acquire a series of HITs that quantify variations in spectral reflectance both spatially, in 1D laterally across the channel, and over time, from one scan line to the next.
The four flights that captured the passage of the dye plume spanned the period from 10:10 to 11:33 local time (CDT), during which the solar azimuth varied from 105° to 126° and the solar elevation varied from 45.6° to 59.4°, as determined using an online solar calculator [42]. During each flight, the UAS was directed to a consistent location near the north bank of the Missouri River, 5.95 km downstream from the dye release point, and then hovered at an altitude of approximately 213 m above ground level. At this flying height, the Nano’s 12 mm effective focal length lens yielded a 13 cm ground sampling distance (i.e., pixel size) and an 84 m swath width. Although flying higher would have allowed us to observe a greater proportion of the channel width, we were limited by regulatory constraints on UAS operations and chose to focus our HIT acquisition on a dike field (a series of engineered rock structures placed in the river) along the left bank (when facing downstream) where annual sampling has consistently resulted in high catch rates of age-0 sturgeon. Navigation structures in this portion of the channel create more complex flow conditions, including low-velocity areas that might provide suitable rearing habitat for larval sturgeon.
Prior to each flight, a white reference panel filling the sensor’s entire field of view was placed beneath the Nano to set the exposure time based on the current illumination conditions, with typical values on the order of 3 ms. The frame period (i.e., the amount of time between successive scan lines) was set to 5.5 ms, which allowed the Nano to acquire 180 HITs per second and thus provide a very dense time series of reflectance observations for each pixel across the channel. In addition to the hyperspectral imaging system itself, the Nano was integrated with an Applanix APX-15 GNSS/IMU (inertial measurement unit) that recorded position and orientation at a rate of 200 Hz. To support the post-processing of these data, we placed a Trimble R8 GNSS receiver on the south bank of the river to serve as a base station. The position of the base station was refined in a post-processing mode using the National Geodetic Survey’s Online Positioning User Service (OPUS) [43]. The OPUS-corrected data from the base station and the GNSS/IMU measurements from onboard the UAS were provided as input to the Applanix POSPac UAV (Version 8.8) software package [44] to produce a smoothed best estimate of trajectory (SBET) for each flight. For typical deployments, the SBET would be used to produce an orthorectified 2D image from the raw Nano data using software provided by the manufacturer, but for this study, we instead developed a customized workflow for using the SBET to obtain spatial coordinates for each pixel along each HIT. This process is an important component of the HITTER framework, and further details are provided in Section 2.5.2.
In addition to this geometric post-processing, we also applied a series of radiometric corrections to the Nano data using the Headwall SpectralView (Version v3.3.1.0) software package [45]. The files recorded by the Nano, referred to as data cubes, were imported into SpectralView and converted from raw digital counts to radiance using sensor-specific calibration information provided by Headwall. A reflectance calibration was then performed by using a tarp placed within the Nano’s field of view as an in-scene white reference. Because the riverbank was densely forested, the tarp was secured to a boat anchored along the channel margin and stretched flat. To ensure that at least a few of the scan lines acquired during each flight captured the tarp, the pilot yawed the aircraft by 90° as the UAS approached the hover location and flew slowly toward the bank to sweep the Nano’s swath across the tarp. Conversely, when departing from the hover location at the end of a flight, the pilot yawed the aircraft by 90° and flew slowly away from the bank to again capture the tarp within the sensor’s field of view. The tarp consisted of three panels, each with a distinct shade of gray: nominally 60%, 34%, and 11% reflectance. Sufficient instrumentation was not available to measure the reflectance of the calibration tarp on the day of the tracer experiment, but we used an ASD FieldSpec 4 Hi-Res spectroradiometer to measure the reflectance of each panel of the tarp on a later date at a time selected to match the solar azimuth and elevation at the time of the Nano flights. To perform the reflectance calibration in SpectralView, we opened a data cube that included the tarp, selected a seed pixel within the darkest panel of the tarp, and used a spectral angle mapper algorithm to increase the sample size by identifying similar pixels. The SpectralView software only allows the user to provide a spectrum for one reference material present in the image, so we chose the darkest of the three tarp panels because the relatively low reflectance of this panel was most similar to that of the water and the dye. The mean radiance spectrum for these pixels was then linked to the known reflectance of the tarp panel, as measured with the ASD, and used to linearly rescale the Nano data from units of radiance to units of reflectance between 0 and 1. In addition, we spectrally subset the data to include only the 227 bands within the visible and near-infrared wavelength range from 400 to 900 nm to match the field spectra collected during the experiment and for consistency with previous work [2]. These calibration and spectral subsetting procedures were applied to all data cubes acquired during the four flights that captured the passage of the dye plume.
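A minimal sketch of this single-panel, linear rescaling is given below; the variable names, the zero-intercept form of the gain, and the assumption that the tarp-pixel radiance spectra and the ASD-measured panel reflectance are already loaded are ours rather than details of the SpectralView implementation.

% Sketch of an in-scene, single-panel reflectance calibration (not the SpectralView code itself)
L_tarp = mean(tarpRadiance, 1);              % tarpRadiance: nPixels x nBands radiance spectra of the darkest panel
R_tarp = tarpReflectance(:)';                % ASD-measured reflectance of that panel, 1 x nBands
gain   = R_tarp ./ L_tarp;                   % per-band scale factor from radiance to reflectance
cubeRf = cubeRd .* reshape(gain, 1, 1, []);  % cubeRd: nLines x nPixels x nBands radiance cube
cubeRf = min(max(cubeRf, 0), 1);             % constrain reflectance to the physically meaningful range [0, 1]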

2.5. Hyperspectral Image Transects during Transient Events in Rivers (HITTER) Framework

The UAS-based hyperspectral images acquired and pre-processed as described above are the primary input to the HITTER workflow. Although we developed the framework in the context of a particular tracer experiment on the Missouri River, we also intend for the approach to be generalizable to other locations and types of dynamic phenomena. For this reason, the following subsections provide greater detail on the individual components of the HITTER workflow illustrated schematically in Figure 4.
In addition, the data release associated with this study includes MATLAB (Version 24.2) [46] functions for implementing the various phases of the HITTER workflow, along with a script called ProcessingLogNanoDye.m that illustrates the use of these functions within the context of our study on the Missouri River. The specific functions involved in each stage of the processing chain are identified in the subsections below, and the source code is thoroughly documented, with numerous comments to facilitate understanding. Although this codebase provides a foundation for applying the HITTER approach to similar data sets, potential users would need to update file paths and other parameters before attempting to use these functions with different inputs. In addition, please note that the code is made available without warranty or support, as described in the distribution liability section of the metadata associated with the data release [27].
Certain aspects of our implementation are specific to the hyperspectral imaging system we employed, but could be adapted to other pushbroom sensors by making appropriate modifications, mainly to account for differences in the file format and naming convention. For this study, the data cubes recorded by the Nano consist of three-dimensional (3D) arrays in which each row comprises 640 pixels across a given scan line. The successive scan lines acquired by the sensor thus form the vertical dimension (rows) of the array, with the horizontal dimension (columns) representing different cross-track positions (i.e., pixels). Each layer of the array in the third dimension represents a different spectral band. For each cube acquired during a flight, the initial file has a prefix (raw_) followed by a numeric identifier XXXXXX that represents a scan line number, typically in multiples of 2000 (for this study, we set the frames-per-cube parameter in the Nano configuration software to 2000). After the radiometric correction and reflectance calibration steps described in Section 2.4 were applied, the suffixes (_rd) and (_rf) were appended, leading to a final file name like raw_XXXXXX_rd_rf. For each of these files, a corresponding ENVI format header (*.hdr) file was generated that contains metadata about the data cube, including the number of rows (scan lines), columns (cross-track pixels), and spectral bands, as well as a list of wavelengths and information on the data type, interleave, and byte order [47]. As a cube is being acquired, the numeric identifier of each individual scan line and its internal time stamp in nanoseconds is written to a separate frame index file (e.g., frameIndex_XXXXXX.txt). These frame index files allow scan lines to be linked to trajectory data recorded by the Applanix GNSS/IMU during the flight. Trajectory information from throughout the entire flight is written to a file named imu_gps.txt that includes one column with Nano internal time stamps that correspond to those in the frame index files and another column with standard GPS (Global Positioning System) times. These GPS times are used as a bridge to link the individual scan line numbers and the Nano’s internal time stamps in the frame index files and imu_gps.txt to the refined, post-processed trajectory in a file named SBET_FlightX.txt, where X denotes the flight number. We developed functions to systematically process Nano data cubes and the associated frame index and trajectory files as initial steps toward further analysis of the resulting HITs.
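For readers unfamiliar with these formats, the following MATLAB sketch shows one way a calibrated cube and its ancillary files might be read; the dimensions, data type, interleave, and file names are placeholders that would normally be taken from the corresponding ENVI header and directory listing.

% Read one calibrated data cube and its ancillary files; all values shown are placeholders
nLines = 2000; nPixels = 640; nBands = 227;                 % normally parsed from the matching *.hdr file
cube = multibandread('raw_000000_rd_rf', [nLines nPixels nBands], ...
                     'single', 0, 'bil', 'ieee-le');        % reflectance cube as a 3D array
frameIdx = readmatrix('frameIndex_000000.txt');             % scan line numbers and internal time stamps (ns)
traj = readmatrix('imu_gps.txt');                           % includes Nano internal time and GPS time columns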

2.5.1. Trajectory Processing and Data Cube Selection

The first step in the HITTER workflow involves importing the initial trajectory recorded during the flight and the SBET produced afterward and linking the two. These tasks are accomplished using the function nanoTrajectory.m, which reads in both the raw trajectory in the imu_gps.txt file and the SBET in SBET_FlightX.txt. Both trajectories are plotted in map view (Figure 5) along with the time series of altitude and each of the three angles specifying the orientation of the UAS: roll, pitch, and yaw. These graphical outputs provide an overview of the flight path and a means of confirming alignment between the initial and post-processed trajectories. The time stamps from the imu_gps.txt file, which has a sampling frequency of approximately 100 Hz, and the SBET, which is output at 200 Hz, are then converted to a common time zone and format (GPS UTC) so that the position and orientation of the Nano at any time during the flight can be refined using the SBET. More specifically, the easting and northing spatial coordinates (in UTM Zone 15 for this study), altitude, roll, pitch, and yaw are all linearly interpolated at each time point in the imu_gps.txt file based on the SBET. In addition, nanoTrajectory.m reads in all the frameIndex_XXXXXX.txt files within the FlightX directory. These files contain the frame index and the Nano’s internal time stamp for each scan line and thus provide a means of linking individual frames to the refined trajectory based on the internal time stamps. The starting frame numbers (i.e., file names) for all data cubes acquired during the flight are parsed from the frame index file names and used to construct file names for the corresponding RGB preview images. Outputs from nanoTrajectory.m thus include the refined trajectory, a two-column array with frame indices and the Nano’s internal time stamps, and a list of starting frame numbers and preview images for all data cubes acquired during the flight.
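The core of this refinement step is a set of one-dimensional interpolations, sketched below in MATLAB under the assumption that the raw trajectory times (t_imu), the SBET times (t_sbet), and the SBET columns have already been read and converted to a common GPS time basis; the column order is an assumption.

% Interpolate the SBET onto the time stamps of the raw trajectory (column order assumed)
easting  = interp1(t_sbet, sbet(:, 2), t_imu, 'linear');
northing = interp1(t_sbet, sbet(:, 3), t_imu, 'linear');
altitude = interp1(t_sbet, sbet(:, 4), t_imu, 'linear');
roll     = interp1(t_sbet, sbet(:, 5), t_imu, 'linear');
pitch    = interp1(t_sbet, sbet(:, 6), t_imu, 'linear');
yaw      = interp1(t_sbet, sbet(:, 7), t_imu, 'linear');
% individual scan lines can then be matched to this refined trajectory via the frame index time stamps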
This information is used directly in the next stage of the HITTER processing chain, which involves identifying data cubes collected while the UAS hovered in place above the cross-section of interest. Cube selection is performed interactively via the function getHoverCubes.m. The refined trajectory from nanoTrajectory.m is overlain on a background image that provides a graphical interface for the user to zoom in and click on a point in the center of the stationary, hovering portion of the UAS flight; an example of this process is shown in Figure 6. The trajectory is then subset to include only those positions falling within a specified spatial tolerance of this point. The time stamps for this subset of the trajectory are used to identify the range of frame indices, and hence data cubes, acquired during the hover. To provide some visual confirmation, the corresponding RGB preview images are displayed in a montage like that shown in Figure 7. The main outputs from getHoverCubes.m are the coordinates of the hover center point selected by the user, array indices that can be used to access the hovering subset of the refined trajectory, and a list of the data cubes that were collected during the hover. Only the selected cubes are passed on to the subsequent phase of the HITTER workflow.
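The logic behind this interactive selection can be summarized in a few lines of MATLAB; the 5 m tolerance and the variable names are illustrative choices rather than values from getHoverCubes.m, and the refined trajectory is assumed to be plotted in an open figure.

% Identify the hovering portion of the flight from a user-selected center point
[xc, yc] = ginput(1);                                  % user clicks the center of the hover on the displayed map
tol = 5;                                               % spatial tolerance in meters (assumed value)
inHover = hypot(easting - xc, northing - yc) <= tol;   % trajectory samples within the tolerance
tHover  = [min(t_imu(inHover)), max(t_imu(inHover))];  % time window spanned by the hover
% data cubes whose frame-index time stamps fall within tHover are retained for further processing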

2.5.2. Scan Line Spatial Referencing

After refining the trajectory and identifying cubes acquired during the hover, the next step is to obtain the real-world spatial coordinates of each pixel along each scan line in each of the selected cubes using projectLine.m. This function is applied on a per-cube basis and begins by displaying the preview image and plotting the UAS trajectory for the time period corresponding to the specified cube. The user is then given the option of selecting a subset of the scan lines for further processing. In most cases, all scan lines within a given cube are retained, but for the first and last cubes identified via getHoverCubes.m, some scan lines at the start or end of the cube might need to be excluded if they were acquired while the UAS was still in motion, either before the hover began or after the hovering portion of the flight had ended. This additional subsetting operation is important because the spatial coordinates of all scan lines retained are averaged to obtain a mean cross-section (MCS); including scan lines acquired while the UAS was moving, tilting, or rotating in this calculation could introduce significant errors.
Once any within-cube subsetting is complete, projectLine.m uses the Nano’s internal time stamps for the selected scan lines, along with the refined trajectory from nanoTrajectory.m, to obtain the position and orientation of the UAS at the time t each scan line was acquired. In addition, projectLine.m imports and parses a settings.txt file within the FlightX directory to obtain two additional parameters needed to determine the pixel size, denoted by p and also known as the ground sampling distance: (1) the effective focal length of the Nano’s lens, denoted by f, and (2) the physical dimensions of each element of the sensor’s detector array, referred to as the array pixel pitch and denoted by p_a. The flying height h(t) of the UAS is obtained by subtracting the known water surface elevation (WSE), z_{WSE}, for the cross-section from the altitude of the UAS, z_{UAS}(t):
h(t) = z_{UAS}(t) - z_{WSE}.    (1)
Note that z_{WSE} must be provided by the user as an input to projectLine.m and is assumed to be constant over time. For our site on the Missouri River, z_{WSE} was an ellipsoid height of 173.91 m, equivalent to a height of 207.32 m above the geoid. The pixel size for the scan line is then calculated as
p(t) = p_a × h(t)/f    (2)
and assumed constant across the entire scan line. Because the subsetting operation ensures that viewing angles are near nadir and typical UAS flying heights are relatively low, this approximation is reasonable. For example, at the 213 m flying height used in this study, h(t) would be elongated by less than 0.4% relative to the nadir case even for an off-nadir viewing angle of up to 5°, resulting in a pixel size difference of only 0.0006 m. Relative to all the other sources of error inherent to this analysis, this amount of uncertainty is negligible.
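As a worked example of Equation (2), if the detector pitch is taken to be roughly p_a ≈ 7.3 μm (a value implied by the numbers reported in Section 2.4 rather than quoted from the sensor specifications), then a flying height of h(t) = 213 m and an effective focal length of f = 12 mm yield p(t) ≈ (7.3 × 10^-6 m)(213 m)/(0.012 m) ≈ 0.13 m, consistent with the 13 cm ground sampling distance stated above.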
The flying height h(t) is also used to establish the length of a vector oriented from the UAS toward the river. Initially, the UAS is assumed to be perfectly level such that this vector points straight down. In this scenario, shown schematically in Figure 8a, the Nano would view the channel at nadir, the center pixel of the scan line would be directly below the UAS, and the spatial coordinates [x_p(t), y_p(t)] of the center pixel would be identical to those of the UAS, [x_u(t), y_u(t)], which are available from the trajectory. However, any tilt of the UAS during image acquisition would lead to a more complex viewing geometry, as illustrated in Figure 8b. We allow for this possibility by refining the initial vector based on the interpolated pitch and roll angles at the time the line was scanned. In addition, the yaw angle ψ(t) is used to specify the sensor’s heading and thus the direction of the scan line. The yaw, pitch, and roll are provided as inputs to the built-in MATLAB function makehgtform to obtain a matrix that can be pre-multiplied by the initial nadir-pointing vector to perform rotations about the z-, y-, and x-axes in turn. This process yields calculated shifts x_s(t) and y_s(t) in the x- and y-directions, relative to the position of the UAS, that account for the off-nadir viewing geometry (Figure 8c). These shifts are added to the easting and northing coordinates of the UAS from the trajectory to establish the location of the center pixel of the scan line as
[x_p(t), y_p(t)] = [x_u(t), y_u(t)] + [x_s(t), y_s(t)],    (3)
where [x_s(t), y_s(t)] = [0, 0] in the ideal case of a nadir view. For the flying height used in this study and near-nadir viewing angles while the UAS was hovering, the [x_s(t), y_s(t)] shifts were typically on the order of 2–3 m.
The relative positions of all the other pixels along the scan line are then specified as offsets from the center pixel as follows. The center pixel is placed at the origin (0, 0) of a temporary local coordinate system with the y-axis oriented along the scan line, as shown in Figure 8d. The local, relative y-coordinates of each pixel along the scan line, denoted by y_r(t), are then established by creating a regularly spaced n × 1 array of offsets that begins at -np/2 + p/2 and ends at np/2 - p/2, where n is the number of pixels in the cross-track direction (640 for the Nano) and p is the pixel size calculated via Equations (1) and (2), which sets the distance between each of the n successive offsets. In the x-direction, the corresponding array of local, relative coordinates, denoted by x_r(t), consists of an n × 1 vector of zeros (Figure 8d). These local, relative coordinates are thus given by
x_r(t) = 0    (4)
y_r(t) = -np/2 + p/2 : p : np/2 - p/2,    (5)
with colons used to separate the starting value, increment, and ending value of the array.
Next, the yaw angle ψ(t) is used to rotate the scan line into the proper orientation. Placing the center pixel of the scan line at the origin of the local coordinate system simplifies this operation (Figure 8e). The local coordinates of a pixel after rotation are given by
x_r′(t) = x_r(t) cos[ψ(t)] - y_r(t) sin[ψ(t)]    (6)
y_r′(t) = x_r(t) sin[ψ(t)] + y_r(t) cos[ψ(t)],    (7)
where the prime symbols indicate rotated coordinates. The final real-world spatial coordinates of each pixel along the scan line are then obtained by applying a translation (Figure 8f). This operation involves adding the real-world spatial coordinates of the center pixel, [x_p(t), y_p(t)], to the rotated local coordinates of every pixel along the scan line:
x_f(t) = x_r′(t) + x_p(t)    (8)
y_f(t) = y_r′(t) + y_p(t),    (9)
where the subscript f now denotes the final coordinates of each pixel.
Within projectLine.m, these calculations are implemented by passing ψ ( t ) , x p ( t ) , and y p ( t ) as inputs to the MATLAB function rigidtform2d to create a single transformation matrix T that accounts for both rotation and translation. The final spatial coordinates of each pixel along the scan line are obtained by using T and the arrays of local, relative coordinates given by Equations (4) and (5) as inputs to the MATLAB function transformPointsForward. These procedures are performed within a loop over all the scan lines in a given data cube, and the final coordinates for each scan line are compiled in a 2D array with the same number of rows (scan lines) and columns (cross-track pixels) as the data cube. As a result, each pixel in the cube can be associated with a specific, real-world spatial location.
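A compact MATLAB sketch of this per-scan-line projection is given below; the yaw angle psi_t (in degrees) and the center-pixel coordinates xp_t and yp_t are assumed to have been computed as described above, and the 0.13 m pixel size is the value from Equation (2).

% Project one scan line to real-world coordinates (illustrative variable names)
n  = 640;                                    % cross-track pixels for the Nano
p  = 0.13;                                   % pixel size from Equation (2), in meters
xr = zeros(n, 1);                            % local x-coordinates (Equation (4))
yr = ((0:n-1)' - (n-1)/2) * p;               % local y-coordinates along the scan line, equivalent to Equation (5)
tform = rigidtform2d(psi_t, [xp_t, yp_t]);   % rotation by the yaw angle plus translation to the center pixel
[xf, yf] = transformPointsForward(tform, xr, yr);   % final coordinates (Equations (6)-(9) in one step)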

2.5.3. Aggregating Scan Lines to Obtain a Mean Cross-Section

In principle, each scan line in each data cube represents an individual HIT that could be used to examine the transient event of interest, such as the passage of a dye plume. In practice, however, some degree of aggregation is necessary to distill this massive volume of data into a more manageable form. For example, the number of data cubes acquired during the four UAS flights conducted as part of this study varied from 38 to 45, and each cube consisted of 2000 scan lines, so up to 90,000 scan lines were potentially available for a given flight. In addition, the 5.5 ms frame period (i.e., scan line sampling interval) of the Nano implied that these observations were very closely spaced in time, with a sampling frequency three orders of magnitude greater than that of the in situ measurements of dye concentration needed to calibrate a relation between reflectance and concentration. Moreover, although the UAS was essentially stationary during the hover and variations in roll, pitch, and yaw were minimal, even slight fluctuations in the position and/or orientation of the UAS propagate through the spatial referencing process described in Section 2.5.2 and lead to scan lines that are not in perfect alignment with one another. For all these reasons, producing a temporally aggregated, spatially averaged mean cross-section (MCS) was an important next step in the HITTER workflow.
To achieve this distillation, we developed a function called linkLine2cube.m that links data cubes to the projected scan line coordinates from projectLine.m, spatially averages the coordinates to obtain an MCS, and then resamples the hyperspectral data to a more reasonable set of output times. This process results in a HIT that is more suitable for further analysis than the individual scan lines. Whereas projectLine.m operates on a per-cube basis and loops over the scan lines within a given cube, linkLine2cube.m aggregates all the cubes acquired during the hovering portion of a flight. The function begins by importing the spatial coordinates of each pixel along each scan line (i.e., the output from projectLine.m) for each of the cubes and compiling all the x-coordinates into one array and all the y-coordinates into another. The number of rows in these arrays is the total number of scan lines acquired during the hover (up to 90,000 in this study), and the number of columns is the total number of cross-track pixels along each scan line (640 for the Nano). Taking the mean of each column of these arrays thus yields the x- and y-coordinates for each node along the final MCS. To provide the geographic context and visual confirmation of the spatial referencing process, the resulting MCS is overlain on an orthophoto provided as an additional input to linkLine2cube.m. The MCSs for the four flights that captured the passage of the dye plume along the Missouri River are overlain on a background orthophoto in Figure 9. The calibration tarp placed on the boat anchored near the bank and used to convert the hyperspectral data cubes from radiance to reflectance is also visible in the background image.
In addition to the spatial coordinates, linkLine2cube.m also compiles the time stamps associated with each scan line in each of the cubes acquired during the hovering portion of the flight. The total duration of the hover is established by taking the difference between the first and last of these time stamps. One of the inputs to linkLine2cube.m is the time increment Δt in seconds between the final output HITs. We used Δt = 1 to obtain a set of output times that were integer numbers of seconds equally spaced 1 s apart.
With the spatial coordinates of the MCS and the output times established, the main processing performed by linkLine2cube.m is implemented within a loop over the cubes acquired during the hover. First, the original hyperspectral data cube and corresponding scan line spatial coordinates from projectLine.m are imported. The x,y-coordinates are averaged over all scan lines in the cube and overlain on the orthophoto to verify that this particular cube was spatially referenced successfully. The data cube is then smoothed over time by applying a moving median filter to every column of the cube, with a window size set to encompass all the scan lines acquired during the time period Δt. For example, for a Nano frame period of 5.5 ms and Δt = 1 s, the window size would be 182 scan lines. Next, every pixel of the time-smoothed data cube is spectrally smoothed using a Savitzky–Golay filter [39]. The number of iterations of the Savitzky–Golay filter, its window size, and its degree are inputs to linkLine2cube.m; in this study, we applied the filter twice with a window size of 7 and a third-order polynomial. The resulting temporally and spectrally smoothed data cube is then resampled to the output times using the synchronize function in MATLAB. At this point, the original set of 82,000 scan lines acquired during 7.5 min of hovering for Flight #4, for example, has been reduced to a set of 451 HITs. The final operation performed within the main loop of linkLine2cube.m is spatially interpolating the reflectance values for each pixel of the reduced data cube from the mean scan line coordinates for the current cube onto the nodes of the MCS, which was computed based on all the cubes acquired during the entire hover. The interpolation is applied to each spectral band in turn and is implemented using a natural neighbor method with MATLAB’s scatteredInterpolant function. The final output from linkLine2cube.m is a 3D array with temporally and spectrally smoothed reflectance values in each band for each node along the MCS at each of the output times. For our example, this array would consist of 451 rows, each corresponding to one of the output times; 640 columns, each representing a cross-track pixel and thus a node of the MCS; and 227 layers, one for each spectral band.
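The following condensed MATLAB sketch summarizes the smoothing and resampling applied to a single cube; the cube array, its per-scan-line times tLine (in seconds), and the loop over cross-track pixels are illustrative, and interp1 stands in for the synchronize call used in linkLine2cube.m.

% Temporal and spectral smoothing of one cube, followed by resampling to 1 s output times
framePeriod = 0.0055;                        % 5.5 ms between successive scan lines
win  = round(1 / framePeriod);               % ~182 scan lines per 1 s output interval
cube = movmedian(cube, win, 1);              % moving median over time for every pixel and band
for j = 1:size(cube, 2)                      % spectral smoothing, one cross-track pixel at a time
    spec = squeeze(cube(:, j, :));           % time x bands for this pixel
    spec = sgolayfilt(sgolayfilt(spec, 3, 7, [], 2), 3, 7, [], 2);   % two passes of a third-order, 7-band filter
    cube(:, j, :) = spec;
end
tOut    = (ceil(tLine(1)):floor(tLine(end)))';        % whole-second output times
cubeOut = interp1(tLine, cube, tOut, 'linear');       % resample the smoothed cube to the output times
% each band of cubeOut is then interpolated onto the MCS nodes, e.g., via
% F = scatteredInterpolant(xMean, yMean, double(band), 'natural'); bandOnMCS = F(xMCS, yMCS);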

2.5.4. Synchronization with In Situ Measurements

Now that the remotely sensed data have been processed to obtain a set of temporally and spectrally smoothed HITs spatially interpolated onto an MCS, the next step in the HITTER workflow is to link these data to, in our case, field measurements of dye concentration. For other applications of this approach, the target variable could be different (e.g., suspended sediment concentration), but the procedures outlined herein are intended to be generic and could be adapted for use with other types of in situ observations. For the tracer experiment on the Missouri River, we developed a function called sonde4nano.m to connect the HITs produced via linkLine2cube.m to the RWT concentrations recorded by the sonde placed within the sensor’s field of view. This function takes as input the spatial coordinates of the sonde and plots its position on an orthophoto along with the MCS.
This graphical display is important because the sonde is unlikely to lie directly on the MCS; this was the case for all four flights shown in Figure 9. To allow for this possibility, the user is prompted to inspect the map and specify whether the MCS is located upstream or downstream of the sonde; both of these geometries are illustrated schematically in Figure 10. The MCS node closest to the sonde is then identified, and the distance d from that node to the sonde is calculated. This information is critical because the water in a river is in motion, which implies that any spatial offset between the MCS and the sonde will lead to a time lag between the HITs and the in situ concentration measurements. For example, if the MCS is located 5 m upstream of the sonde and the river is flowing at a velocity of 1 m/s, the water observed in the HIT at time t will not reach the sonde until 5 s later. This time lag is calculated by sonde4nano.m based on field measurements of flow velocity made at one XS located upstream of the MCS and another downstream of the MCS, as shown in Figure 10. For our case study on the Missouri River, these velocity data were acquired using an ADCP, as described in Section 2.2. The point closest to the MCS on the velocity XS upstream of the MCS is identified, the distance d_u from this point is calculated, and the velocity v_u at this point is extracted from the ADCP data. The same procedure is applied to the velocity XS downstream of the MCS to obtain the analogous quantities d_d and v_d. A representative mean velocity v̄ is then calculated as the distance-weighted average of the velocities observed upstream and downstream of the MCS:
v̄ = v_u d_u/(d_u + d_d) + v_d d_d/(d_u + d_d).    (10)
The travel time required for the water to pass from the MCS to the sonde is then given by
l = -d/v̄ if the MCS is upstream of the sonde    (11)
l = d/v̄ if the MCS is downstream of the sonde,    (12)
where l is the lag time that must be added to the time t_s at which a sonde observation is made to determine the time stamp t of the HIT that captured the same parcel of water:
t = t_s + l.    (13)
For situations like that shown in Figure 9 and Figure 10a, where the MCS is upstream of the sonde, l is negative, and a HIT from a few seconds earlier than the sonde observation is associated with the in situ concentration measurement. Conversely, if the MCS were located downstream from the sonde (Figure 10b), l would be positive, and a HIT from a few seconds later than the sonde observation would be linked to the concentration measured in situ.
With the time lag l established, sonde4nano.m then links the sonde observations to individual HITs as follows. The entire time series of concentration measurements is first subset to encompass only the hovering portion of the current flight. All the cubes acquired during this time period and processed as described in the preceding sections are then aggregated and used to create a new tabular data structure with two fields: (1) the time stamp for each HIT and (2) the spectrum associated with the MCS node located closest to the sonde. The time lag calculated by Equations (10) and (11) or (12) is then added to the time stamps for the sonde observations, per Equation (13), to obtain the time stamps for the HITs that capture the same parcel of water as those observations. Finally, the remotely sensed and in situ data sets are connected to one another by using MATLAB’s synchronize function to linearly interpolate the HITs to the lagged sonde time stamps. The final output from sonde4nano.m is a tabular data structure with three fields: (1) time stamps that are based on those from the sonde but account for the time lag resulting from any spatial offset between the MCS and the sonde and the river’s flow velocity; (2) in situ measurements of dye concentration; and (3) reflectance spectra from the MCS nodes located closest to the sonde and interpolated to the lagged sonde time stamps. This table contains all of the information needed to proceed to the final phase of the HITTER workflow: connecting reflectance to concentration.
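The essential calculations performed by sonde4nano.m can be sketched as follows in MATLAB; the distances, velocities, logical flag, and variable names are illustrative stand-ins for the quantities defined above.

% Lag the sonde time stamps and pair each observation with the nearest-node spectrum
vbar = vu*du/(du + dd) + vd*dd/(du + dd);     % distance-weighted mean velocity (Equation (10))
if mcsIsUpstream
    lag = -d/vbar;                            % Equation (11): the HIT precedes the sonde observation
else
    lag =  d/vbar;                            % Equation (12): the HIT follows the sonde observation
end
tHit = tSonde + lag;                          % Equation (13): HIT times matched to each sonde record
specAtSonde = interp1(tOut, squeeze(cubeOut(:, iNearest, :)), tHit, 'linear');  % spectra at the MCS node nearest the sonde
pairs = table(tHit, C_sonde, specAtSonde);    % paired time stamps, concentrations, and reflectance spectra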

2.5.5. Retrieving Dye Concentration from Remotely Sensed Data

The key to mapping the dispersion of a visible tracer by remote sensing is a quantitative relation between the concentration of the dye and the reflectance in different wavelength bands. For this investigation, we established such a relationship using Optimal Band Ratio Analysis (OBRA). This spectrally based technique was initially developed for estimating water depth [48,49] but has also been used to infer RWT concentrations from various types of remotely sensed data in previous tracer studies [11,13], including earlier work on the Missouri River [2]. The basic premise of OBRA is to select a specific combination of wavelengths that is strongly correlated with some water attribute of interest, in this case, dye concentration C, and then calibrate a relation between an image-derived quantity based on this pair of bands and the parameter to be mapped. The OBRA algorithm thus takes as input paired observations of reflectance and RWT concentration and calculates the log-transformed band ratio X as
X = ln[R(λ_1)/R(λ_2)],    (14)
where R(λ) denotes the reflectance at wavelength λ for all possible combinations of numerator λ_1 and denominator λ_2 wavelengths. A separate regression of X against C is performed for each (λ_1, λ_2) pair, and that which yields the highest coefficient of determination R^2 is the optimal band ratio. The corresponding regression equation provides a site-specific, tuned relation for inferring C from the spectrally based quantity X. In addition to the optimal band ratio, the OBRA algorithm also outputs a matrix of R^2 values for all the other X versus C regressions that summarizes spectral variations in the strength of the relationship between reflectance and concentration. This information can be visualized by creating 2D plots with the denominator and numerator wavelengths on the horizontal and vertical axes, respectively, and colors representing the R^2 values for each (λ_1, λ_2) combination.
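The OBRA search itself reduces to an exhaustive loop over band pairs, as in the MATLAB sketch below; spectra is assumed to be an nObs × nBands matrix of reflectance values paired with the concentration vector C, and only the linear form of the X versus C relation is shown.

% Exhaustive OBRA search over all (numerator, denominator) band pairs
nBands = size(spectra, 2);
R2 = nan(nBands);                                  % R^2 for every band combination
for i = 1:nBands                                   % numerator wavelength lambda_1
    for j = 1:nBands                               % denominator wavelength lambda_2
        if i == j, continue, end
        X = log(spectra(:, i) ./ spectra(:, j));   % Equation (14)
        r = corrcoef(X, C);                        % strength of the linear X versus C relation
        R2(i, j) = r(1, 2)^2;
    end
end
[~, best] = max(R2(:));                            % optimal band ratio
[iBest, jBest] = ind2sub(size(R2), best);
coeffs = polyfit(log(spectra(:, iBest) ./ spectra(:, jBest)), C, 1);   % calibrated relation for predicting C from X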
OBRA is a general, flexible technique that can be applied to various types of remotely sensed data. In this study, we performed OBRA for both field spectra acquired just above the water surface from a boat traversing the Missouri River and HITs initially acquired from a UAS hovering above the channel and then processed via the workflow outlined above. We implemented OBRA using a custom MATLAB function called genObraLin.m, which was first developed for estimating water depth from multi- and hyperspectral images [49], was subsequently incorporated into the Optimal River Bathymetry Toolkit (ORByT) [50,51], and is now made available as part of the data release associated with this study [27]. No modification of the underlying source code from ORByT was required to apply OBRA in this context because the algorithm is generic; we simply substituted dye concentration for water depth as the response variable. The main inputs to genObraLin.m were a vector of concentration measurements and a matrix of spectra in which each row corresponds to a measurement of C and each column represents a different wavelength. Outputs from the function include the numerator and denominator wavelengths for the optimal band ratio, the corresponding X versus C regression equation, and the R 2 values for all possible band combinations. Although genObraLin.m can perform OBRA based on several functional forms of the X versus C relation, including linear, quadratic, exponential, and power-law variants [49], we only used the linear form in this study because the other, more complex versions did not yield any meaningful improvements in R 2 .
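To make the band-pair search concrete, the sketch below shows one way the linear form of OBRA could be implemented in MATLAB. The variable names (R, C, wvl) are hypothetical, and the released genObraLin.m [27] is the full implementation with additional functional forms and outputs.

```matlab
% Minimal sketch of the linear OBRA search (hypothetical variable names).
% R   : [nObs x nBands] matrix of reflectance spectra
% C   : [nObs x 1] vector of dye concentrations [ppb]
% wvl : [nBands x 1] vector of band-center wavelengths [nm]
nBands = numel(wvl);
R2mat  = nan(nBands);                        % rows = numerator, columns = denominator
for i = 1:nBands                             % candidate numerator wavelength lambda_1
    for j = 1:nBands                         % candidate denominator wavelength lambda_2
        if i == j, continue, end
        X   = log(R(:,i) ./ R(:,j));         % Equation (14)
        p   = polyfit(X, C, 1);              % linear X versus C regression
        res = C - polyval(p, X);
        R2mat(i,j) = 1 - sum(res.^2) / sum((C - mean(C)).^2);
    end
end
[~, idx]     = max(R2mat(:));                % highest R^2 defines the optimal ratio
[iOpt, jOpt] = ind2sub(size(R2mat), idx);
fprintf('Optimal band ratio: R(%g)/R(%g), R^2 = %.2f\n', ...
        wvl(iOpt), wvl(jOpt), R2mat(iOpt, jOpt));

% 2D display of R^2 for all band pairs (denominator on x, numerator on y).
imagesc(wvl, wvl, R2mat); axis xy; colorbar
xlabel('\lambda_2 (nm)'); ylabel('\lambda_1 (nm)');
```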
For the field spectra, our data set encompassed RWT concentrations from 0 to 12.34 ppb, and the observations were relatively evenly distributed across this range (Figure 3), so we used all of the available field spectra to perform OBRA. The remotely sensed data, however, were not as conducive to OBRA and thus required some additional preparation before being used as input to genObraLin.m. More specifically, the timing of the UAS flights relative to the passage of the dye plume, summarized in Figure 2, was less than ideal, with the initial peak in dye concentration occurring after the third flight but before the fourth. We thus missed the highest concentration recorded by the sonde anchored on the buoy, and the highest value of C observed during one of the flights was a relatively low value of 21.72 ppb. More importantly, once the main pulse of dye had passed, concentrations quickly dropped to much lower levels and varied little during flights 5, 6, and 7. As a result, the distribution of measured concentrations that could be paired with HITs was uneven and heavily skewed toward low values. Previous research in a depth retrieval context showed that stratifying the data provided as input to OBRA to obtain a more even distribution of the response variable (initially depth, but concentration in our case) can lead to a stronger correlation between X and the parameter of interest [52]. We thus followed a stratified OBRA approach, outlined in [52], based on the distribution of dye concentrations measured during the four UAS flights that captured the passage of the dye plume. The cumulative distribution function and frequency distribution of C are shown in Figure 11a,b, and we used these plots to partition the data set into six bins with manually selected lower limits of 0, 2.27, 5.89, 12.6, 16.7, and 20 ppb. We then calculated the number of observations within each bin of the final, stratified frequency distribution shown in Figure 11c and used the smallest of these counts (17 for the bin with C from 5.89 to 12.6 ppb) as the number of paired observations of reflectance and concentration to be selected at random from each of the other bins. The final data set provided as input to OBRA consisted of 102 spectra, 17 from each of the six bins, and was evenly distributed across the range of observed concentrations rather than being skewed toward low concentrations like the original data set.
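A minimal sketch of this stratified selection, assuming hypothetical variable names (C for the measured concentrations paired with HITs, R for the matching MCS-node spectra) and the bin limits quoted above, might look as follows; the exact implementation in our workflow may differ in detail.

```matlab
% Minimal sketch of stratified sampling of paired concentration/spectrum data.
edges   = [0 2.27 5.89 12.6 16.7 20 Inf];     % lower limits of the six bins [ppb]
bin     = discretize(C, edges);               % bin index for each observation
nKeep   = min(histcounts(C, edges));          % count in the smallest bin (17 here)
rng(0)                                        % reproducible random draws
keepIdx = [];
for b = 1:numel(edges) - 1
    inBin   = find(bin == b);
    keepIdx = [keepIdx; inBin(randperm(numel(inBin), nKeep))]; %#ok<AGROW>
end
Cstrat = C(keepIdx);                          % 6 x 17 = 102 concentrations
Rstrat = R(keepIdx, :);                       % matching spectra passed to OBRA
```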
Once a calibrated relation between reflectance and concentration has been established by stratified OBRA, the final X versus C regression equation can be used to estimate dye concentrations and characterize their variation across time and space. The final function we developed as part of the HITTER workflow, cube2dyeMap.m, was designed for this purpose and takes as input the final HITs and the OBRA output from genObraLin.m. The calibrated OBRA relation is applied to each pixel in each HIT to obtain a 2D array of inferred dye concentrations in which each row represents a transect for a specific point in time and each column represents a time series at a fixed spatial location across the channel. The user can specify a particular row and column as input to cube2dyeMap.m to plot as a cross-section and time series, respectively. The former is useful for visualizing the spatial structure of the dye plume at a given time, while the latter is useful for examining the temporal evolution of dye concentration at a fixed location. For example, for a 7.5 min hovering occupation by the Nano, selecting the 224th row would plot a transect halfway through the flight, and selecting the 320th column would plot a time series at the midpoint of the MCS. In addition, the full 2D array of estimated dye concentrations is rendered as an image for visualizing how C varies both laterally across the channel and over time as the dye plume passes beneath the hovering UAS. To facilitate comparison across different flights, the user can also specify the limits of the color scale used to represent C in these displays. Examples of the graphical output from cube2dyeMap.m are provided in the following section.
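The core of this mapping step can be sketched as follows. Here, hits, iNum, iDen, b0, and b1 are hypothetical placeholders for the stack of HITs, the indices of the optimal numerator and denominator bands, and the intercept and slope of the linear X versus C regression; the released cube2dyeMap.m [27] adds the interactive options described above.

```matlab
% Minimal sketch of applying the calibrated OBRA relation to a stack of HITs.
% hits : [nTime x nNode x nBands] array of MCS-node reflectance spectra
X    = log(hits(:,:,iNum) ./ hits(:,:,iDen));    % Equation (14) for every pixel
Cmap = b0 + b1 .* X;                             % [nTime x nNode] concentration array

% Row = transect at one instant; column = time series at one MCS node.
figure; plot(Cmap(224,:));  xlabel('MCS node');   ylabel('C (ppb)')   % mid-hover XS
figure; plot(Cmap(:,320));  xlabel('HIT number'); ylabel('C (ppb)')   % mid-channel series
figure; imagesc(Cmap, [0 25]); colorbar                               % user-set color limits
xlabel('MCS node'); ylabel('HIT number')
```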

3. Results

3.1. Characterizing the Relationship between Reflectance and Concentration with Field Spectra

To characterize the relationship between reflectance and RWT concentration, we collected field spectra from a boat 4.35 km downstream of the dye release point on the Missouri River (Figure 1). These spectra are plotted in Figure 3 with lines colored by concentration, which ranged from 0 to 12.34 ppb. Due to the extremely high turbidity at the time these data were collected (approximately 400 NTU), the overall shape of the spectra is unlike those from previous studies in clear-flowing rivers [11] or even on the Missouri River when turbidity was much lower (approximately 35 NTU) [2]. The spectra recorded during the present experiment thus more closely resembled those of soil than water, but the influence of the dye on reflectance was still evident. As the amount of RWT increased, the reflectance peak around 590 nm became higher, while greater absorption by the dye caused the reflectance trough around 560 nm to deepen. As a result, the spectral slope d R ( λ ) / d λ between these two regions steepened as concentration increased. A comparison of our Figure 3 with Figure 3 of [2] indicates that even though the range of concentrations represented by the field spectra from this study was slightly less than that observed during the previous tracer experiment on the Missouri River, the difference in spectral shape between low and high concentrations was much more pronounced here than in the earlier study, despite turbidity being an order of magnitude greater than in 2021.
We performed an OBRA of the field spectra to more systematically examine and quantify the relationship between reflectance and concentration. The results of this analysis are summarized in Figure 12, which indicates that the strongest correlation between X and C occurred for the R ( 579 ) / R ( 593 ) band ratio. The denominator wavelength was located near the reflectance peak, and R ( 593 ) took on higher values at greater concentrations. The numerator wavelength was closer to but not coincident with the reflectance trough, and R ( 579 ) became lower as the concentration increased. The resulting linear relation between X and C was very strong, with an R 2 of 0.98, and had a negative slope (Figure 12a). Because R ( 579 ) was consistently less than R ( 593 ) , the ratio R ( 579 ) / R ( 593 ) was less than one. Moreover, the decrease in R ( 579 ) and increase in R ( 593 ) associated with an increase in concentration dictated that this ratio became smaller at greater concentrations. Taking the natural logarithm of the ratio per Equation (14) thus led to increasingly negative values of X. Relatively small concentrations of dye were thus associated with small values of X closer to zero, and higher concentrations led to negative values of X that were greater in absolute magnitude, resulting in an inverse relationship between X and C.
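A quick numerical check with purely hypothetical reflectance values (not measurements from this study) illustrates the sign argument:

```matlab
% Because R(579) < R(593), the ratio is < 1 and X < 0; as C increases, the
% ratio shrinks further and X becomes more negative (illustrative values only).
Xlow  = log(0.10 / 0.12)   % about -0.18 at a lower concentration
Xhigh = log(0.08 / 0.14)   % about -0.56 at a higher concentration
```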
The OBRA algorithm also performs an X versus C regression for all wavelength combinations, not just the band ratio identified as optimal. The resulting R 2 values are represented graphically in Figure 12b, which provides insight into spectral variations in the strength of the relationship between reflectance and concentration. Although 579 and 593 nm were selected as the optimal ( λ 1 , λ 2 ) pair, the extensive bright-red tones in Figure 12b indicate that the correlation between X and C was nearly as high for many other combinations of wavelengths. For example, the horizontal swath of red centered on 590 nm implies that pairing any numerator band between 490 and 650 nm with any denominator band greater than about 470 nm would lead to an R 2 in excess of 0.9. The narrow horizontal strip of light-blue to white tones in the center of this swath is associated with R 2 values near 0 and is located around 570 nm. This wavelength corresponds to the crossover point in Figure 3 that separates shorter wavelengths, where reflectance decreases as C increases, from longer wavelengths, where reflectance increases with concentration. At the crossover wavelength itself, reflectance is the same for all concentrations, and any X defined using this band would be uncorrelated with C. However, this crossover region is very narrow and can be avoided by simply excluding bands near 570 nm from the ratio. Overall, our analysis of the field spectra acquired during this tracer experiment implied that a strong, persistent relationship between reflectance and concentration could provide a sound physical basis for remote sensing of RWT dispersion.

3.2. Inferring Dye Concentrations from Hyperspectral Image Transects

To assess the potential to characterize the dispersion of a visible tracer by hyperspectral imaging, we applied the HITTER workflow outlined above to the Nano data acquired during the 2024 tracer experiment on the Missouri River. Pairing HITs with in situ measurements of dye concentration and randomly selecting a stratified sample, as described in Section 2.5.5, led to the 102 spectra shown in Figure 13. As in Figure 3, the spectra are colored by concentration, which ranged from nearly 0 to 21.72 ppb during the four flights conducted as the dye plume traveled downstream beneath the hovering UAS. Although the overall shape of the Nano spectra was similar to that of the field spectra, with a steady increase in reflectance with wavelength due to the abundant suspended sediment, the influence of the dye on reflectance was much less evident. Whereas the field spectra featured a clear peak and trough that became more pronounced as the concentration of RWT increased, the Nano spectra were more uniform and lacked any obvious shapes or patterns that could be easily linked to reflection or absorption by the dye. The lack of a strong spectral expression of RWT in the Nano data was likely due to the high turbidity of the water, which presumably obscured the reflectance signal associated with the dye. The tracer experiment on the Missouri River was our initial test of the feasibility of inferring RWT concentrations with this particular hyperspectral imaging system, and we hypothesize that absorption and reflection features associated with the dye would be better expressed in a more clear-flowing river. The Nano spectra were also more noisy than the field spectra, with erratic fluctuations from band to band that persisted even after the extensive temporal and spectral smoothing applied by the linkLine2cube.m function described in Section 2.5.3.
Whereas the clear expression of the dye signal in the field spectra made a strong relationship between reflectance and concentration readily apparent with even a cursory visual inspection, inferring any such relation from the noisy, relatively featureless Nano spectra leaned much more heavily upon the OBRA algorithm. More specifically, this procedure systematically evaluated the correlation between X and C for all possible band combinations and thus provided a means of identifying subtle connections between reflectance and concentration that were not immediately obvious when viewing the spectra used as input. In this case, despite the absence of strong reflectance peaks or deep troughs in the Nano spectra, OBRA identified a moderately strong ( R 2 = 0.77 ) relationship between X and C for the R ( 593 ) / R ( 697 ) band ratio (Figure 14a). Although this correlation was weaker than for the field spectra, the R 2 was also higher than we had expected when first viewing the Nano spectra shown in Figure 13. We also considered this to be an encouraging result, given the relatively low concentrations observed during the Nano flights, with a maximum of 21.72 ppb. The regression equation derived from the Nano spectra is more difficult to interpret than that from the field spectra because the numerator and denominator wavelengths do not correspond to any obvious reflection or absorption features. However, a closer examination of Figure 13 revealed that, overall, reflectance tended to increase with increasing concentration. The separation between relatively dark, low-C spectra and brighter, high-C spectra was greater at the shorter 593 nm wavelength selected as the numerator band than at the longer 697 nm wavelength selected as the denominator. R ( 593 ) thus tended to increase to a greater degree than R ( 697 ) as concentration increased, leading to a direct relationship between X and C for the ratio R ( 593 ) / R ( 697 ) .
The much weaker effect of RWT concentration on the Nano spectra compared to the field spectra also led to a stark contrast between the OBRA matrices for the two data sets. Whereas the plot of X versus C  R 2 values for all possible ( λ 1 , λ 2 ) combinations for the field spectra was dominated by broad areas of bright red that indicated strong correlations between reflectance and concentration (Figure 12b), the corresponding plot for the Nano spectra featured only a few small, isolated patches of pink (Figure 14b). The lack of a strong, consistent relationship between reflectance and concentration for the Nano spectra was thus evident not only in the lower R 2 value for the band ratio identified as optimal but also in the dearth of alternative band combinations that would have yielded correlations anywhere near as high. Instead, Figure 14b is dominated by light-blue and white tones that indicate weak correlations between X and C for the vast majority of wavelength combinations. However, the optimal band ratio was located within a narrow vertical swath of relatively high R 2 values greater than about 0.6 that encompassed a small range of denominator wavelengths from 695 to 700 nm but a broader set of numerator wavelengths from 575 to 625 nm. This result suggests that a small group of bands centered around 697 nm was more strongly influenced by variations in concentration and could be coupled with a broader range of shorter wavelengths to link X to C. More generally, our findings suggest that the availability of many narrow bands in the hyperspectral data was critical to establishing even a moderately strong relationship between reflectance and concentration based on spectra that at first appeared featureless.

3.3. Mapping Spatial and Temporal Variations in Concentration from Remotely Sensed Data

Having established a calibrated relationship for inferring concentration from reflectance, the final phase of the HITTER workflow involved applying the regression equation shown in Figure 14a to the remotely sensed data acquired from the UAS to characterize the dispersion of the dye plume over the course of the experiment. More specifically, we used cube2dyeMap.m to calculate an estimated dye concentration for each pixel in each HIT acquired during each of the four UAS flights that captured the passage of the dye. As described in Section 2.5.5, the primary output from this function is a 2D array of concentration estimates like that shown in Figure 15a for Flight #4. Each row of this matrix represents a transect across the channel at a specific point in time; an example XS of image-derived concentrations from halfway through the hovering portion of the flight is shown in Figure 15b. For this particular transect, the inferred dye concentration varies smoothly from about 15 ppb near the bank to 20 ppb at the other end of the XS closer to the center of the channel, where more of the original pulse of RWT was conveyed by the main downstream flow. This general spatial pattern is evident throughout the full array of HITs represented by the matrix depicted graphically in Figure 15, with higher concentrations on the right, farther from the bank. Also, note that all transects included vegetation along the north (left) bank, which rendered concentration estimates along this bank inaccurate.
When comparing one HIT in the sequence to the next 1 s later in time, the noise inherent to the concentration estimates becomes evident. Each column of the array produced by cube2dyeMap.m represents a time series of estimated dye concentrations at a fixed spatial position across the channel; an example from the center of the MCS for Flight #4 is shown in Figure 15c. The second-to-second fluctuations in concentration over time are much greater in magnitude than the cross-stream spatial variation shown in Figure 15b, with C ranging from less than 5 to more than 25 ppb during the 7.5 min hover. The noise in this time series persisted even after the data were temporally and spectrally smoothed and spatially interpolated onto the MCS, and it overwhelmed any weak trend in concentration that might be associated with the passage of the dye plume. This persistent noise is most likely due to the highly turbid conditions during the experiment, which complicated the detection of a dye signal in the Nano spectra and thus compromised our ability to identify trends in concentration from the HITs for any one flight.
Over the longer time scale of all four flights, which spanned a period of 1:16:05, the decrease in dye concentration as the plume traveled downstream past the UAS hover location became more apparent. A single transect extracted from the middle of each flight is shown in Figure 16, and the comparison of these concentration XSs is consistent with the in situ concentration measurements summarized in Figure 2. Concentrations were highest during the first flight (#4), up to about 20 ppb, but had declined to much lower values, around 5 ppb, by the time the next flight (#5) began. The high concentration estimates where the HITs extended onto the bank should be disregarded because these pixels captured the reflectance of land rather than water. Concentrations generally remained low during the last two flights, but a zone of higher estimated concentrations on the order of 10 ppb in the transect from Flight #7 suggests that some of the dye initially released as a single pulse dispersed much more slowly and could lead to episodic upticks in concentration long after the initial peak had passed. Developing a process-based understanding for these patterns is the topic of planned work, but the present study demonstrates how the HITTER framework can provide detailed, quantitative information on spatial and temporal variations in dye concentration to support this type of analysis.

4. Discussion

4.1. Limitations and Uncertainties Associated with the HITTER Framework and Its Application to the Missouri River Tracer Experiment

The tracer experiment on the Missouri River provided an opportunity to develop and test the HITTER workflow in the context of an applied investigation focused on habitat conditions for larval sturgeon. Although this effort yielded some encouraging results, the study also highlighted several limitations and uncertainties associated with the initial version of the framework and its application as part of this particular experiment. For example, we were not able to rigorously and systematically evaluate the accuracy of the scan line spatial referencing process outlined in Section 2.5.2 due to specific circumstances at the time of our data collection. Because the Missouri River was near the bankfull stage during the experiment, we were not able to place ground control targets along the channel margins, which would have been exposed under more typical flow conditions. In the absence of such targets, and with only uniform riparian vegetation along the banks, we were not able to compare the positions of distinct features in the spatially referenced HITs we produced to their independently surveyed true locations. As a surrogate, we applied the spatial referencing process to a Nano data cube acquired while the UAS was exiting the hover site to obtain a more conventional 2D image that included the calibration tarp we had stretched across a boat anchored near the bank. We then manually digitized six unique points on the tarp on both the orthophoto and the projected Nano image. Comparing the two sets of coordinates yielded a root mean squared error of 2.29 m, which we considered to be an acceptable level of spatial uncertainty: only 2.73% of the approximately 84 m overall length of one of our HITs and an even smaller fraction of the roughly 400 m width of the Missouri River.
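For reference, the error metric used in this check amounts to a few lines of MATLAB; xyOrtho and xyNano are hypothetical [6 x 2] matrices of the (x, y) coordinates of the tarp points digitized from the orthophoto and the projected Nano image.

```matlab
% Planimetric root mean squared error between the two sets of digitized points.
dXY  = xyNano - xyOrtho;              % coordinate differences [m]
rmse = sqrt(mean(sum(dXY.^2, 2)));    % 2.29 m for the six tarp points in this study
```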
Another potential source of spatial uncertainty inherent to the HITTER workflow was the aggregation of all the individual scan lines acquired during the hovering portion of a UAS flight onto a single MCS. As described in Section 2.5.3, linkLine2cube.m takes the mean of the projected ( x , y ) coordinates of each cross-track pixel along all the scan lines to obtain the coordinates of each node of the MCS. To quantify the amount of error associated with this averaging operation, we revisited this portion of the code using data from Flight #4 and calculated the standard deviation of the scan line coordinates in addition to their mean. Taking the square root of the sum of the squares of the standard deviations for the x- and y-coordinates yielded 0.85 m as a typical distance between the position of a pixel in any one scan line and the corresponding node along the final MCS. As for the spatial referencing error described above, we considered this level of uncertainty to be acceptable, given the length of the HITs and the width of the river. Overall, the UAS we used in this study was able to maintain a remarkably stable position and orientation during the hovering portion of each flight. Because variations in location, altitude, pitch, roll, and yaw were minimal and the raw GNSS/IMU data were post-processed using a GNSS base station to obtain an SBET, we were able to produce HITs that were internally consistent and could be aggregated onto a single MCS with a high degree of confidence. In general, given all the other sources of error involved in the processing chain and the persistent noise inherent to the Nano spectra, we did not consider spatial uncertainty to be a major limitation of the HITTER framework as applied in the context of this experiment.
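As with the spatial referencing check, the scan-line aggregation uncertainty described at the start of the preceding paragraph can be computed in a couple of lines; xScan and yScan are hypothetical vectors of the projected coordinates of one cross-track pixel over all scan lines in the hover.

```matlab
% Mean defines the MCS node; the combined standard deviation quantifies how far
% individual scan line pixels typically fall from that node.
xNode  = mean(xScan);  yNode = mean(yScan);
posErr = sqrt(std(xScan)^2 + std(yScan)^2);   % about 0.85 m for Flight #4
```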
Another potential limitation of the HITTER workflow as implemented herein is the need for independent field measurements of flow velocity to account for the spatial offset, and thus the time lag, between the MCS and the sonde. As described in Section 2.5.4, we used ADCP data from cross-sections above and below the HIT to estimate a representative flow velocity, which we then combined with the distance between the MCS and the sonde to calculate how much time was required for the water captured in the HIT to reach the sonde. In this study, the sonde was located downstream of the MCS, and so we paired each in situ concentration measurement with a HIT from a few seconds earlier. If the sonde had been upstream of the MCS, the concentration data would be linked to HITs acquired a few seconds later. For the tracer experiment on the Missouri River, these adjustments were minor and might not have been necessary. For example, for Flight #4, the sonde was only 2.78 m downstream of the closest MCS node, and the flow velocity, calculated as the mean of the nearest ADCP measurements on the XS above and below the MCS, was a modest 1.17 m/s, leading to an MCS-to-sonde time lag of only 2.37 s. Given the 5 s logging interval of the sonde, not accounting for the time lag probably would have had little, if any, impact on the final results.
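For completeness, the Flight #4 lag quoted above follows directly from the distance and velocity:

```matlab
dist = 2.78;          % distance from the nearest MCS node to the sonde [m]
vel  = 1.17;          % representative flow velocity from the ADCP data [m/s]
tLag = dist / vel;    % about 2.4 s, less than the sonde's 5 s logging interval
```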
More generally, however, incorporating information on the relative positions of the MCS and sonde and the local flow velocity so that an accurate time lag can be calculated is a valuable component of the HITTER framework. Ultimately, the remotely sensed and in situ data sets must be linked to one another based on time stamps to calibrate a relationship between reflectance and the water attribute of interest; using an appropriate offset between the two allows this connection to be made in a more rigorous and precise manner. This aspect of the analysis takes on greater significance as the distance between the MCS and the sonde increases and/or as the flow velocity becomes greater, either or both of which would lead to longer lag times that would need to be factored in when synchronizing the two data sets. In this study, we used a simple distance-weighted averaging of ADCP measurements from two XSs, one 89.2 m upstream and the other 83.9 m downstream of the MCS for Flight #4, for example. In other situations, more closely spaced ADCP transects, a more sophisticated interpolation scheme, or a hydrodynamic model could be used to refine the velocity estimates used in the time lag calculation. Conversely, if the MCS and sonde are close together and the flow velocity is low, the time lag could be negligible, and the two data sets could be synchronized directly based on their original time stamps.
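One simple form of the distance weighting described here, assuming linear interpolation between the two ADCP cross-sections and hypothetical velocities vUp and vDown, is sketched below; the released code may implement the weighting differently.

```matlab
% Inverse-distance (linear) interpolation of velocity to the MCS for Flight #4.
dUp   = 89.2;  dDown = 83.9;              % distances from the MCS to each ADCP XS [m]
wUp   = dDown / (dUp + dDown);            % nearer cross-section receives more weight
wDown = dUp   / (dUp + dDown);            % weights sum to one
vMCS  = wUp * vUp + wDown * vDown;        % representative velocity at the MCS [m/s]
```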
The single most significant source of uncertainty associated with this study was the noise in the Nano spectra, which largely obscured any reflectance signal related to variations in RWT concentration (Figure 13). This noise persisted even after the temporal and spectral smoothing and spatial interpolation operations implemented within linkLine2cube.m and described in Section 2.5.3. The lack of a clear, consistent relationship between reflectance and concentration for the Nano spectra was evident in the OBRA output, not only as a lower R 2 value for the optimal band ratio but also as a dearth of alternative band combinations with correlations nearly as high (Figure 14). The field spectra collected from a boat, in contrast, featured a pronounced peak and trough that became more accentuated as the concentration of RWT increased (Figure 3), which led to a much stronger relationship between reflectance and concentration that was readily apparent even before performing OBRA and was then substantiated by the quantitative analysis.
Neither the source of the noise in the Nano data nor the reason for the disparate OBRA results for the field spectra and for the HITs was immediately obvious. However, the high turbidity of the Missouri River during the experiment was likely the primary contributing factor. This study took place under near-bankfull conditions when the river was conveying a large volume of suspended sediment, leading to turbidity values an order of magnitude greater than those observed during a 2021 tracer study at another site on the Missouri River [2]. We hypothesize that the abundant suspended material imparted a strong background color to the water itself that essentially overwhelmed and masked the reflectance signal associated with the dye, at least for the remotely sensed data acquired from a UAS. The field spectra might have been less affected because these data were collected from just above the water surface and might have recorded a reflectance signal that was primarily influenced by dye present right at the surface, as opposed to the sediment distributed throughout the water column. The Nano, in contrast, might have captured a depth-integrated signal that was more heavily impacted by the turbidity of the water and less sensitive to dye that might have been present only near the surface. Further testing would be needed to evaluate this hypothesis, but we are confident that the extremely turbid conditions were at least partially responsible for the noisy Nano spectra and the resulting relatively weak correlation between reflectance and concentration. We suspect that applying the same workflow to data collected from a more clear-flowing river would yield more promising results. Future studies across a broader range of river environments would provide a more robust assessment of the general utility of the HITTER framework.

4.2. Potential Extensions of the HITTER Framework

Our primary objective in this paper was to introduce the HITTER framework as a general, flexible means of characterizing spatial and temporal variations in some water attribute of interest via hyperspectral imaging of a channel cross-section. This approach could become a useful tool for capturing transient events in rivers as they occur. Acquiring detailed spectral information at a high sampling frequency provides a rich data set that could subsequently be explored in greater depth to gain additional insight regarding these dynamic phenomena. As an initial case study to demonstrate how the HITTER workflow might facilitate dispersion studies, we presented results from a tracer experiment on the Missouri River. In this context, the key water attribute was the concentration of a visible dye. We showed how HITs could be correlated with in situ concentration measurements to establish a relationship between reflectance and concentration that could then be used to produce transects and time series of concentration estimates. However, this is but one example of the many ways in which the HITTER framework might be applied across the riverine sciences. The approach is well suited to quantifying the passage of any substance that is suspended or dissolved within the river, carried downstream by the flow, and has a detectable influence on the reflectance of the water. Such materials might include sediment, algae, oil, and a host of other contaminants. Various nutrients or chemicals also might lend themselves to this type of analysis if their absorption, scattering, or fluorescence properties create a distinct spectral signal. By imaging the same cross-section repeatedly over time, the HITTER workflow could also be used to count various floating or submerged objects as they pass through the sensor’s field of view and thus estimate fluxes of these materials, provided that information on flow velocities is also available. Non-contact techniques like large-scale particle image velocimetry [53] could be employed for this purpose. Pieces of vegetation or other types of debris, fish or invasive species of concern, ice, and algal mats are potential targets for this kind of inventory. Repeat imaging also lends itself to monitoring how the river channel itself changes over time in response to erosion and deposition of the streambed, provided a relationship between reflectance and water depth can be established.
In this study, we acquired hyperspectral data from a UAS hovering above the river, but the HITTER framework could be modified to accommodate other modes of sensor deployment. For example, data collected from a stationary, nadir-viewing perspective, such as a bridge or boom extended out over the channel, would be simpler to process in many ways. Rigidly mounting the sensor on a fixed support would obviate the need for the trajectory processing and cube selection phases of the workflow. The scan line spatial referencing would be streamlined considerably; establishing the coordinates of the sensor and its height above the water surface might be sufficient for defining the location and scale of the HITs. Alternatively, the imaging system could be mounted on the bank to view the river at an oblique angle, but the more complex, off-nadir viewing geometry would have to be taken into account. The primary advantage of airborne data collection is the ability to capture a broader field of view and thus a larger portion of the river in each individual scan line. Bridge- or bank-mounted systems would provide much more limited coverage, but using a wider-angle lens or some kind of scanning mirror could help to mitigate this limitation. Another way to obtain more spatially extensive data from a ground- or water-based vantage point would be to deploy the imaging system from a boat or uncrewed surface vessel that could then be anchored in place at different locations within the channel, analogous to a hovering UAS. The field of view would still be smaller than for the airborne case, but a mobile platform would provide greater flexibility than a fixed mount. Depending on the level of precision required, correcting the various geometric distortions associated with any kind of near-field deployment might be necessary.
As currently conceptualized, the HITTER framework relies upon in situ measurements of the water attribute of interest to relate the attribute to reflectance, but this requirement could be relaxed. A calibration step of some kind is needed before attribute values can be inferred from the remotely sensed data. In this study, for example, we linked the HITs to measurements of dye concentration from a sonde. However, an alternative approach to the calibration process could eliminate the need for simultaneous field data collection. One potential strategy might involve using a physics-based radiative transfer model to simulate how the reflectance of the water varies as a function of the concentration of the constituent of interest (e.g., RWT), given information on the optical properties of the water and the spectral characteristics of the constituent. Running this forward model for a range of concentrations would provide a synthetic data set from which a relation for inferring concentration from reflectance could be derived. To implement this approach, a detailed understanding of the interactions between light, water, and not only the constituent of interest but also any other optically significant suspended or dissolved materials would be needed. Efforts to develop such knowledge could be justified by obviating the need for in situ data collection, which would allow the HITTER framework to be applied more broadly. Alternatively, data collected from a range of river environments, or even laboratory settings [19], could be used to train a transferable machine learning model that ideally would be applicable from one site to the next, even in the absence of site-specific calibration data. Pursuing this approach would involve acquiring a large number of extensive, diverse data sets for training and validation; the HITTER workflow could play a key role in such an effort.

5. Conclusions

The movement of various materials through river channels is a fundamental component of geomorphic and ecological systems. Understanding the processes driving this motion is important for many aspects of river management, including water quality monitoring and hazard response. Remote sensing techniques are uniquely capable of providing the kind of spatially distributed information required to support such a broad range of applications. Our goal in this study was to augment this capability by introducing a new framework for characterizing dynamic phenomena at the scale of a channel cross-section: Hyperspectral Image Transects during Transient Events in Rivers (HITTER). We present an end-to-end workflow for processing data from a hyperspectral line scanner deployed from a UAS hovering above a river to obtain cross-sections and time series that quantify how the water attribute of interest varies laterally across the channel and temporally over the course of an event. The framework is intended to be flexible and could be applied in a range of different contexts; we present results from a tracer experiment on the Missouri River to demonstrate the potential utility of the approach. The principal findings from this initial proof-of-concept investigation can be summarized as follows:
  • The HITTER framework provides a means of taking raw hyperspectral data cubes, UAS trajectory information, and field measurements of the water attribute of interest as input and generating dense, spatially distributed time series of estimated attribute values at each node along a mean channel cross-section.
  • The workflow includes modules for initial trajectory processing, hyperspectral data cube selection, scan line spatial referencing, temporal and spectral smoothing and spatial interpolation of the aggregated spectral data onto a mean cross-section, synchronization with in situ measurements, calibration of a relationship between reflectance and the water attribute of interest, and generation of output time series and transects. These steps are implemented with a series of custom MATLAB functions made available through a data release associated with this study [27].
  • When applied to data collected during a tracer experiment on the Missouri River, the HITTER workflow allowed us to discern a moderately strong relationship between reflectance and the concentration of a visible dye even in highly turbid water that obscured the spectral signal associated with the dye. We used this relation to produce sequential cross-sections of estimated dye concentrations that quantified the movement of the dye pulse beneath a hovering UAS during the experiment.
  • Key uncertainties associated with the HITTER framework include the spatial referencing and aggregation of scan lines, the use of independent data on flow velocity to account for the effects of a spatial offset and thus time lag between the image transects and in situ measurements, and persistent noise in the hyperspectral data, which was particularly evident in our case study on the Missouri River.
  • Future studies could apply the HITTER workflow to characterize the dispersion of a range of other materials, such as sediment, algae, and pollutants; count the passage of submerged or floating objects; or even monitor changes in the shape of the channel itself. The framework could also be adapted to accommodate different modes of sensor deployment and alternative approaches to calibrating relationships between reflectance and various water attributes.

Author Contributions

Conceptualization, C.J.L., V.M.S., B.J.S. and M.A.B.; methodology, C.J.L. and V.M.S.; software, C.J.L.; validation, B.J.S.; formal analysis, C.J.L. and V.M.S.; investigation, C.J.L.; resources, B.J.S., V.M.S. and M.A.B.; data curation, C.J.L. and V.M.S.; writing—original draft preparation, C.J.L.; writing—review and editing, V.M.S., B.J.S. and M.A.B.; visualization, C.J.L.; supervision, B.J.S.; project administration, B.J.S.; funding acquisition, B.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

Funding for this project was provided by the U.S. Army Corps of Engineers Missouri River Recovery Program and the USGS Ecosystems Mission Area.

Data Availability Statement

The data acquired to support this study and used herein are available through a USGS data release [27].

Acknowledgments

Numerous staff from the USGS Columbia Environmental Research Center contributed to the field data collection during the experiment, including Carrie Elliott, Ty Helmuth, Bruce Call, Brian Anderson, Aaron DeLonay, Chad Vishy, Sabrina Davenport, Ross Burlbaw, Chris Green, Killian Kelly, Joe Bell, Parker Golliglee, Logan Sleezer, Matthew Struckhoff, and Keith Grabner. Surdex Corporation, Chesterfield, MO, flew the crewed fixed-wing aircraft and collected the orthophotos that we used as background images of the study reach. Private land-owner access adjacent to the Missouri River was granted by Gary Hoeppner and was used as the UAS launch site.

Conflicts of Interest

The authors declare no conflicts of interest. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
1D/2D/3D: One-dimensional/two-dimensional/three-dimensional
ADCP: Acoustic Doppler current profiler
ASD: Analytical Spectral Devices HandHeld2 Pro spectroradiometer
GNSS: Global navigation satellite system
GPS: Global Positioning System
HIT: Hyperspectral Image Transect
HITTER: Hyperspectral Image Transects during Transient Events in Rivers
IMU: Inertial measurement unit
MCS: Mean cross-section
Nano: Headwall Nano-Hyperspec hyperspectral imaging system
OBRA: Optimal Band Ratio Analysis
ppb: Parts per billion
R: Reflectance
RWT: Rhodamine Water Tracer dye
SBET: Smoothed best estimate of trajectory
UAS: Uncrewed aircraft system
USGS: U.S. Geological Survey
UTC: Coordinated Universal Time
UTM: Universal Transverse Mercator
XS: Cross-section

References

  1. Ji, C.; Beegle-Krause, C.J.; Englehardt, J.D. Formation, Detection, and Modeling of Submerged Oil: A Review. J. Mar. Sci. Eng. 2020, 8, 642. [Google Scholar] [CrossRef]
  2. Legleiter, C.J.; Sansom, B.J.; Jacobson, R.B. Remote Sensing of Visible Dye Concentrations During a Tracer Experiment on a Large, Turbid River. Water Resour. Res. 2022, 58, e2021WR031396. [Google Scholar] [CrossRef]
  3. Schmadel, N.M.; Harvey, J.W.; Choi, J.; Stackpoole, S.M.; Graham, J.L.; Murphy, J.C. River Control Points for Algal Productivity Revealed by Transport Analysis. Geophys. Res. Lett. 2024, 51, e2023GL105137. [Google Scholar] [CrossRef]
  4. Jacobson, R.; Annis, M.; Colvin, M.; James, D.; Welker, T.; Parsley, M. Missouri River Scaphirhynchus albus (Pallid sturgeon) Effects Analysis—Integrative Report 2016; Scientific Investigation Report 2016-5064; U.S. Geological Survey: Reston, VA, USA, 2016.
  5. Erwin, S.O.; Bulliner, E.A.; Fischenich, J.C.; Jacobson, R.B.; Braaten, P.J.; DeLonay, A.J. Evaluating flow management as a strategy to recover an endangered sturgeon species in the Upper Missouri River, USA. River Res. Appl. 2018, 34, 1254–1266. [Google Scholar] [CrossRef]
  6. Sansom, B.J.; Call, B.C.; Legleiter, C.J.; Jacobson, R.B. Performance evaluation of a channel rehabilitation project on the Lower Missouri River and implications for the dispersal of larval pallid sturgeon. Ecol. Eng. 2023, 194, 107045. [Google Scholar] [CrossRef]
  7. Viriot, M.L.; Andre, J.C. Fluorescent dyes: A search for new tracers for hydrology. Analusis 1989, 17, 97–111. [Google Scholar]
  8. Runkel, R.L. On the use of rhodamine WT for the characterization of stream hydrodynamics and transient storage. Water Resour. Res. 2015, 51, 6125–6142. [Google Scholar] [CrossRef]
  9. Clark, D.B.; Lenain, L.; Feddersen, F.; Boss, E.; Guza, R.T. Aerial imaging of fluorescent dye in the near shore. J. Atmos. Ocean. Technol. 2014, 31, 1410–1421. [Google Scholar] [CrossRef]
  10. Feddersen, F.; Olabarrieta, M.; Guza, R.T.; Winters, D.; Raubenheimer, B.; Elgar, S. Observations and modeling of a tidal inlet dye tracer plume. J. Geophys. Res. Ocean. 2016, 121, 7819–7844. [Google Scholar] [CrossRef]
  11. Legleiter, C.J.C.; Mcdonald, R.R.; Nelson, J.M.; Kinzel, P.J.; Perroy, R.L.; Baek, D.; Seo, I.W. Remote sensing of tracer dye concentrations to support dispersion studies in river channels. J. Ecohydraulics 2019, 4, 131–146. [Google Scholar] [CrossRef]
  12. Baek, D.; Seo, I.W.I.; Kim, J.S.J.; Nelson, J.J.M. UAV-based measurements of spatio-temporal concentration distributions of fluorescent tracers in open channel flows. Adv. Water Resour. 2019, 127, 76–88. [Google Scholar] [CrossRef]
  13. Legleiter, C.; Manley, P.V., II; Erwin, S.O.; Bulliner, E.A. An Experimental Evaluation of the Feasibility of Inferring Concentrations of a Visible Tracer Dye from Remotely Sensed Data in Turbid Rivers. Remote Sens. 2020, 12, 57. [Google Scholar] [CrossRef]
  14. Powers, C.; Hanlon, R.; Schmale, D. Tracking of a Fluorescent Dye in a Freshwater Lake with an Unmanned Surface Vehicle and an Unmanned Aircraft System. Remote Sens. 2018, 10, 81. [Google Scholar] [CrossRef]
  15. Burdziakowski, P.; Zima, P.; Wielgat, P.; Kalinowska, D. Tracking Fluorescent Dye Dispersion from an Unmanned Aerial Vehicle. Sensors 2021, 21, 3905. [Google Scholar] [CrossRef]
  16. Filippi, M.; Hanlon, R.; Rypina, I.I.; Hodges, B.A.; Peacock, T.; Schmale, D.G. Tracking a Surrogate Hazardous Agent (Rhodamine Dye) in a Coastal Ocean Environment Using In Situ Measurements and Concentration Estimates Derived from Drone Images. Remote Sens. 2021, 13, 4415. [Google Scholar] [CrossRef]
  17. Johansen, K.; Dunne, A.F.; Tu, Y.H.; Almashharawi, S.; Jones, B.H.; McCabe, M.F. Dye tracing and concentration mapping in coastal waters using unmanned aerial vehicles. Sci. Rep. 2022, 12, 1141. [Google Scholar] [CrossRef]
  18. Köppl, C.J.; McKnight, U.S.; Lemaire, G.G.; Nørregaard, A.M.; Thiim, T.C.; Bjerg, P.L.; Bauer-Gottwein, P.; Garcia, M. Tracer Concentration Mapping in a Stream with Hyperspectral images from Unoccupied Aerial Systems. Adv. Water Resour. 2023, 182, 104567. [Google Scholar] [CrossRef]
  19. Pérez-García, Á.; Lorenzo, A.M.; Hernández, E.; Rodríguez-Molina, A.; van Emmerik, T.H.M.; López, J.F. Developing a Generalizable Spectral Classifier for Rhodamine Detection in Aquatic Environments. Remote Sens. 2024, 16, 3090. [Google Scholar] [CrossRef]
  20. Kwon, S.; Shin, J.; Seo, I.W.; Noh, H.; Jung, S.H.; You, H. Measurement of suspended sediment concentration in open channel flows based on hyperspectral imagery from UAVs. Adv. Water Resour. 2022, 159, 104076. [Google Scholar] [CrossRef]
  21. Kwon, S.; Seo, I.W.; Noh, H.; Kim, B. Hyperspectral retrievals of suspended sediment using cluster-based machine learning regression in shallow waters. Sci. Total Environ. 2022, 833, 155168. [Google Scholar] [CrossRef]
  22. Kwon, S.; Seo, I.W.; Lyu, S. Investigating mixing patterns of suspended sediment in a river confluence using high-resolution hyperspectral imagery. J. Hydrol. 2023, 620, 129505. [Google Scholar] [CrossRef]
  23. Kwon, S.; Noh, H.; Seo, I.W.; Park, Y.S. Effects of spectral variability due to sediment and bottom characteristics on remote sensing for suspended sediment in shallow rivers. Sci. Total Environ. 2023, 878, 163125. [Google Scholar] [CrossRef] [PubMed]
  24. Kwon, S.; Gwon, Y.; Kim, D.; Seo, I.W.; You, H. Unsupervised Classification of Riverbed Types for Bathymetry Mapping in Shallow Rivers Using UAV-Based Hyperspectral Imagery. Remote Sens. 2023, 15, 2803. [Google Scholar] [CrossRef]
  25. Nayeem, H.; Syed, A.; Khan, M.Z.A. Towards Development of a Simple Technique Based on Wavelength Specific Absorption for Quality Measurement of Flowing Water. IEEE Sens. J. 2020, 20, 14780–14790. [Google Scholar] [CrossRef]
  26. Cai, J.; Chen, J.; Dou, X.; Xing, Q. Using Machine Learning Algorithms with In Situ Hyperspectral Reflectance Data to Assess Comprehensive Water Quality of Urban Rivers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  27. Legleiter, C.; Scholl, V.; Sansom, B.; Burgess, M. Hyperspectral Image Transects and Field Measurements of Reflectance, Visible Dye Concentration, and Flow Velocity Acquired during a Tracer Experiment on the Missouri River Near Lexington, MO, on May 11, 2024; U.S. Geological Survey data release; U.S. Geological Survey: Reston, VA, USA, 2024.
  28. U.S. Army Corps of Engineers Geospatial. USACE River Mile Markers. 2024. Available online: https://geospatial-usace.opendata.arcgis.com/datasets/604cdc08fe7d43cb90a0584a0b198875/explore?location=39.185460%2C-93.805502%2C13.21 (accessed on 8 October 2024).
  29. U.S. Geological Survey. USGS Water Data for the Nation: U.S. Geological Survey National Water Information System Database; U.S. Geological Survey: Reston, VA, USA, 2024. [CrossRef]
  30. Wolock, D. Flow Characteristics at U.S. Geological Survey Streamgages in the Conterminous United States; U.S. Geological Survey Open-File Report 03-146; U.S. Geological Survey: Reston, VA, USA, 2003.
  31. Elliott, C.; Jacobson, R.; Call, B.; Roberts, M. Bedform distributions and dynamics in a large, channelized river: Implications for benthic ecological processes. In Proceedings of the SEDHYD 2019, Reno, NV, USA, 24–28 June 2019. [Google Scholar]
  32. Li, G.; Wang, B.; Elliott, C.M.; Call, B.C.; Chapman, D.C.; Jacobson, R.B. A three-dimensional Lagrangian particle tracking model for predicting transport of eggs of rheophilic-spawning carps in turbulent rivers. Ecol. Model. 2022, 470, 110035. [Google Scholar] [CrossRef]
  33. Li, G.; Elliott, C.M.; Call, B.C.; Chapman, D.C.; Jacobson, R.B.; Wang, B. Evaluations of Lagrangian egg drift models: From a laboratory flume to large channelized rivers. Ecol. Model. 2023, 475, 110200. [Google Scholar] [CrossRef]
  34. Turner Designs. C3 Submersible Fluorometer. 2024. Available online: https://www.turnerdesigns.com/c3-submersible-fluorometer (accessed on 8 October 2024).
  35. Trimble Geospatial. Trimble R2 GNSS Systems. 2024. Available online: https://geospatial.trimble.com/en/products/hardware/trimble-r2 (accessed on 8 October 2024).
  36. Teledyne RDI. Workhorse Rio Grande ADCP. 2024. Available online: https://www.comm-tec.com/Prods/mfgs/RDI/brochures/rio_grande_ds_lr.pdf (accessed on 8 October 2024).
  37. U.S. Geological Survey Office of Surface Water. OSW Hydroacoustics: WinRiver II. 2024. Available online: https://hydroacoustics.usgs.gov/movingboat/WinRiverII.shtml (accessed on 8 October 2024).
  38. Malvern Panalytical. ASD HandHeld 2 Pro: VNIR Hand-Held Spectroradiometer. 2024. Available online: https://www.malvernpanalytical.com/en/support/product-support/asd-range/fieldspec-range/handheld-2-pro-vnir-hand-held-spectroradiometer (accessed on 8 October 2024).
  39. Savitzky, A.; Golay, M.J.E. Smoothing and differentiation of data by simplified least squares procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  40. Headwall Photonics. Nano HP (400–1000 nm) Hyperspectral Imaging Package. 2024. Available online: https://headwallphotonics.com/products/remote-sensing/nano-hp-400-1000nm-hyperspectral-imaging-package/ (accessed on 8 October 2024).
  41. DJI. Support for Matrice 600 Pro. 2024. Available online: https://www.dji.com/support/product/matrice600-pro (accessed on 8 October 2024).
  42. Global Monitoring Laboratory. NOAA Solar Calculator. 2024. Available online: https://gml.noaa.gov/grad/solcalc/ (accessed on 8 October 2024).
  43. National Geodetic Survey. OPUS: Online Positioning User Service. 2024. Available online: https://geodesy.noaa.gov/OPUS/ (accessed on 8 October 2024).
  44. Applanix. Trimble Applanix: POSPac MMS. 2024. Available online: https://www.applanix.com/products/pospac-mms.htm (accessed on 8 October 2024).
  45. Headwall Photonics. Hyperspec III and SpectralView. 2024. Available online: https://headwallphotonics.com/products/software/hyperspec-iii-and-spectralview/ (accessed on 8 October 2024).
  46. MathWorks. MATLAB. 2024. Available online: https://www.mathworks.com/products/matlab.html (accessed on 8 October 2024).
  47. NV5 Geospatial Software. ENVI Header Files. 2024. Available online: https://www.nv5geospatialsoftware.com/docs/ENVIHeaderFiles.html (accessed on 8 October 2024).
  48. Legleiter, C.; Roberts, D.A.; Lawrence, R.L. Spectrally based remote sensing of river bathymetry. Earth Surf. Process. Landf. 2009, 34, 1039–1059. [Google Scholar] [CrossRef]
  49. Legleiter, C.; Harrison, L.R. Remote Sensing of River Bathymetry: Evaluating a Range of Sensors, Platforms, and Algorithms on the Upper Sacramento River, California, USA. Water Resour. Res. 2019, 55, 2142–2169. [Google Scholar] [CrossRef]
  50. Legleiter, C. ORByT—Optical River Bathymetry Toolkit; U.S. Geological Survey software release; U.S. Geological Survey: Reston, VA, USA, 2020. [CrossRef]
  51. Legleiter, C.J. The optical river bathymetry toolkit. River Res. Appl. 2021, 37, 555–568. [Google Scholar] [CrossRef]
  52. Legleiter, C.; Overstreet, B.; Kinzel, P. Sampling Strategies to Improve Passive Optical Remote Sensing of River Bathymetry. Remote Sens. 2018, 10, 935. [Google Scholar] [CrossRef]
  53. Muste, M.; Fujita, I.; Hauet, A. Large-scale particle image velocimetry for measurements in riverine environments. Water Resour. Res. 2008, 44. [Google Scholar] [CrossRef]
Figure 1. (a) An overview of the tracer experiment conducted along the Missouri River showing the dye pulse shortly after injection, visible as a cloud of red in the water near the left edge of the figure, and the locations of field spectra, velocity measurements made with an acoustic Doppler current profiler (ADCP) at two cross-sections (XSs), a buoy with a sonde measuring dye concentration and turbidity, and hyperspectral image scan lines from four flights that captured the passage of the dye. Because the four transects overlap at this scale, only the line for Flight #7 is visible. The sonde is located between the two ADCP transects near the middle of the hyperspectral image scan lines. The dye release point was farther upstream to the left of the area depicted in this figure. The background image is an orthophoto available from [27]. (b) An inset map showing the location of the study area within the contiguous United States.
Figure 2. Time series of dye concentrations measured in situ by a sonde anchored to a buoy during the tracer experiment. The start of the hovering subset for each flight of the Nano hyperspectral imaging system that captured the passage of the dye is indicated by a blue vertical line. The end of each hover is indicated by the subsequent red vertical line.
Figure 3. Field spectra acquired from a boat traversing the channel during the dye release sorted by dye concentration. The color of each line represents the dye concentration in parts per billion (ppb) indicated in the legend.
Figure 4. A flow chart illustrating the stages of the Hyperspectral Image Transects during Transient Events in Rivers (HITTER) workflow and the functions developed to implement these operations. OBRA = Optimal Band Ratio Analysis.
Figure 5. A comparison of the raw trajectory recorded by the Nano hyperspectral imaging system during the flight and the smoothed best estimate of trajectory (SBET) produced during post-processing.
Figure 6. Interactive selection of the subset of the smoothed best estimate of trajectory (SBET) during which the uncrewed aircraft system (UAS) was hovering in position. Only hyperspectral data acquired during this portion of the flight are retained for further analysis.
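In the published workflow, the hovering subset of the trajectory is selected interactively, as shown in Figure 6. As a point of reference only, the sketch below shows one way such a subset could be flagged automatically from the smoothed best estimate of trajectory, using a rolling test on horizontal drift. The function name `hover_mask`, the window length, and the drift threshold are illustrative assumptions and are not part of the HITTER code.

```python
import numpy as np

def hover_mask(x, y, t, window_s=5.0, max_drift_m=1.0):
    """Flag trajectory samples during which the UAS is effectively hovering.

    x, y : horizontal positions (m); t : timestamps (s).
    window_s and max_drift_m are illustrative thresholds, not published values.
    """
    mask = np.zeros(t.size, dtype=bool)
    for k in range(t.size):
        in_win = np.abs(t - t[k]) <= window_s / 2.0
        # Horizontal drift within the window: range in x and y combined.
        drift = np.hypot(np.ptp(x[in_win]), np.ptp(y[in_win]))
        mask[k] = drift <= max_drift_m
    return mask
```

Only scan lines whose timestamps fall within the flagged interval would then be retained for further analysis, mirroring the interactive selection shown in the figure.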
Figure 7. Example preview images of the hyperspectral data cubes acquired during the hovering subset of the flight illustrated in Figure 6.
Figure 8. A schematic representation of the scan line spatial referencing process. (a) The ideal viewing geometry when the uncrewed aircraft system (UAS) is perfectly level and the sensor is viewing at nadir, such that the spatial coordinates of the center pixel of the scan line are equivalent to those of the UAS: $(x_u, y_u)$. (b) In general, the UAS is not perfectly level, and the sensor views the river at some off-nadir angle such that the spatial coordinates of the center pixel of the scan line are offset from those of the UAS and are denoted by $(x_p, y_p)$. (c) A pointing vector based on the flying height, yaw, pitch, and roll of the UAS is used to calculate the shift $(x_s, y_s)$ that must be added to the UAS position $(x_u, y_u)$ to obtain the coordinates of the center pixel of the scan line $(x_p, y_p)$. (d) Using the pixel size $p$ determined via Equations (1) and (2), the position of each of the $n$ pixels along the scan line relative to the center pixel is established in a temporary, local coordinate system with its origin $(0, 0)$ at the center pixel. The end points have local coordinates of $(0, -np/2 + p/2)$ and $(0, np/2 - p/2)$. (e) The direction of the scan line is established by performing a rotation around this local origin based on the yaw angle (i.e., heading), denoted by $\psi$. (f) Finally, a translation is applied to place the scan line in the real-world coordinate system by adding the coordinates of the center pixel $(x_p, y_p)$ to each pixel.
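The geometric steps summarized in panels (c) through (f) of Figure 8 condense into a few lines of code. The sketch below is a minimal illustration, assuming the pixel size $p$ has already been obtained from Equations (1) and (2) and adopting one plausible set of sign and rotation conventions; the actual conventions depend on the sensor and IMU configuration used in the experiment, so the function name, arguments, and sign choices should be read as hypothetical.

```python
import numpy as np

def scanline_coords(x_u, y_u, H, yaw_deg, pitch_deg, roll_deg, n, p):
    """Place the n pixels of one scan line in map coordinates.

    x_u, y_u : UAS position (m); H : flying height above the water surface (m);
    yaw_deg, pitch_deg, roll_deg : UAS attitude (degrees);
    n : number of across-track pixels; p : pixel size (m) from Eqs. (1)-(2).
    Sign and rotation conventions here are illustrative assumptions.
    """
    yaw, pitch, roll = np.radians([yaw_deg, pitch_deg, roll_deg])

    # (c) Pointing vector: project the off-nadir view onto the water surface,
    # then rotate the resulting shift into map coordinates using the yaw angle.
    dx_body = H * np.tan(roll)    # across-track displacement in the body frame
    dy_body = H * np.tan(pitch)   # along-track displacement in the body frame
    x_s = dx_body * np.cos(yaw) + dy_body * np.sin(yaw)
    y_s = -dx_body * np.sin(yaw) + dy_body * np.cos(yaw)
    x_p, y_p = x_u + x_s, y_u + y_s   # center pixel of the scan line

    # (d) Local coordinates of the n pixel centers, origin at the center pixel:
    # end points at (0, -n*p/2 + p/2) and (0, n*p/2 - p/2).
    local = np.column_stack([
        np.zeros(n),
        np.linspace(-n * p / 2 + p / 2, n * p / 2 - p / 2, n),
    ])

    # (e) Rotate the scan line about the local origin by the yaw angle psi ...
    rot = np.array([[np.cos(yaw), np.sin(yaw)],
                    [-np.sin(yaw), np.cos(yaw)]])
    rotated = local @ rot.T

    # (f) ... and translate into the real-world coordinate system.
    return rotated + np.array([x_p, y_p])
```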
Figure 9. Projected Nano mean cross-sections (MCSs) for each flight that captured the passage of the dye plume. For each flight, the pixel along the mean cross-section located closest to the sonde used to measure dye concentrations is represented by a circle symbol.
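Identifying the MCS pixel nearest the sonde, marked by the circle symbols in Figure 9, amounts to a nearest-neighbor search over the projected pixel coordinates. A minimal sketch, with hypothetical function and argument names, is shown below.

```python
import numpy as np

def closest_pixel_to_sonde(mcs_xy, sonde_xy):
    """Index of the mean cross-section (MCS) pixel nearest the sonde.

    mcs_xy : (n_pixels, 2) projected MCS coordinates (m);
    sonde_xy : (2,) sonde position in the same coordinate system.
    """
    d = np.hypot(mcs_xy[:, 0] - sonde_xy[0], mcs_xy[:, 1] - sonde_xy[1])
    return int(np.argmin(d))
```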
Figure 10. A schematic representation of the spatial relationship between the mean cross-section (MCS) derived from the hyperspectral data cubes, the sonde used to make in situ measurements of dye concentration, and velocity measurement cross-sections, abbreviated XS, upstream and downstream of the MCS. (a) The MCS is located upstream of the sonde, the time lag l calculated with Equations (10) and (11) is negative, and a parcel of water is captured in a hyperspectral image transect (HIT) before reaching the sonde. (b) The MCS is located downstream of the sonde, the time lag l calculated with Equations (10) and (12) is positive, and a parcel of water is captured in a HIT after passing by the sonde.
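Equations (10) through (12) are not reproduced in the caption of Figure 10, but the underlying idea is a travel-time correction: the along-channel distance between the MCS and the sonde, divided by a representative velocity from the nearest ADCP cross-section, and signed according to whether the MCS lies upstream or downstream of the sonde. The sketch below illustrates that idea only and is not the published formulation; the function and argument names are hypothetical.

```python
def time_lag(dist_mcs_to_sonde_m, mean_velocity_ms, mcs_upstream_of_sonde):
    """Approximate travel-time lag between the MCS and the sonde (seconds).

    Negative when the MCS lies upstream of the sonde, because a water parcel
    is then captured in a HIT before it reaches the sonde; positive otherwise.
    """
    lag_s = dist_mcs_to_sonde_m / mean_velocity_ms
    return -lag_s if mcs_upstream_of_sonde else lag_s
```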
Figure 11. Spectra from the hyperspectral image transects (HITs) were stratified prior to Optimal Band Ratio Analysis (OBRA). The manually selected bins of dye concentration are overlain on the (a) cumulative distribution function, (b) frequency distribution, and (c) final stratified frequency distribution. Seventeen spectra selected at random from each bin were used as input to OBRA.
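Drawing seventeen spectra at random from each manually selected concentration bin, as described in Figure 11, is a standard stratified-sampling step. A minimal sketch is given below; the bin edges and random seed are placeholders, not the values used in the experiment.

```python
import numpy as np

def stratified_sample(C, bin_edges, n_per_bin=17, seed=None):
    """Select up to n_per_bin spectra at random from each concentration bin.

    C : (n_spectra,) dye concentrations (ppb);
    bin_edges : manually selected bin boundaries (ppb), placeholders here.
    Returns the indices of the retained spectra.
    """
    rng = np.random.default_rng(seed)
    bins = np.digitize(C, bin_edges)
    keep = []
    for b in np.unique(bins):
        members = np.flatnonzero(bins == b)
        take = min(n_per_bin, members.size)
        keep.append(rng.choice(members, size=take, replace=False))
    return np.concatenate(keep)
```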
Figure 12. (a) The calibrated relationship between reflectance and dye concentration identified via the Optimal Band Ratio Analysis (OBRA) of field spectra. (b) Spectral variation in the strength of the relationship between the log-transformed band ratio $X$ and dye concentration $C$ as a function of the numerator and denominator wavelengths $\lambda_1$ and $\lambda_2$, respectively.
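OBRA, as summarized in Figures 12 and 14, searches all numerator-denominator wavelength pairs for the log-transformed band ratio, e.g., $X = \ln\left(R(\lambda_1)/R(\lambda_2)\right)$, most strongly related to the dye concentration $C$. The brute-force sketch below uses a simple linear fit for illustration; the functional form, band masking, and diagnostics used in the published calibration may differ, and the function name is an assumption.

```python
import numpy as np

def obra(R, C):
    """Search all band pairs for the ratio most strongly related to C.

    R : (n_spectra, n_bands) reflectance array;
    C : (n_spectra,) dye concentrations (ppb).
    Returns the best band-pair indices, its R^2, the fit coefficients of
    C ~ a*X + b with X = ln(R1/R2), and the full R^2 matrix over all pairs.
    """
    n_bands = R.shape[1]
    r2 = np.full((n_bands, n_bands), np.nan)
    best_pair, best_r2, best_fit = None, -np.inf, None
    for i in range(n_bands):            # numerator wavelength, lambda_1
        for j in range(n_bands):        # denominator wavelength, lambda_2
            if i == j:
                continue
            if np.any(R[:, i] <= 0) or np.any(R[:, j] <= 0):
                continue                # skip pairs with non-positive reflectance
            X = np.log(R[:, i] / R[:, j])
            a, b = np.polyfit(X, C, 1)  # linear fit for illustration only
            resid = C - (a * X + b)
            r2[i, j] = 1.0 - resid.var() / C.var()
            if r2[i, j] > best_r2:
                best_pair, best_r2, best_fit = (i, j), r2[i, j], (a, b)
    return best_pair, best_r2, best_fit, r2
```

The matrix of $R^2$ values returned by such a search corresponds to the surface shown in panel (b) of Figures 12 and 14, and the coefficients for the optimal pair define the calibrated relationship in panel (a).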
Figure 13. Spectra extracted from hyperspectral image transects (HITs) from all four flights that captured the passage of the dye pulse, sorted by dye concentration. The color for each line represents the dye concentration in parts per billion (ppb) indicated in the legend.
Figure 14. (a) The calibrated relationship between reflectance and dye concentration identified via the stratified Optimal Band Ratio Analysis (OBRA) of spectra extracted from hyperspectral image transects (HITs). (b) Spectral variation in the strength of the relationship between the log-transformed band ratio $X$ and dye concentration $C$ as a function of the numerator and denominator wavelengths $\lambda_1$ and $\lambda_2$, respectively.
Figure 15. Example output from the first of the four flights that captured the dye pulse. (a) Time series of hyperspectral image transects (HITs) acquired during the flight. Each row of this array represents a HIT at a given time, and each column represents a time series of estimated dye concentrations at a fixed spatial position across the channel. The horizontal green line indicates the time at which the transect shown in (b) was extracted, and the vertical green line indicates the spatial position for which the time series shown in (c) was extracted. (b) Dye concentrations estimated for an example mean cross-section from halfway through the hovering portion of the flight. (c) A time series of estimated dye concentrations for a fixed spatial position in the middle of the mean cross-section.
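The output shown in panel (a) of Figure 15 is simply a two-dimensional stack of HITs, with time along one axis and cross-channel position along the other, so panels (b) and (c) correspond to a single row and a single column of that array. The sketch below reproduces that slicing for a synthetic stand-in array; the array, its dimensions, and its values are hypothetical and serve only to illustrate the data structure.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for the real output: one row per hyperspectral image
# transect (HIT), one column per cross-channel pixel, values in ppb.
rng = np.random.default_rng(0)
n_times, n_pixels = 600, 300
hit_stack = rng.gamma(2.0, 2.0, size=(n_times, n_pixels))   # synthetic data only

mid_time = n_times // 2      # transect halfway through the hover (cf. panel b)
mid_pixel = n_pixels // 2    # mid-channel position (cf. panel c)

fig, (ax_a, ax_b, ax_c) = plt.subplots(1, 3, figsize=(12, 3))
ax_a.imshow(hit_stack, aspect="auto", origin="lower")        # panel (a): time vs. position
ax_a.axhline(mid_time, color="g")                            # row extracted for panel (b)
ax_a.axvline(mid_pixel, color="g")                           # column extracted for panel (c)
ax_a.set_xlabel("cross-channel pixel")
ax_a.set_ylabel("scan line (time)")
ax_b.plot(hit_stack[mid_time, :])    # panel (b): one mean cross-section
ax_c.plot(hit_stack[:, mid_pixel])   # panel (c): one fixed position through time
plt.tight_layout()
plt.show()
```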
Figure 16. Example output from all four of the flights that captured the dye pulse. For each flight, the dye concentration estimates were extracted for a transect from halfway through the hovering portion of the flight.