Article

Advancements in Remote Compressive Hyperspectral Imaging: Adaptive Sampling with Low-Rank Tensor Image Reconstruction

1 Harbor Branch Oceanographic Institute, Florida Atlantic University, 5600 US 1 North, Fort Pierce, FL 32963, USA
2 Department of Engineering, Texas Christian University, 2840 West Bowie Street, Fort Worth, TX 76109, USA
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2698; https://doi.org/10.3390/electronics13142698
Submission received: 5 April 2024 / Revised: 1 July 2024 / Accepted: 2 July 2024 / Published: 10 July 2024
(This article belongs to the Special Issue Image Segmentation)

Abstract:
We advanced the practical development of compressive hyperspectral cameras for remote sensing scenarios with a design that simultaneously compresses and captures high-quality spectral information of a scene via configurable measurements. We built a prototype imaging system that is compatible with light-modulation devices that encode the incoming spectrum. The sensing approach enables a substantial reduction in the volume of data collected and transmitted, facilitating large-scale remote hyperspectral imaging. A main advantage of our sensing design is that it allows for adaptive sampling. When prior information about a survey region is available or gained, the modulation patterns can be re-programmed to efficiently sample and detect desired endmembers. Given target spectral signatures, we propose an optimization scheme that guides the encoding process. The approach drastically reduces the number of required sampling patterns, with the ability to achieve image segmentation and correct distortions. Additionally, to decode the modulated data, we considered a novel reconstruction algorithm suited for large-scale images. The computational methodology leverages the multidimensional structure and redundant representation of hyperspectral images via the canonical polyadic decomposition of multiway arrays. Under realistic remote sensing scenarios, we demonstrated the efficiency of our approach on several data sets collected by our prototype camera and reconstructed by our low-rank tensor decoder.

1. Introduction

Hyperspectral (HS) cameras are advanced imaging devices that capture a wide range of information across the electromagnetic spectrum. Unlike traditional cameras that detect only three primary colors (red, green, and blue), hyperspectral cameras can capture hundreds or even thousands of narrow spectral bands, providing highly detailed and precise data about the composition and properties of objects. Hyperspectral cameras excel at distinguishing subtle differences in the spectral signatures of objects. This enables the identification and characterization of materials, even those with similar visual appearances. It is particularly useful in areas such as remote sensing, environmental monitoring [1,2], and agriculture, where precise material identification and image segmentation are crucial.
However, the enhanced spectral information leads to a substantial rise in the volume of raw data captured. In situations where both high spatial and spectral resolution are necessary, the required bandwidths might pose constraints on effectively transmitting images from remote survey sites. This limitation becomes particularly critical in compact devices like CubeSats [3], where the availability of on-payload computing hardware for storage and transmission is restricted.
To overcome such constraints, the practice of compressive hyperspectral imaging was proposed to reduce the volume of collected data, drawing inspiration from the field of compressive sensing [4,5,6,7,8,9,10]. Unlike conventional data acquisition methods, these approaches simultaneously sample and compress HS images by modulating (or encoding) the incoming spectrum. The primary advantage of these novel methods is a significant reduction in the acquisition time and the volume of raw data collected. In this compressive HS imaging framework, dense on-site data acquisition can be alleviated by shifting the focus to large-scale optimization as a post-acquisition step to achieve data reconstruction (i.e., decoding). This acquisition regime is ideal for remote sensing situations, where hardware and transmission requisites are reduced and computational imaging can be achieved at a ground station (as illustrated in Figure 1).
While introducing a distinct sensing paradigm with the potential to miniaturize HS cameras, further efforts are necessary to advance compressive HS imaging for remote sensing missions. In particular, previous studies predominantly focused on numerical aspects. Most experiments in the literature consider pre-processed data captured by cameras whose hardware is not consistent with compressive HS imaging or remote settings with limited chassis space. To comprehensively understand the limitations of compressive sensing-based approaches, it is imperative to capture and experiment with raw HS footage compatible with light-modulation-based sensing.
Furthermore, previous computational approaches for decoding HS images do not scale to the high resolution demanded by modern remote sensing missions. Most studies in the literature conduct tests on small benchmark datasets, which typically have spatial resolutions of 300 × 300 pixels or less (e.g., the reduced AVIRIS image of Cuprite [9,10]). While such benchmark datasets are crucial for a fair comparison of an algorithm’s performance, reconstruction experiments on images of practical resolution would be most informative to advance the technology.
Finally, while the main goal of compressive HS cameras is to reduce the volume of acquired data, another advantage of these designs is the ability to re-configure the sensing patterns on the fly. With prior information of a scene or signatures of interest, the encoding procedure can be adapted to more efficiently extract targeted information in a way that minimizes pattern transmission requisites. Such capabilities are rarely considered in the literature for improved spectrum reconstruction or image segmentation, but are crucial in remote acquisition scenarios.
This study attempted to address these gaps in order to advance the practice of remote compressive hyperspectral imaging. Our contributions can be summarized as follows:
  • We designed a prototype imaging system and captured high-resolution field data. The sampling technique we considered is similar to sensing methods previously proposed in the literature [4,5,6,7,8,9,10,11] that encode the spectral signature at each pixel. Our primary contribution lay in the construction of a camera to gather real data and evaluate the effectiveness of this sensing paradigm in practical settings.
  • We explored adaptive sampling to minimize the encoding complexity and transmission requisites. Our methodology capitalized on remote sensing scenarios where spectral information was gained through periodic surveys. We used estimated endmembers to design encoding patterns that re-configured light modulation hardware (e.g., digital micromirror devices [12]) for subsequent acquisitions. By incorporating spectra of interest, we propose an optimization technique to determine minimal encoding patterns that reduce data transmission while achieving efficient sampling and segmentation.
  • We validated a recently proposed image reconstruction technique. This methodology built upon our initial work [13] by extending the concept of compressive sensing [14] to encompass multiway arrays that adhere to a low-rank tensor structure rather than a sparse signal model. Through experiments on footage collected by our prototype camera, we demonstrated robust imaging that reduced the sampling requisites by a factor of 20.
Our focus was on satellite-based remote sensing missions that conducted multiple surveys of an area. In such situations, encoding patterns (known as the codebook) must be transmitted to the payload in real time to re-program the onboard hardware for subsequent surveys. An illustration of such a sampling workflow is shown in Figure 1. Our adaptive sampling methodology is particularly well suited for this scenario, and was designed to minimize the bandwidth requisites by significantly reducing the number of sampling patterns that efficiently extract the desired spectral information.
The remainder of this paper proceeds as follows: The rest of Section 1 discusses related works in the literature. Section 2 elaborates on the spectral sensing scheme and specifies the design of our prototype camera. Section 2.3 showcases our acquired field data. Section 2.4 considers the codebook design, including our adaptive sampling methodology. Section 2.5 discusses the tensor-based image reconstruction (decoding) approach, where Section 2.5.1 and Section 2.5.2 provide more details on tensor decompositions and pseudocode for our program. Section 3 provides numerical experiments, presenting reconstructed images captured by our prototype HS camera and comparing our approach to compatible algorithms in the literature. Finally, we conclude with closing remarks and future work in Section 4.

Related Works

The field of compressive hyperspectral imaging has been active since the advent of compressive sensing [15,16]. Most studies focused on sampling and reconstruction approaches [4,5,6,7,8,10,17,18,19,20,21,22,23]. Many approaches require auxiliary data (e.g., training data, RGB images, or a spectral library) [24,25,26,27] or apply sensing that is not compatible with our spectral-only encoding scheme [28,29,30]. Furthermore, fewer authors focused on hardware development to collect encoded hyperspectral measurements. We provide a brief overview of previous approaches most related to our work in order to highlight the novelties of our paper.
In terms of algorithms, to reconstruct HS data from encoded measurements, most approaches in our literature review utilized vector- or matrix-based processing. In other words, these works processed 3D HS volumes by sequentially considering matrix slices or vector fibers of the tensor. Notably, sparse signal reconstruction continues to be a popular approach [31,32,33,34]. However, the approaches in [19,21,22,23] utilized the 3D structure of HS cubes to improve image reconstruction via the tensor ring and Tucker decompositions. Our work also exploited the high dimensional structure of HS data, but we considered the canonical polyadic (CP) decomposition. The CP decomposition was the first method proposed to extend the singular value decomposition to higher-order arrays [35,36]. Compression capabilities of the CP decomposition when applied to HS images are demonstrated in the literature [37,38,39,40,41,42]. Despite its popularity, this factorization approach was not considered for compressive hyperspectral imaging until recently [13]. In this work, we showed that this relatively simple tensor representation was sufficient to provide reliable compressive imaging on real data and also outperformed several lower-dimensional alternatives.
Hardware development in compressive HS imaging is rarer. Our design was built upon our prior work [11,43], where a DMD-based spectrometer was converted into an HS camera. This previous prototype modified a DLP NIRScan Nano EVM [44] that covered near-infrared bands (900–1400 nm) to provide an imager capable of measuring the visible spectral range of 400–700 nm. Recently, the authors in [30] developed a hyperspectral video imaging system based on a single-pixel detector. Their approach applied a spatial–spectral encoding scheme, and severely compressed measurements were obtained after a sparse selection from the full dense measurements was applied. Such a sensing scheme achieved a high compression ratio, but required additional onboard computing. In contrast to many camera design studies (e.g., [28]), our work focused on minimizing such computing hardware for remote sensing missions. In particular, our end goal was to further advance our system for compact CubeSat deployment.

2. Materials and Methods

In this section, we elaborate on our compressive HS sensing scheme and specify our camera design. Our sampling methodology is very similar to other approaches in the literature [4,5,6,7,8,9,10,11], but we focus on the setting of multidimensional arrays. With this in mind, it is best to briefly introduce tensor notation before proceeding.
Notation: to remain consistent with prior work on tensor decompositions, we adopt notation that largely agrees with the literature [45]. Vectors are denoted by lowercase bold letters and matrices by boldface capital letters, e.g., $\mathbf{a}$ and $\mathbf{A}$, respectively. The transpose of a matrix is written $\mathbf{A}^T$, and higher-dimensional arrays (i.e., with three or more dimensions) are denoted by boldface Euler script letters, e.g., $\boldsymbol{\mathscr{X}}$. Linear maps that take arrays as an input are denoted by calligraphic letters, e.g., $\mathcal{D} : \mathbb{R}^{I_1 \times \cdots \times I_K} \to \mathbb{R}^{I_1 \times \cdots \times I_K}$.
The $i$-th entry of a vector $\mathbf{a}$ is denoted by $a_i$, the $(i,j)$ element of a matrix $\mathbf{A}$ is denoted as $a_{ij}$, and element $(i_1, i_2, \ldots, i_K)$ of a $K$-th-order tensor $\boldsymbol{\mathscr{X}}$ is denoted as $x_{i_1 i_2 \cdots i_K}$. The norm of an array is the square root of the sum of the squares of all its elements, i.e.,
$$\|\boldsymbol{\mathscr{X}}\| = \left( \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_K=1}^{I_K} |x_{i_1 i_2 \cdots i_K}|^2 \right)^{1/2}.$$
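As a minimal NumPy sketch (our illustration, not the authors' code), the array norm above coincides with the Euclidean norm of the flattened tensor:

```python
import numpy as np

# Hypothetical 3rd-order tensor with dimensions I1 x I2 x I3 = 2 x 3 x 4.
X = np.arange(24, dtype=float).reshape(2, 3, 4)

# The array norm defined above: square root of the sum of squared entries.
norm_X = np.sqrt(np.sum(X ** 2))

# It coincides with the Euclidean norm of the flattened array.
assert np.isclose(norm_X, np.linalg.norm(X.ravel()))
```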

2.1. Sensing Scheme

We focused on sensing via digital micromirror device (DMD) light-modulation hardware [12]. Let $\boldsymbol{\mathscr{X}} \in \mathbb{R}^{I_x \times I_y \times I_s}$ denote our HS image of interest. The first two dimensions $I_x$ and $I_y$ are the numbers of pixels along each spatial axis, while $I_s$ is the number of spectral bands per pixel. Our collected data will be denoted as $\boldsymbol{\mathscr{Y}} \in \mathbb{R}^{I_x \times I_y \times M}$, where $M$ is the number of encoding patterns per pixel. For a fixed pixel $(i,j) \in \{1, 2, \ldots, I_x\} \times \{1, 2, \ldots, I_y\}$, the $k$-th encoded measurement, where $k \in \{1, 2, \ldots, M\}$, is given as
$$y_{ijk} = \sum_{\ell=1}^{I_s} d_{ij\ell k} \, x_{ij\ell} + n_{ijk}, \qquad (1)$$
where $\boldsymbol{\mathscr{D}} \in [0,1]^{I_x \times I_y \times I_s \times M}$ encapsulates our modulation patterns (known as the codebook) and $\boldsymbol{\mathscr{N}} \in \mathbb{R}^{I_x \times I_y \times M}$ models unwanted additive noise. The codebook entries are restricted to the interval $[0,1]$ due to the DMD hardware, which allows for grayscale attenuation of the incident light.
The entire sampling scheme is represented by a linear map $\mathcal{D} : \mathbb{R}^{I_x \times I_y \times I_s} \to \mathbb{R}^{I_x \times I_y \times M}$ defined entry-wise as
$$[\mathcal{D}(\boldsymbol{\mathscr{X}})]_{ijk} = \sum_{\ell=1}^{I_s} d_{ij\ell k} \, x_{ij\ell}.$$
Notice that $\mathcal{D}$ depends implicitly on $\boldsymbol{\mathscr{D}}$, and we obtain the samples
$$\boldsymbol{\mathscr{Y}} = \mathcal{D}(\boldsymbol{\mathscr{X}}) + \boldsymbol{\mathscr{N}}.$$
Therefore, $\boldsymbol{\mathscr{Y}}$ gives our encoded noisy measurements according to the programmed codebook $\boldsymbol{\mathscr{D}}$. Notice that with $M < I_s$, we reduced the amount of collected data (relative to the ambient dimensions of $\boldsymbol{\mathscr{X}}$), thereby entering the regime of compressive sensing.
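To make the sensing model concrete, the following toy NumPy sketch (our own illustration; all dimensions are hypothetical) simulates the per-pixel encoding described above, with fewer patterns than bands:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: Ix x Iy pixels, Is bands, M patterns (M < Is).
Ix, Iy, Is, M = 4, 4, 32, 8

X = rng.random((Ix, Iy, Is))                     # hyperspectral cube
D = rng.random((Ix, Iy, Is, M))                  # codebook entries in [0, 1]
N = 0.01 * rng.standard_normal((Ix, Iy, M))      # additive noise

# Encoded measurements: y_ijk = sum_l d_ijlk * x_ijl + n_ijk
Y = np.einsum('ijlk,ijl->ijk', D, X) + N

# M = 8 < Is = 32 encoded values per pixel: a 4x reduction in data volume.
assert Y.shape == (Ix, Iy, M)
```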

2.2. Camera Design

Our system uses a 260R Plane Ruled Diffraction Grating from the Richardson Grating Lab (Rochester, NY, USA) [46]. It has a nominal blaze wavelength of 500 nm, a nominal blaze angle of 8.6°, and 600 grooves per mm. It is suitable for our desired spectral range of 400–800 nm, but our system will collect data that covers the 343–827 nm range. The incoming light collected through the imaging lens will propagate through a 10 μm slit before reaching the diffraction grating, which separates the light into different spectral components. The system Zemax optical model is shown in the top-left and middle images of Figure 2, while the bench setup is shown in the top-right image.
Once diffracted, a 10:90 beam splitter will direct the light along two different paths: 90% of the light will reach Camera 1 and 10% will reach Camera 2 (see Figure 2). The input to Camera 1 was not used for this study. Our intended finalized system is not yet complete, as Camera 1, which includes a CEL5500 light-modulation device manufactured by Digital Light Innovations (Austin, TX, USA), requires further optical modifications. The final design is meant to utilize Camera 2 to obtain the image input to the DMD in order to understand the post-modulation distortions of data collected by Camera 1.
To advance the research while the system is finalized, the work presented here only utilized images collected by Camera 2. As a consequence, the data were not modulated via hardware and we instead simulated the encoding process numerically (as in Section 2.1; see Section 3). Camera 2 is a Kiralux Monochrome CMOS Camera (CS235MU) manufactured by Thorlabs (Newton, NJ, USA). This camera has a 1920 × 1080 resolution with a pixel size of 5.86 μm. The peak quantum efficiency of the camera is 78% at 500 nm [47].
During the field test, a mirror was placed in front of the imaging lens of the bench setup. A Zaber linear stage was used to rotate the mirror to horizontally scan through the field of view, as shown in Figure 2 (bottom image). The spectral sensitivity was evaluated in the lab using three laser diodes at 405 nm, 520 nm, and 650 nm. The normalized response is shown in the top plot of Figure 3. The FWHMs (full widths at half maximum) for the three wavelengths ranged from 2 nm to 2.66 nm.

2.3. Field Data

In this study, we evaluated our prototype system without hardware-based modulation capabilities (i.e., Camera 1 was not implemented). Instead, a hardware-in-the-loop simulation was employed to use the data captured with Camera 2 as the input. To study compressive imaging capabilities, we modeled our encoding scheme numerically using the images acquired by Camera 2 (see Section 3).
The system captured scenes at two locations on the Harbor Branch Oceanographic Institute campus (HBOI), as indicated in Figure 4 (top image). The target in Scene 1 consisted of six different color panels placed 10 m from the system (bottom image in Figure 4). The panels had dimensions 0.3 m × 0.6 m. In Scene 2, the system was mounted on the balcony of the second floor of the building to capture the channel. For this study, we only considered images collected from Scene 1.
Figure 5 and Figure 6 showcase two collected images, labeled HBOI1 and HBOI2, respectively. The figures also illustrate sample spectral signatures from a grass pixel, the speed limit sign, the green panel, and the blue panel. Notice that the HBOI2 green and blue panel spectra suffer from oversaturation, i.e., the pixels exhibit blown-out endmembers that are missing information at the peak wavelengths. This corruption was introduced by mistake. During the initial data collection, HBOI1 was acquired at dawn with a shutter speed properly determined for the ambient light conditions. However, the authors neglected to readjust the shutter speed later in the day when HBOI2 was recorded under increased sunlight levels.
To the authors' surprise, our results in Section 3 demonstrate that we could mitigate this type of corruption using our low-rank tensor decoder and adaptive compressive sampling methodology. Saturation is a common type of corruption for cameras and sensors [48]. However, as the authors in [49] noted, approaches for restoring hyperspectral images that suffer from spectral clipping are rare. Additionally, our scenario dealt with encoded measurements, which further complicated the data infilling process. Therefore, the ability of our methodology to provide some level of saturation correction is arguably novel and merits further study. We postpone such research for future work, but present our preliminary findings in Section 3.

2.4. Codebook Design and Adaptive Sampling

This section discusses some choices for the codebook design, i.e., how to generate $\boldsymbol{\mathscr{D}}$ from Section 2.1 to program the light-modulation device. Section 2.4.1 focuses on initial random patterns that are efficient for imaging in data-oblivious settings, i.e., when a user has no information about a scene. Section 2.4.2 then considers adaptive sampling schemes that are meant to minimize the codebook complexity and detect specific signatures when prior knowledge of the spectral content is available.
Our sensing framework is tailored for remote sensing situations that periodically survey a desired area, e.g., satellite imaging as in Figure 1. Random mirror patterns considered in Section 2.4.1 are adequate to conduct initial compressive probing of an area and provide preliminary spectral signatures. Once this encoded information is relayed to a ground station, an HS image can be reconstructed and the estimated endmembers can be applied to the methodology of Section 2.4.2 for subsequent acquisitions. The efficiency of this entire process is numerically demonstrated in Section 3.1.

2.4.1. Initial Codebook Design

To initially probe the HS content of a survey area, sampling patterns with universal guarantees are most adequate [50]. Intuitively, the term universal refers to the property that such sampling schemes can capture a significant amount of information uniformly for all signals of interest. Universal sensing properties are ideal in situations where practitioners are oblivious to the data’s structure. Many results in the compressive sensing literature are available to provide such guarantees for random sensing ensembles, e.g., via mutual coherence and the restricted isometry property [14].
In particular, a sampling basis with orthogonal elements possesses many benefits for signal acquisition and reconstruction [16]. In our context, an orthogonal sampling scheme would require $\boldsymbol{\mathscr{D}}$ to be composed of mutually orthogonal patterns. In other words, for a fixed pixel $(i,j) \in \{1, \ldots, I_x\} \times \{1, \ldots, I_y\}$, distinct patterns $k \neq \tilde{k}$ probing the spectrum at that pixel should obey
$$\sum_{\ell=1}^{I_s} d_{ij\ell k} \, d_{ij\ell \tilde{k}} = 0.$$
Such sensing elements are arguably the most efficient at maximizing the amount of information acquired.
In order to design orthogonal patterns that are compatible with DMD-type hardware, notice that with a non-negative codebook $\boldsymbol{\mathscr{D}} \in [0,1]^{I_x \times I_y \times I_s \times M}$, it is necessary that the $M$ patterns have disjoint support. This means that for a fixed pixel $(i,j) \in \{1, \ldots, I_x\} \times \{1, \ldots, I_y\}$ and spectral band index $\ell \in \{1, \ldots, I_s\}$, the codebook must satisfy
$$d_{ij\ell k} \, d_{ij\ell \tilde{k}} = 0$$
when $k \neq \tilde{k}$. In words, per pixel, only one pattern can include samples from a given spectral band. We stress that this constraint is only necessary because we focus on DMD-based hardware.
To satisfy this, $M$ fair Bernoulli patterns are generated for each pixel so that each codebook entry $d_{ij\ell k}$ is 0 or 1 with equal probability. Then, a partition is generated that groups the spectral band indices $\{1, 2, \ldots, I_s\}$ into $M$ disjoint sets. Each set in the partition is uniquely assigned to one of the $M$ patterns of a given pixel, and each pattern is modified to be zero on the indices not contained in its assigned set of the partition. The process is repeated independently for each pixel, i.e., each pixel is assigned distinct partitions of the spectral bands. For our experiments, we generated the partitions uniformly at random and of equal (or nearly equal) size. Figure 7 provides an example of $M = 6$ sampling patterns generated for a single pixel.
Independently generated Bernoulli patterns and partitions per pixel are crucial to ensure a rich sampling scheme. This sensing approach is based on the principle that a survey region sampled at a high spatial resolution will contain many pixels with similar endmembers. The success of our probing patterns is due to the assumption that with high probability, each spectral signature is densely sampled in a cumulative manner over the entire image. This approach pairs well with our tensor-based reconstruction program, which is designed to process the entire HS cube in order to optimally exploit this assumption (see Section 2.5).
This procedure is simple but highly efficient, as seen in our numerical results of Section 3. One benefit of this construction is that the resulting codebook is a binary and sparse array $\boldsymbol{\mathscr{D}} \in \{0,1\}^{I_x \times I_y \times I_s \times M}$. This observation allows for memory-efficient storage and transmission of the encoding patterns [51]. We would like to further stress that in our experience, orthogonal encoding patterns have the additional benefit of expedited and stable numerical convergence. In other words, when orthogonal codebooks are applied, we empirically observe that iterative reconstruction approaches converge faster and avoid overfitting as the iterations proceed.
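The Bernoulli-plus-partition construction described above can be sketched as follows (a NumPy illustration under our reading of the text, not the authors' released code; the function name and toy dimensions are hypothetical). The final check confirms the per-pixel orthogonality guaranteed by disjoint supports:

```python
import numpy as np

def initial_codebook(Ix, Iy, Is, M, rng):
    """Random binary codebook with per-pixel mutually disjoint supports:
    fair Bernoulli patterns masked by a random partition of the bands."""
    D = np.zeros((Ix, Iy, Is, M), dtype=float)
    for i in range(Ix):
        for j in range(Iy):
            bern = rng.integers(0, 2, size=(Is, M))         # fair Bernoulli patterns
            parts = np.array_split(rng.permutation(Is), M)  # random band partition
            for k, idx in enumerate(parts):
                D[i, j, idx, k] = bern[idx, k]              # zero outside assigned set
    return D

rng = np.random.default_rng(1)
D = initial_codebook(2, 2, 16, 4, rng)

# Disjoint supports imply per-pixel orthogonality of the M patterns:
# the Gram matrix of each pixel's patterns is diagonal.
G = np.einsum('ijlk,ijlm->ijkm', D, D)
for i in range(2):
    for j in range(2):
        g = G[i, j]
        assert np.allclose(g, np.diag(np.diag(g)))
```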

2.4.2. Adaptive Codebook Design

In this section, we assume that prior spectral information of a scene is available to guide the codebook design. Let $\hat{\boldsymbol{\mathscr{X}}}$ be a given HS image whose endmembers are relevant to the target image $\boldsymbol{\mathscr{X}}$. Our approach in this section applies to prior information in any form, but for simplicity, we assume the prior information can be arranged as a cube matching the target image dimensions, $\hat{\boldsymbol{\mathscr{X}}} \in \mathbb{R}^{I_x \times I_y \times I_s}$.
Our goal is to provide $M$ patterns that can be uniformly applied to compressively sample the spectrum at all pixels. In other words, given $\hat{\boldsymbol{\mathscr{X}}}$, our output in this section is a matrix $Q \in [0,1]^{I_s \times M}$ whose columns are the $M$ sensing patterns. The codebook $\boldsymbol{\mathscr{D}}$ is defined in terms of $Q$ as
$$d_{ij\ell k} = q_{\ell k} \quad \text{for all pixels } (i,j) \in \{1, \ldots, I_x\} \times \{1, \ldots, I_y\}. \qquad (2)$$
In contrast to the initial codebook design of Section 2.4.1, notice that the encoding approach here severely reduces the memory requisite and programming complexity of the DMD. This is ideal for remote sensing settings, where an initial codebook for preliminary probing (as in Section 2.4.1) can be pre-programmed on the hardware prior to deployment, and subsequently, the hardware can be re-configured by minimal transmission of the matrix $Q$.
To elaborate on the adaptive pattern construction, we reshape our HS cubes $\boldsymbol{\mathscr{X}}, \hat{\boldsymbol{\mathscr{X}}} \in \mathbb{R}^{I_x \times I_y \times I_s}$ as respective matrices $X, \hat{X} \in \mathbb{R}^{I_x I_y \times I_s}$ whose rows are the $I_x I_y$ spectral signatures. Notice that in terms of the adaptive patterns, our measurements can now be written in matrix form as
$$Y = X Q + N \in \mathbb{R}^{I_x I_y \times M},$$
where $Y$ and $N$ are analogous reshaped versions of $\boldsymbol{\mathscr{Y}}$ and $\boldsymbol{\mathscr{N}}$, respectively (from Section 2.1). Given prior information $\hat{X}$, we defined our output patterns as
$$Q = \operatorname*{arg\,min}_{C \in [0,1]^{I_s \times M}} \| \hat{X} - \hat{X} C C^T \|. \qquad (3)$$
The approach attempts to produce patterns that efficiently capture all the spectral content of the scene in a least squares sense. Intuitively, (3) describes a $Q$ that acts as an orthogonal projection onto an $M$-dimensional subspace that best approximates the row space of $\hat{X}$, while obeying DMD grayscale constraints. To attempt to solve (3), we used a block-coordinate descent method with iterative projections onto $[0,1]^{I_s \times M}$ (see Algorithm 1). We note that steps 2 and 3 of Algorithm 1 can be solved by the least squares method, e.g., using MATLAB's lsqr solver. Figure 7 provides an example of $M = 6$ adaptive patterns generated via (3) using our field data.
Algorithm 1 Adaptive Codebook Construction
Input: prior information $\hat{X} \in \mathbb{R}^{I_x I_y \times I_s}$, number of patterns $M \in \mathbb{N}$, and number of updates $L \in \mathbb{N}$.
Output: adaptive codebook $Q \in [0,1]^{I_s \times M}$.
Define: entry-wise threshold function
$$\mathrm{thresh}_{[0,1]}(x) = \begin{cases} x & \text{if } x \in [0,1] \\ 0 & \text{if } x < 0 \\ 1 & \text{if } x > 1. \end{cases}$$
Initialize: $Q$ with entries independently generated from the uniform distribution on $[0,1]$.
1: for $\ell = 1, 2, \ldots, L$ do
2:     $\tilde{Q} \leftarrow \operatorname*{arg\,min}_{C \in \mathbb{R}^{I_s \times M}} \| \hat{X} - \hat{X} Q C^T \|$
3:     $Q \leftarrow \operatorname*{arg\,min}_{C \in \mathbb{R}^{I_s \times M}} \| \hat{X} - \hat{X} C \tilde{Q}^T \|$
4:     $Q \leftarrow \mathrm{thresh}_{[0,1]}(Q)$
5: end for
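Algorithm 1 can be prototyped in a few lines of NumPy. This is an illustrative sketch under the description above, not the authors' implementation: we use closed-form least squares solves in place of MATLAB's lsqr, and the minimum-norm solution of the two-sided problem in step 3; the function name and toy sizes are hypothetical.

```python
import numpy as np

def adaptive_codebook(Xhat, M, L, rng):
    """Block-coordinate sketch of Algorithm 1.
    Xhat: (num_pixels x Is) matrix of prior spectral signatures."""
    Is = Xhat.shape[1]
    Q = rng.random((Is, M))                  # Initialize: uniform on [0, 1]
    pinv_X = np.linalg.pinv(Xhat)
    for _ in range(L):
        # Step 2: Q_tilde <- argmin_C || Xhat - (Xhat Q) C^T ||
        P = Xhat @ Q
        Qt = np.linalg.lstsq(P, Xhat, rcond=None)[0].T
        # Step 3: Q <- argmin_C || Xhat - Xhat C Q_tilde^T ||
        # (minimum-norm solution of the two-sided least squares problem)
        Q = pinv_X @ Xhat @ np.linalg.pinv(Qt.T)
        # Step 4: entry-wise projection onto [0, 1] (the thresh function)
        Q = np.clip(Q, 0.0, 1.0)
    return Q

rng = np.random.default_rng(2)
Xhat = rng.random((100, 16))                 # toy prior: 100 pixels, 16 bands
Q = adaptive_codebook(Xhat, M=4, L=10, rng=rng)
```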
This approach was shown to achieve accurate HS imaging via a minimized codebook with the ability to combat oversaturation. However, it is important to notice that the approach causes the sensing system to focus on probing endmembers with a certain structure (according to X ^ ). Therefore, this approach is most applicable when the prior information encompasses the entire spectrum of the target image or when only certain pixels are of interest. In particular, the latter scenario is adequate for HS image segmentation, where certain signatures can be identified and efficiently reconstructed. Finally, with M patterns applied uniformly to all pixels, numerical reconstruction approaches can be implemented more efficiently using (2) to recast the sensing model (1) in matrix form. See Section 3.1 for the numerical results.
We warn the reader that our adaptive methodology assumes that subsequent surveys are conducted in conditions similar to the initial imaging. This assumption may be violated under weather conditions that cause severe changes in the spectral properties; for example, clouds and shadows have been known to shift peak wavelengths [52]. We do not explore such situations in this work, but they would likely alter the findings obtained in this study, and we propose to examine these cases in future work.
We end this section by noting that several other authors considered the design of encoding patterns for spectral sampling [6,53,54]. However, to the best of our knowledge, these works in the literature aim to output codebooks that are efficient in a universal sense (i.e., not adaptive). In other words, these methodologies do not incorporate input spectral data to guide the encoding process. While such work is very valuable for more efficient initial probing of a scene (as in the setting of Section 2.4.1), our work here focused on the situation where prior information or specific endmembers were targeted. In particular, we focused our scope on remote satellite imaging, where the light-modulation hardware could be re-configured as more spectral information became available.

2.5. HS Image Reconstruction: HSCP-ALS

In contrast to most numerical work in HS compressive sensing, the novelty of our decoding method is that we exploited the multidimensional structure of HS images. In particular, we employed the redundant representation of many HS images via the canonical polyadic (CP) decomposition of tensors [37,38,39,40,41,42]. Inspired by the compression capabilities of the CP decomposition, our chosen approach utilizes this low-dimensional representation for HS image reconstruction that is compatible with previously proposed sensing schemes [11].
The approach discussed in this section is a continuation of the preliminary work in our conference paper [13]. Here, we further validated this approach by testing its reconstruction capabilities on real footage and other codebook designs (e.g., adaptive patterns from Section 2.4.2). We also compared our decoding approach to other techniques proposed in the literature (see Section 3.3). In the current section, we summarize and expand on the approach proposed in our prior work. Before proceeding to the reconstruction program in Section 2.5.2, we briefly discuss the canonical polyadic decomposition of tensors.

2.5.1. Canonical Polyadic Decomposition

The canonical polyadic (CP) decomposition is one approach to generalize the singular value decomposition of matrices to higher-dimensional arrays (also known as the CANDECOMP/PARAFAC decomposition). This representation decomposes a tensor into a sum of "simple" rank-one tensors. Focusing on the three-dimensional case, a rank-one tensor $\mathcal{T}$ can be written as the outer product of three vectors:

$$\mathcal{T} = \mathbf{a} \circ \mathbf{b} \circ \mathbf{c},$$

where "$\circ$" denotes the vector outer product. In other words, each entry of a rank-one tensor $\mathcal{T}$ is given as $t_{ij\ell} = a_i b_j c_\ell$. This is the simplest form a tensor can have under a CP decomposition, and any other tensor can be written as a sum of rank-one components. A rank-$R$ CP decomposition of a three-dimensional tensor $\mathcal{X} \in \mathbb{R}^{I_x \times I_y \times I_s}$ is of the form

$$\mathcal{X} = \sum_{k=1}^{R} \mathbf{a}_k \circ \mathbf{b}_k \circ \mathbf{c}_k, \tag{4}$$

where $\mathbf{a}_k \in \mathbb{R}^{I_x}$, $\mathbf{b}_k \in \mathbb{R}^{I_y}$, and $\mathbf{c}_k \in \mathbb{R}^{I_s}$ for each $k \in \{1, 2, \ldots, R\}$. The smallest integer $R$ that allows $\mathcal{X}$ to be written as (4) is called the rank of $\mathcal{X}$. The CP decomposition (4) is concisely expressed as $\mathcal{X} = [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$ if $\mathbf{a}_k$, $\mathbf{b}_k$, and $\mathbf{c}_k$ represent the $k$-th columns of $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$, respectively (known as the tensor factors). Equivalently, in terms of the factors, this notation signifies that each entry of $\mathcal{X}$ is given as

$$x_{ij\ell} = \sum_{k=1}^{R} a_{ik} b_{jk} c_{\ell k}.$$
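To make the notation concrete, the entrywise formula above can be sketched in a few lines of NumPy (an illustrative stand-in; the paper's own implementation is in MATLAB):

```python
import numpy as np

def cp_to_tensor(A, B, C):
    """Assemble the rank-R CP tensor [[A, B, C]] with entries
    x[i, j, l] = sum_k A[i, k] * B[j, k] * C[l, k]."""
    return np.einsum('ik,jk,lk->ijl', A, B, C)

# Rank-one case: the outer product of three vectors a, b, c.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = cp_to_tensor(a[:, None], b[:, None], c[:, None])
assert T[1, 0, 1] == 2.0 * 3.0 * 6.0  # t_{ijl} = a_i * b_j * c_l
```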
Compatibility with the linear mixing model: The CP decomposition of HS images is well suited to the linear mixing model (LMM) [55] used to decompose the spectral content of a scene. The LMM assumes that each pixel contains a linear combination of pure spectra (i.e., endmembers). The distribution of the endmembers throughout the image is determined by the abundance, which specifies the fraction of each pixel a material occupies. The CP decomposition (4) also allows for such an interpretation. The factor $\mathbf{C} \in \mathbb{R}^{I_s \times R}$ encompasses the endmembers, where each column contains independent components of spectral signatures throughout the scene. The outer product of the respective columns of $\mathbf{A} \in \mathbb{R}^{I_x \times R}$ and $\mathbf{B} \in \mathbb{R}^{I_y \times R}$ specifies the spatial abundance of each corresponding endmember in $\mathbf{C}$. See our references for further discussion [37,38,39,40,41,42], where these observations were used for spectral unmixing.

2.5.2. HS Image Reconstruction via CP-ALS

To decode, i.e., to estimate the HS image $\mathcal{X}$ given $\mathcal{Y}$ and $\mathcal{D}$ (the encoded image and codebook), we propose to solve a least squares problem constrained to a CP decomposition. Specifically, we obtain an HS image estimate $\widetilde{\mathcal{X}} \approx \mathcal{X}$ using

$$\widetilde{\mathcal{X}} = \operatorname*{arg\,min}_{\mathcal{T} \in \mathbb{R}^{I_x \times I_y \times I_s}} \left\| \mathcal{D}(\mathcal{T}) - \mathcal{Y} \right\| \quad \text{subject to} \quad \mathcal{T} = [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!], \tag{5}$$

where $\mathcal{D}$ is defined in Section 2.1 to apply the sensing scheme. Program (5) ensures that our output estimate $\widetilde{\mathcal{X}}$ fits the encoded measurements in a least squares sense while obeying a CP decomposition with factors $\mathbf{A} \in \mathbb{R}^{I_x \times R}$, $\mathbf{B} \in \mathbb{R}^{I_y \times R}$, and $\mathbf{C} \in \mathbb{R}^{I_s \times R}$ for some choice of $R$. Notice that this methodology implicitly depends on $R$, which we refer to as the rank parameter. This parameter must be chosen by the practitioner to accurately approximate $\mathcal{X}$. Intuitively, the rank parameter captures the low-dimensional structure of the HS image. An accurate representation of $\mathcal{X}$ via a CP decomposition with rank parameter $R$ reduces the storage from $I_x I_y I_s$ free variables to $R(I_x + I_y + I_s)$. In our experiments discussed in Section 3, the rank was chosen to be as large as possible within our GPU memory limitations while not exceeding the largest array dimension. We empirically observed that reconstructed HS images can be stored in CP format with less than 1% of the uncompressed memory requirement.
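The storage savings quoted above are easy to verify. For instance, for the 1700 × 1700 × 120 cubes used in Section 3.1, the ranks reported there give CP representations well under 1% of the dense entry count (a quick Python check; the dimensions and ranks are from the text, the helper function is ours):

```python
def cp_compression_ratio(Ix, Iy, Is, R):
    """Fraction of the dense I_x * I_y * I_s entries needed to store the
    rank-R CP factors, i.e., R * (I_x + I_y + I_s) / (I_x * I_y * I_s)."""
    return R * (Ix + Iy + Is) / (Ix * Iy * Is)

# Ranks used for the HBOI cubes in Section 3.1.
print(f"{100 * cp_compression_ratio(1700, 1700, 120, 450):.2f}%")  # 0.46%
print(f"{100 * cp_compression_ratio(1700, 1700, 120, 700):.2f}%")  # 0.71%
```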
To solve (5), we adopted a modified version of the popular CP-ALS program that incorporates our sensing scheme. The CP-ALS algorithm is elaborated in [45] (see Section 3.4 therein) for the case in which no linear measurement operator is applied (i.e., without the operator $\mathcal{D}$ of our scenario). For brevity, we refer the reader to [45] for details and only discuss here the minor changes required to solve (5) in the case of three dimensions.
The alternating least squares (ALS) method fixes two of the tensor factors while updating the remaining factor, then alternates to update each factor iteratively. In this manner, the approach attempts to solve the overall non-convex program via a sequence of least squares sub-problems. The procedure requires the user to input the rank parameter $R$ and initial estimates $\mathbf{B}$ and $\mathbf{C}$. In this work, the initial factors were chosen with entries independently and identically distributed according to the uniform distribution on $[0, 1]$.
With $\mathbf{B}$ and $\mathbf{C}$ fixed (as initial estimates), $\mathbf{A}$ was updated via

$$\mathbf{A} \leftarrow \operatorname*{arg\,min}_{\mathbf{T} \in \mathbb{R}^{I_x \times R}} \left\| \mathcal{D}([\![\mathbf{T}, \mathbf{B}, \mathbf{C}]\!]) - \mathcal{Y} \right\|. \tag{6}$$

To simplify, we introduce the operator $\mathcal{D}_{\mathbf{B},\mathbf{C}} : \mathbb{R}^{I_x \times R} \rightarrow \mathbb{R}^{I_x \times I_y \times M}$ defined as

$$\mathcal{D}_{\mathbf{B},\mathbf{C}}(\mathbf{A}) = \mathcal{D}([\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]).$$
Using observations in [45], the action of this operator can be described via the Khatri–Rao product $\mathbf{B} \odot \mathbf{C}$, which yields an $I_y I_s \times R$ matrix. The transpose of this product is multiplied with $\mathbf{A}$, giving $\mathbf{A}(\mathbf{B} \odot \mathbf{C})^T \in \mathbb{R}^{I_x \times I_y I_s}$ to input as the argument of the encoding operator $\mathcal{D}$ after properly reshaping $\mathbf{A}(\mathbf{B} \odot \mathbf{C})^T$ as an $I_x \times I_y \times I_s$ tensor. Notice that $\mathcal{D}_{\mathbf{B},\mathbf{C}}$ is linear, and therefore, our update is given in closed form as

$$\mathbf{A} \leftarrow \mathcal{D}_{\mathbf{B},\mathbf{C}}^{\dagger}(\mathcal{Y}),$$

where $\dagger$ denotes the pseudoinverse. Equivalently, with the multiway arrays properly reshaped as matrices (as in Section 2.4.2), (6) can be recast as a standard least squares problem in vector form, for which iterative solvers are widely available. Updates for $\mathbf{B}$ and $\mathbf{C}$ can be achieved via the analogously defined operators $\mathcal{D}_{\mathbf{C},\mathbf{A}}$ and $\mathcal{D}_{\mathbf{A},\mathbf{B}}$, respectively.
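The Khatri–Rao identity used here can be checked numerically. The NumPy sketch below (ours, with a row-major reshape standing in for the "proper reshaping" of the text) verifies that the mode-1 matricization of $[\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$ equals $\mathbf{A}(\mathbf{B} \odot \mathbf{C})^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
Ix, Iy, Is, R = 4, 5, 6, 3
A, B, C = rng.random((Ix, R)), rng.random((Iy, R)), rng.random((Is, R))

# Khatri-Rao (column-wise Kronecker) product B (.) C: an (Iy*Is) x R matrix.
BC = np.einsum('jk,lk->jlk', B, C).reshape(Iy * Is, R)

# Full CP tensor and its mode-1 unfolding (row-major reshape).
X = np.einsum('ik,jk,lk->ijl', A, B, C)
assert np.allclose(X.reshape(Ix, Iy * Is), A @ BC.T)
```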
We refer to this modified CP-ALS procedure as HSCP-ALS. See Algorithm 2 for pseudocode of the entire HSCP-ALS algorithm to solve (5). As described here, the number of factor updates $K$ must be specified by the user, but we note that stopping criteria for similar block-coordinate descent methods have been proposed to halt factor updates once the overall estimate is deemed to have converged [56]. In our experiments of Section 3, we used MATLAB's lsqr solver limited to 10 iterations to approximate each pseudoinverse (i.e., to perform steps 2, 3, and 4 in Algorithm 2).
Algorithm 2 HSCP-ALS
     Input: encoded image $\mathcal{Y} \in \mathbb{R}^{I_x \times I_y \times M}$, codebook $\mathcal{D} \in [0,1]^{I_x \times I_y \times I_s \times M}$, rank $R \in \mathbb{N}$, and number of updates $K \in \mathbb{N}$.
     Output: estimated HS image $\widetilde{\mathcal{X}} \in \mathbb{R}^{I_x \times I_y \times I_s}$.
     Initialize: tensor factors $\mathbf{B} \in \mathbb{R}^{I_y \times R}$ and $\mathbf{C} \in \mathbb{R}^{I_s \times R}$ with entries independently generated from the uniform distribution on $[0, 1]$.
1: for $k = 1, 2, \ldots, K$ do
2:   $\mathbf{A} \leftarrow \mathcal{D}_{\mathbf{B},\mathbf{C}}^{\dagger}(\mathcal{Y})$
3:   $\mathbf{B} \leftarrow \mathcal{D}_{\mathbf{C},\mathbf{A}}^{\dagger}(\mathcal{Y})$
4:   $\mathbf{C} \leftarrow \mathcal{D}_{\mathbf{A},\mathbf{B}}^{\dagger}(\mathcal{Y})$
5: end for
6: $\widetilde{\mathcal{X}} = [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$
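A compact numerical sketch of the whole HSCP-ALS loop is given below (NumPy, rather than the MATLAB/gpuArray implementation used in the paper). The per-pixel spectral encoder `encode` is our assumption for the operator $\mathcal{D}$ of Section 2.1, which lies outside this excerpt, and each pseudoinverse is applied by materializing the small linear map rather than calling an iterative solver such as lsqr:

```python
import numpy as np

rng = np.random.default_rng(1)
Ix, Iy, Is, M, R, K = 6, 6, 8, 3, 2, 15

def cp(A, B, C):
    """Assemble [[A, B, C]]: entries sum_k A[i,k] * B[j,k] * C[l,k]."""
    return np.einsum('ik,jk,lk->ijl', A, B, C)

# Assumed per-pixel spectral encoder standing in for D (Section 2.1):
# y[i, j, m] = sum_l D[i, j, l, m] * x[i, j, l].
Dcube = rng.random((Ix, Iy, Is, M))
def encode(X):
    return np.einsum('ijlm,ijl->ijm', Dcube, X)

# Synthetic exactly-low-rank ground truth and its encoded measurements.
X_true = cp(rng.random((Ix, R)), rng.random((Iy, R)), rng.random((Is, R)))
Y = encode(X_true)

def ls_update(apply_op, n_vars):
    """Solve the sub-problem min_v ||apply_op(v) - Y|| by materializing
    the linear map column by column (fine at these toy sizes)."""
    cols = [apply_op(e).ravel() for e in np.eye(n_vars)]
    sol, *_ = np.linalg.lstsq(np.stack(cols, axis=1), Y.ravel(), rcond=None)
    return sol

# Algorithm 2: random init of B and C, then K alternating factor updates.
B, C = rng.random((Iy, R)), rng.random((Is, R))
A = np.zeros((Ix, R))
misfits = []
for _ in range(K):
    A = ls_update(lambda v: encode(cp(v.reshape(Ix, R), B, C)), Ix * R).reshape(Ix, R)
    B = ls_update(lambda v: encode(cp(A, v.reshape(Iy, R), C)), Iy * R).reshape(Iy, R)
    C = ls_update(lambda v: encode(cp(A, B, v.reshape(Is, R))), Is * R).reshape(Is, R)
    misfits.append(np.linalg.norm(encode(cp(A, B, C)) - Y))

X_hat = cp(A, B, C)  # each sweep cannot increase the data misfit
```

Because every update exactly minimizes the shared objective over one factor block, the recorded misfits are non-increasing, mirroring the block-coordinate descent behavior discussed around [56].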

3. Results

We present the imaging experiment results to demonstrate the efficiency of our reconstruction program and the capabilities of our proposed adaptive sampling techniques. Section 3.1 showcases the image reconstruction on footage captured by our prototype system, which applied the initial and adaptive codebook designs of Section 2.4. Section 3.2 uses selected spectral signatures to design sampling patterns and identify specific endmembers from a scene, i.e., image segmentation. Finally, Section 3.3 compares our HSCP-ALS methodology to other decoding approaches from the literature.
HSCP-ALS implementation: We used gpuArray from MATLAB (R2023b) to implement Algorithm 2 on an NVIDIA RTX A6000 GPU with 48 GB of memory. In other words, the variables $\mathcal{Y}$, $\mathcal{D}$, $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ were stored on the GPU to allow for distributed processing. This expedited image reconstruction but imposed a memory limitation, which in our experiments constrained the choice of the rank parameter. As a rule, we chose $R$ as the largest multiple of 50 that allowed for GPU storage while not exceeding $\max\{I_x, I_y, I_s\}$. The number of updates was always fixed to $K = 10$.
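The rank-selection rule just stated is simple to codify. In the sketch below, the memory check is abstracted into a caller-supplied predicate; the 13 MB factor budget in the example is invented for illustration, since the true constraint depends on the GPU and on storing $\mathcal{Y}$ and $\mathcal{D}$ as well:

```python
def choose_rank(dims, fits):
    """Largest multiple of 50 not exceeding max(dims) for which fits(R)
    holds; `fits` stands in for the hardware-specific GPU-memory check."""
    R = (max(dims) // 50) * 50
    while R > 0 and not fits(R):
        R -= 50
    return R

# Hypothetical budget: 8-byte entries, CP factors alone capped at 13 MB.
dims = (1700, 1700, 120)
factor_bytes = lambda R: 8 * R * sum(dims)
print(choose_rank(dims, lambda R: factor_bytes(R) <= 13e6))  # 450
```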

3.1. Reconstruction Results: HBOI Field Data

As discussed in Section 2.3, the preliminary data collected in this work did not incorporate light-modulation hardware. To gain instructive insight into our ongoing compressive sensing design, this study numerically applied simulated DMD-based spectral encoding to the collected noisy footage. HSCP-ALS was then applied to decode the compressed measurements and produce an estimated image.
Our aim was to demonstrate the entirety of our proposed remote sensing scheme from Section 2.4, as in Figure 1, with initial probing of a scene followed by adaptive sampling in subsequent surveys. To elaborate, we considered the HS images from Section 2.3 labeled HBOI1 and HBOI2 (see Figure 5 and Figure 6). These are images of Scene 1 in Figure 4, captured at different times of the day. Our strategy applied an initial random codebook from Section 2.4.1 to HBOI1, followed by HSCP-ALS (5) to produce an image estimate $\widehat{\mathcal{X}}$. Using the reconstructed data, we applied the adaptive codebook design of Section 2.4.2 to produce $\mathbf{Q}$ as the output of (3) and applied the respective encoding to HBOI2. Finally, an estimate $\widetilde{\mathcal{X}}$ of HBOI2 was reconstructed by HSCP-ALS.
We truncated HBOI1 and HBOI2 to have a spatial resolution of 1700 × 1700 pixels with 120 spectral bands (ranging from 343 nm to 827 nm). This was done to remove the field of view obstructed by the camera chassis and to reduce the image size for the GPU distributed processing. Our experiments achieved imaging with 5 % sampling relative to the target 120 wavelengths.
For the initial probing of HBOI1, we generated $M = 6$ binary patterns independently per pixel, as in Section 2.4.1, with the partitions chosen uniformly at random. The estimate $\widehat{\mathcal{X}} \in \mathbb{R}^{1700 \times 1700 \times 120}$ was then produced as the output of Algorithm 2 with rank $R = 450$ and $K = 10$ updates. This choice of rank yielded an estimate $\widehat{\mathcal{X}}$ in a CP format that occupied 0.46% of the bytes relative to the uncompressed HS image. See the left plot of Figure 7 for an example of the generated initial codebook, Figure 8 for the HBOI1 results, and Figure 9 for a close-up of the reconstructed image details.
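For reference, one plausible construction of such a codebook (each pixel's bands partitioned uniformly at random among the $M$ binary patterns; the exact Section 2.4.1 construction is outside this excerpt) could look like:

```python
import numpy as np

def random_partition_codebook(Ix, Iy, Is, M, seed=0):
    """Per pixel, assign each of the Is bands to one of M binary patterns
    uniformly at random, so the M patterns partition the spectrum."""
    rng = np.random.default_rng(seed)
    band_to_pattern = rng.integers(0, M, size=(Ix, Iy, Is))
    # D[i, j, l, m] = 1 iff band l at pixel (i, j) is sensed by pattern m.
    return (band_to_pattern[..., None] == np.arange(M)).astype(np.uint8)

D = random_partition_codebook(4, 4, 120, 6)
assert (D.sum(axis=3) == 1).all()  # each band lands in exactly one pattern
```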
To achieve adaptive sampling, we used the estimate $\widehat{\mathcal{X}}$ of HBOI1 to generate $\mathbf{Q} \in [0,1]^{120 \times 6}$ as the output of Algorithm 1 with $L = 30$ updates. These sampling patterns were applied uniformly to all pixels of HBOI2, again reducing the amount of collected data by a factor of 20 (5% sampling). The adaptive estimate $\widetilde{\mathcal{X}} \in \mathbb{R}^{1700 \times 1700 \times 120}$ was produced as the output of Algorithm 2, with rank $R = 700$ and $K = 10$ updates. The severely reduced codebook size allowed this larger choice of rank parameter for GPU processing. The output HS estimate was stored in a CP format that occupied 0.71% of the uncompressed bytes. The adaptive patterns are shown in the right plot of Figure 7, the final output is illustrated in Figure 10, and Figure 9 provides a close-up of the image details.
As noted in Section 2.3, image HBOI2 suffered from oversaturation in several pixels. The image reconstruction result of Figure 10 demonstrates that our adaptive sampling technique allowed HSCP-ALS to mitigate the saturation by providing some information about the peak wavelengths. Simultaneously, notice that we also reliably estimated non-saturated signatures (e.g., the grass and speed limit sign spectra). Therefore, using reliable signatures as prior information, our adaptive codebook design could output sampling patterns that helped to estimate the missing data without affecting the distortion-free pixels. This source of corruption was unknown to the authors during data collection. These unexpected results merit further research, with the potential to correct other kinds of unwanted distortions. We propose to focus on these aspects of our methodology in future work.
Figure 9 shows the reconstruction of the speed limit sign found in the HBOI images. These results explore the ability of our approaches to recover spatial details. We found that the initial probing scheme (i.e., the results on HBOI1) seemed to be better at picking up spectral details than the adaptive patterns. This is evident in the plots of Figure 8 and Figure 10, where the HBOI1 reconstruction provided an endmember with more curvature details, while the adaptive HBOI2 signature gave a smoother estimate. In terms of spatial details, Figure 9 makes clear that the adaptive result for HBOI2 provided a sharper image of the speed limit sign than the blurry output for HBOI1. These observations imply an interesting tradeoff between adaptive and non-adaptive techniques in reconstructing spatial vs. spectral details, which is informative for practitioners. However, we note that the adaptive samples were designed to minimize the number of patterns that needed to be transmitted. This limitation was due to our focus on remote sensing scenarios, and thus, the adaptive results may be improved by allowing for more spatially varied patterns.

3.2. Hardware-Based Segmentation

This section showcases the image segmentation capabilities of light-modulation-based sensing. The main novelty of our approach is that, with prior information available, segmentation and endmember identification can be implemented in the hardware and achieved directly from the compressed samples. This allows one to focus computational resources on specific pixels and improve the estimation quality for signatures of interest.
To demonstrate this, we considered the scenes of our captured footage in Section 2.3 and of a river mouth region surveyed by NASA's HICO missions [1]. In both scenarios, we used the initial probing techniques of Section 2.4.1 to obtain preliminary HS information. From these estimated images, we selected several pixels with spectral signatures of interest and used these endmembers to specify the sampling patterns for subsequent surveys. The goal of this methodology was to specialize the sensing scheme to identify and efficiently sample regions whose spectra are similar to the chosen endmembers. We note that the initial probing step is not necessary if a practitioner already has prior information; we focus on this specific workflow because it fits our remote sensing scheme illustrated in Figure 1.
HBOI field data: In our captured HS footage (see Section 2.3), we attempted to identify the pixels of the red, green, and blue panels. We used the reconstructed HBOI1 image from Section 3.1 as our initial data estimate; see Figure 8. We manually selected several pixels of the target panels, then averaged and normalized them to produce exemplar signatures $\mathbf{r}, \mathbf{g}, \mathbf{b} \in [0,1]^{120}$ (red, green, and blue, respectively). We constructed our adaptive codebook $\mathbf{Q} \in [0,1]^{120 \times 4}$ by setting $\mathbf{r}$, $\mathbf{g}$, and $\mathbf{b}$ as its first three columns and the "all-ones" vector $\mathbf{1} \in \{1\}^{120}$ as the last column. The all-ones sensing pattern provides normalization coefficients in order to achieve segmentation in compressed format (i.e., directly from our observed measurements $\mathcal{Y} \in \mathbb{R}^{I_x \times I_y \times 4}$).
Let $\widetilde{\mathcal{Y}} \in [0,1]^{I_x \times I_y \times 3}$ be defined at each pixel $(i,j) \in \{1, \ldots, I_x\} \times \{1, \ldots, I_y\}$ as

$$\tilde{y}_{ijk} = \frac{y_{ijk}}{y_{ij4}}, \quad k \in \{1, 2, 3\},$$
providing normalized sensing via $\mathbf{r}$, $\mathbf{g}$, and $\mathbf{b}$. Then, simple segmentation techniques can be applied to identify outliers of $\widetilde{\mathcal{Y}}$ in each channel and extract pixels with signatures most similar to the chosen endmembers. In this work, we used MATLAB's grayconnected function to achieve this segmentation step. Using this information, we applied HSCP-ALS only to a $242 \times 266$ region of identified pixels with the choice $R = 266$ (the largest array dimension).
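As an illustration of this normalize-then-threshold step (NumPy, with a plain threshold standing in for MATLAB's grayconnected):

```python
import numpy as np

def normalize_channels(Y, eps=1e-12):
    """Divide the endmember-matched channels by the all-ones channel
    (the last one), implementing the per-pixel normalization above."""
    return Y[..., :-1] / (Y[..., -1:] + eps)

# Toy 2 x 2 encoded image with 4 channels: r-, g-, b-, and all-ones responses.
Y = np.array([[[3.0, 1.0, 1.0, 4.0], [1.0, 3.0, 1.0, 4.0]],
              [[1.0, 1.0, 3.0, 4.0], [1.0, 1.0, 1.0, 4.0]]])
Yt = normalize_channels(Y)
red_mask = Yt[..., 0] > 0.5  # pixels whose spectrum best matches r
```

Here only the top-left pixel exceeds the red threshold; in the paper, the analogous outlier pixels are then passed to HSCP-ALS for focused reconstruction.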
The segmentation result is shown in Figure 11, demonstrating accurate detection of the three bins of interest. Notice that this approach also mitigates HBOI2's oversaturated pixels (in the green and blue panels) while providing accurate estimation of the non-distorted signatures (i.e., the red panel pixels). In contrast to the previous adaptive sampling result (Figure 10), the result here arguably improves the ability to infill information: the estimated spectral plots in Figure 11 exhibit more structure, with identifiable peak wavelengths. These improved capabilities are likely due to the additional tailoring of the modulation patterns for panel identification. However, as a tradeoff, the selected patterns are probably not adequate to address the deformations of unrelated endmembers. Further investigation and discussion of the distortion-correcting abilities of adaptive samples is left as future work.
NASA HICO data: The same procedure was used to segment water-related pixels in the HICO satellite data. We considered two HS images labeled H2011251110401 and H2010261105802, which we refer to as HICO401 and HICO802, respectively. These images cover the same river mouth region, surveyed on different days. We truncated these cubes to a size of $508 \times 1997 \times 120$, removed corrupt pixels, and focused on the wavelengths 369.7–1051.3 nm. We used an HSCP-ALS reconstructed estimate of HICO401 as prior data (with a GPU-allowed rank $R = 1800$) that was synthetically encoded via $M = 6$ patterns per pixel (5% sampling) as in Section 2.4.1. We then selected several pixels from the ocean and water regions of the image to generate the averaged and normalized signature $\mathbf{w} \in [0,1]^{120}$ and codebook $\mathbf{Q} \in [0,1]^{120 \times 2}$ as before (incorporating the all-ones pattern for normalization). $\mathbf{Q}$-based sensing was applied to HICO802 (1.67% sampling), followed by segmentation of the encoded image (as was done for the HBOI data). Finally, an HSCP-ALS estimate of HICO802's water pixels was produced with rank $R = 1997 = \max\{I_x, I_y, I_s\}$. The segmentation and reconstruction results are shown in Figure 12.

3.3. Comparison of HSCP-ALS to Other Methods: Benchmark Dataset

To further validate our image reconstruction program (HSCP-ALS), we compared it to several approaches from the literature. Our previous work [13] compared our methodology to SpeCA (spectral compressive acquisition) [4] and showed that our method performed significantly better. Here, we additionally compared it to CPPCA (compressive-projection principal component analysis) [16] and to sparsity-based reconstruction schemes; specifically, we considered $\ell_1$-norm minimization [31] and a distributed counterpart [25,57].
We used the benchmark Cuprite image [9,10], with 190 × 250 spatial resolution and 188 bands. With the ground truth provided, we used the signal-to-noise ratio (SNR) as the metric of the reconstruction quality:
$$\mathrm{SNR}\left(\mathcal{X}, \widetilde{\mathcal{X}}\right) = 10 \log_{10} \frac{\|\mathcal{X}\|^2}{\|\mathcal{X} - \widetilde{\mathcal{X}}\|^2},$$
where X is the HS image of interest and X ˜ is the provided estimate.
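As a reference implementation of this metric (ours, not the paper's code):

```python
import numpy as np

def snr_db(X, X_hat):
    """SNR(X, X~) = 10 * log10(||X||^2 / ||X - X~||^2), Frobenius norms."""
    return 10.0 * np.log10(np.sum(X**2) / np.sum((X - X_hat)**2))

# A uniform 10% error in every entry corresponds to 20 dB.
X = np.ones((4, 4, 4))
assert np.isclose(snr_db(X, 0.9 * X), 20.0)
```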
We conducted image reconstruction experiments with varied numbers of measurements and noise levels. We considered sampling percentages of ∼5%, ∼10%, and ∼15% relative to the 188 spectral bands. In other words, three surveys with $M = 10$, $19$, and $23$ sampling patterns were generated from our initial probing scheme of Section 2.4.1. Our noise model generated random additive noise according to chosen hardware specifications. We incorporated the wavelength-wise grating prism efficiency included in the ZEISS product manual to generate structured noise. This structured noise was perturbed via normally distributed random variables, with variance chosen according to a desired SNR, and added to the encoded measurements (i.e., additive noise). We added corruption from this synthetic noise model at SNR levels of 10 dB, 15 dB, and 20 dB.
The competing approaches had several other parameters that needed to be chosen, including the number of "lines" to reconstruct simultaneously (i.e., the number of image rows) and the number of singular vectors to estimate (for CPPCA). To produce the best results, we exhaustively cycled through all possible variations of these parameters and showcase only the best outcomes. Our methodology (5) requires only the rank parameter and the number of factor updates, which we limited to $K = 10$ updates and $R = 250 = \max\{I_x, I_y, I_s\}$ according to our rule. The comparisons in Table 1, Table 2 and Table 3 show that HSCP-ALS outperformed the other reconstruction programs in every scenario.

4. Conclusions

This work advanced remote compressive hyperspectral imaging by designing and testing a prototype camera. We showcased our initial sensing system by capturing real hyperspectral volumes of complex scenes using hardware compatible with light-modulation devices. The collected images were used to validate our proposed tensor reconstruction algorithm that decoded spectrally modulated measurements. We further developed an approach to output adaptive samples when prior information was available or gained through subsequent surveys. This sampling workflow was validated on footage collected by our imaging system. The overall approach allowed us to capture, segment, and transmit hyperspectral images from a severely reduced number of samples while producing high-quality output images.
To further explore the capabilities and limitations of compressive cameras in remote scenarios, more hardware development is needed. Our camera was designed to be compatible with light-modulation devices, but the work here did not include such components. Instead, our current work numerically simulated this encoding process on densely sampled volumes. Future work will utilize our findings in this study to achieve adequate optical modifications and additionally collect hardware encoded data from Camera 1 (see Section 2). This will allow for a comprehensive study of compressive hyperspectral cameras suitable for remote missions, and further validate our proposed reconstruction algorithm and sampling methodologies on true hardware-modulated data. The work presented here provided instructive findings that will guide our next-generation camera design.
Our numerical experiments demonstrated that our low-rank tensor methodology for image reconstruction is robust and outperforms other lower-dimensional approaches in the literature. The main novelty is that our method processes the entire volume in order to fully exploit the multidimensional structure of the data. However, other numerical methods exist that also exploit tensor decompositions and likewise demonstrate their superiority over vector- and matrix-based analytics. It would be of interest to compare and explore the tradeoffs of different tensor decomposition choices for the purpose of compressive hyperspectral imaging.
Unknown to the authors, the collected HBOI data exhibited oversaturation with a loss of peak wavelength structure. Our results demonstrated that the proposed adaptive sampling methodology was able to mitigate this source of corruption and estimate the missing information. Such distortion correction was not the focus of this paper and came as a surprise to the authors. As future work, it would be of interest to systematically explore the ability of adaptive samples to combat oversaturation and other types of image corruption. For example, surveys may be conducted under varying ambient conditions (e.g., clouds and shadows). Such situations will alter spectral signatures, and thus, it is important to identify the limitations of our adaptive approach in this scenario or explore its ability to mitigate such distortions.

Author Contributions

Conceptualization, B.O., M.T. and O.L.; methodology, B.O., M.T. and O.L.; software, O.L.; formal analysis, A.E. and O.L.; investigation, E.M., C.G. and B.O.; resources, M.T. and B.O.; data curation, E.M., C.G. and B.O.; writing—original draft preparation, O.L.; writing—review and editing, O.L.; visualization, A.E. and O.L.; supervision, M.T. and B.O.; project administration, E.M. and B.O.; funding acquisition, M.T. and B.O. All authors read and agreed to the published version of this manuscript.

Funding

This work was supported in part by the National Oceanographic Partnership Program (NOPP), managed by the Office of Naval Research (grants N000141812841 and N000141912504) and Office of Naval Research contract N00014-20-C-2035.

Data Availability Statement

The datasets presented in this article are not readily available due to the Office of Naval Research’s policies regarding controlled unclassified information (CUI). Requests to access the datasets should be directed to Mike Twardowski.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. NASA. Hyperspectral Imager for the Coastal Ocean (HICO) User Data Collection and Dissemination Policy; Johnson Space Center: Houston, TX, USA, 2013.
  2. NASA. Pre-Aerosol, Clouds, and Ocean Ecosystem (PACE) Mission Science Definition Team Report; Johnson Space Center: Houston, TX, USA, 2012.
  3. Gregorio, A.; Alimenti, F. CubeSats for Future Science and Internet of Space: Challenges and Opportunities. In Proceedings of the 2018 25th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Bordeaux, France, 9–12 December 2018; pp. 169–172. [Google Scholar] [CrossRef]
  4. Martín, G.; Bioucas-Dias, J.M. Hyperspectral Blind Reconstruction From Random Spectral Projections. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2390–2399. [Google Scholar] [CrossRef]
  5. Zhuang, L.; Bioucas-Dias, J.M. Hy-Demosaicing: Hyperspectral Blind Reconstruction from Spectral Subsampling. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4015–4018. [Google Scholar] [CrossRef]
  6. Hinojosa, C.; Bacca, J.; Arguello, H. Coded Aperture Design for Compressive Spectral Subspace Clustering. IEEE J. Sel. Top. Signal Process. 2018, 12, 1589–1600. [Google Scholar] [CrossRef]
  7. Bacca, J.; Correa, C.V.; Arguello, H. Noniterative Hyperspectral Image Reconstruction From Compressive Fused Measurements. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1231–1239. [Google Scholar] [CrossRef]
  8. Fowler, J.E. Compressive-Projection Principal Component Analysis. IEEE Trans. Image Process. 2009, 18, 2230–2242. [Google Scholar] [CrossRef] [PubMed]
  9. Busuioceanu, M.; Messinger, D.W.; Greer, J.B.; Flake, J.C. Evaluation of the CASSI-DD hyperspectral compressive sensing imaging system. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIX; Shen, S.S., Lewis, P.E., Eds.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2013; Volume 8743, p. 87431V. [Google Scholar] [CrossRef]
  10. Greer, J.B.; Flake, J.C. Accurate reconstruction of hyperspectral images from compressive sensing measurements. In Compressive Sensing II; Ahmad, F., Ed.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2013; Volume 8717, p. 87170E. [Google Scholar] [CrossRef]
  11. Ouyang, B.; Twardowski, M.; Li, Y.; Dalgleish, F. Investigation of a compressive line sensing hyperspectral imaging sensor. In Unconventional Optical Imaging; Fournier, C., Georges, M.P., Popescu, G., Eds.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2018; Volume 10677, pp. 774–783. [Google Scholar] [CrossRef]
  12. Dudley, D.; Duncan, W.M.; Slaughter, J. Emerging digital micromirror device (DMD) applications. In MOEMS Display and Imaging Systems; Urey, H., Ed.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2003; Volume 4985, pp. 14–25. [Google Scholar] [CrossRef]
  13. Lopez, O.; Ouyang, B.; Lawrence, T.; Ernce, A.; Gong, S.; Gilly, N.; Malkiel, E.; Twardowski, M. Frugal hyperspectral imaging via low rank tensor reconstruction. In Ocean Sensing and Monitoring XIV; Hou, W.W., Mullen, L.J., Eds.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2022; Volume 12118, p. 121180H. [Google Scholar] [CrossRef]
  14. Foucart, S.; Rauhut, H. A Mathematical Introduction to Compressive Sensing, 1st ed.; Birkhauser: New York, NY, USA, 2013. [Google Scholar]
  15. Willett, R.M.; Gehm, M.E.; Brady, D.J. Multiscale reconstruction for computational spectral imaging. In Computational Imaging V; Bouman, C.A., Miller, E.L., Pollak, I., Eds.; International Society for Optics and Photonics, SPIE: Baltimore, MD, USA, 2007; Volume 6498, p. 64980L. [Google Scholar] [CrossRef]
  16. Fowler, J.E. Compressive-Projection Principal Component Analysis for the Compression of Hyperspectral Signatures. In Proceedings of the Data Compression Conference (DCC 2008), Snowbird, UT, USA, 25–27 March 2008; pp. 83–92. [Google Scholar] [CrossRef]
  17. Qu, X.; Zhao, J.; Tian, H.; Zhu, J.; Cui, G. Compressive hyperspectral imaging based on Images Structure Similarity and deep image prior. Opt. Commun. 2024, 552, 130095. [Google Scholar] [CrossRef]
  18. Takizawa, S.; Hiramatsu, K.; Lindley, M.; de Pablo, J.G.; Ono, S.; Goda, K. High-speed hyperspectral imaging enabled by compressed sensing in time domain. Adv. Photonics Nexus 2023, 2, 026008. [Google Scholar] [CrossRef]
  19. Chen, Y.; Huang, T.Z.; He, W.; Yokoya, N.; Zhao, X.L. Hyperspectral Image Compressive Sensing Reconstruction Using Subspace-Based Nonlocal Tensor Ring Decomposition. IEEE Trans. Image Process. 2020, 29, 6813–6828. [Google Scholar] [CrossRef]
  20. Charsley, J.M.; Rutkauskas, M.; Altmann, Y.; Risdonne, V.; Botticelli, M.; Smith, M.J.; Young, C.R.T.; Reid, D.T. Compressive hyperspectral imaging in the molecular fingerprint band. Opt. Express 2022, 30, 17340–17350. [Google Scholar] [CrossRef]
  21. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.W. Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction. Remote Sens. 2019, 11, 193. [Google Scholar] [CrossRef]
  22. Yang, S.; Wang, M.; Li, P.; Jin, L.; Wu, B.; Jiao, L. Compressive Hyperspectral Imaging via Sparse Tensor and Nonlinear Compressed Sensing. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5943–5957. [Google Scholar] [CrossRef]
  23. Wang, Y.; Lin, L.; Zhao, Q.; Yue, T.; Meng, D.; Leung, Y. Compressive Sensing of Hyperspectral Images via Joint Tensor Tucker Decomposition and Weighted Total Variation Regularization. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2457–2461. [Google Scholar] [CrossRef]
  24. Hsu, C.C.; Jian, C.Y.; Tu, E.S.; Lee, C.M.; Chen, G.L. Real-Time Compressed Sensing for Joint Hyperspectral Image Transmission and Restoration for CubeSat. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  25. Xiao, H.; Wang, Z.; Cui, X. Distributed Compressed Sensing of Hyperspectral Images According to Spectral Library Matching. IEEE Access 2021, 9, 112994–113006. [Google Scholar] [CrossRef]
  26. He, W.; Yokoya, N.; Yuan, X. Fast Hyperspectral Image Recovery of Dual-Camera Compressive Hyperspectral Imaging via Non-Iterative Subspace-Based Fusion. IEEE Trans. Image Process. 2021, 30, 7170–7183. [Google Scholar] [CrossRef] [PubMed]
  27. Heiser, Y.; Stern, A. Learned Design of a Compressive Hyperspectral Imager for Remote Sensing by a Physics-Constrained Autoencoder. Remote Sens. 2022, 14, 3766. [Google Scholar] [CrossRef]
  28. Saragadam, V.; Sankaranarayanan, A.C. KRISM—Krylov Subspace-based Optical Computing of Hyperspectral Images. ACM Trans. Graph. 2019, 38, 148. [Google Scholar] [CrossRef]
  29. Zhuang, L.; Ng, M.K.; Fu, X.; Bioucas-Dias, J.M. Hy-Demosaicing: Hyperspectral Blind Reconstruction From Spectral Subsampling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5515815. [Google Scholar] [CrossRef]
  30. Xu, Y.; Lu, L.; Saragadam, V.; Kelly, K.F. A compressive hyperspectral video imaging system using a single-pixel detector. Nat. Commun. 2024, 15, 1456. [Google Scholar] [CrossRef] [PubMed]
  31. Oiknine, Y.; August, I.; Farber, V.; Gedalin, D.; Stern, A. Compressive Sensing Hyperspectral Imaging by Spectral Multiplexing with Liquid Crystal. J. Imaging 2019, 5, 3. [Google Scholar] [CrossRef]
  32. Justo, J.A.; Orlandić, M. Study of the gOMP Algorithm for Recovery of Compressed Sensed Hyperspectral Images. In Proceedings of the 2022 12th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Rome, Italy, 13–16 September 2022; pp. 1–5. [Google Scholar] [CrossRef]
  33. Justo, J.A.; Lupu, D.; Orlandić, M.; Necoara, I.; Johansen, T.A. A Comparative Study of Compressive Sensing Algorithms for Hyperspectral Imaging Reconstruction. In Proceedings of the 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Nafplio, Greece, 26–29 June 2022; pp. 1–5. [Google Scholar] [CrossRef]
  34. Webler, F.S.; Spitschan, M.; Andersen, M. Towards ‘Fourth Paradigm’ Spectral Sensing. Sensors 2022, 22, 2377. [Google Scholar] [CrossRef]
  35. Hitchcock, F.L. Multiple Invariants and Generalized Rank of a P-Way Matrix or Tensor. J. Math. Phys. 1928, 7, 39–79. [Google Scholar] [CrossRef]
  36. Hitchcock, F.L. The Expression of a Tensor or a Polyadic as a Sum of Products. J. Math. Phys. 1927, 6, 164–189. [Google Scholar] [CrossRef]
  37. Veganzones, M.A.; Cohen, J.E.; Cabral Farias, R.; Chanussot, J.; Comon, P. Nonnegative Tensor CP Decomposition of Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2577–2588. [Google Scholar] [CrossRef]
  38. Xu, Y.; Wu, Z.; Chanussot, J.; Comon, P.; Wei, Z. Nonlocal Coupled Tensor CP Decomposition for Hyperspectral and Multispectral Image Fusion. IEEE Trans. Geosci. Remote Sens. 2020, 58, 348–362. [Google Scholar] [CrossRef]
  39. Fang, L.; He, N.; Lin, H. CP tensor-based compression of hyperspectral images. J. Opt. Soc. Am. A 2017, 34, 252–258. [Google Scholar] [CrossRef] [PubMed]
  40. Jouni, M.; Dalla Mura, M.; Comon, P. Classification of Hyperspectral Images as Tensors Using Nonnegative CP Decomposition. In Mathematical Morphology and Its Applications to Signal and Image Processing; Burgeth, B., Kleefeld, A., Naegel, B., Passat, N., Perret, B., Eds.; Springer: Cham, Switzerland, 2019; pp. 189–201. [Google Scholar]
  41. Zhang, Q.; Wang, H.; Plemmons, R.J.; Pauca, V.P. Tensor methods for hyperspectral data analysis: A space object material identification study. J. Opt. Soc. Am. A 2008, 25, 3001–3012. [Google Scholar] [CrossRef] [PubMed]
  42. Imbiriba, T.; Borsoi, R.A.; Bermudez, J.C.M. Low-Rank Tensor Modeling for Hyperspectral Unmixing Accounting for Spectral Variability. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1833–1842. [Google Scholar] [CrossRef]
  43. Ouyang, B.; Twardowski, M.; Caimi, F.; Dalgleish, F.; Gong, C.; Li, Y. Prototyping a compressive line sensing hyperspectral imaging sensor. In Emerging Digital Micromirror Device Based Systems and Applications XI; Douglass, M.R., Ehmke, J., Lee, B.L., Eds.; International Society for Optics and Photonics, SPIE: San Francisco, CA, USA, 2019; Volume 10932, p. 109320U. [Google Scholar] [CrossRef]
  44. Texas Instruments. DLP® NIRscan™ Nano EVM User’s Guide; Texas Instruments: Dallas, TX, USA, 2017; Volume 10, p. 15. [Google Scholar]
  45. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  46. 260R Ruled Diffraction Grating. Available online: https://www.newport.com/p/33067FL01-260R (accessed on 4 June 2024).
  47. Thorlabs Kiralux Monochrome CMOS Camera (CS235MU). Available online: https://www.thorlabs.com/thorproduct.cfm?partnumber=CS235MU (accessed on 4 June 2024).
  48. Karakaya, D.; Ulucan, O.; Turkan, M. Image declipping: Saturation correction in single images. Digit. Signal Process. 2022, 127, 103537. [Google Scholar] [CrossRef]
  49. Yan, L.; Yamaguchi, M.; Noro, N.; Takara, Y.; Ando, F. Effect of the restoration of saturated signals in hyperspectral image analysis and color reproduction. Opt. Rev. 2021, 28, 27–41. [Google Scholar] [CrossRef]
  50. Avron, H.; Kapralov, M.; Musco, C.; Musco, C.; Velingker, A.; Zandieh, A. A Universal Sampling Method for Reconstructing Signals with Simple Fourier Transforms. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Phoenix, AZ, USA, 23–26 June 2019; pp. 1051–1063. [Google Scholar] [CrossRef]
  51. Shaikh, M.A.H.; Hasan, K.A. Efficient storage scheme for n-dimensional sparse array: GCRS/GCCS. In Proceedings of the 2015 International Conference on High Performance Computing & Simulation (HPCS), Amsterdam, The Netherlands, 20–24 July 2015; pp. 137–142. [Google Scholar] [CrossRef]
  52. Sundberg, R. Forward Modeling of Cloud Shadows And the Impact of Cloud Shadows On Remote Sensing Data Products. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1707–1710. [Google Scholar] [CrossRef]
  53. Arguello, H.; Arce, G.R. Colored Coded Aperture Design by Concentration of Measure in Compressive Spectral Imaging. IEEE Trans. Image Process. 2014, 23, 1896–1908. [Google Scholar] [CrossRef] [PubMed]
  54. Arguello, H.; Arce, G.R. Rank Minimization Code Aperture Design for Spectrally Selective Compressive Imaging. IEEE Trans. Image Process. 2013, 22, 941–954. [Google Scholar] [CrossRef] [PubMed]
  55. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  56. Kim, J.; He, Y.; Park, H. Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework. J. Glob. Optim. 2014, 58, 285–319. [Google Scholar] [CrossRef]
  57. Ouyang, B.; Hou, W.W.; Caimi, F.M.; Dalgleish, F.R.; Vuorenkoski, A.K.; Gong, C. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application. J. Appl. Remote Sens. 2017, 11, 032407. [Google Scholar] [CrossRef]
Figure 1. Illustration of a sensing workflow compatible with our methodology. Beginning with the first visit to a survey region, initial encoded HS samples are obtained via spectral grating and modulation. These relatively few encoded measurements are transmitted to a ground station, where computing equipment performs image reconstruction. The obtained HS estimate is used to design a reduced number of sensing patterns and to re-program the light-modulation device.
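The per-pixel encoding this workflow relies on can be sketched numerically. The snippet below is a toy illustration, not the authors' optics: a hypothetical DMD-style modulator applies M binary spectral patterns to each pixel, so each measurement is an inner product between a pattern and the pixel's spectrum. All dimensions and data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: B spectral bands, M << B patterns per pixel.
B, M = 64, 6
rows, cols = 8, 8
cube = rng.random((rows, cols, B))  # stand-in for a true HS scene

# Binary on/off modulation patterns, as a DMD-style modulator would apply.
patterns = rng.integers(0, 2, size=(M, B)).astype(float)

# Each encoded sample is an inner product of a pattern with the pixel
# spectrum: encoded[i, j, m] = <patterns[m], cube[i, j, :]>.
encoded = np.einsum('mb,ijb->ijm', patterns, cube)

print(encoded.shape)  # (8, 8, 6): M samples per pixel instead of B bands
```

Transmitting `encoded` instead of `cube` reduces the data volume by a factor of roughly B/M (about 10× here), which is the saving the workflow exploits.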
Figure 2. Illustration of the prototype used in our field study. (Top-left and middle) Zemax models of the prototype, Cameras 1 and 2, respectively. (Top-right) Benchtop setup for sensitivity analysis. (Bottom) System setup for field tests.
Figure 3. Spectral sensitivity evaluation of the benchtop system using three laser diodes at 405 nm, 520 nm, and 650 nm.
Figure 4. (Top) Bird’s-eye view of survey scenes at the HBOI campus. (Bottom) Image of Scene 1 taken with a commercial RGB camera.
Figure 5. Collected image labeled as HBOI1, with four sample spectral signatures from a grass pixel, the speed limit sign, the green panel, and the blue panel.
Figure 6. Collected image labeled as HBOI2, with four sample spectral signatures from a grass pixel, the speed limit sign, the green panel, and the blue panel. Notice that the green and blue panel signatures were saturated (blown out), i.e., missing information at the peak wavelengths.
Figure 7. Sample codebooks. (Left plot) Initial patterns generated as in Section 2.4.1 for a single pixel of HBOI1. (Right plot) Adaptive codebook generated as in Section 2.4.2 using the reconstructed HBOI1 to generate sampling patterns for HBOI2.
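As a rough illustration of how an adaptive codebook might be derived from target signatures, the sketch below orthonormalizes a set of endmember spectra and uses the resulting basis vectors as sensing patterns, so measurements concentrate energy on the signatures of interest. This is a simplified stand-in for the optimization of Section 2.4.2, not the authors' actual scheme; `adaptive_codebook` and the synthetic signatures are hypothetical.

```python
import numpy as np

def adaptive_codebook(endmembers, m):
    """Hypothetical adaptive codebook: derive m sensing patterns from
    target endmember spectra (one signature per row)."""
    E = np.atleast_2d(endmembers).astype(float)
    # Reduced QR of the stacked spectra; the orthonormal columns span
    # the subspace of the target signatures.
    q, _ = np.linalg.qr(E.T)       # (bands, n_signatures)
    return q[:, :m].T              # (m, bands) sensing patterns

# Synthetic stand-ins for grass/panel spectra (4 signatures, 64 bands).
rng = np.random.default_rng(1)
codebook = adaptive_codebook(rng.random((4, 64)), m=4)
print(codebook.shape)  # (4, 64)
```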
Figure 8. Initial codebook sampling results on HBOI1. (Top) Raw HBOI1 image and (second row) reconstructed estimate of HBOI1 via M = 6 patterns per pixel generated as in Section 2.4.1 (see left plot of Figure 7). (Plots) Comparison of recorded vs. reconstructed spectra of a grass pixel, the speed limit sign, the green panel, and the blue panel.
Figure 9. Close-up of speed limit sign in HBOI data. (Top row) Raw images HBOI1 and HBOI2, left and right, respectively. (Bottom row) Reconstructed images HBOI1 (non-adaptive) and HBOI2 (adaptive), left and right, respectively.
Figure 10. Adaptive sampling results for HBOI2. (Top) Raw HBOI2 image and (second row) reconstructed estimate of HBOI2 via M = 6 adaptive patterns generated as in Section 2.4.2 (see right plot of Figure 7). (Plots) Comparison of recorded vs. reconstructed spectra of a grass pixel, the speed limit sign, the green panel, and the blue panel.
Figure 11. Hardware-based image segmentation results for HBOI2. (Top) Reconstructed HBOI2 pixels of target green, blue, and red panels. (Plots) Comparison of recorded vs. reconstructed spectra of green, blue, and red panels.
Figure 12. Hardware-based image segmentation results on HICO data. (Top) RGB image of true HICO802 data and (middle) reconstructed water pixels, with produced segmentation applied to both images. (Bottom) Comparison of true vs. reconstructed spectrum of a water pixel (1.67% sampling).
Table 1. Comparison of HSCP-ALS with other reconstruction methods. These results were for Cuprite data with 5% sampling and various measurement corruption levels from our noise model.
| Input SNR | CPPCA | 𝓁1-Min | Distributed 𝓁1-Min | HSCP-ALS |
|-----------|-------|--------|--------------------|----------|
| 10 dB | 12.8795 dB | 5.5652 dB | 15.9992 dB | 17.9423 dB |
| 15 dB | 13.6119 dB | 10.3080 dB | 17.8556 dB | 21.8641 dB |
| 20 dB | 13.8958 dB | 12.4858 dB | 18.7570 dB | 23.3095 dB |
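The figures of merit in these tables are reconstruction SNRs. For reference, an output SNR in dB between a ground-truth cube and its estimate can be computed with the generic definition below; variable names are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np

def snr_db(truth, estimate):
    """Output SNR in dB: 10 * log10(signal power / error power)."""
    err = truth - estimate
    return 10.0 * np.log10(np.sum(truth ** 2) / np.sum(err ** 2))

# Toy check: a synthetic cube corrupted by mild Gaussian noise.
rng = np.random.default_rng(2)
x = rng.random((32, 32, 16))
x_hat = x + 0.05 * rng.standard_normal(x.shape)
print(f"{snr_db(x, x_hat):.1f} dB")
```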
Table 2. Comparison of HSCP-ALS with other reconstruction methods. These results were for Cuprite data with 10% sampling and various measurement corruption levels from our noise model.
| Input SNR | CPPCA | 𝓁1-Min | Distributed 𝓁1-Min | HSCP-ALS |
|-----------|-------|--------|--------------------|----------|
| 10 dB | 14.1017 dB | 5.3540 dB | 16.5761 dB | 21.3264 dB |
| 15 dB | 14.6541 dB | 10.2561 dB | 17.8350 dB | 23.6715 dB |
| 20 dB | 14.8048 dB | 12.7200 dB | 18.5365 dB | 24.2014 dB |
Table 3. Comparison of HSCP-ALS with other reconstruction methods. These results were for Cuprite data with 15% sampling and various measurement corruption levels from our noise model.
| Input SNR | CPPCA | 𝓁1-Min | Distributed 𝓁1-Min | HSCP-ALS |
|-----------|-------|--------|--------------------|----------|
| 10 dB | 14.3713 dB | 10.2561 dB | 16.5031 dB | 21.6496 dB |
| 15 dB | 14.9681 dB | 10.1624 dB | 17.6900 dB | 23.9046 dB |
| 20 dB | 15.2057 dB | 12.8459 dB | 18.2990 dB | 24.5190 dB |
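The decoder compared in these tables builds on the CP decomposition. For readers unfamiliar with the machinery, a minimal generic CP-ALS loop for a 3-way array (in the spirit of the alternating least squares scheme surveyed by Kolda and Bader [45]) is sketched below; this is a textbook implementation, not the HSCP-ALS variant developed in the paper, and all names are illustrative.

```python
import numpy as np

def cp_als(X, rank, iters=200, seed=0):
    """Minimal rank-R CP decomposition of a 3-way array via alternating
    least squares. Returns factors A, B, C such that
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.random((n, rank)) for n in (I, J, K))
    # Mode-n unfoldings, consistent with C-order reshapes.
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    # Khatri-Rao product whose row order matches the unfoldings above.
    kr = lambda U, V: (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])
    for _ in range(iters):
        A = X1 @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Example: recover a synthetic rank-2 cube.
rng = np.random.default_rng(3)
X = np.einsum('ir,jr,kr->ijk',
              rng.random((5, 2)), rng.random((6, 2)), rng.random((7, 2)))
A, B, C = cp_als(X, rank=2)
```

Each update solves a linear least-squares problem for one factor while holding the other two fixed, which is what makes the scheme attractive for the large, low-rank HS cubes targeted here.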
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

López, O.; Ernce, A.; Ouyang, B.; Malkiel, E.; Gong, C.; Twardowski, M. Advancements in Remote Compressive Hyperspectral Imaging: Adaptive Sampling with Low-Rank Tensor Image Reconstruction. Electronics 2024, 13, 2698. https://doi.org/10.3390/electronics13142698
