Article

Single-Lens Imaging Spectral Restoration Method Based on Gradient Prior Information Optimization

1 Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China
2 Key Laboratory of Computational Optical Imaging Technology, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China
3 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100049, China
4 School of Optoelectronics, University of Chinese Academy of Sciences, No. 19 (A) Yuquan Road, Shijingshan District, Beijing 100039, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(19), 10632; https://doi.org/10.3390/app131910632
Submission received: 8 August 2023 / Revised: 20 September 2023 / Accepted: 22 September 2023 / Published: 24 September 2023
(This article belongs to the Section Optics and Lasers)

Abstract

Single-lens imaging systems can capture spectral information, but they are affected by aberrations, defocusing, and other factors, leading to spectral data overlap. It is then necessary to perform restoration on overlapped spectral data. The restoration method should not only deblur image information, but also recover spectral information for each wavelength band. In other words, the restoration process needs to handle both spatial and spectral information in parallel while ensuring that neither is distorted. In this study, while considering the characteristics of capturing overlapped information in a single-lens imaging system, a spectrum restoration method based on gradient prior information optimization is proposed. This method is shown to achieve high-quality restoration of both images and spectra. The feasibility of this algorithm is demonstrated through simulation and experimental verification, which show that the restoration quality of spectra using the proposed algorithm is improved compared to that achieved using the MTV algorithm.

1. Introduction

Image spectroscopy is an optical remote sensing technique that integrates imaging and spectral detection. The captured data cubes contain two-dimensional spatial images of objects and one-dimensional spectral radiance (also known as spectral information). This allows the spectral curve of a target to be retrieved from any pixel in the data cube, while also providing spatial images in different wavelength bands [1,2,3,4]. Because it can analyze and identify objects based on both geometric shape and spectral characteristics, image spectroscopy has found widespread application in fields such as the military, exploration, and environmental monitoring. Traditional image spectroscopy techniques suppress aberrations and enhance spectral quality by increasing the number of optical components in a system and using higher-performance detectors. This approach effectively reduces the impact of system aberrations through different combinations of optical elements, improving the quality of spectral information, but it leads to complex optical systems, bulky equipment, and high costs. Moreover, once the number of optical components reaches a certain point, adding more yields diminishing improvements in spectral quality. Therefore, a new approach to enhancing spectral quality is needed. With the development of computer technology, computational power has increased significantly, and further optimization and processing of existing spectral information can improve spectral quality; this approach relies on computational imaging technology [5,6].
Single-lens imaging is a development based on computational imaging techniques. It utilizes an optical system composed of a single-lens element to replace traditional complex optical systems. By employing backend algorithms for processing instead of front-end optical processing, it not only simplifies the complexity of optical systems, but also yields images that meet specific requirements. The earliest proponents of single-lens imaging were Schuler et al., who, in 2011, created a single-lens camera containing only one lens element [7]. They introduced an alternating algorithm for removing mosaic artifacts and image blurriness using images captured by the camera to validate the effectiveness of their algorithm in mitigating optical aberrations and blurriness. Building upon Schuler’s work, Heide et al. developed specialized image restoration algorithms tailored for single-lens imaging [8]. Their algorithms suppressed optical aberrations, reducing the complexity, weight, and cost of their front-end optical system. They introduced the cross-channel prior deconvolution algorithm and found that the edge information of objects in the R, G, and B channels shared similar positions. They used the information from one channel as prior knowledge to deconvolve blurry images from the other two channels, significantly improving the quality of the final restored images. Li Weili et al. utilized the front lens element of a Canon EF 50 mm F1.8 II lens to construct a single lens and adapted it for use with a Canon 5D Mark II camera [9]. They developed a blind deconvolution image restoration algorithm based on maximum a posteriori probability for blurry images obtained with their single-lens imaging system. They introduced new priors related to the structure of the blur kernel and smooth color transitions in the images, enhancing the accuracy of point spread function (PSF) estimation and, consequently, improving the quality of their final image restoration [10,11].
Utilizing a single-lens imaging system in conjunction with algorithms allows clear images to be obtained. However, two-dimensional image data cannot fully represent certain physical properties of objects. Spectral information can compensate for this limitation. Scholars from various parts of the world have conducted research on whether it is possible to use the straightforward optical structure of single-lens imaging systems in combination with algorithms to acquire spectral information.
In 1995, Lyons, in the United States, proposed a new structure for an imaging spectrometer [12]. This structure primarily utilizes the dispersive properties of a binary optical element (BOE), enabling spectral imaging in the visible and near-infrared wavelength ranges. The BOE images different wavelengths at different positions, and a charge-coupled device (CCD) scans along the optical axis to obtain image information in the desired spectral bands. A monochromatic CCD is used in this setup. The image received by the CCD consists of an accurately focused image and overlapping images formed by other wavelengths at different defocusing positions. Post-image processing using computed tomography techniques eliminates unwanted blur components, leaving only the images corresponding to each wavelength. Yubin et al. designed a visual imaging spectrometer experimental setup that utilizes the axial dispersion of binary optical elements [13]. Its spectral range is from 500 nm to 900 nm, with a spectral resolution of 6.4 nm @ 632.8 nm. The system has an F-number of F/8, a field of view angle of 1.3 degrees, a CCD pixel size of 15 × 15 μm, and a pixel count of 512 × 512. The authors used a three-dimensional optical slice microscopy technique for spectral restoration and proposed three deconvolution algorithms suitable for imaging spectrometers with binary optical elements: the nearest-neighbor method, inverse filtering, and constrained iterative deconvolution [14]. Oğuzhan Fatih Kar et al. introduced a simple and fast computational imaging spectrometer system using a single programmable diffractive lens. They also proposed a rapid spectral restoration algorithm based on the alternating direction method of multipliers for effectively restoring spectral information under different signal-to-noise ratios [15].
Image deblurring is the fundamental element of spectral information recovery based on single-lens imaging. In an imaging system, the process of image formation can be described as the convolution of an ideal image with a blur kernel. Therefore, deconvolution, as the inverse process of convolution, theoretically allows for the restoration of a clear image from a blurred one. For deconvolution-based deblurring algorithms, there is close integration of physical considerations and mathematical principles, so that image quality can be improved without altering the physical design of the system or the imaging environment.
Deblurring algorithms can be categorized into non-blind and blind methods based on prior information about the blur kernel. Non-blind methods usually do not consider estimation of the blur kernel, which is typically obtained through direct measurement or simple approximation [16]. With further research, regularization techniques have been introduced to deblur images. One example of this is the total variation (TV) regularization term [17], which is designed on the basis of an understanding of image gradient. TV regularization emphasizes the gradient information of an image: when the regularization weight coefficient is large, it achieves better results in recovering texture details, while a smaller weight produces smoother results. Therefore, TV regularization combines both denoising and texture preservation characteristics. Some researchers, such as Chen et al., have used channel correlation properties in multispectral images to guide spectral information recovery. They obtained guiding images for each blurry image, computed their gradients, and used this as prior information for spectral recovery, resulting in high-quality spectral restoration [18].
The major difference between blind and non-blind methods lies in the blind estimation of blur kernels. Estimating blur kernels is a critical issue in image restoration algorithms, as the accuracy of blur kernel estimation determines the quality of image restoration. Blind deblurring is more difficult because the blur kernel itself is unknown. A representative approach is the variational Bayesian multi-scale blind deconvolution method proposed by Fergus and his team. Initially, they used Bayesian methods to iteratively estimate blur kernels based on a maximum a posteriori (MAP) [19,20] model. This iterative process, going from coarse to fine, takes place within a spatial pyramid scale space. Subsequently, the authors reconstructed a clear image using the Richardson–Lucy (RL) method [21,22]. However, due to limitations of the standard RL deconvolution algorithm in suppressing ringing artifacts, the image they achieved exhibits noticeable ringing effects, as seen in their paper. Levin and his team proposed an improved variant of blind image restoration based on the effective edge similarity of an image, building upon the method of Fergus and others [23]. This method also operates within the maximum a posteriori (MAP) solving framework. Its primary contribution lies in the processes of updating and estimating a blur kernel. It not only takes into account the influence of the blurred image itself on the estimation of a blur kernel, but also considers the impact of the covariance of potential clear images on the estimation process. Q. Shan and his colleagues proposed a unified probabilistic model for both blind and non-blind deconvolution, addressing the respective maximum a posteriori (MAP) problems through advanced iterative optimization. This optimization process alternates between refining a blur kernel and restoring an image until convergence to a global optimum is achieved.
The algorithm can be initialized with a rough estimate of the blur kernel and ultimately yields results that preserve complex image structures while avoiding ringing artifacts [24]. Krishna proposed an algorithm that uses the ratio of L1 and L2 norms as a regularization constraint [25], allowing a blurry image to gradually become clear. The specific computational process involves initially placing this constraint on the loss function. Then, it alternates between estimating a clear image and the blur kernel. Ultimately, a more accurate blur kernel is estimated. After obtaining a blur kernel, the author utilized a super-Laplacian prior model for non-blind image deconvolution, resulting in the final restoration of a clear image.
In a previous study by our group, Baiyang compared non-blind deblurring using TV regularization with blind deblurring based on the MAP framework [26]. In the visible wavelength range, the TV-regularized method, constrained by gradient information, outperformed the MAP method: it suppressed ringing artifacts more effectively, preserved texture details better, and computed faster.
As the single-lens-based spectral acquisition device in this study only contains a single-lens element, the system’s point spread function (PSF) can be directly measured, making it more suitable for non-blind methods. Building on the previous research and achievements of our research group, the TV regularization term was selected for constrained solving, and this study mainly focuses on optimizing and improving the TV regularization term. The proposed algorithm is based on gradient prior information optimization and enhances similarity to the original image, ultimately improving the quality of spectral restoration. In comparison to unoptimized gradient prior information, the proposed algorithm stands out in terms of restoration quality. The feasibility of this algorithm was validated using publicly available remote sensing datasets and actual captured images.

2. System Model

The single-lens imaging system in this study utilizes the axial chromatic dispersion of a single lens, and a spectral imaging model is established on the basis of this characteristic, as shown in Figure 1. A coordinate system (xy) is established at the lens position, and another coordinate system (xjyj) is established at the imaging position, with the optical axis along the z-axis.
Moving the detector along the optical axis, data are collected at specific positions, and include image information from different wavelengths, known as a mixed spectrum. Assuming that data are collected at the focal position corresponding to wavelength λ1, the collected mixed spectrum can be represented as follows:
$$g_1(x,y) = h_1(x,y,z_1) \ast f(x,y,\lambda_1) + h_2(x,y,z_1) \ast f(x,y,\lambda_2) + \cdots + h_j(x,y,z_1) \ast f(x,y,\lambda_j)$$
where g1(x,y) represents the collected mixed spectrum information, hj(x,y,z1) represents the point spread function (PSF) of wavelength λj at the collection position z1, f(x,y,λj) represents the original image at wavelength λj, and “∗” denotes the convolution operation.
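As a numerical illustration of this forward model, the mixed spectrum can be simulated as a sum of per-band convolutions. This is a minimal NumPy sketch under assumed Gaussian defocus PSFs (the paper uses measured PSFs; the kernel shapes and band count here are illustrative only):

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Gaussian PSF sampled on the full image grid, normalized to unit energy."""
    ys = np.arange(shape[0]) - shape[0] // 2
    xs = np.arange(shape[1]) - shape[1] // 2
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def conv2_circ(img, psf):
    """Circular 2-D convolution via the FFT; the PSF is centered on the grid."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def mixed_spectrum(bands, sigmas):
    """g_1 = sum_j h_j * f_j: each band blurred by its defocus PSF, then summed."""
    g = np.zeros_like(bands[0])
    for f, s in zip(bands, sigmas):
        g = g + conv2_circ(f, gaussian_psf(f.shape, s))
    return g
```

Because each PSF integrates to one, the total flux of the mixed measurement equals the sum of the band fluxes, which is a useful sanity check on a simulated forward model.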

3. Methods

Image restoration is a deconvolution process. If the point spread function (PSF) of an imaging system is known, an original clear image can be obtained by deconvolution of a blurred image with the PSF [27]. In practical situations, the influence of noise also needs to be considered. The entire process is shown in Figure 2.
In the spatial domain, it can be represented as:
$$g(x,y) = h(x,y) \ast f(x,y) + n(x,y) = \iint f(u,v)\,h(x-u,\,y-v)\,du\,dv + n(x,y)$$
In the above equation, g(x,y) represents the blurred image, h(x,y) represents the blur kernel, f(x,y) represents the original image, n(x,y) represents noise, and “*” denotes convolution.
Since digital images are discrete, the above model can be represented as:
$$g(x,y) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m,n)\,h(x-m,\,y-n) + n(x,y)$$
If there is no noise, n(x,y) = 0.
The convolution operation in the spatial domain can be transformed into a multiplication operation in the frequency domain; thus:
$$G(u,v) = H(u,v)\,F(u,v) + N(u,v)$$
In the above equation, G(u,v), H(u,v), F(u,v), and N(u,v) correspond to the Fourier transforms of g(x,y), h(x,y), f(x,y), and n(x,y), respectively. When a blurred image and noise are acquired, and prior information about the blur kernel is known, restoration of the image can be achieved through deconvolution, with the aim of recovering an image that closely resembles the original.
Based on the principle of spectrum acquisition using a single-lens imaging system, the spectral information captured using CCD includes both in-focus and out-of-focus images. In other words, the imaging process involves convolving in-focus spectral images with the in-focus PSF and adding them to the convolution of out-of-focus spectral images with the out-of-focus PSF. This imaging process can be challenging to solve as it involves a large number of two-dimensional convolutions. By analyzing the frequency domain, the computational complexity of the convolution can be reduced. If we do not consider the influence of noise, it can be expressed as follows:
$$G(u,v) = H(u,v)\,F(u,v)$$
$$F(u,v) = \frac{G(u,v)}{H(u,v)}$$
F(u,v) represents the Fourier transform of the original image, H(u,v) represents the Fourier transform of the point spread function (PSF), and G(u,v) represents the Fourier transform of the mixed image obtained by CCD acquisition.
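The division F = G/H can be sketched directly with NumPy FFTs. This is a minimal noise-free illustration (the clamping threshold `eps` is an assumption for numerical safety, not part of the paper's method); it also shows why the naive inverse filter fails in practice, since any noise term N/H is amplified at frequencies where |H| is small:

```python
import numpy as np

def centered_gaussian(shape, sigma):
    """Unit-energy Gaussian kernel centered on the image grid."""
    ys = np.arange(shape[0]) - shape[0] // 2
    xs = np.arange(shape[1]) - shape[1] // 2
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def inverse_filter(g, psf, eps=1e-6):
    """Naive frequency-domain deconvolution F = G / H.
    A small floor on |H| avoids division by near-zero frequencies; without
    noise this recovers f almost exactly, but noise is amplified by 1/|H|."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    H = np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(G / H))
```

In the noise-free, circular-convolution case the original image is recovered to machine precision, which is exactly why regularization (Section 3) is only needed once noise and ill-conditioning enter.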
Assuming there are N spectral segments, the image obtained from the k-th segment can be represented as:
$$G_k(u,v) = \sum_{i=1}^{N} H_i(u,v)\,F_i(u,v)$$
Blurring in the collected images has two causes: (1) spatial blurring caused by the point spread function (PSF) and (2) defocus contributions from neighboring spectral segments. In practice, as the wavelength difference from the k-th spectral segment increases, the impact of defocusing decreases and can even be neglected; the dominant contribution comes from adjacent spectral segments. Both spatial blurring and spectral defocusing occur simultaneously.
The key to image restoration lies in the inversion of matrix H. However, H usually exhibits ill conditioning. Here, the concept of condition number is introduced, which measures the uncertainty of solution x with respect to b, or the sensitivity of the error in equation Ax = b. This is expressed as:
$$\mathrm{cond}(A) = \|A\|\,\|A^{-1}\|$$
If a small perturbation in matrix A causes only a small perturbation in solution vector x, then matrix A is said to be well conditioned. If it causes a large perturbation in x, it is considered to be ill conditioned. It is evident that even a tiny perturbation in matrix H can have a significant impact on the restored image, making matrix H ill conditioned. As a result, it is not directly invertible. In the field of mathematics, inversion processes and deconvolution are both considered inverse problems, which are generally ill posed. In other words, a slight perturbation can lead to a severe deviation in the final solution. To address this issue, more prior information is needed for constraints to be imposed, and appropriate solution methods must be chosen to obtain stable approximate solutions. This approach is known as regularization.
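The sensitivity described above is easy to demonstrate on a small system (a generic textbook example, not the matrix H of the imaging system):

```python
import numpy as np

# An ill-conditioned 2x2 system: cond(A) = ||A|| * ||A^-1|| is large,
# so x = A^-1 b is extremely sensitive to perturbations in b.
A_bad = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])

b = np.array([2.0, 2.0001])
x = np.linalg.solve(A_bad, b)                               # solution [1, 1]
x_pert = np.linalg.solve(A_bad, b + np.array([0.0, 1e-4]))  # solution [0, 2]
# A relative change of ~3.5e-5 in b changed x by order 1.
```

Here `np.linalg.cond(A_bad)` is on the order of 10⁴, and a perturbation of 10⁻⁴ in one entry of b moves the solution from (1, 1) to (0, 2); this is the behavior that regularization is introduced to suppress.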
In 1992, Rudin et al. proposed the total variation (TV) regularization model, initially applied to image denoising problems and later widely used in various image restoration tasks. The expression is as follows:
$$\mathrm{TV}(f) = \sum_i \sqrt{(D_x f_i)^2 + (D_y f_i)^2}$$
In the above equation, Dx and Dy are first-order gradient operators in the x and y directions, respectively, and i represents the pixel position.
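The isotropic TV functional above can be sketched in a few lines (a minimal sketch using forward differences with replicated borders, one of several common boundary conventions):

```python
import numpy as np

def tv(f):
    """Isotropic TV: sum over pixels i of sqrt((Dx f_i)^2 + (Dy f_i)^2),
    with forward differences and replicated edges (zero gradient at borders)."""
    dx = np.diff(f, axis=1, append=f[:, -1:])
    dy = np.diff(f, axis=0, append=f[-1:, :])
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2)))
```

A constant image has zero TV, while a unit step contributes one unit per row it crosses, which is why TV penalizes noise but tolerates sharp edges.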
MTV (multi-task total variation) is a multi-channel model that calculates gradients for each pixel individually, expressed as [28]:
$$\mathrm{MTV}(f) = \sum_i \sum_k \sqrt{(D_x f_{i,k})^2 + (D_y f_{i,k})^2}$$
where k represents the spectral segment. Using MTV, we can restore the aliased spectral imaging of a single-lens imaging system, which can be expressed as:
$$\min_f\; \frac{1}{2}\|Hf - G\|_2^2 + a\,\mathrm{MTV}(f)$$
where a represents the regularization coefficient, which balances the weights between the first and second terms. To solve the above equation, the alternating direction method of multiplier (ADMM) [29,30,31] can be applied, which usually solves the optimization problem, as follows:
$$\min_{x,z}\; f(x) + g(z) \quad \mathrm{s.t.}\quad Ax + Bz = c$$
where x ∈ R^n and z ∈ R^m are the two variables to be optimized, and A ∈ R^(p×n), B ∈ R^(p×m), and c ∈ R^p.
If functions f(x) and g(z) are convex in the solution of the above expression, variables x and z can be separated, which means that the optimization problem can be decomposed into two separate optimization problems for the two variables. The optimization process involves alternating the optimization of these variables until the optimal solution is obtained. The augmented Lagrangian function for the objective function above is formulated by introducing a quadratic penalty term and is given as follows:
$$L_\mu(x,z,y) = f(x) + g(z) + y^{T}(Ax + Bz - c) + \frac{\mu}{2}\|Ax + Bz - c\|_2^2$$
where μ represents the penalty parameter, which takes a positive value, and y is the dual variable (the Lagrange multiplier). The alternating optimization proceeds by optimizing x and z in turn and then updating y; it can be expressed as follows:
$$x^{k+1} = \arg\min_x L_\mu(x, z^k, y^k)$$
$$z^{k+1} = \arg\min_z L_\mu(x^{k+1}, z, y^k)$$
$$y^{k+1} = y^k + \mu\,(Ax^{k+1} + Bz^{k+1} - c)$$
Scaling y, we define b = y/μ, which results in:
$$L_\mu(x,z,b) = f(x) + g(z) + \frac{\mu}{2}\|Ax + Bz - c + b\|_2^2$$
Therefore, the iterative process can be updated as follows:
$$x^{k+1} = \arg\min_x\; f(x) + \frac{\mu}{2}\|Ax + Bz^k - c + b^k\|_2^2$$
$$z^{k+1} = \arg\min_z\; g(z) + \frac{\mu}{2}\|Ax^{k+1} + Bz - c + b^k\|_2^2$$
$$b^{k+1} = b^k + Ax^{k+1} + Bz^{k+1} - c$$
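The scaled iteration structure above can be illustrated on a toy problem where both sub-problems have closed-form solutions: ℓ1-regularized denoising, min 0.5‖x − v‖² + λ‖z‖₁ subject to x = z (i.e., A = I, B = −I, c = 0). This is only a sketch of the iteration pattern, not the paper's deblurring model:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(v, lam, mu=1.0, iters=200):
    """Scaled-form ADMM for min_{x,z} 0.5||x - v||^2 + lam*||z||_1, s.t. x = z.
    b is the scaled dual variable y/mu; each update is closed-form."""
    x = np.zeros_like(v)
    z = np.zeros_like(v)
    b = np.zeros_like(v)
    for _ in range(iters):
        x = (v + mu * (z - b)) / (1.0 + mu)  # quadratic x-update
        z = soft(x + b, lam / mu)            # prox of the l1 term (z-update)
        b = b + x - z                        # scaled dual ascent
    return x
```

For this problem the known optimum is the soft-thresholded input, so convergence of the alternating scheme can be checked directly.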
The above reconstruction model only provides prior information for the spatial dimension and does not consider prior information for the spectral dimension. Therefore, the key to the restoration algorithm is how to introduce effective spectral prior information. The total variation (TV) model mainly deals with gradient information, so spectral gradient information is introduced as prior information. In terms of spatial gradient information, regions with large gradient values represent the edges of an image. However, the total variation constraint is essentially the L1 norm of the image gradient, which may lead to some larger gradient values (such as edge information) not being well preserved, resulting in a certain degree of edge blurring in the reconstructed image.
Therefore, we optimize gradient information by implementing the following improvements:
$$P:\quad D_x' = D_x + \frac{D_x}{D_x + D_{x\_avg}},\qquad D_y' = D_y + \frac{D_y}{D_y + D_{y\_avg}},\qquad D_z' = D_z + \frac{D_z}{D_z + D_{z\_avg}}$$
$$D_{x\_avg} = \frac{1}{M}\sum_{i=1}^{M} D_x(i),\qquad D_{y\_avg} = \frac{1}{N}\sum_{i=1}^{N} D_y(i),\qquad D_{z\_avg} = \frac{1}{S}\sum_{k=1}^{S} D_z(k)$$
where Dz represents the gradient operator for the spectral dimension. Dx_avg, Dy_avg, and Dz_avg represent the average gradients for the spatial and spectral dimensions. M and N denote the spatial size of an image, while S represents the number of spectral bands. By pre-processing and enhancing constraints on the gradient information, the model becomes more closely aligned with the original image’s gradient information.
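One possible numerical reading of this enhancement, along a single dimension, is sketched below. The function and the small stabilizing constant are illustrative assumptions, not the paper's implementation; the point is only that gradients are boosted by a term that depends on their size relative to the mean gradient, so strong edges are reinforced before the L1-type penalty is applied:

```python
import numpy as np

def enhanced_gradient(d):
    """Illustrative sketch of D' = D + D / (D + D_avg) on gradient magnitudes.
    Zero gradients stay zero; larger gradients receive a larger boost, which
    counteracts the tendency of the TV (L1) penalty to shrink edge gradients."""
    d = np.abs(np.asarray(d, dtype=float))
    d_avg = d.mean()
    return d + d / (d + d_avg + 1e-12)  # 1e-12 guards the all-zero case
```

Under this reading, flat regions are untouched while edge gradients move closer to their original (pre-degradation) magnitudes, which matches the stated goal of aligning the model with the original image's gradient distribution.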
The resulting restoration model is as follows:
$$\mathrm{TV}_{3D}(f) = \sum_i \sum_k \sqrt{(D_x f_{i,k})^2 + (D_y f_{i,k})^2 + (D_z f_{i,k})^2}$$
Based on the above, the spectral restoration model can be finally represented as:
$$\min_f\; \frac{1}{2}\|Hf - G\|_2^2 + a\sum_i \|P_i f\|_2$$
When solving with the aforementioned alternating direction method of multipliers (ADMM), the model is non-differentiable. The solution is therefore decomposed into multiple sub-problems by introducing intermediate variables wi = Pi f, i = 1, 2, …, n², transforming the problem into:
$$\min_f\; \frac{1}{2}\|Hf - G\|_2^2 + a\sum_i \|w_i\|_2 \quad \mathrm{s.t.}\quad w_i = P_i f,\; i = 1, 2, \ldots, n^2$$
The construction of the augmented Lagrangian function L(f,wi,μi) for the above function is as follows:
$$L(f, w_i, \mu_i) = \frac{1}{2}\|Hf - G\|_2^2 + \sum_i \left(\|w_i\|_2 - \mu_i^{T}(w_i - P_i f) + \frac{\beta}{2}\|w_i - P_i f\|_2^2\right)$$
Scaling parameter μ, we obtain:
$$L(f, w_i, b_i) = \frac{1}{2}\|Hf - G\|_2^2 + \sum_i \left(\|w_i\|_2 + \frac{\beta}{2}\|w_i - P_i f - b_i\|_2^2\right)$$
In the above equation, bi = μi/β represents the Lagrange multiplier. For each iteration, only one variable is optimized while fixing all other variables, and the iterative process alternates to update each variable to be solved.
To verify the feasibility of the proposed algorithm in this study, the image restoration quality was evaluated using root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) [32].
Let y be the restored image and y′ be the original image, both with size M × N.
RMSE is used to calculate the deviation between the restored image and the original image by first computing the mean squared error (MSE) and then taking its square root. A lower RMSE indicates a better restoration result. The formula for calculating RMSE is as follows:
$$\mathrm{MSE} = \frac{1}{M \times N}\sum_{n=1}^{N}\sum_{m=1}^{M} \left(y_{mn} - y'_{mn}\right)^2,\qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}}$$
PSNR is an objective criterion for evaluating images, where a higher value indicates a better restoration result. The formula for calculating PSNR is:
$$\mathrm{PSNR} = 10\log_{10}\frac{(2^n - 1)^2}{\mathrm{MSE}}$$
SSIM utilizes the structural relationship of images to evaluate their similarity at a deeper level. In practical applications, it applies to the mean and variance information of an image matrix, using the mean to describe luminance information and the variance to describe contrast information. Finally, it uses covariance between the matrices to represent the similarity. The expression for SSIM is:
$$\mathrm{SSIM} = \frac{(2\mu_r\mu_o + A)(2\sigma_{ro} + B)}{(\mu_r^2 + \mu_o^2 + A)(\sigma_r^2 + \sigma_o^2 + B)}$$
In the above equation, A and B are constants related to the pixel range of the image, where A = (0.01L)2 and B = (0.03L)2, with L being the maximum pixel value (e.g., 255 for 8-bit images). μr and μo represent the mean values of the restored image and the original image, respectively. σr and σo represent the standard deviations of the restored image and the original image, respectively, while σro represents the covariance between the restored image and the original image.
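The three metrics can be computed directly from their definitions. The SSIM below is a simplified single-window (global) version of the formula above; library implementations such as scikit-image average many local sliding windows instead, so values differ in general:

```python
import numpy as np

def rmse(y, y0):
    """Root-mean-square error between restored image y and original y0."""
    d = np.asarray(y, float) - np.asarray(y0, float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(y, y0, bits=8):
    """PSNR = 10 log10((2^n - 1)^2 / MSE) for an n-bit image."""
    mse = np.mean((np.asarray(y, float) - np.asarray(y0, float)) ** 2)
    return float(10.0 * np.log10((2 ** bits - 1) ** 2 / mse))

def ssim_global(y, y0, L=255.0):
    """Single-window SSIM with A = (0.01 L)^2 and B = (0.03 L)^2."""
    A, B = (0.01 * L) ** 2, (0.03 * L) ** 2
    y, y0 = np.asarray(y, float), np.asarray(y0, float)
    mu_r, mu_o = y.mean(), y0.mean()
    var_r, var_o = y.var(), y0.var()
    cov = ((y - mu_r) * (y0 - mu_o)).mean()
    return float(((2 * mu_r * mu_o + A) * (2 * cov + B))
                 / ((mu_r ** 2 + mu_o ** 2 + A) * (var_r + var_o + B)))
```

Note that for identical images the SSIM numerator and denominator coincide, giving exactly 1, while RMSE is 0 and PSNR is unbounded.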
For evaluating spectral restoration quality, the spectral correlation coefficient (SCC) and spectral mean square error (SMSE) were used [33,34].
Let u represent the original spectral data and v represent the reconstructed spectral data, with n denoting the data size. Descriptions of the various evaluation methods are as follows:
The formula for calculating SCC is:
$$\mathrm{SCC}(u,v) = \frac{\sum_{i=1}^{n}(u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{n}(u_i - \bar{u})^2 \sum_{i=1}^{n}(v_i - \bar{v})^2}}$$
ū and v̄ represent the mean values of the original spectral data and the reconstructed spectral data, respectively. The SCC (spectral correlation coefficient) takes values between −1 and 1, where a larger SCC value indicates higher spectral similarity.
The formula for calculating SMSE is:
$$\mathrm{SMSE}(u,v) = \frac{1}{n}\sum_{i=1}^{n}(u_i - v_i)^2$$
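Both spectral metrics are one-liners over the spectral vectors (a minimal sketch of the definitions above; SCC is the Pearson correlation of the two spectra):

```python
import numpy as np

def scc(u, v):
    """Spectral correlation coefficient in [-1, 1]; 1 means the spectra
    have identical shape up to a positive gain and offset."""
    du, dv = u - u.mean(), v - v.mean()
    return float(np.sum(du * dv) / np.sqrt(np.sum(du ** 2) * np.sum(dv ** 2)))

def smse(u, v):
    """Spectral mean square error between original and reconstructed spectra."""
    return float(np.mean((u - v) ** 2))
```

Because SCC is invariant to gain and offset, it measures spectral shape fidelity, while SMSE measures absolute radiometric error; the two are complementary, which is why both are reported.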

4. Simulation

The system parameters were set as follows: lens diameter 30 mm, focal length 60 mm @ 587.6 nm, and CCD pixel size 12.4 μm. In this study, the PaviaU dataset [35] was used for simulation, which consists of 103 spectral bands covering a range from 0.430 μm to 0.860 μm. Spectral data from bands 23 to 31 were selected for simulation, and an image size of 250 × 250 was cropped. During the simulation, the point spread function (PSF) was automatically generated using a Gaussian operator, and the blur kernel size was set to 3 × 3. The 23rd band image from the PaviaU dataset was selected as the original image and was degraded to calculate the gradient distribution, as shown in Figure 3a. The degraded image’s gradient was then optimized using the optimization method proposed in this study, as shown in Figure 3b. It can be clearly observed that the optimized gradient distribution is much closer to the original image’s gradient distribution, enabling better image restoration. Figure 4a–i shows the simulation results for different spectral bands; from left to right, the panels show the original image, the degraded image, the MTV-restored image, and the image restored using the algorithm proposed in this study.
A quality evaluation of the three restoration methods is shown in Table 1.
Based on a comparison of the quality evaluation metrics mentioned above, it can be observed that the restoration quality of the algorithm proposed in this study is superior to that of the MTV algorithm. Taking the original image of band 23 as an example, points A and B were selected as feature points, as shown in Figure 5.
A comparison between the spectral quality of restoration using the MTV algorithm and the algorithm proposed in this study is shown in Figure 6. The horizontal axis represents the spectral bands, and the vertical axis represents the normalized intensity values. Figure 6a shows the spectral curve for point A, and Figure 6b shows the spectral curve for point B.
A quality evaluation of the restored spectral data is shown in Table 2.
Based on a comparison of the results, it can be concluded that for both spectral similarity and spectral root-mean-square error, the restoration quality of the algorithm proposed in this study is superior to that of the MTV algorithm for points A and B.

5. Experiment

According to the imaging principle of a single-lens system, the object distance is generally required to be greater than twice the focal length of the system. This way, the obtained image is a reduced real image. In order to reduce the length of the imaging system, in this study, we chose the following lens parameters: diameter of 30 mm, focal length of 60 mm @ 587.6 nm, and N-BK7 material. The detector parameters were 1024 × 1024 pixels with a sensor size of 12.7 mm × 12.7 mm.
The PSF measurement system was set up as shown in Figure 7. In the experiment, a monochromator was used to illuminate a pinhole, and a CCD was used to capture an image of the pinhole. By moving the CCD, the pinhole’s clearest image was obtained, which represents the in-focus PSF for that spectral band. Then, by adjusting the wavelength, PSFs at different degrees of defocus for other spectral bands relative to this band were obtained. Through this method, in-focus and defocused PSFs for different spectral bands were obtained.
As shown in Figure 8a, the setup for the PSF measurement experiment consisted of a pinhole with a diameter of 0.1 mm. The CCD exposure time for all measurements was set to 16.69 ms, as shown in Figure 8b.
In this study, PSFs for the spectral range of 0.520 μm to 0.590 μm were measured with a sampling interval of 0.010 μm, as shown in Figure 9. The experimental setup is shown in Figure 10. A monochromator was used as a single-wavelength light source, and an LED (light-emitting diode) was used as a polychromatic light source. The object distance was set to 42.5 cm. When using the polychromatic light source, the center brightness of the LED was too high, resulting in an overexposed center region and relatively dark image edges. To address this, a mirror was used to reflect the light, making the illumination more uniform. The object captured in the images is the uppercase letter “E”.
Figure 11 shows the experimental restoration results for the spectral range from 0.520 μm to 0.590 μm, where (a) to (h) represent the results for different wavelengths. From left to right are the original image, the captured image, the MTV-restored image, and the image restored using the algorithm proposed in this study.
An evaluation of these results using quality assessment metrics is shown in Table 3.
Taking the original image at a wavelength of 0.520 μm as an example, points A and B were selected, as shown in Figure 12. Spectral restoration was performed on points A and B, and the results are shown in Figure 13.
From a comparison of the restoration results, it can be observed that the spectral restoration quality of the algorithm proposed in this study is superior to that of the MTV algorithm. For point A, the restored spectrum using the algorithm in this study and the MTV-restored spectrum both exhibit the same trend as the original spectrum, but the similarity of the algorithm in this study is higher. For point B, the MTV algorithm’s restored spectrum is completely distorted, while the restored spectrum using the algorithm proposed in this study shows a high similarity to the original spectrum.
An evaluation of the quality of the restored spectral data is shown in Table 4.
Based on Table 4, it can be observed that, in terms of both spectral similarity and spectral root-mean-square error, the restoration quality of the proposed algorithm is higher than that of the MTV algorithm at both points A and B.
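The two per-point spectral metrics can be sketched as follows. Here SCC is taken to be the Pearson correlation between the reference and restored spectra, and SMSE the mean squared spectral error; these are plausible readings of the abbreviations for illustration, since the exact definitions are given elsewhere in the paper:

```python
import numpy as np

def scc(ref, est):
    """Spectral correlation coefficient: Pearson correlation of two spectra."""
    r = np.asarray(ref, float); e = np.asarray(est, float)
    r = r - r.mean(); e = e - e.mean()
    return float(np.sum(r * e) / np.sqrt(np.sum(r ** 2) * np.sum(e ** 2)))

def smse(ref, est):
    """Spectral mean-square error between two spectra."""
    r = np.asarray(ref, float); e = np.asarray(est, float)
    return float(np.mean((r - e) ** 2))

ref_spec = np.array([0.2, 0.4, 0.8, 0.6, 0.3])
good = ref_spec + 0.01   # small uniform offset: SCC stays 1, SMSE is tiny
bad = 1.0 - ref_spec     # inverted spectrum: SCC drops to -1, SMSE is large
```

An SCC near 1 with a small SMSE corresponds to the well-restored point A spectra, while a low SCC corresponds to the distorted MTV result at point B.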

6. Conclusions

Spectral imaging data combine the two-dimensional spatial image information of natural objects with one-dimensional spectral information. They not only provide a visual display of the detailed textures of objects, but also characterize target properties. With increasing demands on spatial and spectral resolution, the complexity, volume, and weight of spectral imaging systems have grown continuously, making installation difficult and raising costs. The application of computational imaging technology to spectral imaging has alleviated these problems to some extent. Single-lens imaging is a computational imaging technology known for its high throughput, low complexity, and low cost, and the key to obtaining spectral information with it lies in the spectral restoration algorithm. In this paper, we presented an in-depth study of spectral imaging restoration for single-lens imaging and proposed a restoration method based on the optimization of gradient prior information. By optimizing gradient information, the quality of the restored spectral images was improved. We evaluated the restored images and spectra against the unoptimized algorithm using RMSE, SSIM, SMSE, and SCC. Both the simulation and experimental results demonstrate the superior quality of the proposed algorithm, confirming its effectiveness and accuracy.
The feasibility of the algorithm presented in this paper has been validated for single-lens imaging systems; whether it applies to multi-lens imaging systems remains unknown. Future research will explore this aspect and further optimize the regularization terms to enhance the method's generality. The proposed single-lens imaging system is intended to be portable and practical, which places strict demands on its volume and data processing speed. There is still room to reduce the computational complexity of the algorithm presented here; however, reducing complexity may degrade restoration quality, so striking a balance between the two is an issue to be addressed in future work.

Author Contributions

Conceptualization, P.H., Z.L., Y.T. and Q.L.; methodology, P.H.; software, P.H. and Y.B.; investigation, Z.L., J.W. and Y.T.; resources, J.W.; data curation, P.H.; writing—original draft preparation, P.H.; writing—review and editing, P.H. and Q.L.; supervision, J.W. and Q.L.; project administration, Z.L., J.W. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDA28050401).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Single-lens spectral imaging acquisition model.
Figure 2. Blurred image model.
Figure 3. Gradient distribution maps: (a) spatial gradient distribution maps of the original image and the degraded image; (b) spatial gradient distribution maps of the original image, degraded image, and the optimized image after restoration.
Figure 4. Comparison of image restoration results.
Figure 5. Spectral feature points of simulation image.
Figure 6. Restored spectral data.
Figure 7. PSF measurement system.
Figure 8. Experimental setup for PSF measurement. (a) Experimental setup; (b) schematic of the pinhole.
Figure 9. PSFs at different positions.
Figure 10. Experimental setup of the imaging system.
Figure 11. Experimental results.
Figure 12. Spectral feature points of experiment image.
Figure 13. Experimental spectral restoration results.
Table 1. Comparison of the quality of restoration methods.

Band  Method  RMSE    PSNR (dB)  SSIM
23    MTV     0.1837  23.7500    0.7796
23    Ours    0.0332  38.6151    0.9394
24    MTV     0.1852  23.6229    0.7786
24    Ours    0.0334  38.5035    0.9407
25    MTV     0.1868  23.4442    0.7766
25    Ours    0.0334  38.4020    0.9394
26    MTV     0.1885  23.2302    0.7766
26    Ours    0.0335  38.2492    0.9400
27    MTV     0.1894  23.1542    0.7771
27    Ours    0.0336  38.1643    0.9397
28    MTV     0.1904  23.1282    0.7768
28    Ours    0.0338  38.1394    0.9411
29    MTV     0.1916  23.0859    0.7782
29    Ours    0.0340  38.1012    0.9412
30    MTV     0.1929  22.9958    0.7769
30    Ours    0.0341  38.0360    0.9421
31    MTV     0.1943  22.8818    0.7803
31    Ours    0.0343  37.9550    0.9425
Table 2. Comparison of spectral quality.

Method  Feature Point  SCC     SMSE
MTV     A              0.7655  0.0169
MTV     B              0.2804  0.0931
Ours    A              0.9748  0.0092
Ours    B              0.8248  0.0350
Table 3. Comparison of the quality evaluation of experimental restoration results.

Band (μm)  Method  RMSE    PSNR (dB)  SSIM
0.520      MTV     0.1888  18.3203    0.4613
0.520      Ours    0.0283  34.8090    0.7284
0.530      MTV     0.1881  18.4488    0.4559
0.530      Ours    0.0282  34.9269    0.7257
0.540      MTV     0.1864  18.6821    0.4531
0.540      Ours    0.0280  35.1422    0.7263
0.550      MTV     0.1859  18.7447    0.4542
0.550      Ours    0.0280  35.1984    0.7263
0.560      MTV     0.1851  18.8726    0.4509
0.560      Ours    0.0279  35.3167    0.7265
0.570      MTV     0.1862  18.7085    0.4536
0.570      Ours    0.0280  35.1676    0.7274
0.580      MTV     0.1866  18.7502    0.4513
0.580      Ours    0.0281  35.2051    0.7217
0.590      MTV     0.1840  18.7523    0.4373
0.590      Ours    0.0277  35.2036    0.7196
Table 4. Comparison of quality evaluation of experimental restored spectra.

Method  Feature Point  SCC     SMSE
MTV     A              0.6432  0.0059
MTV     B              0.0580  0.0060
Ours    A              0.7142  0.0032
Ours    B              0.9851  0.0042