Article

CMOS Fixed Pattern Noise Removal Based on Low Rank Sparse Variational Method

1 Key Laboratory of Adaptive Optics, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
2 School of Optoelectronic Information, University of Electronic Science and Technology, Chengdu 611731, China
3 Astronomical Technology Laboratory, Yunnan Observatory, Chinese Academy of Sciences, Kunming 650216, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(11), 3694; https://doi.org/10.3390/app10113694
Submission received: 27 March 2020 / Revised: 13 May 2020 / Accepted: 18 May 2020 / Published: 27 May 2020
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology Ⅱ)

Abstract: Fixed pattern noise (FPN) has always been an important factor degrading the imaging quality of CMOS image sensors (CIS). However, current scene-based FPN removal methods mostly focus on the image itself and seldom consider the structural information of the FPN, which often leads to unsatisfactory noise removal. This paper presents a scene-based FPN correction method, the low-rank sparse variational method (LRSUTV), which exploits not only the continuity of the image itself but also the structural and statistical characteristics of the stripes. In addition, the low-frequency information of the image is used to adjust some of the parameters adaptively, which simplifies parameter tuning to a certain extent. With the help of this adaptive parameter adjustment strategy, LRSUTV performs well under stripe noise of different intensities and shows high robustness.

1. Introduction

Compared with the CMOS image sensor (CIS), the CCD has high quantum efficiency, high sensitivity, low dark current, good consistency and low noise. In recent years, however, with the development of large-scale integrated circuit technology, the photoelectric characteristics of CIS have improved greatly. In particular, the sCMOS sensor is a composite technology that combines the advantages of CCD and CMOS, and it also offers high quantum efficiency, high sensitivity and low dark current [1,2]. Nevertheless, CIS still lags behind CCD in the consistency of the photoelectric response, mainly because of the pronounced fixed pattern noise (FPN) in CIS relative to the CCD. Even so, CIS is favored by many industries because of its outstanding advantages in acquisition rate and cost. Technically, FPN is mainly caused by the structure of CIS. To reduce the readout noise and improve the signal-to-noise ratio, most CIS use active pixel structures such as the three-transistor (3T), four-transistor (4T) and five-transistor (5T) designs. In each pixel, the electrons generated by the photoelectric response must pass through the pixel amplifier and the column amplifier before finally reaching the ADC and the digital processing unit. Because of the mismatch between pixel amplifiers and column amplifiers, the photoelectric characteristics of pixels and columns are inconsistent, which leads to the appearance of FPN. The cause of FPN is analyzed in detail below, taking the 3T structure in Figure 1 as an example.
Because the threshold voltages of different transistors differ, different outputs are obtained even under identical illumination. In the analysis of Figure 1b, this is due to the inconsistency of the amplification factor $A_p$ among different pixels. The noise caused by this $A_p$ inconsistency is called pixel fixed pattern noise (PFPN), and it appears as snowflake-like patches on the image. In general, PFPN can be suppressed to a great extent by correlated double sampling (CDS). The M4 load transistor, the $A_C$ amplifier and the $A_{out}$ amplifier are output structures shared by every pixel in a column. Because of the mismatch between the bias voltages of the column amplifiers, a noise called column fixed pattern noise (CFPN) is produced, which appears as vertical stripes on the image. In summary, the fixed pattern noise can be regarded as the sum of two parts, FPN = CFPN + PFPN, where FPN is the total fixed pattern noise, PFPN is the pixel fixed pattern noise and CFPN is the column fixed pattern noise. An image with FPN looks similar to Figure 2. To suppress FPN, CDS is generally used, but it can only eliminate the PFPN component; the CFPN caused by the output-stage amplifier $A_{out}$ cannot be effectively eliminated. For a detailed analysis, please refer to the literature [3,4]. After CDS, the output voltage can be described by Formula (1):
$$M = G A_p A_C A_{out}\,\frac{\eta P t}{h \nu} + G A_{out}\,\Delta V_{A_{out}} = G A_p A_C A_{out}\,N + G A_{out}\,\Delta V_{A_{out}} = kN + b \qquad (1)$$
where $G$ is the analog-to-digital conversion coefficient (unit: $DN/V$), $A_p$ is the equivalent comprehensive amplification factor formed by M2, M3 and M4 (unit: $V/e^-$), $A_C$ and $A_{out}$ are the amplification factors of the buffer operational amplifier and the output operational amplifier, respectively, and $\Delta V_{A_{out}}$ is the bias voltage of the output amplifier. $P$ is the incident light power, $t$ is the exposure time, $h$ is the Planck constant, $\nu$ is the frequency of the incident light, $\eta$ is the quantum efficiency of the CIS, and $N$ is the number of photogenerated electrons. $k = G A_p A_C A_{out}$ and $b = G A_{out}\Delta V_{A_{out}}$.
Generally speaking, there are two kinds of noise in CIS images: 1. random noise, which is caused by many factors [5], such as thermal noise, Poisson noise, flicker noise and shot noise; 2. fixed pattern noise, which, as analyzed above, is caused by the mismatch between pixels and remains constant from frame to frame under constant working conditions and within a certain period of time. FPN mainly presents as regular stripes, and since the human eye is very sensitive to regular stripes, the impact of fixed pattern noise is far greater than that of random noise [6]. Random noise is generally handled by multi-frame stacking and averaging, which effectively reduces its fluctuation amplitude. Under stable conditions, however, the fixed pattern noise is almost identical between frames, so its intensity and shape remain unchanged after multi-frame stacking and averaging. The conclusion is that the non-uniformity of CIS is mainly caused by FPN and, after CDS processing, mainly by CFPN.
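To make the noise composition above concrete, the following MATLAB sketch (our own illustration, not the authors' released code; the synthetic image and the noise levels are assumptions) builds a frame according to Y = U + S + N with column-wise FPN and frame-varying random noise, and shows that multi-frame averaging suppresses the random noise but leaves the column stripes untouched.

% Minimal sketch of the noise model: clean image U, column FPN S, random noise N.
U = 128 + 12 * peaks(512);                  % smooth synthetic "clean" image
[m, n] = size(U);
S = repmat(10 * randn(1, n), m, 1);         % CFPN: one Gaussian offset shared by each column
nFrames = 50;
Yavg = zeros(m, n);
for f = 1:nFrames
    N = 5 * randn(m, n);                    % random noise, different in every frame
    Y = U + S + N;                          % observed frame
    Yavg = Yavg + Y / nFrames;              % multi-frame stacking and averaging
end
% Averaging reduces the random noise by roughly 1/sqrt(nFrames),
% while the column stripes S survive the averaging unchanged.
figure; imagesc([Y, Yavg]); colormap gray; axis image off;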

2. Existing Methods

In order to obtain better CIS image quality and reduce the impact of FPN, many scholars have carried out studies in this field in recent years. In summary, FPN removal methods for CIS fall into two categories: 1. calibration-based methods; 2. scene-based methods, as shown in Figure 3.

2.1. Calibration-Based Method

The main representative calibration-based methods are the two-point method [7,8,9], the segmental (piecewise) correction method [10,11], the S-curve method [12,13] and the polynomial fitting method [14].
1. Two-point correction method
The basic idea of the two-point correction method is to assume that the photoelectric response of each pixel is a stable linear relationship, which can be expressed as $M = kN + b$, where $k$ is the slope of the photoelectric response curve, $b$ is the offset and $N$ is the number of incident photons (or the incident light energy). Nonuniformity correction is completed by the following steps: (1) calculate the gain correction coefficient $G$ of each pixel; (2) calculate the offset correction coefficient $O$ of each pixel; (3) apply the formula $M' = MG + O$, which corrects the slope $k$ of each pixel to the common slope $K$ and the offset $b$ to the common offset $B$, as shown in Figure 4. $K$ is the average of all pixel slopes and $B$ is the average of all pixel offsets. (A minimal sketch of this calibration computation is given at the end of this subsection.)
The premise for the two-point method is that the photoelectric response of each pixel is stable and linear. In practice this premise is gradually violated, because $b$ in $M = kN + b$ contains CFPN, as shown in Formula (1). CFPN arises from the bias-voltage mismatch of the output operational amplifier, and this mismatch voltage drifts to a certain extent with the working environment and working time [15,16]. Figure 5 shows how the correction coefficients gradually fail as $b$ drifts. First, after the CIS has been working for 15 min, the correction parameters $G$ and $O$ are calculated immediately. Next, an image is collected and nonuniformity correction is performed with the current $G$ and $O$; a very good correction result is obtained, as shown in Figure 5a. After the CIS has been working for 1 h, an image is collected again and corrected with the original $G$ and $O$. The expected uniform result is no longer obtained, and a significant residual of CFPN remains, as shown in Figure 5b. Similarly, after the CIS has been working for 3 h, the image is collected again and corrected with the original $G$ and $O$; the result is shown in Figure 5c. These experiments show clearly that the calibration-based method achieves very effective correction in the short term, but the original correction parameters gradually become invalid as time goes by. Therefore, in order to maintain a good correction effect, the calibration parameters $G$ and $O$ need to be recalibrated periodically.
2. Segmental Correction Method
Because the linearity of the photoelectric response curve of CIS is not ideal, it shows some nonlinearity, as shown in Figure 6. This can be understood as $k$ and $b$ changing with the energy of the incident light, that is, $k(N)$ and $b(N)$ are functions of the incident energy. Using a single set of correction parameters $G$, $O$ over the whole range therefore reduces the accuracy of the correction. To obtain higher accuracy, piecewise linear fitting can be used: the curve is approximated as a combination of several linear segments, each of which is corrected by the two-point method. As the CIS parameters drift, the piecewise correction method also gradually fails, so the parameters must again be recalibrated regularly.
3. S-curve method and polynomial fitting method
To further improve the accuracy of the correction, the photoelectric response curve can be fitted by an S-shaped nonlinear equation or by a polynomial. This type of correction is computationally intensive and unsuitable for hardware-based real-time nonuniformity correction. It also faces the same CIS parameter drift problem and requires periodic recalibration.
In general, among the calibration-based methods, the two-point method and the piecewise linear method are the most used in engineering applications and are very suitable for real-time correction systems based on field programmable gate arrays (FPGA). However, these methods require periodic recalibration, which is very inconvenient in practice. Moreover, the calibration process is cumbersome and imposes strict requirements on the environment and the light source.
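For reference, the two-point calibration described in this subsection can be written down in a few lines. The MATLAB sketch below is our own illustration with synthetic data (all variable names and illumination levels are assumptions, not the authors' implementation): the gain map G and offset map O are computed from two uniform flat-field levels and then applied to a new frame.

k_true = 1 + 0.02 * randn(256);             % per-pixel slopes k of the response curve
b_true = 2 * randn(256);                    % per-pixel offsets b
M_low  = k_true *  50 + b_true;             % mean flat-field frame at a low uniform level
M_high = k_true * 200 + b_true;             % mean flat-field frame at a high uniform level
K = mean(M_high(:) - M_low(:));             % average response swing over all pixels
G = K ./ (M_high - M_low);                  % gain correction coefficient of each pixel
O = mean(M_low(:)) - G .* M_low;            % offset correction coefficient of each pixel
M      = k_true * 120 + b_true;             % a newly acquired frame
M_corr = M .* G + O;                        % corrected frame: slope -> K, offset -> B
std(M_corr(:))                              % close to zero, i.e., the response is now uniform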

2.2. Scene-Based Method

Because of the operational inconvenience of calibration methods, scene-based noise removal methods, which do not require laboratory calibration, have been developed in recent years. They estimate and remove the FPN from the image itself. Scene-based methods can be grouped into three categories: 1. filter-based methods [17,18,19]; 2. statistics-based methods [20,21]; 3. optimization-based methods [22,23,24,25,26,27,28,29,30].
1. Filter-based methods
Filter-based methods are widely used [17,18,19] because of their simplicity and ease of use. They require the FPN to have a certain periodicity in order to work well. In practice this requirement is difficult to satisfy, and most CIS FPN is aperiodic; without periodicity it is difficult for a filtering method to separate the FPN from the image.
2. Statistical methods
The main idea of statistical methods is to adjust the distribution of the FPN toward a reference distribution. For such methods to be effective, a number of statistical-similarity assumptions [20,21] must hold. In most cases, however, these similarity assumptions are difficult to satisfy, so it is generally difficult to achieve an ideal noise reduction effect.
3. Optimization method
Optimization-based methods have been the most studied in recent years, and they can achieve a good denoising effect [22,23,24,25,26,27,28,29,30]. Essentially, these methods solve an underdetermined equation: most of them, based on the variational framework, add reasonable prior constraints to turn the underdetermined problem into a well-posed one. The low-rank sparse variational method (LRSUTV) proposed in this paper belongs to this category. Some scholars also use low-rank sparse methods to remove stripes, but most of them work only on the image itself and pay little attention to the structure of the stripes. A small number of scholars have considered the structural characteristics of the stripes [31,32,33,34,35], but they neglect the statistical characteristics of the stripes. The stripe removal method used here considers not only the continuity of the image itself but also the structural and statistical characteristics of the stripes. At the same time, an adaptive adjustment of some parameters is introduced, which simplifies parameter tuning to a certain extent. Overall, our approach is a more comprehensive noise reduction approach.

3. Motivation for Presenting this Method

Most scene-based correction methods estimate the clear image directly from the observed image and ignore the structural information of the stripes themselves; yet this structural information is the key to improving the correction quality. This paper proposes an FPN correction method based on the details of both the image and the stripes. From the analysis of the CIS structure, an observed image can be roughly decomposed into three parts (Figure 7U,S,N). Figure 7Y is a granulation image of a quiet region of the Sun taken by the high-resolution imaging terminal of the one-meter infrared solar tower of Yunnan Observatory, Chinese Academy of Sciences.
$$Y = U + S + N \qquad (2)$$
Here $Y$ is the noisy image, $U$ is the clear image to be estimated, $N$ is the combined image of PFPN, dark current, Poisson noise and other random noise, and $S$ is the CFPN. We need to estimate $S$ and $U$ from Equation (2). The equation is underdetermined, with more unknowns than given data, but it can be converted into a well-posed problem by adding reasonable prior constraints. Finding the constraint information, building the model, and solving the equation are the key tasks of this article. By analyzing the noisy image, the following clues can be found:
1. The specific directional structure of CFPN.
The vertical-gradient histogram of the noisy image is very similar to that of the clear image. From Figure 8 (the histogram probability distributions of Figure 7Y,U), it can be seen that the vertical-gradient histograms of the two images nearly coincide. This vertical-gradient similarity can be enforced with the sparse constraint term $\|Y_y - U_y\|_1$.
2. The stripe noise has a low-rank characteristic.
The analysis of the CMOS output structure shows that all pixels in a column share an output amplifier, so the CFPN can be described by a low-rank constraint.
3. Gaussian distribution of CFPN
$\Delta V_{A_{out}}$ follows a random Gaussian distribution [36], so the term $\|S\|_2^2$ can be added to express this random characteristic.
4. Structural similarity before and after noise removal
To maintain the structural similarity before and after noise removal, the $\ell_2$-norm fidelity term $\|Y - U - S\|_2^2$ can be added.
5. Minimum variation of the clear image
According to total variation denoising theory, a clear image has a low total variation [37], so the regularization terms $\alpha_2\|U_x\|_1 + \alpha_3\|U_y\|_1$ can be added.
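Clues 1 and 2 are easy to verify numerically. The short MATLAB sketch below (our own check on a synthetic clean image; in the paper the comparison is made on the solar image of Figure 7) compares the vertical-gradient histograms of Y and U and the rank of a pure column-stripe image.

U = 128 + 12 * peaks(512);                             % synthetic clean image
S = repmat(10 * randn(1, size(U, 2)), size(U, 1), 1);  % column stripes (CFPN)
Y = U + S;                                             % noisy image (random noise omitted)
Uy = diff(U, 1, 1);  Yy = diff(Y, 1, 1);               % vertical gradients U_y and Y_y
figure; histogram(Yy(:), 200); hold on; histogram(Uy(:), 200); legend('Y_y', 'U_y');
% The two histograms coincide, because the stripes are constant along each column (clue 1).
rank(S)                                                % = 1: the stripe image is low rank (clue 2)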

4. Proposed Model

Based on the fidelity term and the prior constraints analyzed above, the constrained optimization problem can be written as follows:
$$E(U,S) = \min_{U,S}\ \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \tau\|S\|_* \quad \text{s.t.}\quad \|U_x\|_1 = 0,\ \ \|U_y\|_1 = 0,\ \ \|Y_y - U_y\|_1 = 0,\ \ \|S\|_2^2 = 0 \qquad (3)$$
where $Y$ is the observed image, $U$ is the clear image and $S$ is the stripe noise. $\|S\|_*$ is the nuclear norm of $S$, which constrains $S$ to be low rank. According to the Lagrange multiplier method, the constrained Equation (3) becomes the unconstrained Equation (4):
$$E(U,S) = \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \alpha_2\|U_x\|_1 + \alpha_3\|U_y\|_1 + \alpha_4\|Y_y - U_y\|_1 + \frac{\gamma}{2}\|S\|_2^2 + \tau\|S\|_* \qquad (4)$$
where $\alpha_1, \alpha_2, \alpha_3, \alpha_4, \gamma, \tau$ are Lagrange multipliers. This is a multivariable convex optimization problem. The commonly used methods for such problems are split Bregman and ADMM; they divide the complex multivariable optimization into more convenient sub-problems. The ADMM algorithm [38] is used in this paper. The optimization proceeds as follows. Three auxiliary variables $H$, $J$, $K$ are introduced, with $H = U_x$, $J = U_y$, $K = Y_y - U_y$, so that Equation (4) is equivalent to Equation (5):
$$E(U,S) = \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \alpha_2\|U_x\|_1 + \alpha_3\|U_y\|_1 + \alpha_4\|Y_y - U_y\|_1 + \frac{\gamma}{2}\|S\|_2^2 + \tau\|S\|_* \quad \text{s.t.}\quad H = U_x,\ \ J = U_y,\ \ K = Y_y - U_y \qquad (5)$$
According to the augmented Lagrange multiplier method, the constrained Equation (5) becomes the unconstrained Equation (6):
$$\begin{aligned} E(U,S,H,J,K) = {} & \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \alpha_2\|H\|_1 + \alpha_3\|J\|_1 + \alpha_4\|K\|_1 + \frac{\gamma}{2}\|S\|_2^2 + \tau\|S\|_* \\ & + \langle R_2,\ H - U_x\rangle + \langle R_3,\ J - U_y\rangle + \langle R_4,\ K - (Y_y - U_y)\rangle \\ & + \frac{\omega_2}{2}\|H - U_x\|_2^2 + \frac{\omega_3}{2}\|J - U_y\|_2^2 + \frac{\omega_4}{2}\|K - (Y_y - U_y)\|_2^2 \end{aligned} \qquad (6)$$
where $\langle A, B\rangle$ denotes the inner product of two variables, and $R_2$, $R_3$, $R_4$ are the Lagrange multipliers associated with the terms $H - U_x$, $J - U_y$ and $K - (Y_y - U_y)$, respectively.
Equation (6) can be changed into Equation (7) after the relevant terms are combined.
$$\begin{aligned} E(U,S,H,J,K) = {} & \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \alpha_2\|H\|_1 + \alpha_3\|J\|_1 + \alpha_4\|K\|_1 + \frac{\gamma}{2}\|S\|_2^2 + \tau\|S\|_* \\ & + \frac{\omega_2}{2}\left\|H - U_x + \frac{R_2}{\omega_2}\right\|_2^2 + \frac{\omega_3}{2}\left\|J - U_y + \frac{R_3}{\omega_3}\right\|_2^2 + \frac{\omega_4}{2}\left\|K - (Y_y - U_y) + \frac{R_4}{\omega_4}\right\|_2^2 \end{aligned} \qquad (7)$$
1. Sub-problem with respect to U
$$\min_U E(U,S,H,J,K) = \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \frac{\omega_2}{2}\left\|H - U_x + \frac{R_2}{\omega_2}\right\|_2^2 + \frac{\omega_3}{2}\left\|J - U_y + \frac{R_3}{\omega_3}\right\|_2^2 + \frac{\omega_4}{2}\left\|K - (Y_y - U_y) + \frac{R_4}{\omega_4}\right\|_2^2$$
2. Setting the derivative with respect to U to zero
$$\frac{\partial E(U,S,H,J,K)}{\partial U} = 0 \;\Rightarrow\; \alpha_1(Y - U - S) - \omega_2\!\left(\frac{\partial H}{\partial x} - \frac{\partial^2 U}{\partial x^2} + \frac{1}{\omega_2}\frac{\partial R_2}{\partial x}\right) - \omega_3\!\left(\frac{\partial J}{\partial y} - \frac{\partial^2 U}{\partial y^2} + \frac{1}{\omega_3}\frac{\partial R_3}{\partial y}\right) + \omega_4\!\left(\frac{\partial K}{\partial y} - \left(\frac{\partial^2 Y}{\partial y^2} - \frac{\partial^2 U}{\partial y^2}\right) + \frac{1}{\omega_4}\frac{\partial R_4}{\partial y}\right) = 0$$
To make full use of the computational speed of the FFT, the formula above is solved in the frequency domain. Taking the Fourier transform of both sides of the equation gives:
$$\mathcal{F}\!\left(\alpha_1 U + \omega_2\frac{\partial^2 U}{\partial x^2} + \omega_3\frac{\partial^2 U}{\partial y^2} + \omega_4\frac{\partial^2 U}{\partial y^2}\right) = \mathcal{F}\!\left(\alpha_1 Y - \alpha_1 S + \omega_2\frac{\partial H}{\partial x} + \frac{\partial R_2}{\partial x} + \omega_3\frac{\partial J}{\partial y} + \frac{\partial R_3}{\partial y} - \omega_4\frac{\partial K}{\partial y} + \omega_4\frac{\partial^2 Y}{\partial y^2} - \frac{\partial R_4}{\partial y}\right), \quad \text{i.e.}\ \ \mathcal{F}(BU) = \mathcal{F}(A)$$
where A and B are
$$A = \alpha_1 Y - \alpha_1 S + \omega_2\frac{\partial H}{\partial x} + \frac{\partial R_2}{\partial x} + \omega_3\frac{\partial J}{\partial y} + \frac{\partial R_3}{\partial y} - \omega_4\frac{\partial K}{\partial y} + \omega_4\frac{\partial^2 Y}{\partial y^2} - \frac{\partial R_4}{\partial y}$$
$$B = \alpha_1 + \omega_2\frac{\partial^2}{\partial x^2} + \omega_3\frac{\partial^2}{\partial y^2} + \omega_4\frac{\partial^2}{\partial y^2}$$
$$\mathcal{F}(U) = \frac{\mathcal{F}(A)}{\mathcal{F}(B)}$$
$$U = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(A)}{\mathcal{F}(B)}\right) \qquad (9)$$
where $\mathcal{F}(A)$ and $\mathcal{F}(B)$ are
$$\mathcal{F}(A) = \alpha_1\mathcal{F}(Y - S) + \omega_2\mathcal{F}\!\left(\tfrac{\partial}{\partial x}\right)\mathcal{F}(H) + \mathcal{F}\!\left(\tfrac{\partial}{\partial x}\right)\mathcal{F}(R_2) + \omega_3\mathcal{F}\!\left(\tfrac{\partial}{\partial y}\right)\mathcal{F}(J) + \mathcal{F}\!\left(\tfrac{\partial}{\partial y}\right)\mathcal{F}(R_3) - \omega_4\mathcal{F}\!\left(\tfrac{\partial}{\partial y}\right)\mathcal{F}(K) + \omega_4\mathcal{F}\!\left(\tfrac{\partial^2}{\partial y^2}\right)\mathcal{F}(Y) - \mathcal{F}\!\left(\tfrac{\partial}{\partial y}\right)\mathcal{F}(R_4)$$
$$\mathcal{F}(B) = \alpha_1 + \omega_2\mathcal{F}\!\left(\tfrac{\partial^2}{\partial x^2}\right) + \omega_3\mathcal{F}\!\left(\tfrac{\partial^2}{\partial y^2}\right) + \omega_4\mathcal{F}\!\left(\tfrac{\partial^2}{\partial y^2}\right)$$
where $\mathcal{F}$ denotes the forward Fourier transform and $\mathcal{F}^{-1}$ the inverse Fourier transform.
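The U update can be implemented directly with two-dimensional FFTs. The MATLAB sketch below is our own minimal implementation under periodic boundary conditions (the function name, the argument order and the construction of the difference operators are assumptions, not the authors' released code); it solves the normal equations of the quadratic sub-problem in U, which is what Equation (9) expresses up to the sign convention of the discrete second derivative.

function U = solve_U(Y, S, H, J, K, R2, R3, R4, a1, w2, w3, w4, Fx, Fy)
% Fx, Fy: transfer functions of the forward-difference operators D_x, D_y under
% periodic boundary conditions, e.g.
%   kx = zeros(size(Y)); kx(1,1) = -1; kx(1,end) = 1; Fx = fft2(kx);
%   ky = zeros(size(Y)); ky(1,1) = -1; ky(end,1) = 1; Fy = fft2(ky);
% Numerator F(A): data term plus the contributions of H, J, K and the multipliers
num = a1 * fft2(Y - S) ...
    + w2 * conj(Fx) .* fft2(H + R2 / w2) ...
    + w3 * conj(Fy) .* fft2(J + R3 / w3) ...
    + w4 * conj(Fy) .* (Fy .* fft2(Y) - fft2(K + R4 / w4));
% Denominator F(B): |Fx|^2 and |Fy|^2 play the role of the second derivatives
den = a1 + w2 * abs(Fx).^2 + (w3 + w4) * abs(Fy).^2;
U = real(ifft2(num ./ den));                % Eq. (9): U = F^-1( F(A) / F(B) )
end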
3. Sub-problem with respect to S
$$\min_S E(U,S,H,J,K) = \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \frac{\gamma}{2}\|S\|_2^2 + \tau\|S\|_* \qquad (10)$$
The extremum of a functional containing a nuclear norm can be obtained in two steps: first, the extremum of the non-nuclear-norm part is solved; then the rank of S is reduced by singular value decomposition. The specific process is as follows:
•  Step 1: solve the extremum of the non-nuclear-norm terms in Formula (10)
$$\min_S E(U,S,H,J,K) = \frac{\alpha_1}{2}\|Y - U - S\|_2^2 + \frac{\gamma}{2}\|S\|_2^2$$
Setting the derivative with respect to S to zero gives
$$\frac{\partial E(U,S,H,J,K)}{\partial S} = -\alpha_1(Y - U - S) + \gamma S = 0 \;\Rightarrow\; S = \frac{\alpha_1(Y - U)}{\alpha_1 + \gamma}$$
•  Step 2: reduce the rank of S by the soft threshold method
$$S = U\,\mathrm{Shrink}(D, \tau)\,V^T \qquad (12)$$
where U is the left singular matrix of S and V is the right singular matrix of S (here U and V denote singular matrices, not the clear image), and D is the diagonal matrix of singular values of S. The diagonal matrix D is rank-reduced by the following formula:
$$\mathrm{Shrink}(D, \tau) = \mathrm{diag}\{[\,D(1{:}n);\ \mathrm{zeros}(n_{max} - n)\,]\} \qquad (13)$$
where $n_{max}$ is the total number of diagonal elements of the diagonal matrix D.
$$M = \sum_{i=1}^{n_{max}} D_{ii}, \qquad N = \sum_{i=1}^{n} D_{ii}, \qquad n = \min\{\, n : N/M \ge \tau \,\}$$
that is, n is the smallest number of leading singular values whose sum N reaches the fraction τ of the total sum M.
Here, τ controls the degree to which the principal components are retained in the SVD decomposition. This is a rank reduction process, whose purpose is to achieve the low-rank property.
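A minimal MATLAB sketch of this two-step S update is given below (our own reading of the procedure; in particular, choosing n from the cumulative singular-value ratio is our interpretation of Formula (13) and the rule for n above, and the function name is an assumption).

function S = solve_S(Y, U, a1, gamma, tau)
S = a1 * (Y - U) / (a1 + gamma);            % closed-form minimiser of the quadratic part
[Us, D, Vs] = svd(S, 'econ');               % singular value decomposition, Eq. (12)
d = diag(D);
ratio = cumsum(d) / sum(d);                 % cumulative singular-value ratio N/M
n = find(ratio >= tau, 1, 'first');         % keep the leading n principal components
d(n+1:end) = 0;                             % zero the remaining singular values, Eq. (13)
S = Us * diag(d) * Vs';                     % low-rank estimate of the stripe image
end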
4. Sub-problem with respect to H
$$E(U,S,H,J,K) = \alpha_2\|H\|_1 + \frac{\omega_2}{2}\left\|H - U_x + \frac{R_2}{\omega_2}\right\|_2^2$$
Setting the derivative with respect to H to zero,
$$\frac{\partial E(U,S,H,J,K)}{\partial H} = 0$$
$$H = \begin{cases} U_x - \dfrac{R_2}{\omega_2} - \dfrac{\alpha_2}{\omega_2}, & \text{if } U_x - \dfrac{R_2}{\omega_2} > \dfrac{\alpha_2}{\omega_2} \\[4pt] 0, & \text{if } \left|U_x - \dfrac{R_2}{\omega_2}\right| \le \dfrac{\alpha_2}{\omega_2} \\[4pt] U_x - \dfrac{R_2}{\omega_2} + \dfrac{\alpha_2}{\omega_2}, & \text{if } U_x - \dfrac{R_2}{\omega_2} < -\dfrac{\alpha_2}{\omega_2} \end{cases} \qquad (14)$$
5. Sub-problem with respect to J
$$E(U,S,H,J,K) = \alpha_3\|J\|_1 + \frac{\omega_3}{2}\left\|J - U_y + \frac{R_3}{\omega_3}\right\|_2^2$$
Setting the derivative with respect to J to zero,
$$\frac{\partial E(U,S,H,J,K)}{\partial J} = 0$$
$$J = \begin{cases} U_y - \dfrac{R_3}{\omega_3} - \dfrac{\alpha_3}{\omega_3}, & \text{if } U_y - \dfrac{R_3}{\omega_3} > \dfrac{\alpha_3}{\omega_3} \\[4pt] 0, & \text{if } \left|U_y - \dfrac{R_3}{\omega_3}\right| \le \dfrac{\alpha_3}{\omega_3} \\[4pt] U_y - \dfrac{R_3}{\omega_3} + \dfrac{\alpha_3}{\omega_3}, & \text{if } U_y - \dfrac{R_3}{\omega_3} < -\dfrac{\alpha_3}{\omega_3} \end{cases} \qquad (16)$$
6. Sub-problem with respect to K
$$E(U,S,H,J,K) = \alpha_4\|K\|_1 + \frac{\omega_4}{2}\left\|K - (Y_y - U_y) + \frac{R_4}{\omega_4}\right\|_2^2$$
Setting the derivative with respect to K to zero,
$$\frac{\partial E(U,S,H,J,K)}{\partial K} = 0$$
$$K = \begin{cases} (Y_y - U_y) - \dfrac{R_4}{\omega_4} - \dfrac{\alpha_4}{\omega_4}, & \text{if } (Y_y - U_y) - \dfrac{R_4}{\omega_4} > \dfrac{\alpha_4}{\omega_4} \\[4pt] 0, & \text{if } \left|(Y_y - U_y) - \dfrac{R_4}{\omega_4}\right| \le \dfrac{\alpha_4}{\omega_4} \\[4pt] (Y_y - U_y) - \dfrac{R_4}{\omega_4} + \dfrac{\alpha_4}{\omega_4}, & \text{if } (Y_y - U_y) - \dfrac{R_4}{\omega_4} < -\dfrac{\alpha_4}{\omega_4} \end{cases} \qquad (18)$$
7. Update the Lagrange multipliers $R_2$, $R_3$, $R_4$ by the dual gradient ascent method
$$R_2 = R_2 + \omega_2(H - U_x) \qquad (19)$$
$$R_3 = R_3 + \omega_3(J - U_y) \qquad (20)$$
$$R_4 = R_4 + \omega_4\big(K - (Y_y - U_y)\big) \qquad (21)$$
where $\omega_2$, $\omega_3$, $\omega_4$ are the step sizes of the gradient ascent iterations.
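Each of the three-case formulas above is simply the soft-threshold (shrinkage) operator applied to the corresponding residual. The MATLAB sketch below (our own helper, with assumed names; Dx and Dy are the same forward-difference operators used in the U update) performs the H, J, K updates of (14), (16), (18) and the multiplier updates of (19)–(21).

function [H, J, K, R2, R3, R4] = update_HJKR(Y, U, R2, R3, R4, a2, a3, a4, w2, w3, w4, Dx, Dy)
shrink = @(V, t) sign(V) .* max(abs(V) - t, 0);    % soft threshold with level t
H = shrink(Dx(U) - R2 / w2, a2 / w2);              % Eq. (14)
J = shrink(Dy(U) - R3 / w3, a3 / w3);              % Eq. (16)
K = shrink(Dy(Y) - Dy(U) - R4 / w4, a4 / w4);      % Eq. (18)
R2 = R2 + w2 * (H - Dx(U));                        % Eq. (19)
R3 = R3 + w3 * (J - Dy(U));                        % Eq. (20)
R4 = R4 + w4 * (K - (Dy(Y) - Dy(U)));              % Eq. (21)
end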
To facilitate computation, the continuous operators must be discretized. The discretization of the partial derivatives is defined as follows: $A_x$ is discretized as $A_{i,j+1} - A_{i,j}$, $A_y$ as $A_{i+1,j} - A_{i,j}$, $\frac{\partial^2 A}{\partial x^2}$ as $A_{i,j+1} + A_{i,j-1} - 2A_{i,j}$, and $\frac{\partial^2 A}{\partial y^2}$ as $A_{i+1,j} + A_{i-1,j} - 2A_{i,j}$. The complete calculation process is shown in Algorithm 1. The MATLAB code used in this article can be obtained from the download link in the Supplementary Materials.
Algorithm 1 Low-rank sparse variational destriping (LRSUTV)
1. Input: image Y with FPN
2. Initialize U = 0, S = 0, $R_2 = 0$, $R_3 = 0$, $R_4 = 0$, H = 0, J = 0, K = 0
3. Initialize the optimization factors $\alpha_1, \alpha_2, \alpha_3, \alpha_4, \gamma, \omega_2, \omega_3, \omega_4, \tau$ and the iteration number N
4. for n = 1:N do
5.  Calculate the optimal solution of U via the Fourier transform by (9)
6.  Calculate the low-rank S by singular value decomposition (SVD) by (12), (13)
7.  Calculate H, J, K through soft thresholds by (14), (16), (18)
8.  Update $R_2$, $R_3$, $R_4$ by the dual gradient ascent method by (19), (20), (21)
9. end for
10. Output: the clear image U and the stripe image S
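The following MATLAB driver (our own sketch, wiring together the helper functions sketched above; the parameter values and the fixed iteration number are assumptions) shows how the steps of Algorithm 1 fit together.

[m, n] = size(Y);                                               % step 1: noisy input Y
kx = zeros(m, n); kx(1,1) = -1; kx(1,end) = 1; Fx = fft2(kx);   % D_x transfer function
ky = zeros(m, n); ky(1,1) = -1; ky(end,1) = 1; Fy = fft2(ky);   % D_y transfer function
Dx = @(A) real(ifft2(Fx .* fft2(A)));                           % apply D_x
Dy = @(A) real(ifft2(Fy .* fft2(A)));                           % apply D_y
U = zeros(m, n); S = U; H = U; J = U; K = U; R2 = U; R3 = U; R4 = U;   % step 2
a1 = 20; a2 = 20; a3 = 20; a4 = 2; gamma = 0.75; tau = 0.5;            % step 3 (a2 may instead be
w2 = 0.1; w3 = 0.1; w4 = 0.1; N = 150;                                 % set adaptively, Section 5.6.1)
for it = 1:N                                                           % step 4
    U = solve_U(Y, S, H, J, K, R2, R3, R4, a1, w2, w3, w4, Fx, Fy);    % step 5, Eq. (9)
    S = solve_S(Y, U, a1, gamma, tau);                                 % step 6, Eqs. (12)-(13)
    [H, J, K, R2, R3, R4] = ...
        update_HJKR(Y, U, R2, R3, R4, a2, a3, a4, w2, w3, w4, Dx, Dy); % steps 7-8
end                                                                    % step 9
% step 10: U is the estimated clear image, S the estimated stripe image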

5. Experimental Results and Discussions

5.1. Experimental Environment

Before the experiments, to facilitate display, we encoded the original images into the gray-scale range [0, 255] and set the standard deviation of the CFPN in the range [0, 20]. For scientific CIS the photo response nonuniformity (PRNU) is typically about 0.5%, and for consumer CMOS about 2%; a noise intensity range of [0, 20] therefore covers the noise levels of both kinds of CIS. To illustrate the effectiveness of the proposed algorithm (LRSUTV), we tested it on both simulated and real data. Six scene-based FPN correction algorithms are selected for the comparison experiments: the wavelet method (WAFT) [17,39,40], unidirectional total variation (UTV) [41,42,43], ASSTV [29], the variational stationary noise remover (VSNR), SILR [28] and the ℓ0 sparse method (ℓ0 sparse) [34], in addition to the method proposed in this work.
To reflect the correction effect comprehensively and objectively, we use several common quality evaluation measures: the mean cross-track curve (the x-axis is the column number of the image and the y-axis is the mean value of each column), PSNR and SSIM. The Lagrange multipliers used in our algorithm are tuned by hand; in order to compare the methods objectively, the parameters of all methods were adjusted to the best of our ability.
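For completeness, the quantities used in the comparison can be computed as in the following MATLAB fragment (a sketch only; the psnr and ssim functions of the Image Processing Toolbox and the variable names U, U0 are assumptions).

crossTrack = mean(U, 1);                         % mean cross-track: mean value of each column
figure; plot(1:numel(crossTrack), crossTrack);   % x: column number, y: column mean
p = psnr(uint8(U), uint8(U0));                   % full-reference PSNR against the clean image U0
s = ssim(uint8(U), uint8(U0));                   % full-reference SSIM against the clean image U0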

5.2. Simulation Experiment

In the simulation experiments, we generated CFPN noise images of different intensities. To reproduce the fact that CFPN follows a zero-mean Gaussian distribution, we set the mean of the CFPN to μ = 0 and its standard deviation to σ = [0:20] and tested the denoising performance of the various algorithms. For the sCMOS cameras we currently use, almost all the CFPN we encounter appears as aperiodic random stripes, and the PRNU is generally around 0.5%. However, some CMOS cameras used in other fields exhibit periodic stripes, so the comparison also simulates periodic noise. We therefore evaluated our method on both aperiodic and periodic noise. For the test images, we chose two pictures: one of a solar active region with a rich structure, and one of the solar photosphere with a relatively simple structure. These two types of images are frequently encountered in solar observation.
  • Aperiodic stripe noise
In the aperiodic stripe simulation, we generated a set of random column noise with mean μ = 0 and standard deviation σ = [0:20], placed at random column positions.
From the test results in Figure 9, the WAFT and ASSTV methods, corresponding to Figure 9d,g, achieve the worst denoising effect, with obvious stripes remaining after denoising. Figure 9e is better than Figure 9d,g and removes almost all stripes except in some areas with wider stripes, where UTV leaves a certain amount of residue. From a visual point of view, the methods corresponding to Figure 9f,h–j achieve the best results: they remove all the stripes completely, and it is difficult to distinguish them visually.

Subjective Qualitative Evaluation

Next, to further distinguish the denoising results of VSNR, SILR, ℓ0 sparse and LRSUTV, we first made a qualitative comparison using the difference image between the original image and each denoising result, together with the mean cross-track curve of each result. We then used the PSNR and SSIM values of the denoising results for a quantitative comparison.
The difference images shown in Figure 10 clearly show the stripe extraction ability of various methods, and whether the various methods damage the original image structure during the stripe extraction process.
It can be seen from Figure 10b that the stripes extracted by WAFT differ considerably from Figure 10a, which indicates that the image after WAFT denoising still contains a lot of residual stripe noise; the same can be observed in Figure 9d. The stripes in Figure 10c,d are similar to those in Figure 10a, but a certain amount of image structure remains in the extracted stripes, which shows that the denoising results of UTV and ASSTV damage the original structure to some degree in those areas. Figure 10e has a high similarity with Figure 10a, but some areas are too bright, which makes the corresponding areas of the VSNR-denoised image darker than the original. Figure 10f,g are very similar to Figure 10a, but there is a shift in overall intensity, which makes the brightness of the denoised image differ from the original. The stripe information extracted by the proposed LRSUTV method is the closest to Figure 10a in both structure and intensity.
The mean cross-track curve is another commonly used image quality measure, from which the overall trend of the image can be observed clearly. We therefore also evaluated the results qualitatively with the mean cross-track curve. It can be seen from Figure 11 that the curves of Figure 11b,d differ significantly from that of Figure 11a, with clear fluctuations caused by incomplete stripe removal; this agrees fully with the visual impression of Figure 9. The curve of Figure 11e deviates obviously from the original curve in some areas; comparison with Figure 10a shows that the CFPN in these areas has a certain width, so it can be inferred that VSNR produces errors when processing wide stripes. The curve of Figure 11c is better overall than those of Figure 11b,d,e, but it is over-smoothed, which means that image details are lost. Figure 11a,g are very similar in appearance.
On careful inspection, the curve of Figure 11g is shifted upward as a whole, which means that the denoised image has an overall brightness shift compared with the original. The curves of Figure 11f,h are very similar to Figure 11a in both strength and shape, but Figure 11f is somewhat over-smoothed, while Figure 11h retains more details.

5.3. Periodic Stripe Noise

In a similar way, we then examined a photosphere image with a relatively simple structure. To analyze the ability of the various methods to remove periodic noise, we added stripe noise with period T = 16, mean μ = 0 and σ = 15 to this image. As can be seen from Figure 12, except for Figure 12d,g, all of the results remove the stripes very well and have a structure very similar to Figure 12b. Therefore, we again used the extracted-stripe images and the mean cross-track curves to further judge the advantages and disadvantages of the various methods.

Subjective Qualitative Evaluation

From the stripe extraction results in Figure 13, WAFT performs worst, and the extracted stripes bear little resemblance to Figure 13a. Figure 13c,d,g show a certain similarity to Figure 13a, but the extracted results still carry weak original image structure, which means that the UTV, ASSTV and ℓ0 sparse methods still leave streaks after denoising, although the residual amplitude is too weak for the human eye to notice easily. The periodic trend can be seen in the stripes of Figure 13e, but the intensity value is obviously large in some areas, which indicates that VSNR misestimates the fringes there. Figure 13f,h have the highest similarity to Figure 13a, although the overall intensity of the SILR result is clearly somewhat low. Overall, the result of LRSUTV is closest to the original CFPN, and its stripe extraction is the best.
The same conclusion can be drawn from the mean cross-track curves. The curves of Figure 14f–h are the most similar to that of Figure 14a, but Figure 14f,g show over-smoothing, which leads to the loss of detail. Since the structure of this image is very simple, it is difficult to detect the difference between their denoising results visually.
Because the stripe noise is periodic, we can also compare the power spectrum curves of the denoising results and observe how well each method suppresses the noise pulses. The curve shown in Figure 15 is the power spectrum curve of Figure 12a; for better visualization, the frequency on the x-axis is normalized and the power spectrum amplitude on the y-axis is shown on a logarithmic scale. Because of the periodicity of the noise, the power spectrum of Figure 12a shows obvious pulses at some frequencies. After denoising, LRSUTV removes all the obvious pulses, retains the details to the greatest extent and maintains the same spectral intensity as the original image. In contrast, WAFT, UTV and ASSTV leave distinct large pulses, which correspond to stripe residues. SILR and ℓ0 sparse show significant intensity differences compared with Figure 15a, which means that the overall brightness of their images differs from the original. The left (low-frequency) part of the power spectrum in Figure 15e differs significantly from the original, which means that the VSNR-denoised image retains noise residue in the low-frequency region.
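The power-spectrum curves can be reproduced along the following lines. The MATLAB fragment below is only one reasonable implementation (the exact profile the authors transform is not specified; here the column-mean profile is used, because periodic column stripes appear in it as isolated pulses).

profile = mean(I, 1);                               % column-mean profile of the image I
P = abs(fft(profile - mean(profile))).^2;           % power spectrum of the profile
f = (0:numel(P) - 1) / numel(P);                    % normalised frequency axis
half = 1:floor(numel(P) / 2);                       % keep the one-sided spectrum
figure; plot(f(half), log10(P(half) + eps));        % log amplitude, as in Figure 15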

5.4. Quantitative Objective Evaluation

The analysis above is relatively subjective, and different people may reach different conclusions. Next, we quantify the performance of the various methods under different noise intensities and different images in a more objective way. Since the original images are available, we evaluate the denoising results with full-reference measures, namely PSNR and SSIM. In Table 1, Table 2, Table 3 and Table 4, σ is the standard deviation of the stripe noise; the pictures of the solar active region contain aperiodic noise, and the photosphere pictures contain periodic noise with a period of 16.
From Table 1, Table 2, Table 3 and Table 4, our proposed LRSUTV method is superior to all the other methods in terms of PSNR. In terms of SSIM, our method is superior to all the other methods except when the standard deviation is 20. This is mainly due to the adaptive adjustment strategy for some of the parameters in our algorithm and to the reasonable model structure. All the methods involved in the comparison had their parameters tuned to the best values at σ = 12 and were then applied to the other noise levels. The experiments show that our method adapts to different noise levels and has good robustness.

5.5. Actual Image Testing

Figure 16 is a sunspot image in the TiO band observed by the 1-m infrared solar tower of Yunnan Observatory. Obvious CFPN noise is visible on the observed image, which greatly reduces its quality.
From Figure 17 and Figure 18, it can be seen that, for both the solar granulation image and the complex sunspot image, residual noise remains after correction by WAFT, UTV and ASSTV: their results still show obvious stripe noise. Looking at Figure 18e, there is a distinct artifact at the center of the sunspot, which is due to the incorrect estimation of the stripes in the central zone by VSNR. Figure 18f–h show that the remaining three methods remove the noise stripes thoroughly. Although all three remove stripes effectively, the SILR and ℓ0 sparse methods do not suppress random noise effectively, and significant random noise remains in their denoising results. This is mainly because, at the model-building stage, they simply assume that the noisy image is the superposition of a clear image and stripe noise, that is, $Y = U + S$, where Y is the noisy image, U is the clear image and S is the stripe image. In fact, $Y = U + S + N$, where N is the random noise produced in the camera by the combination of reset noise, shot noise, thermal noise, Poisson noise and so on. Therefore, the LRSUTV denoising method is the more comprehensive one.

5.6. Discussion

5.6.1. Parameter Selection

Model (7) involves the adjustment of the parameters $\alpha_1, \alpha_2, \alpha_3, \alpha_4, \gamma, \tau$. The basic principle is that as the variance of the stripe noise increases, the value of γ must be increased accordingly. Empirically, the best PSNR is obtained by adjusting γ within [0.65, 0.85] when σ varies within [5, 20]; τ is generally adjusted within [0.5, 0.75]. The adjustment of $\alpha_2$ is critical because its value varies widely with the intensity of the image and of the stripes. To simplify its adjustment, we use an adaptive strategy with the following basic steps:
First, the Fourier transform of the noise image Y in each column is calculated, as shown in Formula (22).
$$F_{:,i} = \mathcal{F}(Y_{:,i}) \qquad (22)$$
where $\mathcal{F}$ denotes the forward Fourier transform operator, Y is the input noisy image, and i is the column index.
Second, update the regularization parameter $\alpha_2$:
$$\alpha_2 = \frac{\|F_{1:x}\|_1}{10^5}\,\alpha_1 \qquad (23)$$
where $F_{1:x}$ represents the horizontal differentiation of the DC component of F. The algorithm automatically relates $\alpha_2$ to the fidelity regularization factor $\alpha_1$ according to the intensity of the current stripe noise; $\alpha_1$ is fixed at $\alpha_1 = 20$ in the LRSUTV algorithm. If the stripe intensity is high, the algorithm automatically enlarges $\alpha_2$, so that stripe suppression dominates the iterations; as the stripe intensity decreases, $\|F_{1:x}\|_1$ becomes smaller and the fidelity term dominates. As a rule of thumb, the values of $\alpha_3$ and $\alpha_4$ are set within [15, 25] and [1.5, 2.5], respectively, for which the denoised image attains the optimal PSNR. To simplify parameter adjustment further, we generally set $\omega_2 = 0.1$, $\omega_3 = 0.1$ and $\omega_4 = 0.1$.
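A minimal MATLAB rendering of this adaptive rule is given below (our own reading of Formulas (22) and (23); in particular, "the horizontal differentiation of the DC component" is interpreted here as the first difference along the DC row of the column-wise FFT).

F  = fft(Y, [], 1);                         % Formula (22): Fourier transform of each column of Y
dc = abs(F(1, :));                          % DC component of every column
a2 = (norm(diff(dc), 1) / 1e5) * a1;        % Formula (23): stronger stripes give a larger alpha_2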
The parameters in Table 5 are the optimal parameters of the various methods for Figure 9a, which contains stripe noise with intensity σ = 10 and random noise with intensity σ = 5. With these settings, each method obtains its best PSNR for Figure 9a. The same optimized parameters are then used for the different pictures and noise intensities in the experiments.

5.6.2. Program Run Time

All test procedures were implemented in MATLAB on a desktop personal computer with a 3.4-GHz CPU and 8 GB of RAM. In terms of execution time our method is not the best, so corresponding optimization should be carried out in future work to further improve its efficiency. Regarding our current working scenario: we mainly extract the CFPN with the algorithm proposed in this paper and then write the extracted result into an embedded system, which subtracts the CFPN from the camera output in real time. In general, the CFPN of an sCMOS camera changes little within a few hours, so although extracting the CFPN takes a little longer, the result can be used for several hours, and we are therefore not very sensitive to the running time. The running time of each method is shown in Table 6.

6. Summary

Although some scholars also use low-rank sparse methods to remove stripes, most of them work only on the image itself and pay little attention to the structure of the stripes themselves. A small number of scholars have considered the structural characteristics of the stripes, but they neglect their statistical characteristics. The stripe removal method used here considers not only the continuity of the image itself but also the structural and statistical characteristics of the stripes. In terms of parameter adjustment, LRSUTV uses an adaptive scheme for some key parameters, which automatically adjusts the relevant regularization coefficients according to the noise level and thus simplifies parameter tuning to some extent. LRSUTV also has obvious shortcomings. First, our method is invalid for tilted stripes, which we occasionally encounter in our work. Second, LRSUTV is not yet optimized for parallel computing, so its computation is inefficient. In future work we will study the removal of tilted stripes and the improvement of computational efficiency.

Supplementary Materials

The MATLAB code is available online at https://www.mdpi.com/2076-3417/10/11/3694/s1.

Author Contributions

T.Z. established the model and carried out the model testing. X.L. worked mainly on the algorithm analysis. J.L. was mainly engaged in image analysis. Z.X. focused on the acquisition and analysis of the experimental data. The work of each author played a key role in the successful completion of the article. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China, Grant Nos. 11573066 and 11873091; Yunnan Province Basic Research Plan, Grant No. 2019FA001.

Conflicts of Interest

All authors declare no conflict of interest.

References

  1. Babcock, H.P.; Huang, F.; Speer, C.M. Correcting Artifacts in Single Molecule Localization Microscopy Analysis Arising from Pixel Quantum Efficiency Differences in sCMOS Cameras. Sci. Rep. 2019, 9, 18058. [Google Scholar] [CrossRef] [Green Version]
  2. Mandracchia, B.; Hua, X.; Guo, C.; Son, J.; Urner, T.; Jia, S. Fast and accurate sCMOS noise correction for fluorescence microscopy. Nat. Commun. 2020, 11, 94. [Google Scholar] [CrossRef] [Green Version]
  3. Yu, L.; Guoyu, W. A New Fixed Mode Noise Suppression Technology for CMOS Image Sensor. Res. Prog. SSE 2006, 3, 345–348. [Google Scholar]
  4. Xiaozhi, L.; Shengcai, Z.; Shuying, Y. Design of low FPN column readout circuit in CMOS image sensor. J. Sens. Technol. 2006, 3, 697–701. [Google Scholar]
  5. Bao, J.Y.; Xing, F.; Sun, T.; You, Z. CMOS imager non-uniformity response correction-based high-accuracy spot target localization. Appl. Opt. 2019, 58, 4560–4568. [Google Scholar] [CrossRef]
  6. Brouk, I.; Nemirovsky, A.; Nemirovsky, Y. Analysis of noise in CMOS image sensor. In Proceedings of the 2008 IEEE International Conference on Microwaves, Communications, Antennas and Electronic Systems, Tel-Aviv, Israel, 13–14 May 2008. [Google Scholar]
  7. Xing, S.-X.; Zhang, J.; Sun, L.; Chang, B.-K.; Qian, Y.-S. Two-Point nonuniformity correction based on LMS. In Infrared Components and Their Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2005. [Google Scholar]
  8. Huawei, W.; Caiwen, M.; Jianzhong, C.; Haifeng, Z. An adaptive two-point non-uniformity correction algorithm based on shutter and its implementation. In Proceedings of the 2013 5th IEEE International Conference on Measuring Technology and Mechatronics Automation, Hong Kong, China, 16–17 January 2013. [Google Scholar]
  9. Lim, J.H.; Jeon, J.W.; Kwon, K.H. Optimal Non-Uniformity Correction for Linear Response and Defective Pixel Removal of Thermal Imaging System. In Proceedings of the International Conference on Ubiquitous Information Management and Communication, Phuket, Thailand, 4–6 January 2019; Springer: Cham, Switzerland, 2019. [Google Scholar]
  10. Zhou, B.; Ma, Y.; Li, H.; Liang, K. A study of two-point multi-section non-uniformity correction auto division algorithm for infrared images. In Proceedings of the 5th International Symposium on Advanced Optical Manufacturing and Testing Technologies: Optoelectronic Materials and Devices for Detector, Imager, Display, and Energy Conversion Technology, Dalian, China, 22–29 October 2010; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; p. 76583X. [Google Scholar]
  11. Honghui, Z.; Haibo, L.; Xinrong, Y.; Qinghai, D. Adaptive non-uniformity correction algorithm based on multi-point correction. Infrared Laser Eng. 2014, 43, 3651–3654. [Google Scholar]
  12. Rui, L.; Yang, Y.; Wang, B.; Zhou, H.; Liu, S. S-Curve Model-Based Adaptive Algorithm for Nonuniformity Correction in Infrared Focal Plane Arrays. Acta Opt. Sin. 2009, 29, 927–931. [Google Scholar] [CrossRef]
  13. Yang, H.; Huang, Z.; Cai, H.; Zhang, Y. Novel real-time nonuniformity correction solution for infrared focal plane arrays based on S-curve model. Opt. Eng. 2012, 51, 077001. [Google Scholar] [CrossRef]
  14. Rozkovec, M.; Čech, J. Polynomial based NUC implemented on FPGA. In Proceedings of the 2016 IEEE Euromicro Conference on Digital System Design (DSD), Limassol, Cyprus, 31 August–2 September 2016. [Google Scholar]
  15. Gross, W.; Hierl, T.; Schulz, M.J. Correctability and long-term stability of infrared focal plane arrays. Opt. Eng. 1999, 38, 862–869. [Google Scholar]
  16. Chatard, J.P. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays. In Focal Plane Arrays: Technology and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 1988. [Google Scholar]
  17. Pande-Chhetri, R.; Abd-Elrahman, A. De-striping hyperspectral imagery using wavelet transform and adaptive frequency domain filtering. ISPRS J. Photogramm. Remote Sens. 2011, 66, 620–636. [Google Scholar] [CrossRef]
  18. Jinsong, C.; Yun, S.; Huadong, G.; Weiming, W.; Boqin, Z. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124. [Google Scholar] [CrossRef]
  19. Munch, B.; Trtik, P.; Marone, F.; Stampanoni, M. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt. Express 2009, 17, 8567–8591. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Gadallah, F.; Csillag, F.; Smith, E. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511. [Google Scholar] [CrossRef]
  21. Wegener, M. Destriping multiple sensor imagery by improved histogram matching. Int. J. Remote Sens. 1990, 11, 859–875. [Google Scholar] [CrossRef]
  22. Chang, Y.; Yan, L.; Fang, H.; Liu, H. Simultaneous Destriping and Denoising for Remote Sensing Images With Unidirectional Total Variation and Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1051–1055. [Google Scholar] [CrossRef]
  23. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866. [Google Scholar] [CrossRef]
  24. Fehrenbach, J.; Weiss, P.; Lorenzo, C. Variational Algorithms to Remove Stationary Noise: Applications to Microscopy Imaging. IEEE Trans. Image Process. 2012, 21, 4420–4430. [Google Scholar] [CrossRef] [Green Version]
  25. Lu, X.; Wang, Y.; Yuan, Y. Graph-Regularized Low-Rank Representation for Destriping of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4009–4018. [Google Scholar] [CrossRef]
  26. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral Image Restoration Using Low-Rank Matrix Recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  27. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-Variation-Regularized Low-Rank Matrix Factorization for Hyperspectral Image Restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  28. Chang, Y.; Yan, L.; Wu, T.; Zhong, S. Remote Sensing Image Stripe Noise Removal: From Image Decomposition Perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  29. Shen, H.; Zhang, L. A MAP-Based Algorithm for Destriping and Inpainting of Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  30. Bouali, M.; Ladjal, S. Toward Optimal Destriping of MODIS Data Using a Unidirectional Variational Model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2924–2935. [Google Scholar] [CrossRef]
  31. Yanovsky, I.; Dragomiretskiy, K. Variational Destriping in Remote Sensing Imagery: Total Variation with L1 Fidelity. Remote Sens. 2018, 10, 300. [Google Scholar] [CrossRef] [Green Version]
  32. Sun, Y.-J.; Huang, T.-Z.; Ma, T.-H.; Chen, Y. Remote Sensing Image Stripe Detecting and Destriping Using the Joint Sparsity Constraint with Iterative Support Detection. Remote Sens. 2019, 11, 608. [Google Scholar] [CrossRef] [Green Version]
  33. Song, Q.; Wang, Y.; Yan, X.; Gu, H. Remote Sensing Images Stripe Noise Removal by Double Sparse Regulation and Region Separation. Remote Sens. 2018, 10, 998. [Google Scholar] [CrossRef] [Green Version]
  34. Dou, H.-X.; Huang, T.-Z.; Deng, L.-J.; Zhao, X.-L.; Huang, J. Directional ℓ0 Sparse Modeling for Image Stripe Noise Removal. Remote Sens. 2018, 10, 361. [Google Scholar] [CrossRef] [Green Version]
  35. Chen, Y.; Huang, T.-Z.; Zhao, X.-L.; Deng, L.-J.; Huang, J. Stripe noise removal of remote sensing images by total variation regularization and group sparsity constraint. Remote Sens. 2017, 9, 559. [Google Scholar] [CrossRef] [Green Version]
  36. El Gamal, A.; Fowler, B.A.; Min, H.; Liu, X. Modeling and estimation of FPN components in CMOS image sensors. Int. Soc. Opt. Photonics 1998, 3301, 168–177. [Google Scholar]
  37. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  38. Lin, Z.; Liu, R.; Su, Z. Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation. Adv. Neural Inf. Process. Syst. 2011, 612–620. [Google Scholar]
  39. Cao, Y.L.; He, Z.W.; Yang, J.X.; Ye, X.P.; Cao, Y.P. A multi-scale non-uniformity correction method based on wavelet decomposition and guided filtering for uncooled long wave infrared camera. Signal Process. Image 2018, 60, 13–21. [Google Scholar] [CrossRef]
  40. Xie, X.-F.; Zhang, W.; Zhao, M.; Zhi, X.-Y.; Wang, F.-G. Sequence arrangement of wavelet transform for nonuniformity correction in infrared focal-plane arrays. In Proceedings of the 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, Beijing, China, 6–9 November 2011. [Google Scholar]
  41. Yang, J.H.; Zhao, X.L.; Ma, T.H.; Chen, Y.; Huang, T.Z.; Ding, M. Remote sensing images destriping using unidirectional hybrid total variation and nonconvex low-rank regularization. J. Comput. Appl. Math. 2020, 363, 124–144. [Google Scholar] [CrossRef]
  42. Song, Q.; Wang, Y.H.; Yang, S.N.; Dai, K.H.; Yuan, Y. Guided total variation approach based non-uniformity correction for infrared focal plane array. In Proceedings of the 10th International Conference on Graphics and Image Processing (ICGIP 2018), Chengdu, China, 6 May 2019. [Google Scholar]
  43. Huang, Z.H.; Zhang, Y.Z.; Li, Q.; Li, Z.T.; Zhang, T.X.; Sang, N.; Xiong, S.Q. Unidirectional variation and deep CNN denoiser priors for simultaneously destriping and denoising optical remote sensing images. Int. J. Remote Sens. 2019, 40, 5737–5748. [Google Scholar] [CrossRef]
Figure 1. A 3T circuit structure of CMOS Image Sensor (CIS) single pixel. (a) A 3T active pixel structure. (b) A 3T structure equivalent circuit.
Figure 2. Images taken by CIS with fixed pattern noise (FPN) (including a two-ended output structure).
Figure 3. Summary of FPN correction methods.
Figure 4. Correction process diagram of the two-point method (x coordinate: incident light energy; y coordinate: gray value).
Figure 5. The effect of flat field correction. (a) Corrected results after CIS working for 15 min; (b) corrected results after CIS working for 1 h; (c) corrected results after CIS working for 3 h.
Figure 6. Schematic diagram of sectional correction method (x coordinate: incident light energy; y coordinate: gray value).
Figure 7. Noise image composition. (Y) noise image; (U) clear image; (S) column fixed pattern noise (CFPN); (N) random noise.
Figure 8. Gradient probability distributions. (a) Horizontal gradient probability distribution; (b) vertical gradient probability distribution (x coordinate: horizontal or vertical gradient; y coordinate: probability).
Figure 9. The results of various methods for removing stripes in the solar active area. (a) Global image with noise; (b) original image of the framed area; (c) noise image of the framed area; (d) WAFT; (e) unidirectional total variation (UTV); (f) VSNR; (g) ASSTV; (h) SILR; (i) ℓ0 sparse; (j) LRSUTV.
Figure 10. Stripes extracted by various methods. (a) Added CFPN; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV.
Figure 11. Mean cross-track curves of the various denoising results; the green curve in each panel is the mean cross-track curve of the original image and the red curve is that of the denoising result. (a) Original image; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV.
Figure 12. Stripe removal results of the various methods on the solar photosphere image. (a) Global image with noise; (b) original image of the comparison area; (c) noise image of the comparison area; (d) WAFT; (e) UTV; (f) VSNR; (g) ASSTV; (h) SILR; (i) ℓ0 sparse; (j) LRSUTV.
Figure 13. Stripes extracted by various methods. (a) Added CFPN; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV.
Figure 14. Mean cross-track curves of the various denoising results; the green curve in each panel is the mean cross-track curve of the original image and the red curve is that of the denoising result. (a) Original image; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV.
Figure 15. The power spectrum of the image in the solar active region. (a) Original image; (b) noise image; (c) WAFT; (d) UTV; (e) VSNR; (f) ASSTV; (g) SILR; (h) ℓ0 sparse; (i) LRSUTV.
Figure 16. Photo of sunspots in the solar TiO band. (a) Emphasis area is granular areas; (b) emphasis area is sunspot.
Figure 17. Denoising effect of various methods in the solar granulation area. (a) Noise image; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV; (i) CFPN noise estimated by the LRSUTV method; (j) mean cross-track of images (a,h).
Figure 18. Denoising effect of various methods in the sunspot area. (a) Noise image; (b) WAFT; (c) UTV; (d) ASSTV; (e) VSNR; (f) SILR; (g) ℓ0 sparse; (h) LRSUTV; (i) CFPN noise estimated by the LRSUTV method; (j) mean cross-track of images (a,h).
Table 1. PSNR of the denoising results of various methods for the solar active region.

Images: solar active region
Method | σ = 4 | σ = 8 | σ = 12 | σ = 16 | σ = 20
WAFT | 31.59958 | 31.34382 | 30.87000 | 30.50006 | 28.54466
UTV | 31.31853 | 31.16326 | 30.87973 | 30.82699 | 29.55368
ASSTV | 31.61487 | 31.32085 | 30.76171 | 30.16115 | 27.63940
VSNR | 29.55233 | 29.45862 | 29.28992 | 29.38995 | 29.41818
SILR | 32.15904 | 32.06835 | 31.75536 | 31.81308 | 30.89457
L0 | 31.48390 | 31.21754 | 30.97054 | 31.10174 | 30.95841
LRSUTV | 32.39316 | 32.19943 | 31.86411 | 31.93278 | 31.05222
Table 2. PSNR of the denoising results of various methods for the solar granulation image.

Images: solar photospheric layer
Method | σ = 4 | σ = 8 | σ = 12 | σ = 16 | σ = 20
WAFT | 33.95483 | 33.70573 | 33.00279 | 31.98996 | 29.77545
UTV | 33.83339 | 33.79097 | 33.56776 | 33.27398 | 31.70765
ASSTV | 34.03604 | 33.77074 | 33.04330 | 31.79459 | 29.08905
VSNR | 32.69314 | 32.72424 | 32.73020 | 32.76216 | 32.39804
SILR | 35.43197 | 35.46269 | 35.19106 | 34.89504 | 33.39456
L0 | 33.79005 | 33.69596 | 33.57622 | 33.38637 | 32.86435
LRSUTV | 36.80814 | 35.74191 | 35.52385 | 35.42470 | 33.80298
Table 3. SSIM of the denoising results of various methods for the solar active region.

Images: solar active region
Method | σ = 4 | σ = 8 | σ = 12 | σ = 16 | σ = 20
WAFT | 0.979698 | 0.976763 | 0.971173 | 0.959207 | 0.911204
UTV | 0.973915 | 0.973032 | 0.971825 | 0.968449 | 0.946812
ASSTV | 0.97961 | 0.976334 | 0.969964 | 0.954213 | 0.889726
VSNR | 0.972853 | 0.972645 | 0.972404 | 0.972431 | 0.972587
SILR | 0.9813 | 0.980956 | 0.980165 | 0.97885 | 0.970024
L0 | 0.977923 | 0.977109 | 0.976615 | 0.976124 | 0.975494
LRSUTV | 0.98914 | 0.981068 | 0.980475 | 0.97893 | 0.976364
Table 4. SSIM of the denoising results of various methods for the solar granulation image.

Images: solar photospheric layer
Method | σ = 4 | σ = 8 | σ = 12 | σ = 16 | σ = 20
WAFT | 0.942124 | 0.927306 | 0.897961 | 0.852761 | 0.768504
UTV | 0.941753 | 0.935944 | 0.927616 | 0.907744 | 0.853808
ASSTV | 0.942987 | 0.929386 | 0.902473 | 0.85379 | 0.74729
VSNR | 0.877748 | 0.878157 | 0.880637 | 0.87524 | 0.878135
SILR | 0.9517 | 0.945622 | 0.9384 | 0.922522 | 0.887166
L0 | 0.938884 | 0.927177 | 0.927061 | 0.915114 | 0.909504
LRSUTV | 0.957022 | 0.950072 | 0.940674 | 0.928795 | 0.895663
Table 5. Parameter settings of the various methods being compared.

Method | Key Parameters
WAFT | numlev = 2, wavtyp = db7, k = 2.8
UTV | α = 500, β = 5, ω1 = 0.003, ω2 = 0.05, λ = 0.05, MaxIter = 150
ASSTV | λ1 = 10, λ2 = 60, λ3 = 30, γ = 0.5, α = 8, δ = 3, MaxIter = 150
VSNR | Eps = 0, p = 2, alpha = 4e−4, maxit = 1000, prec = 2e−3, C = 1000
SILR | δ = 0.01, β = 1e−04, γ = 0.01, λ2 = 0.7, λ3 = 0.5, τ = 0.5, MaxIter = 150
L0 | λ = 10, μ = 0.1, β1 = 1, β2 = 1, β3 = 1, β4 = 1, MaxIter = 150
LRSUTV | α1 = 20, α3 = 20, α4 = 4, γ = 0.85, τ = 0.5, MaxIter = 150
Table 6. Running time of the various methods (in seconds).

Size | WAFT | UTV | ASSTV | VSNR | SILR | L0 | LRSUTV
512 × 512 | 0.1196 | 4.9194 | 18.0126 | 9.6413 | 20.2170 | 27.9707 | 20.4003
