
This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

This paper presents a spatial noise reduction technique designed to work on raw CFA (Color Filter Array) data acquired by imaging sensors.

The image formation process in consumer imaging devices is intrinsically noisy. This is especially true for low-cost devices such as mobile phones and PDAs, mainly in low-light conditions and in the absence of a flash-gun [

The final perceived quality of images acquired by digital sensors can be optimized through multi-shot acquisitions (e.g., extending dynamic range [

In this paper we propose a novel spatial noise reduction method that directly processes the raw

The proposed algorithm introduces the use of

The

Sophisticated denoising methods such as [

The proposed filtering method is a trade-off between a real-time implementation with very low hardware complexity and the usage of some

The paper is structured as follows: in the next section some details about the

In typical imaging devices a color filter array is placed on top of the imager, making each pixel sensitive to only one color component. A color reconstruction algorithm interpolates the missing information at each location and reconstructs the full color image.

The number of green elements is twice the number of red and blue pixels, due to the higher sensitivity of the human eye to green light, which in fact has a higher weight when computing the luminance. The proposed filter processes raw Bayer data, providing the best performance if executed as the first algorithm of the IGP (Image Generation Pipeline). A typical image reconstruction pipeline is shown in
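As an illustration of the sampling geometry, a minimal sketch of a Bayer mask generator follows; the GRBG phase, the tile size, and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean sampling masks for a GRBG Bayer pattern (the phase is an
    assumption; the paper does not fix it). Green sites are twice as
    numerous as red or blue sites."""
    y, x = np.mgrid[0:h, 0:w]
    g = (y + x) % 2 == 0            # green on the checkerboard positions
    r = (y % 2 == 0) & (x % 2 == 1) # red on even rows, odd columns
    b = (y % 2 == 1) & (x % 2 == 0) # blue on odd rows, even columns
    return r, g, b

# In a 4x4 tile: 8 green, 4 red, 4 blue samples
r, g, b = bayer_masks(4, 4)
```

The masks are mutually exclusive, so each sensor site carries exactly one color sample, as in real CFA data.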

It is well known that the sensitivity of the human visual system to noise depends on the local brightness and texture of the image.

The aforementioned properties of the human visual system can be exploited to adapt the filter strength to the local image content.

The filter changes its smoothing capability depending on the local texture degree and the estimated noise level.

More specifically, in relation to image content, the following assumptions are considered:

- if the local area is homogeneous, then it can be heavily filtered because pixel variations are basically caused by random noise.

- if the local area is textured, then it must be lightly filtered because pixel variations are mainly caused by texture and only to a lesser extent by noise; hence only small differences can be safely filtered, as they are masked by the local texture.

A block diagram describing the overall filtering process is illustrated in

The fundamental blocks of the algorithm are:

The data in the filter mask passes through the

As noted [

It is also crucial to observe that in data from real image sensors the noise standard deviation is not constant across the dynamic range: it increases with the signal intensity.

We decided to incorporate the above considerations of luminance masking and sensor noise statistics into a single curve as shown in

A high HVS value (HVSmax) is set for both low and high pixel values: in dark areas the human eye is less sensitive to variations in pixel intensity, whereas in bright areas the noise standard deviation is higher. The HVS value is set low (HVSmin) at mid pixel intensities.
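A minimal sketch of such a curve is given below; the piecewise-linear shape and the HVSmin/HVSmax values are illustrative assumptions, since the text describes the curve only qualitatively (high at the extremes of the dynamic range, low at mid intensities).

```python
def hvs_coefficient(p, hvs_min=2.0, hvs_max=10.0, max_val=255.0):
    """Sketch of the HVS curve described in the text: high (hvs_max) for
    dark and bright pixels, low (hvs_min) around mid-range intensities.
    The linear shape and the hvs_min/hvs_max defaults are assumptions."""
    mid = max_val / 2.0
    d = abs(p - mid) / mid            # distance from mid-range in [0, 1]
    return hvs_min + (hvs_max - hvs_min) * d
```

The same curve is applied to every CFA colour channel, consistent with the simplifying assumption stated in the text.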

As stated in Section 2.2, in order to make some simplifying assumptions, we use the same HVS curve for all CFA colour channels taking as input the pixel intensities directly from the sensor. The HVS coefficient computed by this block is used by the Texture Degree Analyzer that outputs a degree of texture taking also into account the above considerations (Section 3.4).

The proposed filter uses different filter masks for green and red/blue pixels to match the particular arrangement of pixels in the

The texture analyzer block computes a texture degree coefficient T_d that quantifies how flat or textured the local area is.

Depending on the color of the pixel under processing, either green or red/blue, two different texture analyzers are used. The red/blue filter power is increased by slightly modifying the texture analyzer, making it less sensitive to small pixel differences.

The green and red/blue texture analyzers are defined as follows:

- if T_d = 1, the local area is completely flat;

- if 0 < T_d < 1, the local area contains a certain amount of texture;

- if T_d = 0, the local area is highly textured.

The texture threshold for the current pixel, belonging to Bayer channel c, is derived from the HVS_weight coefficient; the maximum local pixel difference D_max is compared against this threshold to obtain T_d.

The green texture analyzer and the red/blue (R/B) analyzer differ only in their sensitivity to small pixel differences.
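The behaviour described above can be sketched as a piecewise-linear analyzer. The threshold composition (HVS_weight plus the previous noise estimate) and the `boost` parameter mimicking the less sensitive red/blue variant are illustrative assumptions.

```python
def texture_degree(d_max, hvs_weight, noise_level_prev, boost=0.0):
    """Sketch of the texture degree analyzer: returns T_d in [0, 1],
    1 for flat areas, 0 for textured ones, linear in between.
    th = hvs_weight + noise_level_prev is an assumed form of the texture
    threshold; 'boost' is an illustrative knob for the red/blue variant,
    which the text says is less sensitive to small differences."""
    low = hvs_weight + boost
    th = low + noise_level_prev
    if d_max <= low:
        return 1.0            # flat: variations attributable to noise
    if d_max >= th:
        return 0.0            # textured: variations are real detail
    return (th - d_max) / (th - low)   # linear ramp in between
```

For a red/blue pixel one would call the same function with a positive `boost`, raising both breakpoints and hence the filter power on those channels.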

In order to adapt the filter smoothing capability to the local characteristics of the image, a noise level estimation is required. The proposed noise estimation solution is pixel based and is implemented taking into account the previous estimation to calculate the current one.

The noise estimation equation is designed so that:

- if the local area is completely flat (T_d = 1), the noise level is re-estimated from the current local differences (D_max);

- if the local area is highly textured (T_d = 0), the previous noise estimation is retained;

- otherwise a new value is estimated.

Each color channel has its own noise characteristics hence noise levels are tracked separately for each color channel. The noise level for each channel is estimated according to the following formulas:
NL_c(k) = T_d(k) * D_max(k) + (1 - T_d(k)) * NL_c(k-1),  with c in {R, G, B}

where NL_R(k), NL_G(k) and NL_B(k) denote the noise level estimates of the red, green and blue channels at pixel k.
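A one-line recursive update consistent with the three cases above (flat, textured, intermediate) can be sketched as follows; the linear blending form is an assumption, chosen because it reduces to the two extreme cases at T_d = 1 and T_d = 0.

```python
def update_noise_level(t_d, d_max, nl_prev):
    """Recursive per-channel noise level update: T_d = 1 (flat) takes the
    current local difference, T_d = 0 (textured) keeps the previous
    estimate, intermediate T_d blends the two. The blending form is an
    assumption; the paper's exact formula may differ."""
    return t_d * d_max + (1.0 - t_d) * nl_prev
```

Each of the R, G, B channels would keep its own running `nl_prev`, since the text tracks noise levels separately per channel.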

The final step of the filtering process consists in determining the weighting coefficients W_i that express the similarity between each neighborhood pixel P_i and the central pixel P_c.

The process for determining the similarity thresholds and the W_i coefficients is detailed below.

Let:

- P_c: the pixel under processing (central pixel of the filter mask);

- P_i: the i-th pixel in the neighborhood of P_c;

- D_i: the difference between the central pixel and its i-th neighbor, D_i = abs(P_c - P_i).

In order to obtain the weights W_i, each difference D_i is compared against two similarity thresholds, Th_low and Th_high, which classify it as:

- small enough to be heavily filtered;

- big enough to remain untouched;

- an intermediate value, to be properly filtered.

The two thresholds can be interpreted as fuzzy parameters shaping the concept of similarity between pixel pairs. In particular, the associated fuzzy membership function computes the similarity degree between the central pixel and a neighborhood pixel.

By properly computing Th_low and Th_high, the filter adapts its smoothing strength to the local image content.

To determine which of the above cases is valid for the current local area, the local texture degree is the key parameter to analyze. It is important to remember at this point that, by construction, the texture degree coefficient (T_d) already embeds both the HVS response and the noise level estimation; hence it can be used to set the similarity thresholds for the differences D_i.

Once the similarity thresholds have been fixed, it is possible to finally determine the filter weights by comparing each difference D_i against them.

To summarize, the weighting coefficient selection is performed as follows. If the difference D_i is lower than Th_low, the i-th pixel is considered similar to the central one and W_i is set to the maximum weight; if D_i is greater than Th_high, the i-th pixel is considered dissimilar and W_i is set to zero; if D_i falls between Th_low and Th_high, the weight W_i decreases linearly as D_i grows.

Let W_1, ..., W_N be the weights computed for the N neighbors of the central pixel P_c; the filtered output P_f is obtained as the weighted average of the central pixel and its neighborhood.

In order to preserve the original bit depth, the similarity weights are normalized in the interval [0, 1] and chosen according to the pair (Th_low, Th_high).
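The weight assignment and the final weighted average can be sketched as follows. The linear ramp between Th_low and Th_high and the inclusion of the central pixel with unit weight are assumptions consistent with the description above, not the paper's literal formulas.

```python
def similarity_weight(diff, th_low, th_high):
    """Fuzzy similarity membership: 1 for differences below th_low
    (heavily filtered), 0 above th_high (left untouched), and a linear
    ramp in between. A sketch of the Similarity Evaluator block."""
    d = abs(diff)
    if d <= th_low:
        return 1.0
    if d >= th_high:
        return 0.0
    return (th_high - d) / (th_high - th_low)

def filter_pixel(center, neighbors, th_low, th_high):
    """Weighted average of the central pixel and its neighborhood using
    the normalized similarity weights (final filter output P_f)."""
    ws = [similarity_weight(n - center, th_low, th_high) for n in neighbors]
    num = center + sum(w * n for w, n in zip(ws, neighbors))
    den = 1.0 + sum(ws)   # central pixel contributes with unit weight
    return num / den
```

Because dissimilar neighbors receive zero weight, edges and textures are left untouched, while flat areas converge toward the local mean.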

The following sections describe the tests performed to assess the quality of the proposed algorithm. First, a test computing the noise power before and after filtering is reported. Next some comparisons between the proposed filter and other noise reduction algorithms ([

A synthetic image was used to determine the amount of noise that the algorithm is capable of removing. Let us denote:

- I_NOISY: the noisy input image;

- I_FILTERED: the image after filtering;

- I_ORIGINAL: the clean reference image.

According to these definitions we have:

I_NOISY - I_ORIGINAL = I_ADDED_NOISE

I_FILTERED - I_ORIGINAL = I_RESIDUAL_NOISE

where I_ADDED_NOISE is the noise added to I_ORIGINAL and I_RESIDUAL_NOISE is the noise remaining after filtering. The filter performance is assessed by comparing the power of I_ADDED_NOISE with the power of I_RESIDUAL_NOISE.
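The measurement above can be reproduced in a few lines; the synthetic "filter" that simply halves the noise is purely illustrative, used only to exercise the power computation.

```python
import numpy as np

def noise_power(residual):
    """Mean squared value (power) of a noise image."""
    return float(np.mean(np.square(residual.astype(np.float64))))

rng = np.random.default_rng(0)
original = np.full((64, 64), 128.0)            # clean reference I_ORIGINAL
added_noise = rng.normal(0.0, 10.0, original.shape)
noisy = original + added_noise                 # I_NOISY
# Stand-in for a denoiser: pretend it removes half the noise amplitude
filtered = original + 0.5 * added_noise        # I_FILTERED
p_in = noise_power(noisy - original)           # power of I_ADDED_NOISE
p_out = noise_power(filtered - original)       # power of I_RESIDUAL_NOISE
```

For a real run, `filtered` would be the output of the proposed filter, and `p_out / p_in` quantifies the fraction of noise power left after filtering.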

To modulate the power of the additive noise, different values of the standard deviation of a Gaussian distribution are used. Noise is assumed to be additive, white and Gaussian.

A synthetic test image has been generated with the following properties: it is composed of a succession of stripes having equal brightness but different noise power. Each stripe is composed of 10 lines, and noise is added with increasing power starting from the top of the image and proceeding downwards (
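A generator for such a test image might look as follows; the stripe count, base brightness and sigma step are illustrative assumptions (the text only fixes 10 lines per stripe and increasing noise power from top to bottom).

```python
import numpy as np

def striped_test_image(width=64, n_stripes=10, lines_per_stripe=10,
                       base=128.0, sigma_step=2.0, seed=0):
    """Synthetic test image: stripes of equal brightness (10 lines each)
    with Gaussian noise of increasing standard deviation from top to
    bottom. Stripe count, base level and sigma step are illustrative."""
    rng = np.random.default_rng(seed)
    rows = []
    for i in range(n_stripes):
        sigma = i * sigma_step                       # grows downwards
        stripe = base + rng.normal(0.0, sigma, (lines_per_stripe, width))
        rows.append(stripe)
    return np.clip(np.vstack(rows), 0, 255)
```

Running the filter on this image and measuring residual noise per stripe gives the noise-removal curve as a function of input noise power.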

The graph in

In order to assess the visual quality of the proposed method, we have compared it with the SUSAN algorithm.

The tests were executed using two different approaches. In the first approach, the original noisy Bayer data were interpolated, obtaining a noisy color image, which was split into its color channels; each color plane was filtered independently using SUSAN. Finally, the filtered color channels were recombined to obtain the denoised color image, as sketched in

The second approach consists in slightly modifying the SUSAN algorithm so that it can be applied directly to the CFA data.

Finally,

In order to numerically quantify the performance of the filtering process, the standard Kodak 24 (8-bpp) [

After converting each image of the set to Bayer pattern format, the simulation was performed by adding noise with increasing standard deviation to each CFA plane. In particular, the following values have been used: σ = 5, 8, 10. More specifically, the aforementioned values of σ refer to the noise level in the middle of the dynamic range. Indeed, to simulate more realistic sensor noise, we followed the model described in [
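A sketch of signal-dependent noise injection in this spirit is shown below; the linear sigma(p) law is an illustrative stand-in for the sensor noise model cited in the text, matching only the stated property that σ refers to the middle of the dynamic range.

```python
import numpy as np

def add_sensor_noise(bayer, sigma_mid=5.0, sigma_min=1.0, max_val=255.0,
                     seed=0):
    """Adds Gaussian noise whose standard deviation grows with signal
    intensity and equals sigma_mid at mid-range. The linear sigma(p)
    law is an illustrative assumption, not the cited sensor model."""
    rng = np.random.default_rng(seed)
    # sigma(p): sigma_min at p = 0, sigma_mid at p = max_val / 2
    sigma = sigma_min + (sigma_mid - sigma_min) * (bayer / (max_val / 2.0))
    noisy = bayer + rng.normal(0.0, 1.0, bayer.shape) * sigma
    return np.clip(noisy, 0, max_val)
```

Applying this per CFA plane reproduces the test setup: brighter regions receive stronger noise, as observed on real sensor data.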

Experiments show that the proposed method performs well in terms of PSNR compared to the algorithms used in the test (

A spatial adaptive denoising algorithm has been presented; the method exploits characteristics of the human visual system and sensor noise statistics in order to achieve pleasant results in terms of perceived image quality. The noise level and texture degree are computed to adapt the filter behaviour to the local characteristics of the image. The algorithm is suitable for real-time processing of images acquired in CFA format. Future work includes the extension of the processing masks, along with the study and integration of other characteristics of the human visual system.

We wish to thank the anonymous reviewers for their accurate and constructive comments in reviewing this paper.

Image Generation Pipeline.

Overall Filter Block Diagram.

HVS curve used in the proposed approach.

Filter Masks for Bayer Pattern Data.

Green Texture Analyzer.

Red/Blue texture analyzer.

Texture Analyzer output: (a) input image after colour interpolation (b) gray-scale texture degree output: bright areas correspond to high frequency, dark areas correspond to low frequencies.

The Wi coefficients weight the similarity degree between the central pixel and its neighborhood.

Block diagram of the fuzzy computation process for determining the similarity weights between the central pixel and its N neighborhoods.

Weights assignment (Similarity Evaluator Block). The i-th weight denotes the degree of similarity between the central pixel in the filter mask and the i-th pixel in the neighborhood.

Synthetic image test

Noise power test. Upper line: noise level before filtering. Lower line: residual noise power after filtering.

Overall scheme used to compare the Susan algorithm with the proposed method. The noisy color image is filtered by processing its color channels independently. The results are recombined to reconstruct the denoised color image.

Images acquired by a CFA sensor. (a) SNR value 30.2 dB. (b) SNR value 47.2 dB. The yellow crops represent the magnified details contained in the following figures.

A magnified detail of

Comparison test at CFA level (magnified details of

Magnified details of

(a) Original Image. (b) Noisy image. (c) Cropped and zoomed noisy image detail. Cropped and zoomed noisy image detail filtered with: Multistage median-1 filter (d), Multistage median-3 filter (e), proposed method (f).

Testing procedure. (a) The original Kodak color image is converted to Bayer pattern format and demosaiced. (b) Noise is added to the Bayer image, filtered and color interpolated again. Hence, color interpolation is the same for the clean reference and the denoised images.

PSNR comparison between proposed solution and other spatial approaches for the Standard Kodak Images test set. (a) Kodak noisy images set with standard deviation 5. (b) Kodak noisy images set with standard deviation 8. (c) Kodak noisy images set with standard deviation 10.

PSNR comparison between proposed solution and other fuzzy approaches for the Standard Kodak Images test set. (a) Kodak noisy images set with standard deviation 5. (b) Kodak noisy images set with standard deviation 8. (c) Kodak noisy images set with standard deviation 10.