Computational Imaging and Sensing

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".

Viewed by 28053

Editors


Prof. Dr. Ming-Jie Sun
Collection Editor
Department of Opto-Electronic Engineering, Beihang University, Beijing 100191, China
Interests: imaging; single-pixel imaging; single-photon imaging; ultrafast imaging; quantum imaging

Prof. Dr. Jinyang Liang
Collection Editor
Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes (Québec) J3X 1S2, Canada
Interests: ultrafast imaging; photoacoustic imaging; laser beam/pulse shaping

Topical Collection Information

Dear Colleagues,

Computational imaging refers to a class of techniques in which optical capture methods and computational algorithms are jointly designed within the imaging system, with the aim of producing images that carry more information. A distinguishing feature of computational imaging is that the captured data may not look like an image in the conventional sense; instead, the data are transformed to better suit the algorithm layer, where the essential information is computed. Computational imaging therefore senses the required information about targets and/or scenes, blurring the boundary between imaging and sensing.

Benefiting from the ever-developing processor industry, computational imaging and sensing have begun to play an important role in imaging techniques, with applications in smartphone photography, 3D sensing and reconstruction, biological and medical imaging, synthetic aperture radar, remote sensing, and more. A great deal of recent research has advanced the imaging and sensing performance of these applications.

This topical collection focuses on computational imaging that obtains high-quality images and solves problems which cannot be addressed by optical capture or post-processing alone, in order to improve the imaging and sensing performance of physical or optical devices in terms of image quality, imaging speed, and functionality. Its purpose is to broadly engage the optical imaging, image processing, and signal sensing communities, providing a forum for researchers and engineers in this rapidly developing field to share novel and original research on computational imaging and sensing. Survey papers addressing relevant topics are also welcome. Topics of interest include, but are not limited to:

  • Computational photography for 3D imaging;
  • Depth estimation and 3D sensing;
  • Biological imaging (FLIM/FPM/SIM);
  • Medical imaging (CT/MRI/PET image reconstruction);
  • Image restoration and denoising;
  • Image registration and super-resolution imaging;
  • Ghost imaging and single-pixel imaging;
  • Photon counting and single-photon imaging;
  • High-speed imaging systems and bandwidth reduction;
  • Computational sensing for advanced driver assistance systems (ADAS);
  • Synthetic aperture radar (SAR) imaging;
  • Remote sensing and UAV image processing;
  • Ultrasound imaging;
  • Computational sensing for advanced image signal processors (ISPs);
  • Deep learning for image reconstruction;
  • Underwater imaging and dehazing.

Prof. Dr. Ming-Jie Sun
Prof. Dr. Jinyang Liang
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)

2024

Jump to: 2023, 2022

16 pages, 30693 KiB  
Article
LM-CycleGAN: Improving Underwater Image Quality Through Learned Perceptual Image Patch Similarity and Multi-Scale Adaptive Fusion Attention
by Jiangyan Wu, Guanghui Zhang and Yugang Fan
Sensors 2024, 24(23), 7425; https://doi.org/10.3390/s24237425 - 21 Nov 2024
Viewed by 277
Abstract
The underwater imaging process is often hindered by high noise levels, blurring, and color distortion due to light scattering, absorption, and suspended particles in the water. To address the challenges of image enhancement in complex underwater environments, this paper proposes an underwater image color correction and detail enhancement model based on an improved Cycle-consistent Generative Adversarial Network (CycleGAN), named LPIPS-MAFA CycleGAN (LM-CycleGAN). The model integrates a Multi-scale Adaptive Fusion Attention (MAFA) mechanism into the generator architecture to enhance its ability to perceive image details. At the same time, the Learned Perceptual Image Patch Similarity (LPIPS) is introduced into the loss function to make the training process more focused on the structural information of the image. Experiments conducted on the public datasets UIEB and EUVP demonstrate that LM-CycleGAN achieves significant improvements in Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Average Gradient (AG), Underwater Color Image Quality Evaluation (UCIQE), and Underwater Image Quality Measure (UIQM). Moreover, the model excels in color correction and fidelity, successfully avoiding issues such as red checkerboard artifacts and blurred edge details commonly observed in reconstructed images generated by traditional CycleGAN approaches. Full article
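The quantitative metrics listed above (PSNR and AG in particular) are simple to state concretely. As a hedged illustration rather than the authors' evaluation code, here is a minimal NumPy sketch of both; the test images are invented:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def average_gradient(img):
    """Average Gradient (AG): mean local gradient magnitude, a common
    sharpness proxy in underwater image quality evaluation."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal finite differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical finite differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

ref = np.tile(np.arange(8, dtype=np.float64), (8, 1)) * 30.0  # toy ramp image
noisy = ref + 5.0                                             # uniform error
print(round(psnr(ref, noisy), 2))  # → 34.15
```

SSIM, UCIQE, and UIQM involve luminance, contrast, and chroma statistics and are normally taken from reference implementations rather than re-derived.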

22 pages, 7635 KiB  
Article
Phase Noise Compensation Algorithm for Space-Borne Azimuth Multi-Channel SAR
by Lu Bai, Wei Xu, Pingping Huang, Weixian Tan, Yaolong Qi, Yuejuan Chen and Zhiqi Gao
Sensors 2024, 24(14), 4494; https://doi.org/10.3390/s24144494 - 11 Jul 2024
Viewed by 606
Abstract
Azimuth multi-channel synthetic aperture radar (SAR) has always been an important technical means to achieve high-resolution wide-swath (HRWS) SAR imaging. However, in the space-borne azimuth multi-channel SAR system, random phase noise will be produced during the operation of each channel receiver. The phase noise of each channel is superimposed on the SAR echo signal of the corresponding channel, which will cause the phase imbalance between the channels and lead to the generation of false targets. In view of the above problems, this paper proposes a random phase noise compensation method for space-borne azimuth multi-channel SAR. This method performs feature decomposition by calculating the covariance matrix of the echo signal and converts the random phase noise estimation into the optimal solution of the cost function. Considering that the phase noise in the receiver has frequency-dependent and time-varying characteristics, this method calculates the phase noise estimation value corresponding to each range-frequency point in the range direction and obtains the phase noise estimation value by expectation in the azimuth direction. The proposed random phase noise compensation method can suppress false targets well and make the radar present a well-focused SAR image. Finally, the usefulness of the suggested method is verified by simulation experiments. Full article
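The core estimation step (eigendecomposition of the echo covariance matrix) can be sketched on a deliberately simplified toy model: one common complex signal with constant per-channel phase offsets, rather than the paper's frequency-dependent, time-varying estimator. All dimensions and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multi-channel echo: each channel sees the same signal s(t)
# with an unknown receiver phase error phi_k plus a little random noise.
n_ch, n_samp = 4, 2000
true_phi = np.array([0.0, 0.7, -1.2, 0.4])
s = rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp)
x = np.exp(1j * true_phi)[:, None] * s[None, :]
x += 0.01 * (rng.standard_normal((n_ch, n_samp))
             + 1j * rng.standard_normal((n_ch, n_samp)))

# Sample covariance matrix of the echoes and its principal eigenvector.
R = x @ x.conj().T / n_samp
w, v = np.linalg.eigh(R)
a = v[:, -1]                      # eigenvector of the largest eigenvalue

# The principal eigenvector is proportional to exp(j*phi), so its phases
# recover the channel errors up to a common constant (channel 0 here).
est_phi = np.angle(a * np.conj(a[0]))
```

In the paper this estimate is refined per range-frequency point and averaged in azimuth; the sketch only shows the single-snapshot idea.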

10 pages, 6832 KiB  
Communication
Simultaneous Multifocal Plane Fourier Ptychographic Microscopy Utilizing a Standard RGB Camera
by Giseok Oh and Hyun Choi
Sensors 2024, 24(14), 4426; https://doi.org/10.3390/s24144426 - 9 Jul 2024
Viewed by 929
Abstract
Fourier ptychographic microscopy (FPM) is a computational imaging technology that can acquire high-resolution large-area images for applications ranging from biology to microelectronics. In this study, we utilize multifocal plane imaging to enhance the existing FPM technology. Using an RGB light emitting diode (LED) array to illuminate the sample, raw images are captured using a color camera. Then, exploiting the basic optical principle of wavelength-dependent focal length variation, three focal plane images are extracted from the raw image through simple R, G, and B channel separation. Herein, a single aspherical lens with a numerical aperture (NA) of 0.15 was used as the objective lens, and the illumination NA used for FPM image reconstruction was 0.08. Therefore, simultaneous multifocal plane FPM with a synthetic NA of 0.23 was achieved. The multifocal imaging performance of the enhanced FPM system was then evaluated by inspecting a transparent organic light-emitting diode (OLED) sample. The FPM system was able to simultaneously inspect the individual OLED pixels as well as the surface of the encapsulating glass substrate by separating R, G, and B channel images from the raw image, which was taken in one shot. Full article

16 pages, 13916 KiB  
Article
A Single-Shot Scattering Medium Imaging Method via Bispectrum Truncation
by Yuting Han, Honghai Shen, Fang Yuan, Tianxiang Ma, Pengzhang Dai, Yang Sun and Hairong Chu
Sensors 2024, 24(6), 2002; https://doi.org/10.3390/s24062002 - 21 Mar 2024
Cited by 1 | Viewed by 1014
Abstract
Imaging using scattering media is a very important yet challenging technology. As one of the most widely used scattering imaging methods, speckle autocorrelation technology has important applications in several fields. However, traditional speckle autocorrelation imaging methods usually use iterative phase recovery algorithms to obtain the Fourier phase of hidden objects, posing issues such as large data calculation volumes and uncertain reconstruction results. Here, we propose a single-shot scattering imaging method based on the bispectrum truncation method. The bispectrum analysis is utilized for hidden object phase recovery, the truncation method is used to avoid the computation of redundant data when calculating the bispectrum data, and the method is experimentally verified. The experimental results show that our method does not require uncertain iterative calculations and can reduce the bispectrum data computation by more than 80% by adjusting the truncation factor without damaging the imaging quality, which greatly improves imaging efficiency. This method paves the way for rapid imaging through scattering media and brings benefits for imaging in dynamic situations. Full article
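The bispectrum and its truncation are easy to state in one dimension: B(f1, f2) = X(f1) X(f2) X*(f1 + f2), with the truncation factor restricting (f1, f2) to a low-frequency region. The sketch below is a naive 1-D illustration of the data-reduction idea, not the paper's 2-D imaging pipeline:

```python
import numpy as np

def truncated_bispectrum(x, t_frac=0.25):
    """Bispectrum B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)) of a 1-D
    signal, evaluated only on the low-frequency square selected by the
    truncation factor t_frac (1.0 = full bispectrum)."""
    n = len(x)
    X = np.fft.fft(x)
    t = max(1, int(n * t_frac))
    B = np.empty((t, t), dtype=complex)
    for f1 in range(t):
        for f2 in range(t):
            B[f1, f2] = X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])
    return B

x = np.cos(2 * np.pi * 5 * np.arange(128) / 128)
B_full = truncated_bispectrum(x, 1.0)
B_cut = truncated_bispectrum(x, 0.25)
# Keeping a quarter of the band evaluates 1/16 of the entries, i.e. a
# ~94% reduction in bispectrum computation.
print(B_cut.size / B_full.size)  # → 0.0625
```

Because the bispectrum phase encodes the Fourier phase of the hidden object, this is what lets the method skip iterative phase retrieval.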

2023

Jump to: 2024, 2022

16 pages, 8411 KiB  
Article
Accurate Detection for Zirconium Sheet Surface Scratches Based on Visible Light Images
by Bin Xu, Yuanhaoji Sun, Jinhua Li, Zhiyong Deng, Hongyu Li, Bo Zhang and Kai Liu
Sensors 2023, 23(16), 7291; https://doi.org/10.3390/s23167291 - 21 Aug 2023
Cited by 1 | Viewed by 1284
Abstract
Zirconium sheet has been widely used in various fields, e.g., chemistry and aerospace. The surface scratches on the zirconium sheets caused by complex processing environment have a negative impact on the performance, e.g., working life and fatigue fracture resistance. Therefore, it is necessary to detect the defect of zirconium sheets. However, it is difficult to detect such scratch images due to lots of scattered additive noise and complex interlaced structural texture. Hence, we propose a framework for adaptively detecting scratches on the surface images of zirconium sheets, including noise removing and texture suppressing. First, the noise removal algorithm, i.e., an optimized threshold function based on dual-tree complex wavelet transform, uses selected parameters to remove scattered and numerous noise. Second, the texture suppression algorithm, i.e., an optimized relative total variation enhancement model, employs selected parameters to suppress interlaced texture. Finally, by connecting disconnection based on two types of connection algorithms and replacing the Gaussian filter in the standard Canny edge detection algorithm with our proposed framework, we can more robustly detect the scratches. The experimental results show that the proposed framework is of higher accuracy. Full article

19 pages, 9307 KiB  
Article
Defects Prediction Method for Radiographic Images Based on Random PSO Using Regional Fluctuation Sensitivity
by Zhongyu Shang, Bing Li, Lei Chen and Lei Zhang
Sensors 2023, 23(12), 5679; https://doi.org/10.3390/s23125679 - 17 Jun 2023
Viewed by 1734
Abstract
This paper presents an advanced methodology for defect prediction in radiographic images, predicated on a refined particle swarm optimization (PSO) algorithm with an emphasis on fluctuation sensitivity. Conventional PSO models with stable velocity are often beleaguered with challenges in precisely pinpointing defect regions in radiographic images, attributable to the lack of a defect-centric approach and the propensity for premature convergence. The proposed fluctuation-sensitive particle swarm optimization (FS-PSO) model, distinguished by an approximate 40% increase in particle entrapment within defect areas and an expedited convergence rate, necessitates a maximal additional time consumption of only 2.28%. The model, also characterized by reduced chaotic swarm movement, enhances efficiency through the modulation of movement intensity concomitant with the escalation in swarm size. The FS-PSO algorithm’s performance was rigorously evaluated via a series of simulations and practical blade experiments. The empirical findings evince that the FS-PSO model substantially outperforms the conventional stable velocity model, particularly in terms of shape retention in defect extraction. Full article
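The FS-PSO model itself is not reproduced here; for orientation, a minimal sketch of the conventional stable-velocity PSO baseline the paper improves on. The objective, bounds, and coefficients are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO with a fixed inertia weight (the 'stable velocity'
    baseline); returns the best position found for objective f."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(f, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.apply_along_axis(f, 1, pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

best = pso(lambda x: np.sum((x - 1.0) ** 2))   # minimum at (1, 1)
```

FS-PSO replaces the fixed-velocity behaviour with a fluctuation-sensitive update so that particles are preferentially trapped inside defect regions; that modification is described in the paper, not shown here.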

12 pages, 2927 KiB  
Article
CVCC Model: Learning-Based Computer Vision Color Constancy with RiR-DSN Architecture
by Ho-Hyoung Choi
Sensors 2023, 23(11), 5341; https://doi.org/10.3390/s23115341 - 5 Jun 2023
Cited by 3 | Viewed by 1633
Abstract
To achieve computer vision color constancy (CVCC), it is vital but challenging to estimate scene illumination from a digital image, which distorts the true color of an object. Estimating illumination as accurately as possible is fundamental to improving the quality of the image processing pipeline. CVCC has a long history of research and has significantly advanced, but it has yet to overcome some limitations such as algorithm failure or accuracy decreasing under unusual circumstances. To cope with some of the bottlenecks, this article presents a novel CVCC approach that introduces a residual-in-residual dense selective kernel network (RiR-DSN). As its name implies, it has a residual network in a residual network (RiR) and the RiR houses a dense selective kernel network (DSN). A DSN is composed of selective kernel convolutional blocks (SKCBs). The SKCBs, or neurons herein, are interconnected in a feed-forward fashion. Every neuron receives input from all its preceding neurons and feeds the feature maps into all its subsequent neurons, which is how information flows in the proposed architecture. In addition, the architecture has incorporated a dynamic selection mechanism into each neuron to ensure that the neuron can modulate filter kernel sizes depending on varying intensities of stimuli. In a nutshell, the proposed RiR-DSN architecture features neurons called SKCBs and a residual block in a residual block, which brings several benefits such as alleviation of the vanishing gradients, enhancement of feature propagation, promotion of the reuse of features, modulation of receptive filter sizes depending on varying intensities of stimuli, and a dramatic drop in the number of parameters. Experimental results highlight that the RiR-DSN architecture performs well above its state-of-the-art counterparts, as well as proving to be camera- and illuminant-invariant. Full article

18 pages, 4063 KiB  
Article
Infrared Image Deconvolution Considering Fixed Pattern Noise
by Haegeun Lee and Moon Gi Kang
Sensors 2023, 23(6), 3033; https://doi.org/10.3390/s23063033 - 11 Mar 2023
Cited by 3 | Viewed by 2124
Abstract
As the demand for thermal information increases in industrial fields, numerous studies have focused on enhancing the quality of infrared images. Previous studies have attempted to independently overcome one of the two main degradations of infrared images, fixed pattern noise (FPN) and blurring artifacts, neglecting the other problems, to reduce the complexity of the problems. However, this is infeasible for real-world infrared images, where two degradations coexist and influence each other. Herein, we propose an infrared image deconvolution algorithm that jointly considers FPN and blurring artifacts in a single framework. First, an infrared linear degradation model that incorporates a series of degradations of the thermal information acquisition system is derived. Subsequently, based on the investigation of the visual characteristics of the column FPN, a strategy to precisely estimate FPN components is developed, even in the presence of random noise. Finally, a non-blind image deconvolution scheme is proposed by analyzing the distinctive gradient statistics of infrared images compared with those of visible-band images. The superiority of the proposed algorithm is experimentally verified by removing both artifacts. Based on the results, the derived infrared image deconvolution framework successfully reflects a real infrared imaging system. Full article
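Column FPN is a per-column offset superimposed on every row, which is what makes it separable from scene content. As a rough stand-in for the paper's estimation strategy (which additionally exploits the visual characteristics of column FPN and is robust to random noise), a toy NumPy sketch that smooths the column means:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy infrared frame: smooth scene + column FPN + random noise.
h, w = 64, 64
scene = 100 + 20 * np.outer(np.linspace(0, 1, h), np.linspace(0, 1, w))
col_fpn = rng.normal(0, 3, w)                 # one offset per column
frame = scene + col_fpn[None, :] + rng.normal(0, 0.5, (h, w))

# Column means mix scene and FPN; a moving average across columns keeps
# the smooth scene profile, so the residual approximates the FPN.
col_mean = frame.mean(axis=0)
k = 9
pad = np.pad(col_mean, k // 2, mode="edge")
smooth = np.convolve(pad, np.ones(k) / k, mode="valid")
fpn_est = col_mean - smooth
corrected = frame - fpn_est[None, :]
```

A real pipeline would follow this with the non-blind deconvolution step, since FPN removal and deblurring interact, which is the point of the joint framework.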

20 pages, 5628 KiB  
Article
Virtual Light Sensing Technology for Fast Calculation of Daylight Autonomy Metrics
by Sergey Ershov, Vadim Sokolov, Vladimir Galaktionov and Alexey Voloboy
Sensors 2023, 23(4), 2255; https://doi.org/10.3390/s23042255 - 17 Feb 2023
Cited by 2 | Viewed by 1836
Abstract
Virtual sensing technology uses mathematical calculations instead of natural measurements when the latter are too difficult or expensive. Nowadays, application of virtual light sensing technology becomes almost mandatory for daylight analysis at the stage of architectural project development. Daylight Autonomy metrics should be calculated multiple times during the project. A properly designed building can reduce the necessity of artificial lighting, thus saving energy. There are two main daylight performance metrics: Spatial Daylight Autonomy (sDA) and Annual Sunlight Exposure (ASE). To obtain their values, we have to simulate global illumination for every hour of the year. A light simulation method should therefore be as efficient as possible for processing complex building models. In this paper we present a method for fast calculation of Daylight Autonomy metrics, allowing them to be calculated within a reasonable timescale. We compared our method with straightforward calculations and other existing solutions. This comparison demonstrates good agreement; this proves sufficient accuracy and higher efficiency of the method. Our method also contains an original algorithm for the automatic setting of the sensing area. The sDA metric is calculated considering blinds control, which should open or close them depending on overexposure to direct sunlight. Thus, we developed an optimization procedure to determine the blinds configuration at any time. Full article
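Given hourly illuminance at every virtual sensor point, both metrics reduce to threshold counting. The sketch below uses the commonly cited sDA(300 lx, 50%) and ASE(1000 lx, 250 h) thresholds from IES LM-83 (the abstract does not spell them out) and synthetic illuminance data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hourly illuminance per virtual sensor point, in lux:
# shape (occupied_hours, n_points). Synthetic data for illustration.
hours, pts = 3650, 100
lux = rng.gamma(shape=2.0, scale=300.0, size=(hours, pts))

# sDA(300/50%): fraction of points receiving >= 300 lx for at least
# half of the occupied hours.
sda = np.mean((lux >= 300).mean(axis=0) >= 0.5)

# ASE(1000, 250): fraction of points exceeding 1000 lx of direct sun
# for more than 250 occupied hours (the same array stands in for the
# direct-only component here).
ase = np.mean((lux > 1000).sum(axis=0) > 250)
```

The expensive part in practice is producing the hourly `lux` array by global-illumination simulation, which is exactly what the paper's fast method accelerates, together with blinds-control logic that modifies direct-sun exposure.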

2022

Jump to: 2024, 2023

10 pages, 4367 KiB  
Communication
Adaptive High-Resolution Imaging Method Based on Compressive Sensing
by Zijiao Wang, Yufeng Gao, Xiusheng Duan and Jingya Cao
Sensors 2022, 22(22), 8848; https://doi.org/10.3390/s22228848 - 16 Nov 2022
Cited by 4 | Viewed by 1617
Abstract
Compressive sensing (CS) is a signal sampling theory that originated about 16 years ago. It replaces expensive and complex receiving devices with well-designed signal recovery algorithms, thus simplifying the imaging system. Based on the application of CS theory, a single-pixel camera with an array-detection imaging system is established for high-pixel detection. Each detector of the detector array is coupled with a bundle of fibers formed by fusion of four bundles of fibers of different lengths, so that the target area corresponding to one detector is split into four groups of target information arriving at different times. By comparing the total amount of information received by the detector with the threshold set in advance, it can be determined whether the four groups of information are calculated separately. The simulation results show that this new system can not only reduce the number of measurements required to reconstruct high quality images but can also handle situations wherever the target may appear in the field of view without necessitating an increase in the number of detectors. Full article
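The abstract does not name its recovery algorithm; as a generic illustration of CS reconstruction from single-pixel-style measurements y = Φx, here is a minimal Orthogonal Matching Pursuit (OMP) sketch with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse signal
    x from compressive measurements y = Phi @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0              # do not re-pick chosen atoms
        support.append(int(np.argmax(corr)))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

n, m, k = 256, 128, 4                    # scene size, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement patterns
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(Phi, Phi @ x, k)
```

The paper's contribution sits on top of this kind of recovery: the fiber-delay splitting decides, per detector, whether the four sub-regions must be reconstructed separately.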

13 pages, 30820 KiB  
Article
Anisotropic SpiralNet for 3D Shape Completion and Denoising
by Seong Uk Kim, Jihyun Roh, Hyeonseung Im and Jongmin Kim
Sensors 2022, 22(17), 6457; https://doi.org/10.3390/s22176457 - 27 Aug 2022
Cited by 3 | Viewed by 2499
Abstract
Three-dimensional mesh post-processing is an important task because low-precision hardware and a poor capture environment will inevitably lead to unordered point clouds with unwanted noise and holes that should be suitably corrected while preserving the original shapes and details. Although many 3D mesh data-processing approaches have been proposed over several decades, the resulting 3D mesh often has artifacts that must be removed and loses important original details that should otherwise be maintained. To address these issues, we propose a novel 3D mesh completion and denoising system with a deep learning framework that reconstructs a high-quality mesh structure from input mesh data with several holes and various types of noise. We build upon SpiralNet by using a variational deep autoencoder with anisotropic filters that apply different convolutional filters to each vertex of the 3D mesh. Experimental results show that the proposed method enhances the reconstruction quality and achieves better accuracy compared to previous neural network systems. Full article

10 pages, 1525 KiB  
Article
Retina-like Computational Ghost Imaging for an Axially Moving Target
by Yingqiang Zhang, Jie Cao, Huan Cui, Dong Zhou, Bin Han and Qun Hao
Sensors 2022, 22(11), 4290; https://doi.org/10.3390/s22114290 - 5 Jun 2022
Cited by 3 | Viewed by 2355
Abstract
Unlike traditional optical imaging schemes, computational ghost imaging (CGI) provides a way to reconstruct images with the spatial distribution information of illumination patterns and the light intensity collected by a single-pixel detector or bucket detector. Compared with stationary scenes, the relative motion between the target and the imaging system in a dynamic scene causes the degradation of reconstructed images. Therefore, we propose a time-variant retina-like computational ghost imaging method for axially moving targets. The illuminated patterns are specially designed with retina-like structures, and the radii of foveal region can be modified according to the axial movement of target. By using the time-variant retina-like patterns and compressive sensing algorithms, high-quality imaging results are obtained. Experimental verification has shown its effectiveness in improving the reconstruction quality of axially moving targets. The proposed method retains the inherent merits of CGI and provides a useful reference for high-quality GI reconstruction of a moving target. Full article
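The CGI reconstruction underlying this method is a second-order correlation between the bucket signal and the illumination patterns. The sketch below uses plain uniform-resolution random patterns rather than the paper's time-variant retina-like patterns or compressive-sensing solvers; object, pattern count, and sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ground-truth object (a transmission mask) and random illumination patterns.
n = 16                               # image is n x n
obj = np.zeros((n, n))
obj[5:11, 5:11] = 1.0
m = 4000                             # number of illumination patterns
patterns = rng.random((m, n, n))

# Bucket detector: total light passing the object for each pattern.
bucket = (patterns * obj).sum(axis=(1, 2))

# Second-order correlation: G(x, y) = < (B - <B>) (S(x, y) - <S>) >.
recon = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=1) / m
```

A retina-like variant would concentrate pattern resolution in a foveal region whose radius tracks the target's axial position, which is the paper's modification.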

21 pages, 11767 KiB  
Article
Crosstalk Correction for Color Filter Array Image Sensors Based on Lp-Regularized Multi-Channel Deconvolution
by Jonghyun Kim, Kyeonghoon Jeong and Moon Gi Kang
Sensors 2022, 22(11), 4285; https://doi.org/10.3390/s22114285 - 4 Jun 2022
Cited by 1 | Viewed by 5038
Abstract
In this paper, we propose a crosstalk correction method for color filter array (CFA) image sensors based on Lp-regularized multi-channel deconvolution. Most imaging systems with CFA exhibit a crosstalk phenomenon caused by the physical limitations of the image sensor. In general, this phenomenon produces both color degradation and spatial degradation, which are respectively called desaturation and blurring. To improve the color fidelity and the spatial resolution in crosstalk correction, the feasible solution of the ill-posed problem is regularized by image priors. First, the crosstalk problem with complex spatial and spectral degradation is formulated as a multi-channel degradation model. An objective function with a hyper-Laplacian prior is then designed for crosstalk correction. This approach enables the simultaneous improvement of the color fidelity and the sharpness restoration of the details without noise amplification. Furthermore, an efficient solver minimizes the objective function for crosstalk correction consisting of Lp regularization terms. The proposed method was verified on synthetic datasets according to various crosstalk and noise levels. Experimental results demonstrated that the proposed method outperforms the conventional methods in terms of the color peak signal-to-noise ratio and structural similarity index measure. Full article

14 pages, 4344 KiB  
Article
Color Demosaicing of RGBW Color Filter Array Based on Laplacian Pyramid
by Kyeonghoon Jeong, Jonghyun Kim and Moon Gi Kang
Sensors 2022, 22(8), 2981; https://doi.org/10.3390/s22082981 - 13 Apr 2022
Cited by 6 | Viewed by 3278
Abstract
In recent years, red, green, blue, and white (RGBW) color filter arrays (CFAs) have been developed to solve the problem of low-light conditions. In this paper, we propose a new color demosaicing algorithm for RGBW CFAs using a Laplacian pyramid. Because the white channel has a high correlation to the red, green, and blue channels, the white channel is interpolated first using each color difference channel. After we estimate the white channel, the red, green, and blue channels are interpolated using the Laplacian pyramid decomposition of the estimated white channel. Our proposed method using Laplacian pyramid restoration works with Canon-RGBW CFAs and any other periodic CFAs. The experimental results demonstrated that the proposed method shows superior performance compared with other conventional methods in terms of the color peak signal-to-noise ratio, structural similarity index measure, and average execution time. Full article
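A Laplacian pyramid stores band-pass detail at each scale plus a low-pass residual, and reconstructs exactly by upsampling and adding the bands back. A minimal sketch, using box-filter downsampling as a stand-in for the usual Gaussian kernel (not the paper's implementation):

```python
import numpy as np

def downsample(img):
    """2x2 box-filter downsampling (stand-in for Gaussian blur + decimate)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour upsampling back to twice the size."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose into band-pass levels plus a low-pass residual."""
    pyr = []
    for _ in range(levels):
        low = downsample(img)
        pyr.append(img - upsample(low))   # detail lost by downsampling
        img = low
    pyr.append(img)                       # low-pass residual
    return pyr

def reconstruct(pyr):
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = upsample(img) + band
    return img

img = np.arange(64, dtype=np.float64).reshape(8, 8)
pyr = laplacian_pyramid(img, 2)
rec = reconstruct(pyr)                    # exact reconstruction
```

In the proposed demosaicing scheme, the pyramid of the estimated white channel supplies the detail bands used to interpolate the sparser R, G, and B samples.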
