Search Results (16)

Search Parameters:
Keywords = Bayer pattern image

30 pages, 10022 KB  
Article
A Camera Calibration Method for Temperature Measurements of Incandescent Objects Based on Quantum Efficiency Estimation
by Vittorio Sala, Ambra Vandone, Michele Banfi, Federico Mazzucato, Stefano Baraldo and Anna Valente
Sensors 2025, 25(10), 3094; https://doi.org/10.3390/s25103094 - 14 May 2025
Viewed by 1008
Abstract
High-temperature thermal imaging enables monitoring and control of processes in metal, semiconductor, and ceramic manufacturing, and is also used to monitor volcanic activity and fight wildfires. Infrared thermal cameras require knowledge of the emissivity coefficient, while multispectral pyrometers provide fast and accurate temperature measurements with limited spatial resolution. Bayer-pattern cameras offer a compromise by capturing multiple spectral bands with high spatial resolution. However, temperature estimation from color remains challenging due to spectral overlaps among the color filters in the Bayer pattern, and a widely accepted calibration method is still missing. In this paper, the quantum efficiency of an imaging system, including the camera sensor, lens, and filters, is inferred from a sequence of images of a black-body source between 700 °C and 1100 °C. The physical model of the camera, based on Planck's law and the optimized quantum efficiency, allows calculation of the Planckian locus in the color space of the camera. A regression neural network, trained on a synthetic dataset representing the Planckian locus, predicts temperature pixel by pixel in the 700 °C to 3500 °C range from live images. Experiments with a color camera, a multispectral camera, and a furnace for heat treatment of metals as ground truth show that our calibration procedure yields temperature predictions with an accuracy and precision of a few tens of degrees Celsius in the calibration temperature range. Tests on a temperature-calibrated halogen bulb demonstrate good generalization to a wider temperature range and robustness to noise. Full article
(This article belongs to the Section Sensing and Imaging)
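The modeling step described in the abstract can be illustrated with a short sketch: given quantum-efficiency curves for the R, G, and B filters, the expected raw channel response for a black body at temperature T follows from integrating Planck's spectral radiance against each curve, which traces out the camera's Planckian locus. The Gaussian qe curves and numbers below are illustrative placeholders under stated assumptions, not the authors' calibrated values.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant [J*s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law), W * m^-3 * sr^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / np.expm1(b)

def gaussian_qe(wavelength_m, center_nm, width_nm, peak):
    """Hypothetical Gaussian quantum-efficiency curve for one color filter."""
    wl_nm = wavelength_m * 1e9
    return peak * np.exp(-0.5 * ((wl_nm - center_nm) / width_nm) ** 2)

def camera_rgb_response(temp_k):
    """Relative R, G, B raw signals for a black body at temp_k (arbitrary units)."""
    wl = np.linspace(400e-9, 700e-9, 601)        # visible band, 0.5 nm steps
    dwl = wl[1] - wl[0]
    radiance = planck_radiance(wl, temp_k)
    qe = {                                        # placeholder filter curves
        "R": gaussian_qe(wl, 600, 40, 0.45),
        "G": gaussian_qe(wl, 540, 40, 0.50),
        "B": gaussian_qe(wl, 470, 40, 0.40),
    }
    return {ch: float(np.sum(radiance * q) * dwl) for ch, q in qe.items()}

# Normalized points along the camera's Planckian locus in the calibration range
for t_c in (700, 900, 1100):
    resp = camera_rgb_response(t_c + 273.15)
    total = sum(resp.values())
    print(t_c, {ch: round(v / total, 3) for ch, v in resp.items()})
```

With curves like these, the normalized color coordinates shift from red-dominated toward more balanced channels as the temperature rises, which is the signal a regression network can exploit to invert color back to temperature.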

13 pages, 2440 KB  
Article
A RAW Image Noise Suppression Method Based on BlockwiseUNet
by Jing Xu, Yifeng Liu and Ming Fang
Electronics 2023, 12(20), 4346; https://doi.org/10.3390/electronics12204346 - 19 Oct 2023
Cited by 1 | Viewed by 2517
Abstract
Industrial cameras are subject to noise arising from the randomness of sensor components, scattering and polarization caused by optical defects, environmental factors, and other variables; this noise hinders image recognition and leads to errors in subsequent image processing. In this study, we propose a RAW image denoising method based on BlockwiseUNet. By enabling local feature extraction and fusion, this approach enhances the network's capability to capture and suppress noise across multiple scales. We conducted extensive experiments on the SIDD benchmark (Smartphone Image Denoising Dataset), and the PSNR/SSIM values reached 51.25/0.992, exceeding current mainstream denoising methods. Additionally, our method demonstrates robustness to different noise levels and exhibits good generalization performance across various datasets. Furthermore, our proposed approach also shows certain advantages on the DND benchmark (Darmstadt Noise Dataset). Full article
(This article belongs to the Special Issue Advances in Image Processing and Detection)
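For context on the reported figures, PSNR and SSIM can be computed with standard implementations; this is a minimal sketch assuming 8-bit RGB arrays and scikit-image's metrics, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(clean: np.ndarray, denoised: np.ndarray):
    """Return (PSNR in dB, SSIM) for two uint8 RGB images of equal shape."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
    ssim = structural_similarity(clean, denoised, data_range=255, channel_axis=-1)
    return psnr, ssim

# Synthetic example: a clean image and a noisy copy of it
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)
print(evaluate_pair(clean, noisy))
```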

17 pages, 8057 KB  
Article
Digital Image Correlation with a Prism Camera and Its Application in Complex Deformation Measurement
by Hao Hu, Boxing Qian, Yongqing Zhang and Wenpan Li
Sensors 2023, 23(12), 5531; https://doi.org/10.3390/s23125531 - 13 Jun 2023
Cited by 3 | Viewed by 2378
Abstract
Given the low accuracy of the traditional digital image correlation (DIC) method in complex deformation measurement, a color DIC method using a prism camera is proposed. Compared with a Bayer camera, a prism camera can capture color images with three channels of real (non-interpolated) information. In this paper, a prism camera is used to collect color images. Exploiting the rich information of the three channels, the classic grayscale image matching algorithm is improved for color speckle images. Considering the change in light intensity of the three channels before and after deformation, a matching algorithm that merges subsets from the three channels of a color image is derived, including integer-pixel matching, sub-pixel matching, and initial estimation of the light intensity. The advantage of this method in measuring nonlinear deformation is verified by numerical simulation. Finally, it is applied to a cylinder compression experiment. The method can also be combined with stereo vision to measure complex shapes by projecting color speckle patterns. Full article
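As a rough illustration of subset matching across color channels, the sketch below accumulates a zero-normalized cross-correlation score over the R, G, and B channels of a reference and deformed subset and runs a brute-force integer-pixel search. It is a generic matching criterion under simple assumptions, not the authors' merged-subset formulation with light-intensity estimation.

```python
import numpy as np

def zncc_color(ref_subset: np.ndarray, def_subset: np.ndarray) -> float:
    """Zero-normalized cross-correlation averaged over the three color channels.

    Both arguments are (H, W, 3) float arrays; higher values mean a better match.
    """
    score = 0.0
    for c in range(3):
        f = ref_subset[..., c] - ref_subset[..., c].mean()
        g = def_subset[..., c] - def_subset[..., c].mean()
        denom = np.sqrt((f * f).sum() * (g * g).sum())
        if denom > 0:
            score += (f * g).sum() / denom
    return score / 3.0

def integer_pixel_search(ref_img, def_img, center, half=10, search=5):
    """Brute-force integer-pixel search for the best-matching subset displacement."""
    y, x = center
    ref = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    best = (0, 0, -np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = def_img[y + dy - half:y + dy + half + 1,
                           x + dx - half:x + dx + half + 1]
            s = zncc_color(ref, cand)
            if s > best[2]:
                best = (dy, dx, s)
    return best

# Demo: the "deformed" image is the reference shifted by a known displacement
rng = np.random.default_rng(1)
ref_img = rng.random((100, 100, 3))
def_img = np.roll(ref_img, shift=(2, -1), axis=(0, 1))
print(integer_pixel_search(ref_img, def_img, center=(50, 50)))  # ~ (2, -1, 1.0)
```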

18 pages, 21137 KB  
Article
On-Orbit Relative Radiometric Calibration of the Bayer Pattern Push-Broom Sensor for Zhuhai-1 Video Satellites
by Litao Li, Zhen Li, Zhixin Wang, Yonghua Jiang, Xin Shen and Jiaqi Wu
Remote Sens. 2023, 15(2), 377; https://doi.org/10.3390/rs15020377 - 7 Jan 2023
Cited by 7 | Viewed by 3048
Abstract
The two video satellites of the second and third batches of Zhuhai-1 microsatellites (referred to as OVS-2A/3A) are operational together with the constellation's hyperspectral satellites, which improves the data acquisition capability of the Zhuhai-1 remote sensing satellite constellation. In contrast to linear-array push-broom hyperspectral satellites and planar-array CCD video satellites, the OVS satellite is equipped with a planar-array Bayer pattern sensor, which can obtain single-band grayscale images by push-broom imaging. Additionally, a Bayer color reconstruction algorithm can interpolate the sensor data to provide RGB color band information. Therefore, for the Bayer pattern push-broom sensor, the relative calibration methods for linear push-broom or array cameras cannot be directly applied. The radiometric calibration of the Bayer pattern push-broom imaging mode has thus become a matter of concern; this study developed a relative radiometric calibration method for the Bayer pattern push-broom sensor of the OVS satellite and verified its effectiveness and accuracy. OVS images were used to perform on-orbit relative radiometric calibration, and the calibration accuracy, including streaking metrics and root-mean-square error, was better than 1%, meeting the specification requirements for the OVS satellite. Visually, after calibration correction, the streaking and striping noise of the Bayer images was removed, and the radiometric quality of the images was considerably improved, providing a good data basis for subsequent research in remote sensing applications. Full article
(This article belongs to the Topic Micro/Nano Satellite Technology, Systems and Components)
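A simplified view of per-detector relative calibration: estimate a gain for each across-track detector so that column means match the scene mean, then quantify residual striping with a streaking metric. The formulas below follow generic push-broom conventions, not necessarily the exact definitions used for the OVS sensor.

```python
import numpy as np

def relative_gains(image: np.ndarray) -> np.ndarray:
    """Per-column relative gains for a push-broom band (rows = along-track lines)."""
    col_means = image.mean(axis=0)
    return col_means.mean() / col_means          # multiply each column by its gain

def apply_calibration(image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    return image * gains[np.newaxis, :]

def streaking_metric(image: np.ndarray) -> np.ndarray:
    """Streaking of each interior column relative to its two neighbors, in percent."""
    m = image.mean(axis=0)
    return 100.0 * np.abs(m[1:-1] - 0.5 * (m[:-2] + m[2:])) / m[1:-1]

# Synthetic example: uniform scene degraded by ~2% column-gain striping
rng = np.random.default_rng(0)
scene = np.full((512, 256), 1000.0) + rng.normal(0, 5, (512, 256))
striped = scene * (1.0 + rng.normal(0, 0.02, 256))[np.newaxis, :]
corrected = apply_calibration(striped, relative_gains(striped))
print(streaking_metric(striped).max(), streaking_metric(corrected).max())
```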

25 pages, 15128 KB  
Review
Compression for Bayer CFA Images: Review and Performance Comparison
by Kuo-Liang Chung, Hsuan-Ying Chen, Tsung-Lun Hsieh and Yen-Bo Chen
Sensors 2022, 22(21), 8362; https://doi.org/10.3390/s22218362 - 31 Oct 2022
Cited by 4 | Viewed by 5399
Abstract
Bayer color filter array (CFA) images are captured by a single-chip image sensor covered with a Bayer CFA pattern, which has been widely used in modern digital cameras. In the past two decades, many compression methods have been proposed to compress Bayer CFA images. These compression methods can be roughly divided into the compression-first-based (CF-based) scheme and the demosaicing-first-based (DF-based) scheme. However, no review of the two compression schemes and their compression performance has been reported in the literature. In this article, the related CF-based and DF-based compression works are reviewed first. Then, test Bayer CFA images created from the Kodak, IMAX, screen content image, video, and classical image datasets are compressed using the Joint Photographic Experts Group 2000 (JPEG 2000) standard and the newly released Versatile Video Coding (VVC) platform VTM-16.2. In terms of commonly used objective quality metrics, perceptual quality metrics, the perceptual effect, and the quality–bitrate tradeoff metric, the compression performance comparison of the CF-based compression methods, in particular the reversible color transform-based compression methods, and the DF-based compression methods is reported and discussed. Full article

12 pages, 3924 KB  
Communication
Bionic Birdlike Imaging Using a Multi-Hyperuniform LED Array
by Xin-Yu Zhao, Li-Jing Li, Lei Cao and Ming-Jie Sun
Sensors 2021, 21(12), 4084; https://doi.org/10.3390/s21124084 - 14 Jun 2021
Cited by 4 | Viewed by 3382
Abstract
Digital cameras obtain color information of a scene using a chromatic filter, usually a Bayer filter, overlaid on a pixelated detector. However, the periodic arrangement of both the filter array and the detector array introduces frequency aliasing in sampling and color misregistration during the demosaicking process, which degrade image quality. Inspired by the biological structure of avian retinas, we developed a chromatic LED array with a multi-hyperuniform geometric arrangement, which exhibits irregularity on small length scales but quasi-uniformity on large scales, to suppress frequency aliasing and color misregistration in full-color image retrieval. Experiments were performed with a single-pixel imaging system using the multi-hyperuniform chromatic LED array to provide structured illumination, and a frame rate of 208 fps was achieved at 32 × 32 pixel resolution. By comparing the experimental results with images captured with a conventional digital camera, it is demonstrated that the proposed imaging system forms images with fewer chromatic moiré patterns and color misregistration artifacts. The concept proposed and verified here could provide insights for the design and manufacturing of future bionic imaging sensors. Full article

14 pages, 4350 KB  
Article
3D DCT Based Image Compression Method for the Medical Endoscopic Application
by Jiawen Xue, Li Yin, Zehua Lan, Mingzhu Long, Guolin Li, Zhihua Wang and Xiang Xie
Sensors 2021, 21(5), 1817; https://doi.org/10.3390/s21051817 - 5 Mar 2021
Cited by 20 | Viewed by 3395
Abstract
This paper proposes a novel 3D discrete cosine transform (DCT)-based image compression method for medical endoscopic applications. Due to the high correlation among the color components of wireless capsule endoscopy (WCE) images, the original 2D Bayer data pattern is reconstructed into a new 3D data pattern, and the 3D DCT is adopted to compress the 3D data for a high compression ratio and high quality. To keep the computational complexity of the 3D DCT low, an optimized 4-point DCT butterfly structure without multiplication operations is proposed. Owing to the unique characteristics of the 3D data pattern, the quantization and zigzag scan are adapted accordingly. To further improve the visual quality of decompressed images, a frequency-domain filter is proposed to eliminate blocking artifacts adaptively. Experiments show that our method attains an average compression ratio (CR) of 22.94:1 with a peak signal-to-noise ratio (PSNR) of 40.73 dB, which outperforms state-of-the-art methods. Full article
(This article belongs to the Section Sensing and Imaging)
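To illustrate the general idea of transform coding on stacked Bayer data, this sketch groups the four sub-channels (R, Gr, Gb, B) of a 16x16 RGGB tile into an 8x8x4 block and applies a separable 3D DCT with SciPy, keeping only the largest coefficients. The paper's exact 3D data arrangement, multiplier-free butterfly, quantization, and zigzag scan are not reproduced; this is only a toy demonstration under stated assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def bayer_to_3d_block(bayer_tile: np.ndarray) -> np.ndarray:
    """Stack the R, Gr, Gb, B sub-samples of a 16x16 RGGB Bayer tile into an 8x8x4 block.

    Assumes an RGGB layout: R at (0,0), Gr at (0,1), Gb at (1,0), B at (1,1).
    """
    r  = bayer_tile[0::2, 0::2]
    gr = bayer_tile[0::2, 1::2]
    gb = bayer_tile[1::2, 0::2]
    b  = bayer_tile[1::2, 1::2]
    return np.stack([r, gr, gb, b], axis=-1).astype(np.float64)

def compress_block(block: np.ndarray, keep: int = 64) -> np.ndarray:
    """Toy 3D-DCT 'compression': keep only the `keep` largest-magnitude coefficients."""
    coeffs = dctn(block, norm="ortho")
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
tile = rng.integers(0, 1024, size=(16, 16)).astype(np.float64)  # 10-bit Bayer tile
block = bayer_to_3d_block(tile)
recon = compress_block(block, keep=64)
print(f"kept 64/256 coefficients, MSE = {np.mean((block - recon) ** 2):.1f}")
```

Stacking the correlated color planes along a third axis is what lets the transform decorrelate them jointly, which is the rationale given in the abstract for moving from 2D to 3D DCT.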

12 pages, 2688 KB  
Letter
Effective Three-Stage Demosaicking Method for RGBW CFA Images Using The Iterative Error-Compensation Based Approach
by Kuo-Liang Chung, Tzu-Hsien Chan and Szu-Ni Chen
Sensors 2020, 20(14), 3908; https://doi.org/10.3390/s20143908 - 14 Jul 2020
Cited by 9 | Viewed by 3817
Abstract
As color filter array (CFA) 2.0, the RGBW CFA pattern, in which each CFA pixel contains only one R, G, B, or W color value, provides more luminance information than the Bayer CFA pattern. Demosaicking RGBW CFA images I_RGBW is necessary in order to provide high-quality RGB full-color images as the target images for human perception. In this letter, we propose a three-stage demosaicking method for I_RGBW. In the first stage, a cross-shape-based color difference approach is proposed to interpolate the missing W color pixels in the W color plane of I_RGBW. In the second stage, an iterative error-compensation-based demosaicking process is proposed to improve the quality of the demosaiced RGB full-color image. In the third stage, taking the input image I_RGBW as the ground-truth RGBW CFA image, an I_RGBW-based refinement process is proposed to refine the quality of the demosaiced image obtained in the second stage. Based on testing RGBW images collected from the Kodak and IMAX datasets, comprehensive experimental results illustrate that the proposed three-stage demosaicking method achieves substantial quality and perceptual improvements relative to the previous method by Hamilton and Compton and two state-of-the-art methods: Kwan et al.'s pansharpening-based method and Kwan and Chou's deep learning-based method. Full article

44 pages, 43692 KB  
Article
Demosaicing of CFA 3.0 with Applications to Low Lighting Images
by Chiman Kwan, Jude Larkin and Bulent Ayhan
Sensors 2020, 20(12), 3423; https://doi.org/10.3390/s20123423 - 17 Jun 2020
Cited by 8 | Viewed by 6521
Abstract
Low lighting images usually contain Poisson noise, which is pixel-amplitude-dependent. More panchromatic or white pixels in a color filter array (CFA) are believed to help demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0 that has 75% white pixels, 12.5% green pixels, and 6.25% each of red and blue pixels. We then present algorithms to demosaic this CFA and demonstrate its performance for normal and low lighting images. In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs, namely the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low lighting conditions by introducing Poisson noise into the clean images. In our experiments, normal and low lighting images were used. For the low lighting conditions, images with signal-to-noise ratios (SNRs) of 10 dB and 20 dB were studied. We observed that the demosaicing performance in low lighting conditions is improved when there are more white pixels. Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that CFA 3.0 performs better than CFA 1.0, but slightly worse than CFA 2.0, for low lighting images. Full article
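A common way to emulate photon-limited low-light capture is to scale a clean image to a photon level whose Poisson statistics yield a chosen SNR, sample, and scale back. The sketch below follows that generic recipe; the authors' exact noise-injection procedure may differ.

```python
import numpy as np

def add_poisson_noise(img: np.ndarray, target_snr_db: float, rng=None) -> np.ndarray:
    """Emulate low-light capture of a float image in [0, 1] at a target SNR.

    SNR is defined as mean signal power over mean noise variance; for Poisson
    noise, the per-pixel variance equals the per-pixel mean photon count.
    """
    rng = rng or np.random.default_rng()
    u = np.clip(img, 1e-6, 1.0)
    # Photon scale s chosen so that  s * mean(u^2) / mean(u)  hits the target SNR.
    s = 10.0 ** (target_snr_db / 10.0) * u.mean() / (u ** 2).mean()
    noisy_counts = rng.poisson(u * s)
    return np.clip(noisy_counts / s, 0.0, 1.0)

def measured_snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    noise = noisy - clean
    return 10.0 * np.log10((clean ** 2).mean() / (noise ** 2).mean())

rng = np.random.default_rng(0)
clean = rng.random((256, 256))               # stand-in for a clean Kodak image
for snr in (10, 20):
    noisy = add_poisson_noise(clean, snr, rng)
    print(snr, round(measured_snr_db(clean, noisy), 1))
```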

14 pages, 69145 KB  
Article
Joint Demosaicing and Denoising Based on a Variational Deep Image Prior Neural Network
by Yunjin Park, Sukho Lee, Byeongseon Jeong and Jungho Yoon
Sensors 2020, 20(10), 2970; https://doi.org/10.3390/s20102970 - 24 May 2020
Cited by 14 | Viewed by 5125
Abstract
A joint demosaicing and denoising task refers to simultaneously reconstructing and denoising a color image from a patterned image obtained by a monochrome image sensor with a color filter array. Recently, inspired by the success of deep learning in many image processing tasks, there has been research on applying convolutional neural networks (CNNs) to joint demosaicing and denoising. However, such CNNs require a large amount of training data and work well only for patterned images with the same noise level as the images they were trained on. In this paper, we propose a variational deep image prior network for joint demosaicing and denoising which can be trained on a single patterned image and works for patterned images with different levels of noise. We also propose a new RGB color filter array (CFA) which works better with the proposed network than the conventional Bayer CFA. Mathematical justifications of why the variational deep image prior network suits the task of joint demosaicing and denoising are also given, and experimental results verify the performance of the proposed method. Full article
(This article belongs to the Special Issue Digital Imaging with Multispectral Filter Array (MSFA) Sensors)

58 pages, 51056 KB  
Article
Demosaicing of Bayer and CFA 2.0 Patterns for Low Lighting Images
by Chiman Kwan and Jude Larkin
Electronics 2019, 8(12), 1444; https://doi.org/10.3390/electronics8121444 - 1 Dec 2019
Cited by 14 | Viewed by 8246
Abstract
It is commonly believed that having more white pixels in a color filter array (CFA) will help the demosaicing performance for images collected in low lighting conditions. However, to the best of our knowledge, a systematic study to demonstrate the above statement does not exist. We present a comparative study to systematically and thoroughly evaluate the performance of demosaicing for low lighting images using two CFAs: the standard Bayer pattern (aka CFA 1.0) and the Kodak CFA 2.0 (RGBW pattern with 50% white pixels). Using the clean Kodak dataset containing 12 images, we first emulated low lighting images by injecting Poisson noise at two signal-to-noise ratio (SNR) levels: 10 dB and 20 dB. We then created CFA 1.0 and CFA 2.0 images from the noisy images. After that, we applied more than 15 conventional and deep learning based demosaicing algorithms to demosaic the CFA patterns. Using both objective evaluation with five performance metrics and subjective visualization, we observe that having more white pixels indeed helps the demosaicing performance in low lighting conditions. This thorough comparative study is our first contribution. With denoising, we observed that the demosaicing performance of both CFAs improved by several dB. This can be considered our second contribution. Moreover, we noticed that denoising before demosaicing is more effective than denoising after demosaicing. Answering the question of where denoising should be applied is our third contribution. We also noticed that denoising plays a slightly more important role at 10 dB SNR than at 20 dB SNR. Some discussion of the following phenomena is also included: (1) why CFA 2.0 performed better than CFA 1.0; (2) why denoising was more effective before demosaicing than after demosaicing; and (3) why denoising helped more at low SNRs than at high SNRs. Full article
(This article belongs to the Section Circuit and Signal Processing)

13 pages, 6424 KB  
Article
Weights-Based Image Demosaicking Using Posteriori Gradients and the Correlation of R–B Channels in High Frequency
by Meidong Xia, Chengyou Wang and Wenhan Ge
Symmetry 2019, 11(5), 600; https://doi.org/10.3390/sym11050600 - 26 Apr 2019
Cited by 2 | Viewed by 4332
Abstract
In this paper, we propose a weights-based image demosaicking algorithm based on the Bayer pattern color filter array (CFA). When reconstructing the missing G components, the proposed algorithm uses weights based on posteriori gradients to mitigate color artifacts and distortions. Furthermore, the proposed algorithm makes full use of the correlation of the R–B channels in the high-frequency domain when interpolating R/B values at B/R positions. Experimental results show that the proposed algorithm is superior to previous similar algorithms in terms of composite peak signal-to-noise ratio (CPSNR) and subjective visual effect. The biggest advantage of the proposed algorithm is its use of posteriori gradients and the correlation of the R–B channels in the high-frequency domain. Full article
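The core idea of gradient-weighted G interpolation at a Bayer R/B site can be sketched as follows: blend the horizontal and vertical neighbor averages with weights inversely proportional to the local gradient in each direction, so interpolation follows edges rather than crossing them. The weight definition here is a generic illustration, not the paper's posteriori-gradient formulation.

```python
import numpy as np

def interp_green_at_rb(cfa: np.ndarray, y: int, x: int, eps: float = 1e-6) -> float:
    """Estimate the missing G value at an R or B site of an RGGB Bayer mosaic.

    Horizontal and vertical estimates are blended with weights inversely
    proportional to the local gradient in each direction (illustrative weights).
    """
    grad_h = abs(cfa[y, x - 1] - cfa[y, x + 1]) + abs(2 * cfa[y, x] - cfa[y, x - 2] - cfa[y, x + 2])
    grad_v = abs(cfa[y - 1, x] - cfa[y + 1, x]) + abs(2 * cfa[y, x] - cfa[y - 2, x] - cfa[y + 2, x])
    est_h = (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0
    est_v = (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0
    w_h = 1.0 / (grad_h + eps)
    w_v = 1.0 / (grad_v + eps)
    return (w_h * est_h + w_v * est_v) / (w_h + w_v)

# Demo on a synthetic mosaic with a vertical edge between columns 3 and 4
img = np.ones((8, 8)) * 100.0
img[:, 4:] = 200.0
print(round(interp_green_at_rb(img, y=4, x=4), 1))
# ~ 200.0: the vertical direction (along the edge) dominates instead of blurring across it
```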

21 pages, 6704 KB  
Brief Report
Comparison of Deep Learning and Conventional Demosaicing Algorithms for Mastcam Images
by Chiman Kwan, Bryan Chou and James F. Bell III
Electronics 2019, 8(3), 308; https://doi.org/10.3390/electronics8030308 - 11 Mar 2019
Cited by 16 | Viewed by 5896
Abstract
Bayer pattern filters have been used in many commercial digital cameras. In the National Aeronautics and Space Administration's (NASA) mast camera (Mastcam) imaging system onboard the Mars Science Laboratory (MSL) rover Curiosity, a Bayer pattern filter is used to capture the RGB (red, green, and blue) color of scenes on Mars. The Mastcam has two cameras: left and right. The right camera has three times the resolution of the left. It is well known that demosaicing introduces color and zipper artifacts. Here, we present a comparative study of demosaicing results using conventional and deep learning algorithms. Sixteen left and fifteen right Mastcam images were used in our experiments. Due to a lack of ground truth images for Mastcam data from Mars, we compared the various algorithms using a blind image quality assessment model. It was observed that no single algorithm works best for all images. In particular, a deep learning-based algorithm worked best for the right Mastcam images and a conventional algorithm achieved the best results for the left Mastcam images. Moreover, subjective evaluation of five demosaiced Mastcam images was also used to compare the various algorithms. Full article
(This article belongs to the Section Computer Science & Engineering)

18 pages, 21272 KB  
Article
Sensitivity and Resolution Improvement in RGBW Color Filter Array Sensor
by Seunghoon Jee, Ki Sun Song and Moon Gi Kang
Sensors 2018, 18(5), 1647; https://doi.org/10.3390/s18051647 - 21 May 2018
Cited by 11 | Viewed by 9396
Abstract
Recently, several red-green-blue-white (RGBW) color filter arrays (CFAs), which include highly sensitive W pixels, have been proposed. However, RGBW CFA patterns suffer from spatial resolution degradation because the sensor contains more color components than the Bayer CFA pattern. RGBW CFA demosaicing methods reconstruct resolution using the correlation between white (W) pixels and pixels of other colors, but this does not raise the red-green-blue (RGB) channel sensitivity to the W channel level. In this paper, we therefore propose a demosaiced-image post-processing method to improve RGBW CFA sensitivity and resolution. The proposed method decomposes texture components containing image noise and resolution information. The RGB channel sensitivity and resolution are improved by updating the texture components of the RGB channels with that of the W channel. For this process, a cross multilateral filter (CMF) is proposed. It separates the smoothness component from the texture component using color difference information and distinguishes color components through that information. Moreover, it decomposes texture components, luminance noise, color noise, and color aliasing artifacts from the demosaiced images. Finally, by updating the texture of the RGB channels with the W channel texture components, the proposed algorithm improves sensitivity and resolution. Results show that the proposed method is effective, maintaining W pixel resolution characteristics while improving sensitivity, in terms of the signal-to-noise ratio, by approximately 4.5 dB. Full article
(This article belongs to the Special Issue Image Sensors)

15 pages, 2881 KB  
Article
An Effective Directional Residual Interpolation Algorithm for Color Image Demosaicking
by Ke Yu, Chengyou Wang, Sen Yang, Zhiwei Lu and Dan Zhao
Appl. Sci. 2018, 8(5), 680; https://doi.org/10.3390/app8050680 - 26 Apr 2018
Cited by 5 | Viewed by 7147
Abstract
In this paper, we propose an effective directional Bayer color filter array (CFA) demosaicking algorithm based on residual interpolation (RI). The proposed directional interpolation algorithm aims to reduce computational complexity and obtain more accurate interpolated pixel values in complex edge areas. We use horizontal and vertical weights to combine and smooth the color difference estimations. Compared with the four directional weights in minimized-Laplacian residual interpolation, the proposed algorithm not only maintains color image quality but also reduces computational complexity. In general, the directional estimations may be calculated inaccurately because of false edge information in irregular edges. We alleviate this by using a new method to calculate the directional color difference estimations. Experimental results show that the proposed algorithm provides outstanding performance compared with previous algorithms, especially in complex edge areas. In addition, it has lower computational complexity and better visual effect. Full article
