Article

Image Vignetting Correction Using a Deformable Radial Polynomial Model

1 Department of Data Science and Engineering, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
2 Biotechnology Center, Silesian University of Technology, Bolesława Krzywoustego 8, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1157; https://doi.org/10.3390/s23031157
Submission received: 10 December 2022 / Revised: 10 January 2023 / Accepted: 17 January 2023 / Published: 19 January 2023
(This article belongs to the Special Issue Machine Vision Based Sensing and Imaging Technology)

Abstract

Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable effect; therefore, when it is impossible to fully prevent its occurrence, it is necessary to use computational methods for its correction. In probably the most frequently used approach to vignetting correction, that is, the flat-field correction, the use of an appropriate vignetting model plays a pivotal role. The radial polynomial (RP) model is commonly used, but for its proper use, the actual vignetting of the analyzed lens-camera system has to be a radial function. However, this condition is not fulfilled by many systems. More universal models of vignetting exist; however, they are much more sophisticated than the RP model. In this article, we propose a new model of vignetting, named the Deformable Radial Polynomial (DRP) model, which joins the simplicity of the RP model with the universality of the more sophisticated models. The DRP model uses a simple distance transformation and a minimization method to match the radial vignetting model to the non-radial vignetting of the analyzed lens-camera system. A real-data experiment confirms that the DRP model, in general, gives better results (up to 35% or 50%, depending on the measure used) than the RP model.

1. Introduction

Image vignetting, also called image shading, is a phenomenon in which the image brightness decreases from the optical center of the image toward its edges. The characteristic of vignetting depends on the parameters, mainly geometrical and optical, of the lens-camera system used and on the parameters chosen during image acquisition, such as the aperture size (f-number) and the lens focal length (when a varifocal or zoom lens is used). Vignetting is usually unintentional and undesired. The appearance of this effect is particularly undesirable when there is a need for radiometric or quantitative image analysis, which is very common in different areas, e.g., astronomy [1,2]; microscopy [3,4,5,6]; and remote sensing applications using terrestrial [7,8], airborne [9,10,11,12,13] and spaceborne sensors [14,15], to name just a few of them. This phenomenon is also undesirable in the case of the use of computational imaging algorithms, such as the creation of high dynamic range (HDR) images [16,17], the stitching of static images to create panoramic [18,19,20] or mosaic images [3,4,5,6,21], as well as a panoramic real-time view [22]. Vignetting also affects the results of image analysis, including the results obtained using neural networks [23,24].
The best way to reduce vignetting is to remove its causes. This aim can be reached, e.g., by using a lens with appropriate characteristics or by setting appropriate exposure parameters. However, in practice, such actions are not always possible or may not produce the desired results. In such cases, hardware solutions can be supported by the usage of computational vignetting correction methods. Based on the causes of vignetting, there exist four main types of vignetting [25,26], that is:
  • Mechanical vignetting—light rays are physically blocked on their way from the scene through the lens to the camera sensor; the result is a complete loss of information in certain areas of the image and a lack of the data needed by computational vignetting correction methods.
  • Optical vignetting—related to the optical characteristics of the lens used; these characteristics can be changed by changing the lens aperture size [27].
  • Natural vignetting—refers to the loss of image brightness caused by the change of the viewing angle $\Theta$ for individual image pixels; it is modeled by the $\cos^4(\Theta)$ law [28].
  • Pixel vignetting—related to the geometrical size and the optical design (when microlenses are used) of the camera image sensor [29].
The computational vignetting correction methods can be used to correct the last three types; they cannot be used to correct mechanical vignetting. Moreover, there is no easy way to correct each vignetting type independently; in practice, however, there is no need to distinguish the causes of vignetting, because all the mentioned types (apart from mechanical vignetting, of course) are corrected jointly in a single procedure. It is also worth noting that a vignetting correction whose parameters are very often calculated using reference images acquired from a close distance can be directly used to correct images obtained for distant objects—such a procedure is typical, e.g., for remote sensing applications. This advantage is due to the fact that the actual vignetting characteristic, for fixed parameters of the lens-camera system used, does not depend on the distance between the imaging system and an object, but on the viewing angle at which this object is visible from this system.
Over the years, many different approaches to the vignetting correction problem have been presented in the literature. The vignetting correction methods can be roughly divided into two groups: methods that do not use a reference image or images, and methods that use such data. The methods belonging to the first group use, e.g.:
  • Physically based models of vignetting [28,30]—the usage of these methods requires detailed knowledge about the parameters of the lens-camera system used, which is often unavailable; however, this approach is very useful during the design process of an optical system.
  • A single image [31,32,33,34] or a sequence of images [18,19,25,26] of a natural scene or scenes to estimate the vignetting—in the case of these methods, the vignetting estimation is obtained as a result of the minimization of an objective function under the assumption that vignetting is a radial function, which limits the number of lens-camera systems for which these methods can be used. The effectiveness of these methods also depends strongly on many other factors, such as the precision of the localization of corresponding pixels and the uniformity of the analyzed scene, which limits their applicability. However, and this is a significant advantage, these methods can be used for already acquired images when the acquisition of new reference images is not possible; such situations are common in the case of, e.g., historical images.
The second group includes, among others, probably the most frequently used methods, that is, those based on the idea of the “flat-field correction” approach.
In the “flat-field” approach, the vignetting is estimated based on a reference image of vignetting $I_V$, which shows a uniformly illuminated flat surface of uniform color. This approach assumes that the main source of brightness differences in $I_V$ is the vignetting of the lens-camera system used; however, because the $I_V$ image is acquired by a real lens-camera system, it also contains noise. The formation of the $I_V$ image can be described by the following formula
$$I_V(i,j) = V(i,j) \cdot I_{flat}(i,j) + \varepsilon(i,j), \tag{1}$$
where $V$ is the real vignetting of the analyzed lens-camera system, $V(i,j) \in (0,1]$ (where 1 means no vignetting at pixel $(i,j)$ and 0 means complete blocking of light), $I_{flat}$ is an ideal image of a scene with a reference flat surface, $\varepsilon(i,j)$ is the image noise, and $(i,j)$ are, respectively, the horizontal and vertical pixel coordinates. The vignetting estimate $\tilde{V}$ is established during the approximation process, denoted as $\mathrm{approx}(\cdot)$, using the assumed vignetting model $VM$ and the image $I_V$ as follows
$$\tilde{V}(i,j) = \mathrm{approx}\left(VM, I_V(i,j)\right), \tag{2}$$
where $\tilde{V} \in (0,1]$. The corrected version of an acquired image $I$, that is, an image $\tilde{I}$, is obtained using the following formula
$$\tilde{I}(i,j) = I(i,j) \cdot \tilde{V}(i,j)^{-1}. \tag{3}$$
For the best correction results, the images $I_V$ and $I$ should be acquired using the same lens-camera system with the same system parameters (e.g., focal length for zoom lenses) and exposure parameters (e.g., aperture f-number). It should also be mentioned that, for the most accurate vignetting correction, this procedure should be preceded by a dark frame correction, and the camera used for the acquisition of the $I_V$ and $I$ images should have a linear characteristic, or both images should be linearized; that is, the relationship between the R, G, B values of pixel $(i,j)$ and the luminous flux incoming to the corresponding pixels of the camera sensor should be proportional.
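For illustration, the correction step of Equation (3) reduces to an element-wise division, which can be sketched in MATLAB (the language of the implementation described in Section 3.3) as follows; the function name and variables are illustrative, not taken from the authors' code.

```matlab
% Flat-field correction sketch, Eq. (3): divide the acquired image by the
% vignetting estimate element-wise; V_est is assumed to lie in (0, 1].
function I_corr = correctVignetting(I, V_est)
    I_corr = double(I) ./ V_est;   % I_tilde(i,j) = I(i,j) * V_tilde(i,j)^(-1)
end
```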
Adjusting the flat-field method to the analyzed system is a matter of using a vignetting model that is appropriate for the actual vignetting occurring in this system. In the literature, different parametric vignetting models have been presented, for example, the polynomial 2D (P2D) model [25,35,36], the exponential 2D polynomial model [35], the smooth non-iterative local polynomial (SNILP) model [37], the radial polynomial (RP) model [38], the hyperbolic cosine model [39], and the Gaussian function [40]. The last three models belong to a widely used approach, which assumes that the actual vignetting $V$ of the lens-camera system has cylindrical symmetry. This means that the vignetting $V(i,j)$ can be modeled using a radial function $\tilde{V}(r)$, that is, a function of $r$, where
$$r = \sqrt{(i - x_C)^2 + (j - y_C)^2} \tag{4}$$
is the distance between the pixel with coordinates $(i,j)$ and the optical center $C$ of the lens-camera system, with coordinates $(x_C, y_C)$.
The assumption of radial vignetting simplifies the search for the vignetting estimate because the approximation of a 2D function is replaced by a much simpler approximation of a 1D function. The consequence of using a simpler approximation function (vignetting model) is a reduction in the number of parameters needed to determine the vignetting estimate; for example, the P2D model needs $(s^2 + 3s + 2)/2$ parameters, whereas the RP model needs only $s + 3$, where $s$ is the degree of the approximation function used. The radial vignetting assumption is convenient and therefore very popular; however, because not every lens-camera system satisfies it, this approach to vignetting correction is not universal and should be used carefully. The radial vignetting assumption is not satisfied by a large group of imaging systems, such as industrial lenses designed with vignetting reduction in mind, perspective-control and focal-plane-control lenses (shift and tilt lenses), anamorphic lenses, etc.
Analyzing the information presented above, it can be concluded that the literature lacks model proposals that combine the simplicity of use and the small number of parameters characteristic of the RP model with the universality of, for example, the P2D or SNILP models. In this article, we present a novel model of vignetting, the Deformable Radial Polynomial (DRP) model, which is our attempt to fill this gap. The idea of the new model is based on the observation that the vignetting in many lens-camera systems is not ideally radial, but is rather a radial vignetting “squeezed” in one direction. The contribution of the DRP method is the use of a simple distance-calculation function, a slight modification of (4), to transform the non-radial vignetting observed in the image space into a radial vignetting in the distance space. This step allows, at the price of adding only one parameter describing the non-radiality of the vignetting, the use of a radial vignetting model (e.g., a 1D polynomial model, as in the case of the DRP model) for modeling the actual non-radial vignetting of a real lens-camera system. To verify this idea, the vignetting correction results obtained from the DRP model were compared with the results obtained using the RP and P2D models, which are well known from the literature, as well as the state-of-the-art SNILP model. In the comparison, data from five webcams with different vignetting characteristics, including different degrees of non-radiality of their vignetting, were used. The results obtained show that the DRP model, in general, gives better results than the RP model; in some cases, these results are up to 35% or 50% better, depending on the measure used.
The rest of the article is organized as follows. The DRP model of vignetting is described in Section 2. Section 3 presents the conditions of the real-data model comparison, as well as its results. The discussion of the results is given in Section 4. Finally, Section 5 contains the conclusions and suggestions for future research.

2. The Deformable Radial Polynomial Model of Vignetting

As mentioned above, the concept of estimating vignetting using the RP model is very simple: it is assumed that the vignetting has cylindrical symmetry. Therefore, when the vignetting problem is analyzed in the image plane, the vignetting value has rotational symmetry with respect to the optical image center $C$. This means that the vignetting model also has rotational symmetry, which leads to the use of a radial function as the vignetting model. In such a case, all pixels at the same distance $r$ from $C$ have the same vignetting value $v(r)$; hence, the initial 2D problem of finding the vignetting estimate can be solved as a much simpler 1D approximation problem. Thus, when the 1D polynomial function of degree $s$, denoted as $\mathrm{PolyReg}_s(\cdot)$, is chosen as the 1D approximation function, the radial polynomial (RP) model is obtained, which can be formally written as follows
$$\tilde{V} = \mathrm{PolyReg}_s(v(r)). \tag{5}$$
To solve this problem, that is, to find the parameters of the approximation polynomial, the ordinary least squares (OLS) method can be used, assuming that $v(r)$ does not contain outliers.
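Under these assumptions (known optical center, no outliers), the RP model of Equations (4) and (5) can be sketched in MATLAB with polyfit and polyval, the same functions used in the implementation described in Section 3.3; the function below is our illustration, not the authors' code, and includes the final normalization to (0, 1] for completeness.

```matlab
% RP model sketch, Eqs. (4)-(5): OLS fit of a degree-s polynomial to the
% pixel values of I_V treated as a function of the radial distance r.
function V_est = rpModel(I_V, xC, yC, s)
    [nRows, nCols] = size(I_V);
    [X, Y] = meshgrid(1:nCols, 1:nRows);    % X: horizontal (i), Y: vertical (j)
    r = sqrt((X - xC).^2 + (Y - yC).^2);    % Eq. (4)
    p = polyfit(r(:), double(I_V(:)), s);   % OLS solution of PolyReg_s
    V_est = reshape(polyval(p, r(:)), nRows, nCols);
    V_est = V_est ./ max(V_est(:));         % normalize to (0, 1]
end
```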
When analyzing the images of vignetting $I_V$ (Figure 1), it can be seen that the vignetting often is not a truly radial function, but rather a “squeezed” radial function. From this observation, the founding idea of the DRP model has been derived: the distance function used to calculate the distance $r$ between the pixel $(i,j)$ and the optical center $C$ with coordinates $(x_C, y_C)$ is modified so that it transforms the non-radial vignetting observed in the image space into a radial vignetting in the distance space $r$. This simple transformation allows the use of a radial vignetting function, which in the case of the DRP model is a 1D polynomial function, for modeling the actual non-radial vignetting of a lens-camera system.
The method for a simple realization of the non-radial to radial vignetting transformation is taken from the method of modeling non-radial vignetting presented in [37]. According to the solution presented there, to accomplish such a transformation it is sufficient to slightly modify Equation (4) by a parameter $\eta$, as follows
$$r_\eta = \sqrt{(i - x_C)^2 + \eta (j - y_C)^2}. \tag{6}$$
The coefficient $\eta$ can be treated as a measure of the non-radiality of the vignetting: for ideal radial vignetting, $\eta = 1$, and for non-radial vignetting, $\eta \neq 1$; e.g., for $\eta > 1$, the vignetting in the vertical ($y$) direction is stronger than in the horizontal ($x$) direction.
The value of the coefficient $\eta$ must be estimated. It can be found by solving the following minimization problem
$$\eta^* = \arg\min_{\eta} \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( v\left(r_\eta(i,j)\right) - \mathrm{PolyReg}_s\left(v\left(r_\eta(i,j)\right)\right) \right)^2 \tag{7}$$
for the given coordinates $(x_C, y_C)$ of the optical center $C$ of the image; as mentioned earlier, the parameters $p$ of $\mathrm{PolyReg}_s$ can be found effectively using the OLS method. When the optimal value $\eta^*$ is determined, $\tilde{V}$ is calculated using the parameters $p$ obtained from $\mathrm{PolyReg}_s\left(v\left(r_\eta(i,j)\right)\right)$ for $\eta^*$. Because the acquisition of $I_V$ should be carried out in such a way that no saturation occurs in this image, $\max(I_V) < 1$, assuming that the pixel values are represented in the normalized range $[0,1]$. This leads to a situation where $\max(\tilde{V}) < 1$; hence, it is necessary to normalize the range of $\tilde{V}$ to $(0,1]$, which is performed as follows
$$\tilde{V}(i,j) := \frac{\tilde{V}(i,j)}{\max_{i,j} \tilde{V}(i,j)}, \tag{8}$$
where $:=$ is the assignment operator. The obtained $\tilde{V}$ is the final result of the vignetting estimation using the DRP model.
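A minimal MATLAB sketch of this variant of the model (DRP-1, with a known optical center) is given below. Like the implementation described in Section 3.3, it combines fminunc with an inner polyfit, but the structure and names are our own illustration.

```matlab
% DRP-1 sketch, Eqs. (6)-(8): optimize the non-radiality coefficient eta for
% a known optical center (xC, yC); fminunc requires the Optimization Toolbox.
function [V_est, eta] = drp1Model(I_V, xC, yC, s)
    [nRows, nCols] = size(I_V);
    [X, Y] = meshgrid(1:nCols, 1:nRows);   % X: horizontal (i), Y: vertical (j)
    v = double(I_V(:));
    eta = fminunc(@(e) fitCost(e, X, Y, xC, yC, v, s), 1);  % Eq. (7), eta0 = 1
    r = sqrt((X(:) - xC).^2 + eta * (Y(:) - yC).^2);        % Eq. (6)
    p = polyfit(r, v, s);
    V_est = reshape(polyval(p, r), nRows, nCols);
    V_est = V_est ./ max(V_est(:));                         % Eq. (8)
end

function mse = fitCost(eta, X, Y, xC, yC, v, s)
    r = sqrt((X(:) - xC).^2 + eta * (Y(:) - yC).^2);  % deformed distance, Eq. (6)
    p = polyfit(r, v, s);                             % inner OLS fit
    mse = mean((v - polyval(p, r)).^2);               % objective of Eq. (7)
end
```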
In the algorithm described above, the coordinates of $C$ must be known, or at least estimated, before the DRP model can be used. However, these coordinates can also be estimated by solving the following multivariate minimization problem
$$(\eta^*, x_C^*, y_C^*) = \arg\min_{\eta, x_C, y_C} \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( v\left(r_{\eta, x_C, y_C}(i,j)\right) - \mathrm{PolyReg}_s\left(v\left(r_{\eta, x_C, y_C}(i,j)\right)\right) \right)^2. \tag{9}$$
In this case, the final value of $\tilde{V}$ is calculated analogously to the previous variant of the DRP model, that is, using the parameters $p$ of $\mathrm{PolyReg}_s\left(v\left(r_{\eta, x_C, y_C}(i,j)\right)\right)$ estimated for the obtained $(\eta^*, x_C^*, y_C^*)$, and then normalized according to (8).
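The joint variant (DRP-2) changes only the argument of the minimization; a sketch reusing fitCost and the coordinate grids X, Y from the previous listing could look as follows, with the initial values matching those reported in Section 3.3.

```matlab
% DRP-2 sketch, Eq. (9): jointly optimize eta and the optical center;
% the parameter vector is t = [eta, xC, yC].
t0   = [1, nCols / 2, nRows / 2];                    % eta0 = 1, image center
tOpt = fminunc(@(t) fitCost(t(1), X, Y, t(2), t(3), v, s), t0);
% tOpt = [eta*, xC*, yC*]; V_est is then computed and normalized as in DRP-1.
```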

3. Experimental Comparison of Vignetting Models

3.1. Assumptions of the Comparison and Methods of Evaluation

The purpose of the conducted experiment was a comparison, based on real data, of the DRP model with selected vignetting models known from the literature, that is, the RP model, the P2D model, and the state-of-the-art SNILP model. The quality of the vignetting models can be evaluated by analyzing the image correction results
$$\tilde{I}_{flat} = I_V \cdot \tilde{V}^{-1} \tag{10}$$
obtained using the vignetting estimates $\tilde{V}$ calculated with the compared vignetting models.
The ideal correction result should be flat, that is, all pixels of the corrected image $\tilde{I}_{flat}$ should have the same value. Of course, in the case of real data, this ideal result cannot be achieved due to the presence of noise in the input image $I_V$. In such cases, the pixels of the $\tilde{I}_{flat}$ image should have similar values with minimal dispersion. Good dispersion measures are the standard deviation (STD) and the interquartile range (IQR); therefore, these measures were used for the quantitative evaluation of the vignetting correction results. Hence, a lower value of $\mathrm{STD}(\tilde{I}_{flat})$ or $\mathrm{IQR}(\tilde{I}_{flat})$ means a better ability of the analyzed vignetting model to adapt to the real vignetting.
Since the $I_V$ image contains noise and the estimation result $\tilde{V}$ may differ significantly from the vignetting of a given camera, the $\tilde{I}_{flat}$ image may contain pixels with values beyond the range $[0, 255]$, that is, the range of pixel values for the cameras used in the experiment. Therefore, before its evaluation, the correction result $\tilde{I}_{flat}$ is subjected to truncation of the pixel values according to the following formula
$$\tilde{I}_{flat}(i,j) := \begin{cases} 0 & \text{for } \tilde{I}_{flat}(i,j) < 0, \\ \tilde{I}_{flat}(i,j) & \text{for } \tilde{I}_{flat}(i,j) \in [0, 255], \\ 255 & \text{for } \tilde{I}_{flat}(i,j) > 255. \end{cases} \tag{11}$$
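A sketch of this evaluation step might read as follows; iqr requires the Statistics and Machine Learning Toolbox, and the variable names are illustrative.

```matlab
% Evaluation sketch, Eqs. (10)-(11): correct the flat-field image, truncate
% the pixel values to [0, 255], and compute the dispersion measures.
I_flat    = double(I_V) ./ V_est;       % Eq. (10)
I_flat    = min(max(I_flat, 0), 255);   % truncation, Eq. (11)
score_std = std(I_flat(:));             % lower value = better model fit
score_iqr = iqr(I_flat(:));             % Statistics and ML Toolbox
```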

3.2. Laboratory Setup and Data Acquisition Process

The data analyzed in the experiment were acquired using five webcams, namely a Microsoft LifeCam Studio, a Logitech C920, a Hama C-600 Pro, and two Xiaomi IMILAB CMSXJ22A webcams (Figure 2), which are referred to in the article as Cam-A to Cam-E, respectively. In the case of webcams, manufacturers usually do not provide detailed technical data, such as lens focal length, lens speed, etc. Nevertheless, a comparison of the main technical data of the webcams used is presented in Table 1.
The webcams used in the comparison were selected to differ in their vignetting characteristics. Such a selection of cameras allows the comparison to be performed for different examples of real vignetting, with different vignetting strengths (also called vignetting magnitudes) and different degrees of non-radiality of the vignetting function, which is essential in the case of this comparison. The vignetting strength can be measured using the STD and IQR measures of the images $I_V$ acquired by the individual cameras—the values of these measures are presented in the second columns of Tables 3 and 4, respectively. The tested cameras are arranged in order from the camera with the lowest vignetting strength (Cam-A) to the camera with the highest vignetting strength (Cam-E). Table 5 presents the estimated values of the parameter $\eta$. For simplicity of the results analysis, the $\eta$ value estimated during the calculation of the best DRP-2 model for an individual camera is used in the article as the coefficient of vignetting non-radiality of that camera and is denoted as $\tilde{\eta}$. If the cameras were ranked from the most radial to the most non-radial vignetting, that is, in order of ascending $\tilde{\eta}$ values, the order would be as follows: Cam-E, Cam-D, Cam-A, Cam-B, Cam-C.
As the flat-field surface needed to acquire the vignetting image $I_V$, a uniformly back-lighted milky poly(methyl methacrylate) (PMMA, “plexiglass”) panel of size 50 cm × 100 cm and thickness 3 mm was used. As the light source, an NEC EA304WMI-BK graphic monitor was used, displaying a white screen with the brightness set to its maximum, i.e., 350 cd/m². The plexiglass panel was carefully placed parallel to the surface of the monitor screen; the distance between the monitor and the panel was constant and equal to 15 cm. To position each camera parallel to the monitor screen, the camera was oriented in such a way that the geometric distortions of the reproduction of a test image obtained from this camera were symmetric. The test image was displayed on the monitor serving as the panel illumination, and the acquired images were observed in real time on a second display (Figure 3a). For the acquisition of the series of images $I_V$, the aforementioned plexiglass panel was inserted between the monitor and the camera used (Figure 3b).
To reduce the noise present in the captured images, the image $\bar{I}_V$, obtained as the average of 100 originally captured images $I_V^e$, was used in the experiment. Since the experiment evaluates the ability of the compared vignetting models to fit the real vignetting, there is no need to calculate the vignetting estimate for each color channel, that is, $\bar{R}$, $\bar{G}$, and $\bar{B}$, of the $\bar{I}_V$ image. Therefore, in the experiment, for each camera, the input image
$$I_V = 0.2989\,\bar{R} + 0.5870\,\bar{G} + 0.1140\,\bar{B} \tag{12}$$
was used. The exposure parameters of each camera were set automatically for the first image $I_V^1$; for the remaining images, that is, $I_V^2, \ldots, I_V^{100}$, the same parameters were used. The $I_V^e$ images were acquired in a dark laboratory room so that no additional light interfered with this process. Additionally, all signaling diodes with which the cameras are equipped, as well as all contrasting inscriptions on the camera housings, were covered with black opaque tape. In the case of the cameras equipped with autofocus, that is, Cam-A to Cam-C, the focus was set to infinity.
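The averaging and the conversion of Equation (12) can be sketched as follows; frames is assumed here to be an nRows × nCols × 3 × 100 array holding the captured RGB images (this data layout is our assumption, not a detail from the experiment).

```matlab
% Acquisition post-processing sketch: average the 100 captured frames
% channel-wise, then build the single-channel input image of Eq. (12).
meanRGB = mean(double(frames), 4);      % R-bar, G-bar, B-bar
I_V = 0.2989 * meanRGB(:, :, 1) ...
    + 0.5870 * meanRGB(:, :, 2) ...
    + 0.1140 * meanRGB(:, :, 3);        % Eq. (12)
```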
It is important to note that, due to the objective of the comparison, possible small errors in, e.g., the positioning of the monitor, the plexiglass panel, or the cameras, as well as small non-uniformities of the screen illumination, do not influence the qualitative evaluation of the experimental results. Of course, such errors can affect the quantitative estimation results; however, they do not change the judgment of the ability of the tested vignetting models to find the best approximation $\tilde{V}$ based on the input image $I_V$, and precisely this property of the vignetting models is evaluated in the comparison.

3.3. Compared Vignetting Models and Their Implementations

The entire experiment, from image acquisition, through all calculations, to data presentation, was carried out using the MATLAB R2021b software package with the Image Acquisition Toolbox and the Optimization Toolbox. Four vignetting models were compared in the experiment: the novel deformable radial polynomial (DRP) model; the radial polynomial (RP) [38] and polynomial 2D (P2D) [25] models known from the literature; and the smooth non-iterative local polynomial (SNILP) model [37], which, in terms of vignetting correction quality, can be treated as the state-of-the-art solution.
The RP and DRP models were tested in two variants: with the coordinates $(x_C, y_C)$ of the optical center $C$ of the image calculated before the vignetting estimation procedure (these variants are denoted, respectively, RP-1 and DRP-1), and with the estimation of $(x_C, y_C)$ integrated into the vignetting estimation process (these variants are hereafter denoted as RP-2 and DRP-2). The implementations of all tested variants of the RP and DRP models use the MATLAB functions polyfit and polyval, respectively, to estimate the parameters of the approximation polynomial and to calculate its values. In the implementations of the RP-2, DRP-1, and DRP-2 models, the MATLAB function fminunc, a procedure for solving unconstrained nonlinear optimization problems, is used to find the optimal values of $(x_C, y_C)$ in the case of the RP-2 and DRP-2 models, and the value of $\eta$ in the case of both variants of the DRP model. The coordinates of the optical center needed for the RP-1 and DRP-1 models were determined by searching for the coordinates of the maximum value of a 2D polynomial approximation of degree $s = 2$ of the $I_V$ image. In the case of the RP-2 model, the optimization procedure (9), with $\eta$ fixed at the constant value 1, was used to estimate $(x_C, y_C)$. As the initial values for the fminunc function, the following were chosen: $\eta_0 = 1$, $x_{C0} = M/2$, and $y_{C0} = N/2$, where $M \times N$ is the resolution of the analyzed image, which was the same for all cameras used, that is, 1920 × 1080. For the other parameters of the fminunc function, their default values were used. The other two models, that is, the P2D and SNILP models, were implemented according to the information given in [37].
All models were compared for different degrees $s$ of the approximation polynomials, that is, for $s \in \{2, \ldots, 10\}$. The tested range of $s$ covers the degrees of the approximation polynomial proposed in the literature, from $s = 2$ for tasks that do not require exact correction up to $s = 6$ for the most demanding applications. Table 2 compares the type and number of parameters required to specify the vignetting estimates determined using the individual models.

3.4. Results of the Experiment

Figure 4 presents the vignetting estimates $\tilde{V}$ obtained from the RP-1, DRP-2, and P2D models. These results are presented using pseudocolor and isolines, which allows for an easier comparison of the obtained vignetting estimates. In addition, Figures A1–A10 in Appendix A present the acquired vignetting images $I_V$, the vignetting estimates $\tilde{V}$, and the corrected images $\tilde{I}_{flat}$ in the form of 3D charts.
The numerical results of the experiment, that is, the values of the STD and IQR measures calculated for the images $\tilde{I}_{flat}$ resulting from the vignetting correction, are presented in Tables 3 and 4. Figure 5 presents a comparison of the best vignetting correction results obtained for each model. In this comparison, the STD and IQR values mentioned above are related to the values of the respective measures of the vignetting strength of the input images $I_V$; these values are presented in the second columns of Tables 3 and 4, respectively. Table 5 presents the estimated values of the coefficient $\eta$ obtained for the DRP-1 and DRP-2 models.

4. Discussion of the Results

4.1. Evaluation of the Models in Terms of the Accuracy of the Obtained Vignetting Estimates

The purpose of the performed experiment was a comparison of the vignetting correction results obtained from the DRP model with the results obtained from the models known from the literature. Due to the large differences in the vignetting characteristics of the individual cameras used in the experiment, both in terms of the vignetting strength and the degree of non-radiality, the obtained correction results have been analyzed from the perspective of the influence of these two factors.
Analyzing the influence of the vignetting strength on the results of the vignetting correction, measured with both the STD (Table 3) and IQR (Table 4) measures, it can be seen that the results of both tested variants of the novel DRP model, i.e., DRP-1 and DRP-2, practically always fall between the results obtained from the RP model and those obtained from the P2D and SNILP models. Slight exceptions to this general rule occur when low-degree polynomials are used ($s \leq 3$) or when the correction was made for the image from the camera with near-ideal radial vignetting, that is, Cam-E with $\tilde{\eta} = 1.0068$, where ideal radial vignetting has $\eta = 1$. The same relationship between the results obtained from the individual models also applies to the relative evaluation of the vignetting correction (Figure 5).
Among the cameras used in the experiment, only the Cam-E camera has an almost ideal radial vignetting ($\tilde{\eta} = 1.0068$), while for the other cameras, the image brightness in the vertical direction decreases faster than in the horizontal direction (the radial vignetting is “squeezed” in the vertical direction). The difference in the speed of the brightness decrease for the other cameras varies from about 6% (Cam-D, $\tilde{\eta} = 1.0605$), through about 9% (Cam-A, $\tilde{\eta} = 1.0901$) and about 15% (Cam-B, $\tilde{\eta} = 1.1559$), to almost 30% (Cam-C, $\tilde{\eta} = 1.2952$). Evaluating the vignetting correction results obtained for Cam-E, the results obtained from all models are comparable. However, it can be noticed that for small values of $s \leq 5$, the RP and DRP models achieve better results than both more universal models, i.e., P2D and SNILP. This advantage decreases with increasing $s$, and finally, for $s \geq 8$, the P2D and SNILP models give better correction results than the RP and DRP models. It is worth noting that even in such a favorable situation for the RP model (almost ideal radial vignetting), for most values of $s$, neither variant of the RP model gives better results than the corresponding variant of the DRP model. These results indicate that it is worth considering the use of the DRP model instead of the RP model even when the vignetting is almost ideally radial.
Interestingly, in the case of the Cam-E camera and the STD measure, the RP-1 and DRP-1 variants, i.e., those that use the previously calculated coordinates $(x_C, y_C)$ of the center $C$, give better results than the RP-2 and DRP-2 variants, in which these coordinates are sought within a single optimization problem. A similar situation also occurs in the case of the Cam-A and Cam-D cameras, but only for the IQR measure. Such an effect seems counter-intuitive, but it should be noted that the applied optimization procedure does not guarantee finding a global minimum, and no studies have been conducted on the influence of the selection of the parameters of the optimization method on its results.
For each camera with a noticeable non-radial vignetting, that is, Cam-A to Cam-D, the results obtained for any variant of the DRP model and for any value of $s$ are always better than the results obtained for any variant of the RP model. This difference grows with the increase in the degree of non-radiality (increasing $\tilde{\eta}$). In the case of Cam-C ($\tilde{\eta} = 1.2952$), the use of the DRP model instead of the RP model, when comparing the corresponding variants of both models, gives about a 35% improvement in the correction quality when the STD measure is used, and over a 50% improvement when the IQR measure is used.
The influence of the degree of non-radiality on the vignetting correction results is particularly well visible in the graphs in Figure 5. For the camera with almost radial vignetting (Cam-E), each tested model corrects the vignetting to a similar extent. However, with the increase in the non-radiality of the camera vignetting, the advantage of the models that allow for non-radial vignetting over the RP model becomes more noticeable. Of course, the DRP model is inferior to the P2D and SNILP models in terms of the obtained quality, but these models are much more complex. The difference in model complexity can be seen by comparing the number of parameters necessary to save the vignetting estimate obtained from each of these models (Table 2).

4.2. Computation Time

Another analyzed issue is the computation time required to determine the vignetting estimate using the individual models. In the vast majority of cases, even relatively long calculation times of the vignetting estimate are not a serious problem, because a vignetting estimate rarely needs to be derived. Such situations occur occasionally, for example, when a new lens is used, or periodically (e.g., every month) in order to ensure the highest quality of the vignetting correction of the acquired images. Moreover, even if the vignetting needs to be estimated for many parameter combinations of the analyzed lens-camera system (e.g., for many aperture sizes and many focal lengths of a varifocal lens), the calculations can be carried out in the background without involving a human operator.
Figure 6 presents the medians of the computation times for all tested models and for polynomial degrees $s \in \{2, \ldots, 10\}$. In the test, the acquired images $I_V$ with a resolution of 1920 × 1080 were used, and 25 repetitions (5 repetitions for each of the $I_V$ images acquired using the tested cameras) were performed for each model-$s$ value combination. The measurements were carried out on a computer equipped with an AMD Ryzen 7 3800X processor, 16 GB RAM, and an M.2 PCIe NVMe SSD, using MATLAB R2021b. The results obtained show that the DRP-2 model requires the longest computation time, followed by the RP-2 and DRP-1 models. This results from the use of the minimization procedure, which is much slower than the OLS method used many times for individual lines of the image, as in the case of the SNILP model, or used once, but for a larger amount of data, as in the case of the P2D model.
It is also worth noting that the calculation times grow as the degree of the polynomial increases. This is because increasing the value of $s$ also increases the size of the data matrix, and thus the number of mathematical operations that must be performed.

5. Conclusions and Future Research

The intention of the authors during the development of the DRP model was to combine the simplicity of the RP model with the universality of more complex models, such as P2D and SNILP. The tests carried out on cameras whose acquired images differ both in the amount of vignetting and in the degree of fulfillment of the radial vignetting assumption confirm that this goal has been achieved. The DRP model uses only one more parameter than the RP model, but according to the results obtained, it can give up to 35% better correction results in terms of the STD measure and up to over 50% better correction results in terms of the IQR measure.
It should be noted that the results obtained from the DRP model are better than those obtained from the RP model for all tested cameras, including the camera with almost perfectly radial vignetting (Cam-E, $\tilde{\eta} = 1.0068$), i.e., under the conditions for which the RP model was designed. This observation leads to the essential conclusion that, from the perspective of the quality of the vignetting correction, the DRP model can successfully be used as the default vignetting model instead of the RP model. This statement results from the fact that the results obtained from the DRP model in the case of radial vignetting are not worse than those obtained from the RP model, and in the case of non-radial vignetting, they are better or much better than those obtained from the existing model. The expected quality improvement of the vignetting correction results increases with the degree of non-radiality of the analyzed vignetting.
The polynomial function used in the DRP model is only one of several radial vignetting models proposed in the literature (e.g., [39,40]). An interesting direction of research would therefore be to check how the distance transformation (6) used in the DRP model would increase the flexibility of radial vignetting models other than the polynomial one. An important part of these studies should be a comparison of minimization methods in terms of their usefulness in the process of estimating the model parameters, the coordinates of the optical center $C$, and the value of the $\eta^*$ coefficient.
In the conducted research, relatively simple lens-camera systems in the form of webcams were used. An important observation resulting from their use is the prevalence of non-radial vignetting among the imaging systems tested. Only one of the five cameras has vignetting that can be considered radial (Cam-E), and only two others have less than a 10% difference between their vignetting in the vertical and horizontal directions (Cam-A and Cam-D). The question then arises as to what extent non-radial vignetting is widespread among more complex camera systems and how the developed DRP model will perform in such systems. The analysis of these issues will be the subject of further research.
An important contribution made by the development of the DRP model—apart from making the RP model more flexible at the price of adding only one easy-to-determine parameter $\eta$, which represents the degree of non-radiality of the actual vignetting—is that it opens the possibility of applying methods adapted to images with radial vignetting to images with non-radial vignetting. Good examples of such methods are those based on the idea of single-image vignetting correction [31,32,33,34]. This is possible because the DRP model uses the transformation (6) of the distance calculation between a pixel and the optical image center $C$ to transform the non-radial vignetting that occurs in most lens-camera systems into an ideal radial vignetting. As part of future research, it is planned to implement this idea in practice.

Author Contributions

Conceptualization, A.B.; methodology, A.B.; software, A.B.; validation, A.B. and H.P.; formal analysis, A.B.; investigation, A.B.; resources, A.B.; data curation, A.B.; writing—original draft preparation, A.B.; writing—review and editing, A.B. and H.P.; visualization, A.B.; supervision, H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Polish Ministry of Education and Science under internal grant 02/070/BK22/0035 for the Silesian University of Technology, Gliwice, Poland.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on reasonable request from the corresponding author.

Acknowledgments

The authors thank two anonymous reviewers for their valuable comments, which were very helpful in improving the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this article:
DRP: Deformable Radial Polynomial vignetting model
IQR: Interquartile Range
OLS: Ordinary Least Squares method
P2D: Polynomial 2D vignetting model
$\mathrm{PolyReg}_s$: Polynomial regression of degree $s$
RP: Radial Polynomial vignetting model
SNILP: Smooth Non-Iterative Local Polynomial vignetting model
STD: Standard Deviation
VM: Vignetting Model

Appendix A. 3D Charts of the $I_V$, $\tilde{V}$, and $\tilde{I}_{flat}$ Images

In Figures A1, A3, A5, A7, and A9, the normalized $I_V$ images obtained from each camera and the estimates obtained from the compared models are presented. The normalized images $\mathrm{norm}(I_V)$ are calculated as follows:
$$\mathrm{norm}(I_V(i,j)) = I_V(i,j) \cdot \left( \max_{i,j} \mathrm{smooth}(I_V(i,j)) \right)^{-1}, \tag{A1}$$
where $\mathrm{smooth}(I_V(i,j))$ is the result of smoothing the $I_V$ image with a 2D Gaussian filter with standard deviation $\sigma = 0.5$ and size 10 × 10; the images $\mathrm{norm}(I_V)$ are used only for visualization (they are not used during the calculation of $\tilde{V}$). Figures A2, A4, A6, A8, and A10 show the best correction results of the $I_V$ images, that is, the $\tilde{I}_{flat}$ images. The best results are usually obtained for $s = 10$; however, in the case of Cam-D and both variants of the RP and DRP models, the best results were obtained for $s = 9$. The correction results in these figures are presented as normalized $\tilde{I}_{flat}$ images, which are calculated as follows
$$\mathrm{norm}(\tilde{I}_{flat}(i,j)) = \frac{\tilde{I}_{flat}(i,j)}{\frac{1}{M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \tilde{I}_{flat}(i,j)}. \tag{A2}$$
This solution makes it easier to compare the correction results for different cameras.
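Both normalizations can be sketched as follows, assuming imgaussfilt from the Image Processing Toolbox as the smoothing filter; note that imgaussfilt requires an odd filter size, so 11 is used here in place of the 10 × 10 size mentioned above.

```matlab
% Visualization sketch, Eqs. (A1)-(A2); I_flat is a corrected image.
smoothIV  = imgaussfilt(double(I_V), 0.5, 'FilterSize', 11);  % sigma = 0.5
normIV    = double(I_V) ./ max(smoothIV(:));                  % Eq. (A1)
normIflat = I_flat ./ mean(I_flat(:));                        % Eq. (A2)
```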
Figure A1. Comparison of the vignetting image $I_V$ and the vignetting estimates $\tilde{V}$ obtained for the tested models and Cam-A.
Figure A2. Comparison of the vignetting corrections of the flat-field image $I_V$ for Cam-A.
Figure A3. Comparison of the vignetting image $I_V$ and the vignetting estimates $\tilde{V}$ obtained for the tested models and Cam-B.
Figure A4. Comparison of the vignetting corrections of the flat-field image $I_V$ for Cam-B.
Figure A5. Comparison of the vignetting image $I_V$ and the vignetting estimates $\tilde{V}$ obtained for the tested models and Cam-C.
Figure A6. Comparison of the vignetting corrections of the flat-field image $I_V$ for Cam-C.
Figure A7. Comparison of the vignetting image $I_V$ and the vignetting estimates $\tilde{V}$ obtained for the tested models and Cam-D.
Figure A8. Comparison of the vignetting corrections of the flat-field image $I_V$ for Cam-D.
Figure A9. Comparison of the vignetting image $I_V$ and the vignetting estimates $\tilde{V}$ obtained for the tested models and Cam-E.
Figure A10. Comparison of the vignetting corrections of the flat-field image $I_V$ for Cam-E.

References

  1. Xue, Y.; Shi, H.M. The Vignetting Effect of a LAMOST-Type Schmidt Telescope. Chin. J. Astron. Astrophys. 2008, 8, 580.
  2. Zhang, D.; Yang, Q.Y.; Chen, T. Vignetting correction for a single star-sky observation image. Appl. Opt. 2019, 58, 4337–4344.
  3. Piccinini, F.; Lucarelli, E.; Gherardi, A.; Bevilacqua, A. Multi-image based method to correct vignetting effect in light microscopy images. J. Microsc. 2012, 248, 6–22.
  4. Peng, T.; Thorn, K.; Schroeder, T.; Wang, L.; Theis, F.J.; Marr, C.; Navab, N. A BaSiC tool for background and shading correction of optical microscopy images. Nat. Commun. 2017, 8, 14836.
  5. Piccinini, F.; Bevilacqua, A. Colour Vignetting Correction for Microscopy Image Mosaics Used for Quantitative Analyses. BioMed Res. Int. 2018, 2018, 7082154.
  6. Mignard-Debise, L.; Ihrke, I. A Vignetting Model for Light Field Cameras with an Application to Light Field Microscopy. IEEE Trans. Comput. Imaging 2019, 5, 585–595.
  7. Caparó Bellido, A.; Rundquist, B.C. Semi-Automatic Fractional Snow Cover Monitoring from Near-Surface Remote Sensing in Grassland. Remote Sens. 2021, 13, 2045.
  8. Zhang, A.; Hu, S.; Zhang, X.; Zhang, T.; Li, M.; Tao, H.; Hou, Y. A Handheld Grassland Vegetation Monitoring System Based on Multispectral Imaging. Agriculture 2021, 11, 1262.
  9. Hakala, T.; Suomalainen, J.; Peltoniemi, J.I. Acquisition of Bidirectional Reflectance Factor Dataset Using a Micro Unmanned Aerial Vehicle and a Consumer Camera. Remote Sens. 2010, 2, 819–832.
  10. Kelcey, J.; Lucieer, A. Sensor Correction of a 6-Band Multispectral Imaging Sensor for UAV Remote Sensing. Remote Sens. 2012, 4, 1462–1493.
  11. Cao, S.; Danielson, B.; Clare, S.; Koenig, S.; Campos-Vargas, C.; Sanchez-Azofeifa, A. Radiometric calibration assessments for UAS-borne multispectral cameras: Laboratory and field protocols. ISPRS J. Photogramm. Remote Sens. 2019, 149, 132–145.
  12. Cao, H.; Gu, X.; Wei, X.; Yu, T.; Zhang, H. Lookup Table Approach for Radiometric Calibration of Miniaturized Multispectral Camera Mounted on an Unmanned Aerial Vehicle. Remote Sens. 2020, 12, 4012.
  13. Zhou, X.; Liu, C.; Xue, Y.; Akbar, A.; Jia, S.; Zhou, Y.; Zeng, D. Radiometric calibration of a large-array commodity CMOS multispectral camera for UAV-borne remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102968.
  14. Olsen, D.; Dou, C.; Zhang, X.; Hu, L.; Kim, H.; Hildum, E. Radiometric Calibration for AgCam. Remote Sens. 2010, 2, 464–477.
  15. Liu, Y.; Long, T.; Jiao, W.; He, G.; Chen, B.; Huang, P. A General Relative Radiometric Correction Method for Vignetting and Chromatic Aberration of Multiple CCDs: Take the Chinese Series of Gaofen Satellite Level-0 Images for Example. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–25.
  16. Cauwerts, C.; Bodart, M.; Deneyer, A. Comparison of the Vignetting Effects of Two Identical Fisheye Lenses. LEUKOS 2012, 8, 181–203.
  17. Wagdy, A.; Garcia-Hansen, V.; Isoardi, G.; Pham, K. A Parametric Method for Remapping and Calibrating Fisheye Images for Glare Analysis. Buildings 2019, 9, 219.
  18. Kim, S.J.; Pollefeys, M. Robust radiometric calibration and vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 562–576.
  19. Doutre, C.; Nasiopoulos, P. Fast vignetting correction and color matching for panoramic image stitching. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 709–712.
  20. Alomran, M.; Chai, D. Feature-based panoramic image stitching. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1–6.
  21. Chen, C.; Pan, J.; Wang, M.; Zhu, Y. Side-Slither Data-Based Vignetting Correction of High-Resolution Spaceborne Camera with Optical Focal Plane Assembly. Sensors 2018, 18, 3402.
  22. Kinzig, C.; Cortés, I.; Fernández, C.; Lauer, M. Real-time Seamless Image Stitching in Autonomous Driving. In Proceedings of the 2022 25th International Conference on Information Fusion (FUSION), Linköping, Sweden, 4–7 July 2022; pp. 1–8.
  23. Saad, K.; Schneider, S.A. Camera Vignetting Model and its Effects on Deep Neural Networks for Object Detection. In Proceedings of the 2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE), Graz, Austria, 4–8 November 2019; pp. 1–5.
  24. Tian, B.; Juefei-Xu, F.; Guo, Q.; Xie, X.; Li, X.; Liu, Y. AVA: Adversarial Vignetting Attack against Visual Recognition. arXiv 2021.
  25. Goldman, D.B.; Chen, J.H. Vignette and exposure calibration and compensation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 1, pp. 899–906.
  26. Goldman, D.B. Vignette and exposure calibration and compensation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2276–2288.
  27. Aggarwal, M.; Hua, H.; Ahuja, N. On cosine-fourth and vignetting effects in real lenses. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), Vancouver, BC, Canada, 7–14 July 2001; Volume 1, pp. 472–479.
  28. Asada, N.; Amano, A.; Baba, M. Photometric calibration of zoom lens systems. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; Volume 1, pp. 186–190.
  29. Catrysse, P.B.; Liu, X.; Gamal, A.E. QE reduction due to pixel vignetting in CMOS image sensors. In Proceedings of the Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications; Sampat, N., Yeh, T., Blouke, M.M., Williams, G.M., Jr., Eds.; SPIE: Bellingham, WA, USA, 2000; Volume 3965, pp. 420–430.
  30. Kang, S.B.; Weiss, R. Can We Calibrate a Camera Using an Image of a Flat, Textureless Lambertian Surface? In Proceedings of the Computer Vision—ECCV 2000, 6th European Conference on Computer Vision, Dublin, Ireland, 26 June–1 July 2000; Vernon, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 640–653.
  31. Zheng, Y.; Lin, S.; Kambhamettu, C.; Yu, J.; Kang, S.B. Single-image vignetting correction. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 2243–2256.
  32. Rohlfing, T. Single-Image Vignetting Correction by Constrained Minimization of Log-Intensity Entropy; Figshare: London, UK, 2012.
  33. Zheng, Y.; Lin, S.; Kang, S.B.; Xiao, R.; Gee, J.C.; Kambhamettu, C. Single-image vignetting correction from gradient distribution symmetries. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1480–1494.
  34. Cho, H.; Lee, H.; Lee, S. Radial bright channel prior for single image vignetting correction. Lect. Notes Comput. Sci. 2014, 8690, 189–202.
  35. Sawchuk, A.A. Real-Time Correction of Intensity Nonlinearities in Imaging Systems. IEEE Trans. Comput. 1977, C-26, 34–39.
  36. Brady, M.; Legge, G.E. Camera calibration for natural image studies and vision research. J. Opt. Soc. Am. A 2009, 26, 30.
  37. Bal, A.; Palus, H. A Smooth Non-Iterative Local Polynomial (SNILP) Model of Image Vignetting. Sensors 2021, 21, 7086.
  38. Burt, P.J.; Adelson, E.H. A multiresolution spline with application to image mosaics. ACM Trans. Graph. 1983, 2, 217–236.
  39. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983.
  40. Leong, F.J.; Brady, M.; McGee, J.O. Correction of uneven illumination (vignetting) in digital microscopy images. J. Clin. Pathol. 2003, 56, 619–621.
Figure 1. Comparison of the acquired images $I_V$; the images are presented in order of the increasing degree of non-radiality of the vignetting, which is described by the $\tilde{\eta}$ coefficient.
Figure 2. Cameras used in the comparison, from left to right: Microsoft LifeCam Studio (Cam-A); Logitech C920 (Cam-B); Hama C-600 Pro (Cam-C); and two Xiaomi IMILAB CMSXJ22A (Cam-D and Cam-E).
Figure 3. Laboratory setups used during: (a) positioning of the camera; and (b) acquisition of the test images. The lighting in the laboratory was turned on only for taking the presented photographs.
Figure 4. Comparison of the vignetting estimates $\tilde{V}$ obtained from the RP-1 (left column), DRP-2 (central column), and P2D (right column) models.
Figure 5. Comparison of the quality of the vignetting correction obtained for the compared vignetting models and the cameras used. The values of the measures of the vignetting correction quality are presented in relation to the initial values of the (a) STD and (b) IQR measures of the $I_V$ images obtained from the corresponding camera.
Figure 6. Comparison of the vignetting estimation times for the individual models.
Table 1. Main technical data of the cameras used in the comparison.

| Parameter | Cam-A (Microsoft LifeCam Studio) | Cam-B (Logitech C920) | Cam-C (Hama C-600 Pro) | Cam-D & Cam-E (Xiaomi IMILAB CMSXJ22A) |
|---|---|---|---|---|
| Diagonal angle of view | 75° | 78° | 90° | 85° |
| Maximum video resolution | 1920 × 1080 | 1920 × 1080 | 1920 × 1080 | 1920 × 1080 |
| Maximal frame rate @ 1080p | 30 fps | 30 fps | 30 fps | 30 fps |
| Focus type | auto focus | auto focus | auto focus | fixed focus |
| Focus range | >10 cm | | | >60 cm |

Remark: All data are provided by the camera manufacturers.
Table 2. The type and the number of parameters required for saving vignetting estimation results.

| Model | Type of Parameters | Number of Parameters |
|---|---|---|
| RP | parameters of the 1D radial polynomial function + coordinates of the image optical center (x_c, y_c) | s + 3 |
| DRP | parameters of the 1D radial polynomial function + coordinates of the image optical center (x_c, y_c) + coefficient of non-radiality of the vignetting η | s + 4 |
| P2D | parameters of the 2D approximation polynomial | ½(s² + 3s + 2) |
| SNILP | parameters of the 1D approximation of each line along the longer side of the input image I_V with resolution M × N | N(s + 1), where N ≤ M |
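To make the counts in Table 2 concrete, the following minimal Python sketch reproduces them; the function name and signature are illustrative assumptions, not code from the paper:

```python
def num_stored_parameters(model: str, s: int, n: int | None = None) -> int:
    """Values needed to store a vignetting estimate of polynomial degree s.

    For SNILP, n is the number of fitted lines, i.e., the length of the
    shorter side of the M x N image I_V (N <= M).
    """
    if model == "RP":      # 1D radial polynomial + optical center (x_c, y_c)
        return (s + 1) + 2                 # = s + 3
    if model == "DRP":     # RP plus the non-radiality coefficient eta
        return (s + 1) + 2 + 1             # = s + 4
    if model == "P2D":     # all monomials x^i * y^j with i + j <= s
        return (s + 1) * (s + 2) // 2      # = (s**2 + 3*s + 2) / 2
    if model == "SNILP":   # one 1D polynomial of degree s per line
        if n is None:
            raise ValueError("SNILP needs the number of lines n")
        return n * (s + 1)
    raise ValueError(f"unknown model: {model}")
```

For example, for s = 5 this gives 8 parameters for RP, 9 for DRP, and 21 for P2D, while SNILP applied to a 1920 × 1080 image stores 1080 × (5 + 1) = 6480 values.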
Table 3. Comparison of STD(I_V ⊙ Ṽ⁻¹) values.

| Camera | STD(I_V) | Model | s = 2 | s = 3 | s = 4 | s = 5 | s = 6 | s = 7 | s = 8 | s = 9 | s = 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Cam-A | 0.3028 | RP-1 | 0.0553 | 0.0551 | 0.0550 | 0.0548 | 0.0548 | 0.0547 | 0.0545 | 0.0545 | **0.0545** |
| | | RP-2 | 0.0553 | 0.0550 | 0.0549 | 0.0547 | 0.0547 | 0.0546 | 0.0545 | 0.0545 | **0.0544** |
| | | DRP-1 | 0.0541 | 0.0538 | 0.0536 | 0.0533 | 0.0533 | 0.0533 | 0.0531 | 0.0531 | **0.0531** |
| | | DRP-2 | 0.0541 | 0.0537 | 0.0536 | 0.0533 | 0.0533 | 0.0532 | 0.0531 | 0.0531 | **0.0530** |
| | | P2D | 0.0606 | 0.0585 | 0.0440 | 0.0429 | 0.0402 | 0.0398 | 0.0371 | 0.0368 | **0.0346** |
| | | SNILP | 0.0540 | 0.0520 | 0.0410 | 0.0408 | 0.0377 | 0.0375 | 0.0349 | 0.0346 | **0.0337** |
| Cam-B | 3.7551 | RP-1 | 0.5750 | 0.5749 | 0.5743 | 0.5634 | 0.5436 | 0.5379 | 0.5219 | 0.5189 | **0.5165** |
| | | RP-2 | 0.5630 | 0.5629 | 0.5628 | 0.5520 | 0.5331 | 0.5259 | 0.5068 | 0.5048 | **0.5010** |
| | | DRP-1 | 0.4549 | 0.4549 | 0.4543 | 0.4412 | 0.4170 | 0.4032 | 0.3681 | 0.3674 | **0.3552** |
| | | DRP-2 | 0.4549 | 0.4549 | 0.4542 | 0.4411 | 0.4167 | 0.4026 | 0.3676 | 0.3670 | **0.3547** |
| | | P2D | 0.4533 | 0.4474 | 0.4201 | 0.4101 | 0.4005 | 0.3854 | 0.3732 | 0.3640 | **0.3523** |
| | | SNILP | 0.4314 | 0.4219 | 0.4143 | 0.3970 | 0.3729 | 0.3553 | 0.3340 | 0.3128 | **0.2962** |
| Cam-C | 7.0446 | RP-1 | 2.1372 | 2.0961 | 1.8347 | 1.7934 | 1.7297 | 1.7213 | 1.6779 | 1.6869 | **1.6742** |
| | | RP-2 | 2.1317 | 2.0870 | 1.8248 | 1.7840 | 1.7242 | 1.7185 | 1.6778 | 1.6866 | **1.6740** |
| | | DRP-1 | 1.4453 | 1.4042 | 1.2594 | 1.2135 | 1.1431 | 1.1207 | 1.0955 | 1.1008 | **1.0894** |
| | | DRP-2 | 1.4452 | 1.4016 | 1.2568 | 1.2120 | 1.1424 | 1.1190 | 1.0922 | 1.0974 | **1.0852** |
| | | P2D | 1.4144 | 1.2838 | 1.2487 | 1.1791 | 0.9663 | 0.8157 | 0.7063 | 0.6197 | **0.5633** |
| | | SNILP | 1.3209 | 1.1917 | 1.0207 | 0.9061 | 0.6955 | 0.6216 | 0.5582 | 0.5163 | **0.4813** |
| Cam-D | 9.9096 | RP-1 | 0.9398 | 0.8891 | 0.8698 | 0.8339 | 0.8262 | 0.8014 | 0.7970 | **0.7953** | 0.7977 |
| | | RP-2 | 0.9308 | 0.8810 | 0.8641 | 0.8298 | 0.8217 | 0.7969 | 0.7927 | **0.7909** | 0.7933 |
| | | DRP-1 | 0.8353 | 0.7987 | 0.7945 | 0.7630 | 0.7554 | 0.7310 | 0.7252 | **0.7221** | 0.7243 |
| | | DRP-2 | 0.8267 | 0.7904 | 0.7860 | 0.7507 | 0.7434 | 0.7163 | 0.7105 | **0.7066** | 0.7087 |
| | | P2D | 1.3315 | 1.3246 | 0.6429 | 0.6104 | 0.5923 | 0.5849 | 0.4956 | 0.4736 | **0.4396** |
| | | SNILP | 1.1475 | 1.1377 | 0.6123 | 0.5962 | 0.5263 | 0.5035 | 0.4284 | 0.4188 | **0.4079** |
| Cam-E | 20.2225 | RP-1 | 1.4188 | 1.1542 | 1.0754 | 0.8656 | 0.8570 | 0.8337 | 0.8349 | 0.8067 | **0.7646** |
| | | RP-2 | 1.4106 | 1.1428 | 1.0615 | 0.8393 | 0.8295 | 0.8074 | 0.8090 | 0.7811 | **0.7364** |
| | | DRP-1 | 1.3656 | 1.1200 | 1.0616 | 0.8582 | 0.8499 | 0.8247 | 0.8255 | 0.7998 | **0.7600** |
| | | DRP-2 | 1.3534 | 1.1047 | 1.0451 | 0.8294 | 0.8198 | 0.7956 | 0.7968 | 0.7717 | **0.7297** |
| | | P2D | 2.8814 | 2.8686 | 1.0042 | 0.9923 | 0.9913 | 0.9857 | 0.6750 | 0.6726 | **0.6339** |
| | | SNILP | 2.1547 | 2.1492 | 0.9763 | 0.9696 | 0.7148 | 0.7104 | 0.6072 | 0.6026 | **0.5926** |

Remarks: (i) in the published table, cell background colors (from blue, through light blue and light red, to red) rank the model results from best to worst according to the STD(Ĩ_flat) values obtained for each camera–s combination; (ii) for each camera–model combination, the value in bold is the best approximation result for that combination according to the STD(Ĩ_flat) measure.
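The STD measure above (and the IQR measure in Table 4 below) quantifies the residual intensity spread of the flat-field-corrected image Ĩ_flat = I_V ⊙ Ṽ⁻¹: the flatter the corrected image, the lower the value. The following is a minimal NumPy sketch of both measures, assuming the vignetting estimate Ṽ is strictly positive; the function name is illustrative, not from the paper:

```python
import numpy as np

def correction_quality(i_v: np.ndarray, v_est: np.ndarray) -> tuple[float, float]:
    """Return the STD and IQR of the flat-field-corrected image."""
    i_flat = i_v / v_est                        # element-wise I_V * V~^(-1)
    std = float(np.std(i_flat))                 # measure used in Table 3
    q75, q25 = np.percentile(i_flat, [75, 25])  # 3rd and 1st quartiles
    iqr = float(q75 - q25)                      # measure used in Table 4
    return std, iqr
```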
Table 4. Comparison of IQR(I_V ⊙ Ṽ⁻¹) values.

| Camera | IQR(I_V) | Model | s = 2 | s = 3 | s = 4 | s = 5 | s = 6 | s = 7 | s = 8 | s = 9 | s = 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Cam-A | 0.3985 | RP-1 | 0.0711 | 0.0706 | **0.0704** | 0.0708 | 0.0708 | 0.0711 | 0.0710 | 0.0710 | 0.0710 |
| | | RP-2 | 0.0704 | 0.0698 | **0.0694** | 0.0698 | 0.0698 | 0.0702 | 0.0702 | 0.0702 | 0.0702 |
| | | DRP-1 | 0.0687 | 0.0681 | **0.0677** | 0.0678 | 0.0678 | 0.0680 | 0.0679 | 0.0679 | 0.0680 |
| | | DRP-2 | 0.0690 | 0.0683 | **0.0680** | 0.0681 | 0.0681 | 0.0682 | 0.0682 | 0.0682 | 0.0683 |
| | | P2D | 0.0749 | 0.0695 | 0.0572 | 0.0563 | 0.0522 | 0.0517 | 0.0456 | 0.0450 | **0.0431** |
| | | SNILP | 0.0700 | 0.0678 | 0.0540 | 0.0536 | 0.0478 | 0.0474 | 0.0436 | 0.0432 | **0.0419** |
| Cam-B | 5.8670 | RP-1 | 0.6895 | 0.6870 | 0.6887 | 0.7092 | 0.6474 | 0.6646 | 0.6088 | 0.6045 | **0.5947** |
| | | RP-2 | 0.6843 | 0.6814 | 0.6816 | 0.7046 | 0.6503 | 0.6672 | 0.6116 | 0.6075 | **0.5952** |
| | | DRP-1 | 0.5949 | 0.5949 | 0.5782 | 0.5891 | 0.5014 | 0.5157 | 0.4328 | 0.4302 | **0.3953** |
| | | DRP-2 | 0.5943 | 0.5945 | 0.5777 | 0.5882 | 0.4994 | 0.5138 | 0.4328 | 0.4303 | **0.3961** |
| | | P2D | 0.6065 | 0.6030 | 0.5519 | 0.5341 | 0.5194 | 0.4996 | 0.4880 | 0.4803 | **0.4437** |
| | | SNILP | 0.5670 | 0.5566 | 0.5524 | 0.5150 | 0.4817 | 0.4632 | 0.4199 | 0.3874 | **0.3571** |
| Cam-C | 10.1281 | RP-1 | 2.4483 | 2.5710 | 2.2063 | 2.2129 | 2.2446 | 2.1988 | **2.1458** | 2.1781 | 2.1546 |
| | | RP-2 | 2.4640 | 2.6363 | 2.2521 | 2.2438 | 2.2693 | 2.2108 | 2.1490 | 2.1742 | **2.1480** |
| | | DRP-1 | 1.0902 | 1.1150 | 1.1001 | 1.1050 | 1.0742 | 1.0628 | **1.0471** | 1.0610 | 1.0488 |
| | | DRP-2 | 1.0903 | 1.1275 | 1.1127 | 1.1124 | 1.0709 | 1.0464 | 1.0261 | 1.0361 | **1.0192** |
| | | P2D | 1.1020 | 0.9574 | 0.8716 | 0.8949 | 0.7357 | 0.6782 | 0.6468 | 0.6107 | **0.5950** |
| | | SNILP | 1.0439 | 0.9015 | 0.7671 | 0.7554 | 0.6204 | 0.5927 | 0.5807 | 0.5536 | **0.5346** |
| Cam-D | 15.1372 | RP-1 | 1.1225 | 1.1434 | 1.1373 | 0.9859 | 0.9875 | **0.9488** | 0.9515 | 0.9562 | 0.9616 |
| | | RP-2 | 1.0876 | 1.1134 | 1.1192 | 0.9662 | 0.9727 | **0.9373** | 0.9402 | 0.9454 | 0.9505 |
| | | DRP-1 | 1.0026 | 1.0831 | 1.0787 | 0.8987 | 0.8891 | 0.8373 | **0.8358** | 0.8427 | 0.8439 |
| | | DRP-2 | 1.0375 | 1.1085 | 1.1042 | 0.9478 | 0.9302 | 0.8815 | **0.8785** | 0.8847 | 0.8871 |
| | | P2D | 1.6181 | 1.6353 | 0.9089 | 0.8706 | 0.8626 | 0.8409 | 0.6245 | 0.5941 | **0.5837** |
| | | SNILP | 1.7324 | 1.7259 | 0.8850 | 0.8710 | 0.7243 | 0.6841 | 0.5549 | 0.5457 | **0.5315** |
| Cam-E | 31.6573 | RP-1 | 1.1478 | 1.2411 | 1.2495 | 0.9811 | 0.9868 | 0.9932 | 0.9896 | 0.9831 | **0.9224** |
| | | RP-2 | 1.1565 | 1.2377 | 1.2452 | 0.9916 | 0.9915 | 1.0049 | 1.0004 | 0.9872 | **0.9216** |
| | | DRP-1 | 1.1485 | 1.2233 | 1.2289 | 0.9702 | 0.9735 | 0.9819 | 0.9800 | 0.9730 | **0.9105** |
| | | DRP-2 | 1.1773 | 1.2210 | 1.2219 | 0.9787 | 0.9797 | 0.9949 | 0.9916 | 0.9763 | **0.9102** |
| | | P2D | 2.4993 | 2.4501 | 1.3367 | 1.3028 | 1.2036 | 1.1943 | 0.8063 | 0.8079 | **0.8022** |
| | | SNILP | 3.1473 | 3.1417 | 1.2073 | 1.1982 | 0.9066 | 0.9005 | 0.7716 | 0.7674 | **0.7560** |

Remarks: (i) in the published table, cell background colors (from blue, through light blue and light red, to red) rank the model results from best to worst according to the IQR(Ĩ_flat) values obtained for each camera–s combination; (ii) for each camera–model combination, the value in bold is the best approximation result for that combination according to the IQR(Ĩ_flat) measure.
Table 5. Comparison of the obtained η values for the DRP-1 and DRP-2 models.

| Camera | Model | s = 2 | s = 3 | s = 4 | s = 5 | s = 6 | s = 7 | s = 8 | s = 9 | s = 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Cam-A | DRP-1 | 1.0746 | 1.0775 | 1.0806 | 1.0832 | 1.0832 | 1.0834 | 1.0831 | 1.0832 | **1.0824** |
| | DRP-2 | 1.0812 | 1.0859 | 1.0876 | 1.0892 | 1.0893 | 1.0895 | 1.0905 | 1.0905 | **1.0901** |
| Cam-B | DRP-1 | 1.1473 | 1.1473 | 1.1511 | 1.1474 | 1.1490 | 1.1488 | 1.1512 | 1.1512 | **1.1542** |
| | DRP-2 | 1.1469 | 1.1470 | 1.1509 | 1.1477 | 1.1507 | 1.1505 | 1.1527 | 1.1527 | **1.1559** |
| Cam-C | DRP-1 | 1.3260 | 1.3220 | 1.2990 | 1.2903 | 1.2927 | 1.2964 | 1.2962 | 1.2946 | **1.2936** |
| | DRP-2 | 1.3260 | 1.3214 | 1.2983 | 1.2899 | 1.2930 | 1.2976 | 1.2977 | 1.2962 | **1.2952** |
| Cam-D | DRP-1 | 1.0602 | 1.0563 | 1.0541 | 1.0509 | 1.0500 | 1.0493 | 1.0495 | **1.0500** | 1.0499 |
| | DRP-2 | 1.0692 | 1.0649 | 1.0626 | 1.0609 | 1.0594 | 1.0596 | 1.0598 | **1.0605** | 1.0606 |
| Cam-E | DRP-1 | 1.0209 | 1.0162 | 1.0119 | 1.0078 | 1.0079 | 1.0086 | 1.0086 | 1.0078 | **1.0061** |
| | DRP-2 | 1.0216 | 1.0169 | 1.0126 | 1.0086 | 1.0087 | 1.0094 | 1.0093 | 1.0085 | **1.0068** |

Remarks: (i) in each row, the value in bold is the η obtained for the best approximation result (according to the STD(Ĩ_flat) measure) for the given camera–model pair; (ii) in the published table, values with a gray background indicate the η̃ value of the individual camera.
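As intuition for the η values above: the DRP model reduces to the purely radial RP model when no deformation is applied, so values of η far from 1 (e.g., Cam-C) correspond to strongly non-radial vignetting, while values close to 1 (e.g., Cam-E) correspond to nearly radial vignetting; this is consistent with Tables 3 and 4, where the DRP models improve most over RP for Cam-C and least for Cam-E. One simple, purely illustrative way such a coefficient could deform a radial distance is anisotropic scaling of one image axis; the sketch below is our assumption for illustration, not the exact distance transformation used by the DRP model:

```python
import numpy as np

# Illustrative assumption only: scale one axis of the radial distance by eta,
# so that eta = 1 recovers a purely radial distance (and hence a radial model).
def deformed_radius(x: np.ndarray, y: np.ndarray,
                    x_c: float, y_c: float, eta: float) -> np.ndarray:
    return np.hypot(x - x_c, eta * (y - y_c))
```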
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
