Search Results (5)

Search Parameters:
Keywords = single-image vignetting correction

15 pages, 27251 KiB  
Article
Single-Frame Vignetting Correction for Post-Stitched-Tile Imaging Using VISTAmap
by Anthony A. Fung, Ashley H. Fung, Zhi Li and Lingyan Shi
Nanomaterials 2025, 15(7), 563; https://doi.org/10.3390/nano15070563 - 7 Apr 2025
Abstract
Stimulated Raman Scattering (SRS) nanoscopy imaging offers unprecedented insights into tissue molecular architecture but often requires stitching multiple high-resolution tiles to capture large fields of view. This process is time-consuming and frequently introduces vignetting artifacts—grid-like intensity fluctuations that degrade image quality and hinder downstream quantitative analyses and processing such as super-resolution deconvolution. We present VIgnetted Stitched-Tile Adjustment using Morphological Adaptive Processing (VISTAmap), a simple tool that corrects these shading artifacts directly on the final stitched image. VISTAmap automatically detects the tile grid configuration by analyzing intensity frequency variations and then applies sequential morphological operations to homogenize the image. In contrast to conventional approaches that require increased tile overlap or pre-acquisition background sampling, VISTAmap offers a pragmatic, post-processing solution without the need for separate individual tile images. This work addresses pressing concerns by delivering a robust, efficient strategy for enhancing mosaic image uniformity in modern nanoscopy, where the smallest details make tremendous impacts.
(This article belongs to the Special Issue New Advances in Applications of Nanoscale Imaging and Nanoscopy)
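
Below is a rough illustration of the kind of post-stitch shading correction the abstract describes: infer an approximate tile period from intensity periodicity, estimate a smooth shading field with morphological operations, and divide it out. It is a minimal sketch under stated assumptions, not the published VISTAmap implementation.

# A minimal sketch (assumptions: a 2D grayscale mosaic, a roughly periodic
# tile pattern, and NumPy/SciPy available), loosely inspired by the idea
# described in the abstract; it is not the published VISTAmap algorithm.
import numpy as np
from scipy import ndimage

def estimate_tile_size(mosaic):
    """Guess the tile period from the dominant peak of the column-mean
    intensity spectrum (a crude stand-in for the frequency analysis the
    abstract mentions)."""
    profile = mosaic.mean(axis=0) - mosaic.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    spectrum[0] = 0.0                          # ignore the DC component
    k = max(int(np.argmax(spectrum)), 1)       # dominant spatial frequency
    return max(mosaic.shape[1] // k, 8)

def correct_stitched_vignetting(mosaic):
    """Divide out a smooth shading field estimated by grayscale opening with
    a structuring element on the order of one tile."""
    tile = estimate_tile_size(mosaic)
    se = max(tile // 2, 3)
    # Grayscale opening suppresses bright structures smaller than the
    # structuring element, leaving a background/shading estimate.
    shading = ndimage.grey_opening(mosaic.astype(float), size=(se, se))
    shading = ndimage.gaussian_filter(shading, sigma=tile / 4)
    shading /= shading.mean()                  # preserve overall brightness
    return mosaic / np.clip(shading, 1e-6, None)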

23 pages, 95614 KiB  
Article
Image Vignetting Correction Using a Deformable Radial Polynomial Model
by Artur Bal and Henryk Palus
Sensors 2023, 23(3), 1157; https://doi.org/10.3390/s23031157 - 19 Jan 2023
Abstract
Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable effect; therefore, when it is impossible to fully prevent its occurrence, it is necessary to use computational methods for its correction. In probably the most frequently used approach to vignetting correction, namely flat-field correction, the use of an appropriate vignetting model plays a pivotal role. The radial polynomial (RP) model is commonly used, but for its proper use, the actual vignetting of the analyzed lens-camera system has to be a radial function. However, this condition is not fulfilled by many systems. More universal models of vignetting exist; however, these models are much more sophisticated than the RP model. In this article, we propose a new model of vignetting named the Deformable Radial Polynomial (DRP) model, which combines the simplicity of the RP model with the universality of more sophisticated models. The DRP model uses a simple distance transformation and minimization method to match the radial vignetting model to the non-radial vignetting of the analyzed lens-camera system. A real-data experiment confirms that the DRP model generally gives better results than the RP model (by up to 35% or 50%, depending on the measure used).
(This article belongs to the Special Issue Machine Vision Based Sensing and Imaging Technology)
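
For reference, the sketch below implements only the classical radial polynomial (RP) baseline that the paper starts from: V(r) = 1 + a1 r^2 + a2 r^4 + a3 r^6 is fitted to a flat-field frame by linear least squares and then divided out. The centering at the image midpoint and the polynomial order are assumptions, and the deformable (DRP) extension with its distance transform is not reproduced.

# A minimal sketch of the baseline RP model only, assuming a flat-field
# reference image is available and the optical center sits at the image
# midpoint; the deformable (DRP) extension is not reproduced here.
import numpy as np

def fit_rp_model(flat):
    """Fit V(r) = 1 + a1*r^2 + a2*r^4 + a3*r^6 to a flat-field frame."""
    h, w = flat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    r_max = r.max()
    rn = r / r_max                                   # radius normalized to [0, 1]
    v = flat.astype(float) / flat[int(cy), int(cx)]  # fall-off relative to center
    A = np.column_stack([rn.ravel() ** 2, rn.ravel() ** 4, rn.ravel() ** 6])
    coeffs, *_ = np.linalg.lstsq(A, v.ravel() - 1.0, rcond=None)
    return coeffs, (cy, cx), r_max

def correct_with_rp(image, coeffs, center, r_max):
    """Divide an image by the fitted radial vignetting function."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    rn = np.hypot(yy - center[0], xx - center[1]) / r_max
    vignette = 1.0 + coeffs[0] * rn**2 + coeffs[1] * rn**4 + coeffs[2] * rn**6
    return image / np.clip(vignette, 1e-6, None)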

20 pages, 13400 KiB  
Article
A Case Study of Vignetting Nonuniformity in UAV-Based Uncooled Thermal Cameras
by Wenan Yuan and Weiyun Hua
Drones 2022, 6(12), 394; https://doi.org/10.3390/drones6120394 - 3 Dec 2022
Abstract
Uncooled thermal cameras have been employed as common UAV payloads for aerial temperature surveillance in recent years. Due to the lack of internal cooling systems, such cameras often suffer from thermal-drift-induced nonuniformity or vignetting despite having built-in mechanisms to minimize the noise. The current study examined vignetting in a UAV-based uncooled thermal camera with respect to camera warmup time, ambient temperature, and wind speed and direction, and proposed a simple calibration-based vignetting mitigation method. The experiments suggested that the camera needed to undergo a warmup period to achieve stabilized performance. The required warmup duration ranged from 20 to 40 min depending on ambient temperature. Camera vignetting severity increased with camera warmup time, decreasing ambient temperature, and wind presence, while wind speed and direction did not make a difference to camera vignetting during the experiments. Utilizing a single image of a customized calibration target, we were able to mitigate vignetting of outdoor images captured over a 30 min period by approximately 70% to 80% in terms of the intra-image pixel standard deviation (IISD) and 75% in terms of the pixel-wise mean (PWMN) range. The results indicated that outdoor environmental conditions such as air temperature and wind speed during short UAV flights might only minimally influence the thermal camera vignetting severity and pattern. Nonetheless, frequent external shutter-based corrections and accounting for the camera's nonlinear temperature response in future studies have the potential to further improve vignetting correction efficacy for large scene temperature ranges.
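
The sketch below illustrates the general shape of such a single-image, calibration-based correction, assuming the customized target can be treated as roughly isothermal so that one frame of it yields a per-pixel gain map. The pure-gain (no offset) model and the synthetic data are illustrative assumptions rather than the paper's exact procedure; the intra-image standard deviation is printed as a sanity check, mirroring the IISD metric above.

# A minimal sketch assuming the customized calibration target can be treated
# as approximately isothermal, so a single frame of it yields a per-pixel
# gain map; the pure-gain (no offset) model and the synthetic data below are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np

def gain_map_from_calibration(calib_frame):
    """Per-pixel gain relative to the frame mean; values below 1 mark the
    darker (vignetted) corners."""
    calib = calib_frame.astype(float)
    return calib / calib.mean()

def mitigate_vignetting(frame, gain):
    """Divide a field frame by the calibration gain map."""
    return frame.astype(float) / np.clip(gain, 1e-6, None)

if __name__ == "__main__":
    # Synthetic stand-ins for real thermal frames: a radial fall-off applied
    # to an isothermal target (calib) and to a noisy scene.
    yy, xx = np.mgrid[0:512, 0:640]
    r2 = ((yy - 255.5) / 255.5) ** 2 + ((xx - 319.5) / 319.5) ** 2
    falloff = 1.0 - 0.15 * r2
    rng = np.random.default_rng(0)
    calib = 30.0 * falloff
    scene = (25.0 + 5.0 * rng.random((512, 640))) * falloff
    corrected = mitigate_vignetting(scene, gain_map_from_calibration(calib))
    # The intra-image standard deviation (IISD) should drop after correction.
    print(scene.std(), corrected.std())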

14 pages, 13380 KiB  
Article
Detection of Chilling Injury in Pickling Cucumbers Using Dual-Band Chlorophyll Fluorescence Imaging
by Yuzhen Lu and Renfu Lu
Foods 2021, 10(5), 1094; https://doi.org/10.3390/foods10051094 - 14 May 2021
Abstract
Pickling cucumbers are susceptible to chilling injury (CI) during postharvest refrigerated storage, which results in quality degradation and economic loss. It is, thus, desirable to remove the defective fruit before they are marketed as fresh products or processed into pickled products. Chlorophyll fluorescence is sensitive to CI in green fruits, because exposure to chilling temperatures can induce detectable alterations in the chlorophyll of the tissue. This study evaluated the feasibility of using a dual-band chlorophyll fluorescence imaging (CFI) technique for detecting CI-affected pickling cucumbers. Chlorophyll fluorescence images at 675 nm and 750 nm were acquired from pickling cucumbers under the excitation of ultraviolet-blue light. The raw images were processed for vignetting correction through bi-dimensional empirical mode decomposition and subsequent image reconstruction. The fluorescence images were effective for ascertaining CI-affected tissues, which appeared as dark areas in the images. Support vector machine models were developed for classifying pickling cucumbers into two or three classes using features extracted from the fluorescence images. Fusing the features of the fluorescence images at 675 nm and 750 nm resulted in overall accuracies of 96.9% and 91.2% for two-class (normal and injured) and three-class (normal, mildly injured, and severely injured) classification, respectively, which are statistically significantly better than those obtained using features at a single wavelength, especially for the three-class classification. Furthermore, a subset of features, selected with the neighborhood component feature selection technique, achieved the highest accuracies of 97.4% and 91.3% for the two-class and three-class classification, respectively. This study demonstrated that dual-band CFI is an effective modality for CI detection in pickling cucumbers.
(This article belongs to the Special Issue Nondestructive Optical Sensing for Food Quality and Safety Inspection)
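
The sketch below outlines the feature-fusion and SVM stage in scikit-learn: per-band descriptors from the 675 nm and 750 nm images are concatenated and classified. The toy descriptors and placeholder data are assumptions standing in for the paper's image-derived features, and the BEMD-based vignetting correction is taken as already applied.

# A minimal sketch of the fusion-plus-SVM stage, assuming scikit-learn is
# available; the toy intensity statistics and the random placeholder data
# stand in for the paper's image-derived features, and the BEMD-based
# vignetting correction is assumed to have been applied upstream.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def band_features(img):
    """Toy per-band descriptors: mean, std, and the fraction of dark pixels
    (injured tissue appears dark in the fluorescence images)."""
    img = img.astype(float)
    dark_fraction = float(np.mean(img < 0.5 * img.mean()))
    return np.array([img.mean(), img.std(), dark_fraction])

def fused_features(img_675, img_750):
    """Concatenate the descriptors of the two bands (the 'fusion' step)."""
    return np.concatenate([band_features(img_675), band_features(img_750)])

# Placeholder data: in practice X would be built as
#   X = np.stack([fused_features(a, b) for a, b in image_pairs])
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 6))
y = rng.integers(0, 2, size=60)          # 0 = normal, 1 = chilling-injured

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())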

20 pages, 8516 KiB  
Article
Image Preprocessing for Outdoor Luminescence Inspection of Large Photovoltaic Parks
by Pascal Kölblin, Alexander Bartler and Marvin Füller
Energies 2021, 14(9), 2508; https://doi.org/10.3390/en14092508 - 27 Apr 2021
Abstract
Electroluminescence (EL) measurements allow one to detect damage and/or defective parts in photovoltaic systems. In principle, it seems possible to predict the complete current/voltage curve from such pictures, even automatically. However, such a precise analysis requires image corrections and calibrations, because vignetting and lens distortion cause signal and spatial distortions. Earlier works on crystalline silicon modules used the cell gap joints (CGJ) as a calibration pattern. Unfortunately, this procedure fails if the detection of the gaps is not accurate or if the contrast in the images is low. Here, we enhance the automated camera calibration algorithm with a reliable pattern detection and quantitatively analyze the quality of the process. Our method uses an iterative Hough transform to detect line structures and three key figures (KF) to separate detected busbars from cell gaps. This method allows reliable identification of all cell gaps, even in noisy images, when disconnected edges exist in PV cells, or when potential-induced degradation leads to low contrast between the active cell area and the background. In our dataset, a subset of 30 EL images (72 cells each) forming a grid (5×11) led to consistent calibration results. We apply the calibration process to 997 single-module EL images of PV modules and evaluate our results with a random subset of 40 images. After applying the lens distortion and perspective corrections, we analyze the residual deviation between the ideal target grid points and the previously detected CGJ. For all of the 2200 control points in the 40 evaluation images, we achieve a deviation of less than or equal to 3 pixels. For 50% of the control points, a deviation of less than or equal to 1 pixel is reached.
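
The sketch below approximates the detection and rectification stages with generic OpenCV primitives: a probabilistic Hough transform for line segments and a homography mapping detected cell gap joints onto an ideal grid. The paper's iterative Hough variant, its three key figures for separating busbars from cell gaps, and its thresholds are not reproduced; the parameter values here are assumptions.

# A minimal sketch using off-the-shelf OpenCV primitives (opencv-python
# assumed installed); it uses a plain, non-iterative probabilistic Hough
# transform and a homography, so the paper's iterative Hough variant, its
# key figures for busbar/gap separation, and its thresholds are replaced
# by generic defaults here.
import cv2
import numpy as np

def detect_line_segments(el_image):
    """Edge detection followed by a probabilistic Hough transform on an
    8-bit grayscale EL image; returns segments as (x1, y1, x2, y2) rows."""
    edges = cv2.Canny(el_image, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=el_image.shape[1] // 4, maxLineGap=10)
    return np.empty((0, 4), int) if lines is None else lines[:, 0, :]

def rectify_module(el_image, detected_cgj, ideal_cgj, out_size=(1200, 720)):
    """Map detected cell gap joints (CGJ) onto an ideal rectangular grid with
    a RANSAC-fitted homography, i.e. a combined perspective correction.
    Lens distortion would normally be removed beforehand (e.g. cv2.undistort
    with calibrated intrinsics)."""
    H, _ = cv2.findHomography(detected_cgj.astype(np.float32),
                              ideal_cgj.astype(np.float32), cv2.RANSAC)
    return cv2.warpPerspective(el_image, H, out_size)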
