Article

Modified Distance Transformation for Image Enhancement in NIR Imaging of Finger Vein System

1
Department of Electronics, Electrical Engineering and Microelectronics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
2
Department of Cybernetics, Nanotechnology and Data Processing, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(6), 1644; https://doi.org/10.3390/s20061644
Submission received: 27 January 2020 / Revised: 3 March 2020 / Accepted: 10 March 2020 / Published: 16 March 2020
(This article belongs to the Section Biosensors)

Abstract

:
Most of the current image processing methods used in near-infrared imaging of the finger vascular system concentrate on the extraction of internal structures (veins). In this paper, we propose a novel approach that enhances both internal and external features of a finger. The method is based on the Distance Transformation and allows for selective extraction of physiological structures from an observed finger. We evaluate the impact of its parameters on the effectiveness of an already established processing pipeline used for biometric identification. The new method was compared with five state-of-the-art feature extraction approaches (position-gray-profile-curve—PGPC, maximum curvature points in image profiles—MC, Niblack image adaptive thresholding—NAT, repeated dark line tracking—RDLT, and wide line detector—WD) on the GustoDB database of images obtained over a wide range of NIR wavelengths (730–950 nm). The results indicate a clear superiority of the proposed approach over the remaining alternatives. The method reached over 90% identification accuracy for all analyzed datasets.

1. Introduction

One of the methods of person verification utilizes near-infrared (NIR) images of the finger vascular system, which has been proven to contain a set of features unique to each human. External features, such as fingerprints, finger shape, skin folds and lunulae, are accompanied by internal features: the structure of the vascular system [1]. Finger tissues and blood have different absorption coefficients at various light wavelengths, a phenomenon that allows both types of features to be observed in NIR images. While visible light cannot penetrate the body and shows only the skin, NIR light reveals internal structures and distinguishes them from external ones. This is because light in the range of 700–1000 nm is strongly absorbed by oxygenated haemoglobin in veins [2] and only weakly by the surrounding tissues [3]. This method of identification has already been described extensively in the literature [4].
Currently, most image processing methods dedicated to such imaging focus on good isolation of the vascular system from an image, to be further processed by suitable classification techniques [5,6,7]. However, we are convinced that the external features of a finger can further improve verification reliability. Unfortunately, none of the methods for initial processing of raw images concentrates on proper extraction of both types of features. This is due to the fact that the internal and external parts of a finger differ, showing varying structural characteristics (sizes, shapes, ranges of intensities in the image, etc.), which requires a flexible processing scheme. The method presented in this paper is the first such approach that tries to include and enhance the most representative features in the image to improve the accuracy of person identification.

2. Materials and Methods

2.1. Distance Transformation

The original idea of the Distance Transformation (DT) was presented in [8]. The main goal of the method is to find the distance from a given pixel to all others. The main advantage of this approach was its very short execution time, much shorter than that of methods based on computing exact Euclidean distances. Several modifications of the method have already been presented. One of them included the calculation of a topological distance, i.e., accumulating intensity differences instead of spatial differences between pixel positions, for colorization purposes [9]. A combination of the topological and spatial distances can be found in [10]. This approach was adopted by the authors in the estimation of image quality of the finger vein vascular system [11].
In its simplest form, the Distance Transformation of an image is calculated as follows. Initially, the distance matrix is initialized with infinities at all pixels except the object’s pixels, for which the distance equals 0. Next, the distance matrix is swept twice (the so-called double-scan algorithm) with a special L-shaped mask, as presented in Figure 1. At each position of the mask, the distance of the central pixel (d_C) is replaced with a new value using the formula:
$$d_C = \min\left\{ d_{P_1} + 2,\ d_{P_2} + 1,\ d_{P_3} + 2,\ d_{P_4} + 1,\ d_C \right\}$$
After two passes, the distances in each pixel are rough estimations of the Euclidean distance to the closest object’s pixel.
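The double-scan procedure above can be sketched directly. The following is a minimal reference implementation under the stated mask weights (1 for axial, 2 for diagonal neighbours); the function name `distance_transform` is ours, and no attempt at optimization is made:

```python
import numpy as np

def distance_transform(obj_mask):
    """Classic two-pass (double-scan) chamfer distance transform.

    obj_mask: 2D boolean array, True at object pixels (distance 0 there).
    The L-shaped mask assigns weight 1 to axial and 2 to diagonal
    neighbours, matching the formula above.
    """
    h, w = obj_mask.shape
    d = np.where(obj_mask, 0.0, np.inf)
    # Forward mask: the three pixels in the row above plus the left one.
    mask = [(-1, -1, 2), (-1, 0, 1), (-1, 1, 2), (0, -1, 1)]

    for sweep in (1, -1):  # forward (top-left) then backward (bottom-right)
        ys = range(h) if sweep == 1 else range(h - 1, -1, -1)
        xs = list(range(w)) if sweep == 1 else list(range(w - 1, -1, -1))
        for y in ys:
            for x in xs:
                for dy, dx, cost in mask:
                    ny, nx = y + dy * sweep, x + dx * sweep
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y, x] = min(d[y, x], d[ny, nx] + cost)
    return d
```

For a single object pixel, an axial neighbour receives distance 1 and a diagonal neighbour distance 2, a rough chamfer estimate of the Euclidean distance.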
We propose the Modified Distance Transformation (MDT), in which we replace the formula above so that it operates on the intensities of pixels rather than on a binary image:
$$d_C = \min_{i=1,\dots,4}\left\{ d_{P_i} + \alpha \left| I_C - I_{P_i} \right| \right\},$$
where I_C and I_{P_i} stand for the intensity of the pixel at the central position (C) and at position P_i in the mask. The α is a trigger which allows distance growth only if the intensity of the central pixel is higher than that of the neighbouring pixel (i.e., α = 1 if I_C − I_{P_i} > 0, and α = 0 otherwise). As a result, the distance grows along intensity gradients only for positive slopes. This triggered approach was adopted to better fit the characteristics of veins in images, since their neighbourhood is always brighter.
In our application of MDT we select a pixel in the image together with its local N × M neighbourhood (Ω). In this small region we perform the MDT so that each pixel in Ω receives its modified distance from the central pixel. The new value of a pixel in the MDT-enhanced image is the mean of the distances calculated within the Ω window. If the intensity gradients in the window are small, no visible structures are detected and the result of MDT is low. On the other hand, if a pixel lies within some well distinguishable structure, the result of MDT is high. It is even higher when the central pixel is visibly dimmer than its neighbourhood. By changing the window size (in both x and y directions) we can obtain differing filtering outcomes that expose various finger structures: internal, external and mixed.
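The windowed enhancement described above can be sketched as follows. This is a direct, deliberately slow reference implementation under our reading of the method (function names `mdt_window` and `mdt_enhance` are ours); it seeds the window centre with distance 0, runs the triggered double scan, and averages the resulting distances:

```python
import numpy as np

def mdt_window(window):
    """Double-scan MDT inside one window Omega, seeded at the centre."""
    h, w = window.shape
    d = np.full((h, w), np.inf)
    d[h // 2, w // 2] = 0.0
    mask = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # L-shaped mask
    for sweep in (1, -1):  # forward and backward passes
        ys = range(h) if sweep == 1 else range(h - 1, -1, -1)
        xs = list(range(w)) if sweep == 1 else list(range(w - 1, -1, -1))
        for y in ys:
            for x in xs:
                for dy, dx in mask:
                    ny, nx = y + dy * sweep, x + dx * sweep
                    if 0 <= ny < h and 0 <= nx < w:
                        diff = float(window[y, x]) - float(window[ny, nx])
                        # trigger: distance grows only on positive slopes
                        alpha = 1.0 if diff > 0 else 0.0
                        d[y, x] = min(d[y, x], d[ny, nx] + alpha * abs(diff))
    return d

def mdt_enhance(img, win=(9, 3)):
    """Each output pixel = mean modified distance within its local
    win = (rows, cols) neighbourhood Omega."""
    n, m = win
    pad = ((n // 2, n // 2), (m // 2, m // 2))
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = mdt_window(padded[y:y + n, x:x + m]).mean()
    return out
```

On a flat region the output is zero (no gradients trigger growth), while a pixel dimmer than its neighbourhood produces a high response, matching the behaviour described in the text.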
We present examples of images obtained from consecutive steps of image processing using MDT in Figure 2, for two sizes of the operational window. The original image (O) and the results of MDT enhancement (M1/M2) are presented together with the respective Otsu binarization [12] outcomes (T1/T2). Depending on the utilized window, significantly different internal and external structures are enhanced and subsequently segmented. In Figure 3 we also show the results of MDT processing of data obtained at 730 nm and 940 nm for the same window size (9 × 3 pixels). At the longer wavelength, the vein system is much more visible both before and after processing with MDT.
In our previous research [11], the MDT was used to estimate image sharpness and to assess whether an image is in focus; here, we use it for image enhancement applied before the segmentation phase. We extend the range of sizes of the running operational window in which the distance transformation operations are performed (previously, a fixed window size was used). The quality measure of a pixel mentioned there is now taken as the filtered intensity. Therefore, the proposed filtering scheme gives each pixel a value which depends on the relations between the intensities of neighboring pixels in the operational window.

2.2. Biometric System

The evaluation of the method’s effectiveness was conducted using the biometric database GustoDB [11,13,14], which contains over 34,000 images at nine wavelengths between 730 nm and 950 nm, obtained from 107 volunteers for both left and right hands. Our apparatus consisted of: (1) ten replaceable finger pads with NIR diodes (at the wavelengths mentioned above), (2) a CCD camera with a CCTV lens allowing for imaging at a 3.8 pix/mm scale, and (3) a hand-box isolated from external light, with black walls (to reduce NIR light scattering), in which the images are acquired. Although we concentrate only on finger images, the system was prepared to allow taking pictures of the whole hand. Additionally, an Arduino microcontroller was used to control the intensities of the NIR diodes so that the images in the camera are not saturated. The most important parameters of the equipment used during the construction of the database are given in Table 1. For more details on GustoDB, see the original work [13].
The full pipeline used for the identification is presented in Figure 4. Its details are given in our recent work [11]. This biometric system consists of the following parts:
  • image acquisition at a selected wavelength using a CCD webcam with the infrared filter removed from the sensor and illumination provided by NIR LED diodes,
  • storage of raw data in the GustoDB database,
  • pre-processing procedures,
  • extraction of the regions of interest (ROIs),
  • extraction of features from the ROIs,
  • template matching by calculating the similarity between features extracted from images and the data already collected in the database,
  • making the final decision about the recognized identity.
The dotted arrows show a possible path of data flow when building the database (collecting images). During the identification process, this path is not used, except when one wants to add new data (i.e., include a new person in the set).
Block (c) of the pipeline (see Figure 4) was implemented using methods described and commonly used in the literature. Initially, the images are corrected for the offsets resulting from either the generation of dark current in the CCD pixels or the constant bias charge. This is done by taking dark frames (images acquired with the same exposure time and matrix gain as the light frames), averaging them to improve their signal-to-noise ratio, and subtracting the averaged dark frame from each acquired image. In this procedure, pixels exhibiting saturation are also identified and either interpolated (if they are faulty) or excluded from further analysis if the saturation originated from too high a light level. Next, min-max histogram stretching is performed to fit the range of intensities to the 8-bit range (0–255). In the final step of pre-processing, the finger is extracted from the image using the gradient-based approach presented by the authors in [15].
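The dark-frame correction and min-max stretching steps above can be sketched as follows. This is a minimal illustration, not the exact pipeline code; the function name `preprocess` and the saturation threshold `sat_level` are hypothetical, sensor-specific assumptions:

```python
import numpy as np

def preprocess(light, dark_frames, sat_level=4095):
    """Dark-frame correction and min-max stretching to 8-bit range.

    light: raw image; dark_frames: stack acquired with the same
    exposure time and matrix gain as the light frame.
    """
    master_dark = np.mean(dark_frames, axis=0)      # average to raise SNR
    corrected = light.astype(float) - master_dark   # remove bias + dark current
    saturated = light >= sat_level                  # flag saturated pixels
    corrected[saturated] = np.nan                   # exclude from later analysis
    lo, hi = np.nanmin(corrected), np.nanmax(corrected)
    stretched = (corrected - lo) / (hi - lo) * 255.0  # min-max stretch to 0-255
    return stretched, saturated
```

Saturated pixels remain flagged (NaN in the stretched image) so that later stages can interpolate or skip them, as described in the text.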
In the feature extraction part (d), three sub-blocks are present: image transformation, binarization and template generation. The transformation may utilize several alternative approaches: position-gray-profile-curve (PGPC) [16], maximum curvature points in image profiles (MC) [17], Niblack image adaptive thresholding (NAT) [18], repeated dark line tracking (RDLT) [5], and the wide line detector (WD) [19]. These alternative methods are used for image enhancement and are evaluated separately in the system. The proposed MDT method is another alternative for this sub-block. Next, for binarization, we use Otsu thresholding [12]. Finally, the generated template of features is matched (e) using the Miura match technique [5] against other images in the database (b). The result of this matching (f) is also the final decision of the recognition system.
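To illustrate the matching step, the following is a simplified, Miura-style sketch: one binary template is shifted over the other and the best Dice-style overlap is returned. This is only a hedged approximation of the idea; the published formulation [5] crops a correlation region and normalizes differently, and the `max_shift` parameter here is an assumption:

```python
import numpy as np

def miura_match_score(template, candidate, max_shift=5):
    """Best normalized overlap of two binary vein templates over
    small translations; result lies in [0, 1]."""
    t = template.astype(bool)
    c = candidate.astype(bool)
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(c, dy, axis=0), dx, axis=1)
            overlap = np.logical_and(t, shifted).sum()
            denom = t.sum() + shifted.sum()
            if denom:
                best = max(best, 2.0 * overlap / denom)  # Dice coefficient
    return best
```

Identical templates score 1.0, while templates with no vein pixels in common under any tested shift score 0.0.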
The image transformation part is very important in the whole pipeline, since it is responsible for highlighting the structures within the finger before Otsu binarization is performed. The better the structures are revealed in this phase, the more informative the features extracted prior to matching. This is also where we introduce our new algorithm, which competes with the other five methods utilized in the pipeline.

3. Results of Numerical Experiments

All experiments were carried out on GustoDB with the biometric system described above. The image transformation was performed using the five known methods and the proposed MDT technique, alternatively. The GustoDB was limited to images of the bottom side of fingers, as earlier research [13] proved that this side allows for reaching a better identification score. The collection of images of a given finger was divided randomly 500 times into test and database sets. By ‘database set’ we understand the set of images used as a reference during the matching phase. In GustoDB we have three images of each finger (acquired after consecutively removing and reinserting the finger into the device). Therefore, two images of a given finger were left in the database set, while the remaining one was put into the test set. The identification system had to answer the question of whether the image from the test set belongs to a given person, having only two of that person’s images in the database. The possible answers can be classified into four categories: true positive (TP), true negative (TN), false positive (FP) and false negative (FN). To assess the effectiveness of recognition, we defined the accuracy (ACC) as in our previous work [13]:
$$ACC = \frac{TP + TN}{TP + TN + FP + FN}$$
In such a quality measure, all situations encountered by the system are covered: (1) the system refused entry to the wrong person (TN), (2) the system refused entry to the right person (FN), (3) the system accepted the wrong person (FP) and (4) the system accepted the right person (TP). Obviously, an ideal system showing 100% accuracy should have zero false counts (both positive and negative). The ACC was calculated for each of the 500 trials for all evaluated techniques. Eventually, the mean ACC was determined as the final indicator of identification efficiency.
We also evaluated the false positive ratio (FPR) using trials in which a tested person was excluded from the training database. The FPR, defined as in Equation (4), indicates the chance that someone is recognised by the system even though he or she is not included in the database. This measure shows the reliability of the system when utilized for authentication rather than for identification purposes. The results of the ACC and FPR indicators are given in Table 2 and Table 3, respectively.
$$FPR = \frac{FP}{TP + FP}$$
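The two indicators defined in Equations (3) and (4) are straightforward to compute from the four outcome counts; a minimal sketch (helper names are ours):

```python
def accuracy(tp, tn, fp, fn):
    """ACC as defined in Equation (3): correct decisions over all decisions."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_ratio(tp, fp):
    """FPR as defined in Equation (4): FP / (TP + FP)."""
    return fp / (tp + fp)
```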
Throughout the evaluation, we tuned the parameters of the transformation methods to obtain the highest possible ACC values. The FPR was not optimized; it was calculated for the best parameters obtained from the ACC optimization. In the proposed MDT method, the tuned parameter was the size of the running operational window. The best window dimensions indicated by the results of the experiment are listed in Table 2 together with the best ACCs. Additionally, the dependence of the obtained ACC on the x and y dimensions of the window is shown in Figure 5.

4. Discussion

The experiment proves that the new method achieves a much higher classification success rate. It can reach 99% in some cases; however, these results should be treated with caution, since they may be biased by the limited number of images/persons used in the experiment. Importantly, the average results of over 90% for almost all wavelengths are very promising. The analysis of the FPR confirms a clear advantage of the method as well. In most cases, this measure is an order of magnitude lower than that of the other techniques. The second best method in our tests was the wide line detector (WD) [19], which managed to outperform the proposed technique on the 950 nm image dataset and was close to its results in the other cases. The remaining methods were not as flexible and universal in terms of the utilized wavelength: their efficiency dropped to 60% and below on some image sets.
The MDT optimization surfaces presented in Figure 5 show a clear plateau when the window is large enough to cover the size of a typical structure present in the image. The method’s efficiency decreases when too small a window is used (smaller than 4 × 4 pixels, which is approximately 1 × 1 mm). This indicates that there is very little or even no information hidden in such small structures. It is also clear that the window should be rectangular, with its larger side perpendicular to the finger axis (the x axis in the assumed coordinate system). Such a rectangular window promotes the enhancement of features arranged along the finger (mostly the veins). The larger side of the window should have at least 15 pixels (4 mm); however, values larger than 21 pixels (5.5 mm) do not bring any significant improvement. The shorter side should be small, sometimes even the smallest possible 1 pixel. The optimal height is between 1 and 5 pixels, which is approximately 0.5–1.5 mm. Summarizing, the desired size of the operational window should be about 5 × 1 mm on average for our 3.8 pix/mm sampling rate.
One should also notice how the shape of the plots depends on the utilized wavelength. While for the shortest and the longest wavelengths (730 nm and 950 nm) the characteristic plateau is not observed and the results are on average poorer, for light around 860 nm the plateau is clearly visible, with a very high mean ACC over a wide range of window dimensions. This indicates that in this range of NIR light the largest number of characteristic features can be observed in the images, and the system’s efficiency is much less dependent on the dimensions of the utilized operational window. Additionally, for the longest wavelength (950 nm), we observed that far fewer external properties of the finger skin were extracted after MDT enhancement. This was expected, as more light travels unabsorbed through the finger, lowering the contrast of the skin features. It was also accompanied by the lower quantum efficiency of the CCD at 950 nm, resulting in a poorer signal-to-noise ratio of the images.

5. Conclusions

In this paper, we proposed a new technique for processing finger vein system images acquired in NIR. Our Modified Distance Transformation (MDT), employed before for image quality estimation, turned out to allow for selective enhancement of internal and external structures of the human finger, which is crucial in a biometric system prior to the classification phase. The method should be placed correctly within the identification pipeline: after preprocessing and before the image binarization stage. By modifying only a single pair of parameters (the size of the operational window) one can easily fit the method to the lens/sensor combination used in a given biometric system.
A comparison with other widely used processing algorithms, on an extended set of images collected in the GustoDB database, showed that our method results in significantly better person identification efficiency. While our method reached over 90% accuracy, the compared methods showed results that were on average 10% worse. The tests were performed on a range of NIR images between 730 and 950 nm, and the superiority of the method was demonstrated over this whole wavelength range. Based on the performed analysis, we also suggest the size of the operational window which should be utilized in the MDT to optimize its performance.

Author Contributions

Conceptualization, A.P. and K.B.; Methodology, A.P.; Software, K.B., T.M.; Validation, K.B., T.M., A.P.; Data Collection, K.B.; Writing, Draft Preparation, A.P., K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by: (1) the Polish Ministry of Science and Higher Education funding for statutory activities, (2) Silesian University of Technology Rector Grant 02/140/RGJ20/0001 and (3) statutory funds of the Department of Cybernetics, Nanotechnology and Data Processing, Silesian University of Technology, BK-2020 (T.M.).

Acknowledgments

The GustoDB data were collected as part of the PhD thesis of Michał Waluś.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

References

  1. Traore, I.; Alshahrani, M.; Obaidat, M.S. State of the art and perspectives on traditional and emerging biometrics: A survey. Secur. Priv. 2018, 1, e44. [Google Scholar] [CrossRef] [Green Version]
  2. Prahl, S. Tabulated Molar Extinction Coefficient for Hemoglobin in Water, 1998. Available online: http://omlc.ogi.edu/spectra/hemoglobin/summary.html (accessed on 5 March 2020).
  3. Raghavendra, R.; Busch, C. Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recog. 2014, 47, 2205–2221. [Google Scholar] [CrossRef]
  4. Daas, S.; Boughazi, M.; Sedhane, M.; Bouledjfane, B. A review of finger vein biometrics authentication System. In Proceedings of the 2018 International Conference on Applied Smart Systems (ICASS), Medea, Algeria, 24–25 November 2018; pp. 1–6. [Google Scholar]
  5. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl. 2004, 15, 194–203. [Google Scholar] [CrossRef]
  6. Xiao, R.; Yang, G.; Yin, Y.; Yang, L. A novel matching strategy for finger vein recognition. In Intelligent Science and Intelligent Data Engineering; Springer: Berlin/Heidelberg, Germany, 2013; pp. 364–371. [Google Scholar]
  7. Dubuisson, M.P.; Jain, A.K. A modified Hausdorff distance for object matching. In Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; pp. 566–568. [Google Scholar]
  8. Rosenfeld, A.; Pfaltz, J. Distance functions on digital pictures. Pattern Recog. 1968, 1, 33–61. [Google Scholar] [CrossRef]
  9. Lagodzinski, P.; Smolka, B. Application of the Extended Distance Transformation in digital image colorization. Multimed. Tools Appl. 2012, 69, 111–137. [Google Scholar] [CrossRef]
  10. Popowicz, A.; Smolka, B. A method of complex background estimation in astronomical images. Mon. Notic. R. Astron. Soc. 2015, 452, 809–823. [Google Scholar] [CrossRef] [Green Version]
  11. Waluś, M.; Bernacki, K.; Popowicz, A. Quality assessment of NIR finger vascular images for exposure parameter optimization. Biomed. Res. 2016, 2, 383–391. [Google Scholar]
  12. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  13. Waluś, M.; Bernacki, K.; Konopacki, J. Impact of NIR wavelength lighting in image acquisition on finger vein biometric system effectiveness. Opto Electron. Rev. 2017, 25, 263–268. [Google Scholar] [CrossRef]
  14. Waluś, M.; Bernacki, K.; Konopacki, J.; Nycz, M. NIR finger vascular system imaging in angiology applications. In Proceedings of the 2015 22nd International Conference Mixed Design of Integrate Circuits & Systems (MIXDES), Torun, Poland, 25–27 June 2015; pp. 69–73. [Google Scholar]
  15. Yang, L.; Yang, G.; Yin, Y.; Xiao, R. Sliding Window-Based Region of Interest Extraction for Finger Vein Images. Sensors 2013, 13, 3799–3815. [Google Scholar] [CrossRef] [PubMed]
  16. Jiang, H.; Guo, S.; Li, X.; Qian, X. Vein pattern extraction based on the position-gray-profile curve. In Proceedings of the 2nd International Congress on Image and Signal Processing, Tianjin, China, 17–19 October 2009; pp. 1–4. [Google Scholar]
  17. Miura, N.; Nagasaka, A.; Miyatake, T. Extraction of finger-vein patterns using maximum curvature points in image profiles. IEICE Trans. Inform. Syst. 2007, 90, 1185–1194. [Google Scholar] [CrossRef] [Green Version]
  18. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–166. [Google Scholar]
  19. Liu, L.; Zhang, D.; You, J. Detecting wide lines using isotropic nonlinear filtering. IEEE Trans. Image Process. 2007, 16, 1584–1595. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. The L-shaped mask used in the original Distance Transformation and its sliding manner in the double-scan technique.
Figure 2. Exemplary results of processing images with MDT (before and after binarization) using two sizes of the operational window (9 × 3 pixels on the left and 9 × 19 pixels on the right). O—original image, M1/M2—image processed with MDT, T1/T2—thresholded image (after Otsu binarization).
Figure 3. Comparison of MDT processing of images registered at 730 and 940 nm. From the left: original image 1 (730 nm), processed image 1, original image 2 (940 nm) and processed image 2. In both cases the operational window of 9 × 3 pixels was utilized.
Figure 4. The overview of the biometric pipeline used in the experiments. The highlighted block—IMG Transformation—is the place where the new algorithm is proposed.
Figure 5. Impact of the window size on verification accuracy. Axis labels for the surface plots are: x,y—dimensions of the operational window in pixels, z—mean ACC. The highest values of ACC were marked with a red dot.
Table 1. Hardware characteristics of the equipment used for the creation of GustoDB biometric database.
Sensor technology: CCD
Image resolution: 640 × 480 pixels
Image pixel scale: 3.8 pix/mm
Illumination type: LED diodes
Wavelengths: 730–950 nm
Table 2. The mean accuracy ACC of Miura match [5] obtained for different segmentation methods: MDT, PGPC, MC, NAT, RDLT, WD. For the new MDT method, the best size of the operational window is given for each wavelength.

Wavelength   MDT (window)     PGPC    MC      NAT     RDLT    WD
950 nm       85.25 (17 × 5)   78.70   76.97   67.16   66.66   87.61
940 nm       96.27 (21 × 3)   77.37   87.76   82.47   83.95   92.23
890 nm       99.37 (17 × 1)   86.97   86.10   83.40   84.66   94.92
880 nm       91.98 (19 × 3)   79.13   83.89   80.79   81.31   89.92
875 nm       99.56 (21 × 1)   87.89   90.48   87.29   90.30   96.43
860 nm       98.52 (21 × 1)   85.99   89.52   86.77   89.44   94.78
850 nm       94.08 (21 × 5)   79.29   79.60   78.50   78.83   89.92
808 nm       96.72 (19 × 5)   79.76   85.67   82.59   83.85   94.05
730 nm       97.19 (21 × 5)   62.77   76.09   59.62   62.06   81.82
Table 3. The false positive ratio FPR [%] calculated for all compared methods for the best parameters dictated by the ACC optimization.

Wavelength   MDT      PGPC     MC       NAT      RDLT     WD
950 nm       0.1966   0.2849   0.3039   0.4386   0.4439   0.1670
940 nm       0.0481   0.2985   0.1617   0.2318   0.2104   0.1039
890 nm       0.0175   0.1716   0.1873   0.2209   0.2000   0.0666
880 nm       0.1067   0.2704   0.2150   0.2532   0.2465   0.1326
875 nm       0.0057   0.1579   0.1249   0.1649   0.1285   0.0464
860 nm       0.0199   0.1818   0.1379   0.1776   0.1381   0.0689
850 nm       0.0772   0.2746   0.2685   0.2839   0.2799   0.1324
808 nm       0.0436   0.2678   0.1810   0.2364   0.2155   0.0798
730 nm       0.0379   0.4913   0.3162   0.5373   0.5056   0.2426
