3.1.1. Image Pre-Processing

In order to reduce the noise in the images generated by the system, a basic averaging filter was used [50]. Although the method does not completely eliminate blurring, especially at the edges, it works well for most of the image surface. Repeated application of the filter would blur out the entire image; therefore, filtering is stopped once it produces the preferred image (an image that is no longer multi-colored). Each pixel of the filtered image holds the mean value of its neighborhood, computed by convolution. The size of the convolution kernel is the parameter of the method (the blurring effect grows with the kernel size), and all kernel values are equal to 1.

The equation for a 3 × 3 kernel is shown below.

$$f(i,j) = \frac{1}{9} \sum_{k=-1}^{1} \sum_{l=-1}^{1} g\left(i+k, j+l\right) \tag{1}$$
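Equation (1) amounts to replacing each pixel by the mean of its 3 × 3 neighborhood. The NumPy sketch below is an illustrative assumption, not the system's actual code; the one-pixel border, where the window would leave the image, is left unfiltered.

```python
import numpy as np

def mean_filter_3x3(g: np.ndarray) -> np.ndarray:
    """Replace each interior pixel by the mean of its 3x3 neighborhood (Eq. 1)."""
    f = g.astype(float)  # astype returns a copy; border pixels stay as in g
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            # Mean over the 3x3 window centered at (i, j); with a kernel of
            # ones this is the convolution of Eq. (1) scaled by 1/9.
            f[i, j] = g[i - 1:i + 2, j - 1:j + 2].mean()
    return f

# A single bright pixel is spread evenly over its 3x3 neighborhood.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = mean_filter_3x3(img)
print(smoothed[2, 2])  # 1.0: the window sums to 9 and is divided by 9
```

Repeating the call on its own output reproduces the progressive blurring described above.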

3.1.2. Image Segmentation

The image produced by the proposed system is segmented into components with similar characteristics. Segmentation aims to distinguish the scanned objects from one another, as well as from the surrounding background [51]. A variety of segmentation methods are available; for this study, thresholding was adopted due to its computational simplicity [52]. Thresholding produces a complete segmentation of objects by transforming the input image *f* into an output binary image *f*′ according to the relationship shown below.

$$f'(i,j) = \begin{cases} 1 \text{ for } f(i,j) \ge T \\ 0 \text{ for } f(i,j) < T \end{cases} \tag{2}$$

where *T* is a constant (threshold) defined in advance, and *f*′(*i,j*) = 1 for the image parts belonging to the examined object. Thresholding tests the elements of an image progressively, assigning values to them according to the identified requirements. Nevertheless, this segmentation method has the drawback that selecting an accurate threshold value *T* may be difficult. To set the threshold automatically, either global or local characteristics of the image can be used. The global method uses information from all pixels in the image to determine the threshold value, which is then adjusted according to the image histogram or the mean intensity of all image points [53]. The local method, on the other hand, uses a different threshold for each element of the image, calculated from its surrounding. The pixel value is then given by the following formula.

$$f'(i,j) = \begin{cases} 1 \text{ for } f(i,j) \ge (\mu_{ij} - T_g) \\ 0 \text{ for } f(i,j) < (\mu_{ij} - T_g) \end{cases} \tag{3}$$

where μ*ij* denotes the average value of all points in the surrounding. The size of the surrounding is selected according to the target of the segmentation, and it depends mainly on the size and shape of the examined object. *Tg* is a constant that offsets the local threshold; it is usually a positive number, but can also be negative depending on the situation. Segmentation parameters are selected with respect to the required results and are therefore identified empirically for a given task.
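Equation (3) can be sketched as follows. This is a minimal assumption of how a local-mean threshold might be implemented (the window size and *Tg* values here are arbitrary, not the paper's settings); the surrounding window is simply clipped at the image border.

```python
import numpy as np

def local_threshold(f: np.ndarray, size: int = 5, t_g: float = 5.0) -> np.ndarray:
    """Binarize f per Eq. (3): pixel -> 1 iff f(i,j) >= mu_ij - T_g,
    where mu_ij is the mean of the size x size surrounding of (i, j)."""
    h, w = f.shape
    r = size // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            # Clip the surrounding window at the image border.
            win = f[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = 1 if f[i, j] >= win.mean() - t_g else 0
    return out

# Toy image: bright upper half (the "object"), dark lower half.
img = np.full((8, 8), 100.0)
img[4:, :] = 20.0
binary = local_threshold(img, size=5, t_g=5.0)
# Dark pixels just below the edge fall under their local mean minus T_g,
# so they are marked 0, while pixels in uniform regions stay 1.
```

This behavior near intensity transitions illustrates why an inhomogeneous object such as a hand is difficult to segment with local thresholds.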

Both thresholding approaches were experimented with in the software system of the proposed identification and verification device [52,54]. The solution based on local image characteristics was not selected due to the non-homogeneity of the examined object (the hand), caused, for example, by wrinkles on the skin, differences in skin color, etc. It was not possible to set the thresholding parameters, such as the size of the surrounding and the *Tg* constant, so that the resultant image could be subjected to further processing (determining the contour of the hand). As a result, the global threshold setting method was adopted, with the threshold set to the average intensity of the complete image reduced by a constant. This method provides the best image for further processing.
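The adopted global variant, i.e., Eq. (2) with *T* derived from the whole image, can be sketched as below. The constant `c` is a hypothetical value for illustration; the paper does not state the one actually used.

```python
import numpy as np

def global_threshold(f: np.ndarray, c: float = 10.0) -> np.ndarray:
    """Eq. (2) with T set globally: T = mean intensity of the whole
    image minus a constant c, as in the adopted method."""
    t = f.mean() - c
    return (f >= t).astype(np.uint8)

# Toy image: a bright "hand" region on a dark background.
img = np.array([[200.0, 200.0, 40.0],
                [200.0, 200.0, 40.0],
                [200.0, 200.0, 40.0]])
mask = global_threshold(img, c=10.0)  # T = 1320/9 - 10, about 136.7
print(mask.sum())  # 6: the six bright pixels are marked as object
```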

3.1.3. Definition of Biometric Characteristics

In order to select the biometric characteristics for this experimental investigation, the studies in References [27,36] were reviewed. The measured characteristics identified for this study, which are utilized in the design of the biometric device, are discussed and illustrated in Figure 2.

**Figure 2.** Depiction of the measured biometric characteristics [9]: (**a**) the measurable characteristics of the palm of the hand; (**b**) the base of a finger, defined as the point of intersection of the green lines.

Figure 2a shows the measurable characteristics of the palm of the hand. The length of each finger is measured from the red point (the fingertip) to the blue point (the center of the base of the finger). The base is defined as the intersection of the green lines connecting the neighboring valleys, as illustrated in Figure 2b. Furthermore, the distances between neighboring fingertips are measured, excluding the distance between the thumb and the adjacent finger: experiments with the proposed scanner showed that, for this distance, the differences among images of a single user were greater than the differences among users. The distances between the valley points are also measured, again without the thumb. The width of the palm, L6, is the last measured feature. For a single finger (Figure 2b), 10 points are distributed along the finger length, and the position of the central red point is determined from the angle between the sections (represented by the blue vectors).
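The length and distance features above reduce to Euclidean distances between landmark points. The sketch below illustrates this under assumed, purely hypothetical pixel coordinates; the finger names, points, and values are not from the paper.

```python
import math

def euclid(p, q):
    """Euclidean distance between two (x, y) points in pixels."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical landmark coordinates, as they might come from the contour
# step: red points = fingertips, blue points = finger-base centers.
tips = {"index": (120.0, 40.0), "middle": (160.0, 25.0)}
bases = {"index": (118.0, 200.0), "middle": (158.0, 205.0)}

# Finger length: fingertip (red point) to base center (blue point).
lengths = {name: euclid(tips[name], bases[name]) for name in tips}

# Distance between neighboring fingertips (thumb excluded, per the text).
tip_gap = euclid(tips["index"], tips["middle"])
```

The palm width L6 and the valley-point distances would be computed the same way from their respective point pairs.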
