**4. Experiment Analysis**

To fully validate cross-device hand vein recognition based on bit-plane mutual information, this experiment used two devices with different parameters, labeled the first and second device, to collect and classify the hand vein images of 50 people. Ten images were collected from each person's right and left hands, respectively, giving a total of 2000 dorsal hand vein images of size 400×400. Because of the disparity between their vein networks, the right and left hands are treated as different subjects, which doubles the number of classes. In addition, the two devices differ in parameters such as contrast, brightness, focal length, and lens optical performance. The data were collected twice by the different devices, with a time span of 12 months.

The experiment uses one device for registration and the other for recognition. The two devices belong to two different generations of acquisition systems. Their illumination modules both adopt a reflectance illumination scheme with an infrared LED array, but with different wavelengths and bandwidths: Device 1 uses a 700 nm–1000 nm near-infrared diode source (wideband source) as the active incident source, while Device 2 uses a near-infrared diode source with a central wavelength of 850 nm and a half-bandwidth of 50 nm (narrow-band source) and increases the number of LEDs in the array. In the image acquisition module, Device 1 uses a common camera (resolution: 420 lines, output pixels: 640×480, signal-to-noise ratio: 40 dB), whereas Device 2 uses an industrial-grade camera (resolution: 570 lines, output pixels: 768×494, signal-to-noise ratio: 46 dB). In the interface module, the two devices also use different acquisition cards.

To ensure a realistic distribution of cross-device dorsal hand vein images, acquisition was automatic and the volunteers' posture was not constrained. In addition, the parameter differences between the devices make recognition based on heterogeneous images more difficult. We experimented separately with different types of images: the gray-normalized image, the binary image, the gray image that only retains the contour of the dorsal hand vein, and the bit-plane images. The experiment selected the optimal number of blocks, bit plane, and mutual information calculation mode, compared the results of our algorithm with other algorithms on cross-device images, and then verified the robustness of the algorithm by the recognition rate.

Due to changes in the collection environment, the images collected by the two devices differ significantly, mainly in brightness, displacement, and rotation.

The images show a distinct brightness difference in the brightest and darkest areas, as shown in Figure 15, which affects the recognition rate to a large extent.

**Figure 15.** Difference in brightness.

Because of differences in the posture of the person and in the handle width of the devices, the back of the hand undergoes a certain displacement, as shown in Figure 16. When the displacement is large, some information on the back of the hand is lost, which affects the recognition rate to a certain extent.

**Figure 16.** Difference in displacement.

Because volunteers place their hands at different angles, the dorsal hand vein images are rotated and deformed, as shown in Figure 17, which also affects the recognition rate.

**Figure 17.** Difference in rotation angle.

These differences significantly increase the difficulty and complexity of recognizing cross-device dorsal hand vein images. The experimental comparison below verifies that the method of this paper better overcomes the effects of brightness, displacement, and rotation.

First, the gray-normalized image (Figure 4), the binary image (Figure 5), and the gray image that only retains the contour of the dorsal hand vein (Figure 6) were each divided into 20 × 20 blocks. Then, the mutual information feature vector between blocks was computed using the horizontal, vertical, and eight-neighborhood calculation modes, respectively. Finally, the classification result was output by the Euclidean distance classifier. The recognition rates of the three image types in the three modes are shown in Table 1.
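As an illustration of this pipeline, the following Python sketch is our own minimal reconstruction, not the authors' code; the histogram bin count and the summation of neighbor-wise mutual information into a per-block feature are assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (bits) between two image blocks via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)          # marginal of block a
    py = pxy.sum(axis=0)          # marginal of block b
    nz = pxy > 0                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def eight_neighbourhood_features(img, grid=20):
    """Divide img into a grid x grid array of blocks; for each block, sum the
    mutual information with its (up to 8) neighbouring blocks."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = [[img[i*bh:(i+1)*bh, j*bw:(j+1)*bw] for j in range(grid)]
              for i in range(grid)]
    feats = []
    for i in range(grid):
        for j in range(grid):
            mi = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < grid and 0 <= nj < grid:
                        mi += mutual_information(blocks[i][j], blocks[ni][nj])
            feats.append(mi)
    return np.array(feats)        # feature vector of length grid * grid
```

The horizontal and vertical modes follow the same pattern with the neighbor offsets restricted to (0, ±1) or (±1, 0), respectively.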



The experiments show that the recognition rate of the gray-normalized image is below 50%, the binary image reaches 86.60%, and the gray image that only retains the contour of the dorsal hand vein reaches 89.67%. The gray-normalized image is affected by background such as skin, while the binary image completely loses the grayscale information, so neither performs as well as the contour-preserving gray image. Among the three modes, the eight-neighborhood mode achieves the highest recognition rate, which indicates that computing the mutual information of adjacent blocks by eight-neighborhood traversal captures the texture features of the dorsal hand vein more accurately.

To make full use of the gray information of the dorsal hand vein and overcome the effects of illumination, brightness, rotation, and scale changes in the acquisition environment, the eight bit planes generated from the gray image that only retains the contour of the dorsal hand vein were tested separately; the resulting recognition rates are shown in Figure 18.

**Figure 18.** Recognition rate of different bit planes in three modes.
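The bit-plane decomposition itself is a standard bitwise operation; a minimal sketch (the function name is our own) for an 8-bit grayscale image:

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit grayscale image into its eight binary bit planes.
    Plane b0 is the least significant bit, b7 the most significant."""
    img = np.asarray(img, dtype=np.uint8)
    return [((img >> k) & 1).astype(np.uint8) for k in range(8)]
```

The sixth bit plane discussed below is then `bit_planes(img)[5]`; summing `plane << k` over all planes recovers the original image exactly.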

It can be seen that when the number of blocks is 20 × 20 and the mutual information calculation mode is eight-neighborhood traversal, the sixth bit plane (b5) achieves the best recognition rate in this paper, 93.33%. The sixth bit plane not only retains the original contour of the dorsal hand vein but also suppresses the influence of brightness and noise to a certain extent, so it better reflects the texture features. The experiment was also compared with other dorsal hand vein recognition methods. In its long-term research on dorsal hand vein recognition, the Intelligent Recognition and Image Processing Laboratory of North China University of Technology (NCUT) reproduced several mainstream algorithms on the NCUT hand vein dataset. The results of the comparative experiment are shown in Table 2.


**Table 2.** Comparison of recognition rates about different algorithms under cross-device.

The LBP algorithm studies local grayscale texture features and requires highly accurate registration of the dorsal hand vein position, so its recognition rate is not high. The PCA algorithm treats each sample as a whole and therefore ignores local attributes, yet the neglected parts are likely to contain important separability information, so its cross-device recognition performance is very poor. Although the SIFT algorithm is invariant to scale, rotation, and illumination changes, few feature points are shared between images taken by different devices, so its recognition rate is also not high. The positions of the feature points generated by the Gaussian random distribution in the GDRKG random feature point algorithm are not deterministic, so the probability of matching errors increases greatly and the recognition rate is not ideal. The improved SIFT algorithm achieves a good recognition rate in cross-device experiments, but it relies too much on parameter settings and template selection, and its computation is very slow. Our method quantifies the texture features of the dorsal hand vein by computing the mutual information between adjacent blocks of the bit planes, and uses the Euclidean distance for classification. The high recognition rate achieved in the experiment fully demonstrates the effectiveness and feasibility of the proposed method.
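The final Euclidean-distance matching step can be sketched as a nearest-neighbour search over the registered templates (a generic sketch, with illustrative array names; the paper does not specify further details of the classifier):

```python
import numpy as np

def classify(probe, gallery_feats, gallery_labels):
    """Assign the probe feature vector the label of the registered template
    with the smallest Euclidean distance."""
    d = np.linalg.norm(gallery_feats - probe, axis=1)
    return gallery_labels[int(np.argmin(d))]
```

Here `gallery_feats` holds one mutual-information feature vector per registered subject (collected on the registration device) and `probe` is the feature vector of an image from the recognition device.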
