*3.4. Feature Weight Calculation*

Let the number of template signatures equal one, and let the weight *wa* increase from 0 to 1 in steps of 0.05. The signatures of the first four signers in the SVC1 dataset are selected for training to find the optimal *wa*. The EERs under the different weights are then computed on each of the four signature datasets. The results are shown in Table 4.
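The weight sweep described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the two similarity components, their synthetic score distributions, and the fusion rule *s* = *wa*·*s*_shape + (1 − *wa*)·*s*_time are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-component similarity scores (illustrative stand-ins for the
# paper's two features); genuine pairs score higher on average than forgeries.
gen_shape, gen_time = rng.normal(0.8, 0.1, 200), rng.normal(0.7, 0.1, 200)
forg_shape, forg_time = rng.normal(0.4, 0.1, 200), rng.normal(0.5, 0.1, 200)

def eer(genuine, forgery):
    """EER: the operating point where false accepts and false rejects balance."""
    thresholds = np.sort(np.concatenate([genuine, forgery]))
    far = np.array([(forgery >= t).mean() for t in thresholds])  # forgeries accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])   # genuines rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

weights = np.arange(0.0, 1.0001, 0.05)  # wa from 0 to 1 in steps of 0.05
eers = [eer(w * gen_shape + (1 - w) * gen_time,
            w * forg_shape + (1 - w) * forg_time)
        for w in weights]
best_wa = weights[int(np.argmin(eers))]
```

On real data, `eers` would be the row of Table 4 for one dataset, and `best_wa` the weight selected on the training signers.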


**Table 4.** EERs under different weights on four datasets.

\* Only *wa* = 0.5–1.0 is evaluated here, because this dataset contains the most signatures and the computation time is therefore long.

From Table 4, it can be seen that on the training samples the minimum EER is obtained at *wa* = 0.85, where the EERs on the four datasets are 3.47%, 12.30%, 12.25%, and 6.07%, respectively. At this point, the same minima are also obtained on the SVC1 and MCYT datasets. Figure 11 shows that, as *wa* increases, the EER behaves like a convex function of *wa*, with a single minimum on each dataset.

In fact, we can also see that the EER changes little for *wa* ∈ [0.6, 0.95], which indicates that our weight selection is robust, whereas for *wa* ≤ 0.3 the EER increases dramatically. When *wa* = 0, signature verification depends only on the signature writing time ratio. In that case the EER exceeds 20% on both the SVC1 and SVC2 datasets, yet remains below 5% on the SUSIG dataset. This shows that the writing time ratio is a useful feature for distinguishing test signatures. It also shows that, among the four signature datasets, SVC1 and SVC2 are considerably more difficult for signature verification than SUSIG, which is consistent with the overall experimental results.
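Since every comparison in this section is reported as an EER, a minimal sketch of how an EER is obtained from genuine and forgery similarity scores may be helpful; the score values below are hypothetical, chosen only to illustrate the computation.

```python
def eer(genuine, forgery):
    """Equal error rate: the threshold where the false accept rate (FAR) and
    false reject rate (FRR) are closest; higher score = more similar."""
    best_gap, best_eer = None, None
    for t in sorted(genuine + forgery):
        far = sum(f >= t for f in forgery) / len(forgery)  # forgeries accepted
        frr = sum(g < t for g in genuine) / len(genuine)   # genuines rejected
        if best_gap is None or abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Hypothetical similarity scores for one signer
genuine_scores = [0.92, 0.88, 0.85, 0.81]
forgery_scores = [0.40, 0.35, 0.55, 0.30]
print(eer(genuine_scores, forgery_scores))  # perfectly separable -> 0.0
```

When the two score distributions overlap, the returned EER rises toward 0.5, which is why the heavily overlapping SVC1/SVC2 cases at *wa* = 0 exceed 20%.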

**Figure 11.** EERs and the mean on four signature datasets with different weight *wa*.

#### *3.5. Experimental Results*

The performance of the system, measured as the maximum EER, the minimum EER, the average EER, and the standard deviation of the EERs (in percent) for different numbers of reference signatures and different similarity metrics, is shown in Table 5.


**Table 5.** Performances of the system with different dataset.

From the results described above, when the experiments are run on SUSIG, the best result is EER = 3.47%, obtained with the CSM and five reference samples. On SVC1, SVC2, and MCYT, the method yields EER = 12.30%, 12.25%, and 6.07%, respectively. For the four datasets and different numbers of reference samples, the EERs of the tests with a single (#1) template, repeated 10 times, are arranged in ascending order in Figure 12.

**Figure 12.** EERs of test results with #1 template repeated 10 times for SUSIG, SVC1, SVC2 and MCYT, respectively.

At the same time, it should be emphasized that, as seen in Table 5 and Figure 12, the maximum EER is almost double the minimum when the single (#1) template is selected at random over the ten repetitions. This demonstrates that the selection of template samples is also essential, and that representative template samples can effectively improve the accuracy of signature verification.
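The spread statistics reported in Table 5 can be summarized as below. This is purely illustrative: the per-run EER values are synthetic stand-ins, and a real experiment would obtain each value by running the verification pipeline with a freshly sampled template.

```python
import random
import statistics

# Stand-in EERs (%) for ten runs with a randomly chosen single (#1) template;
# sorted ascending, as in Figure 12.
random.seed(1)
eers_per_run = sorted(round(random.uniform(4.0, 10.0), 2) for _ in range(10))

# Summary statistics of the kind reported in Table 5
summary = {
    "min": min(eers_per_run),
    "max": max(eers_per_run),
    "mean": round(statistics.mean(eers_per_run), 2),
    "std": round(statistics.stdev(eers_per_run), 2),
}
```

The gap between `summary["max"]` and `summary["min"]` is what quantifies how much a lucky or unlucky template choice moves the result.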

In order to demonstrate the effectiveness of our proposed method, we compare its results with those of other state-of-the-art methods. It should be mentioned that these methods use different features and classifiers, and it is difficult to compare them across different datasets. Hence, we only compare methods that were evaluated on the SUSIG, SVC1, SVC2, and MCYT datasets. The best-performing methods on these datasets, which use five genuine reference signatures per signer for enrollment, are taken for the comparative study. The best EERs reported in the reference works on SUSIG, SVC1, SVC2, and MCYT are given in Tables 6–9 for one and five genuine reference signatures.


**Table 6.** Comparative studies of state-of-the-art methods implemented on SUSIG.

**Table 7.** Comparative studies of state-of-the-art methods implemented on SVC1.


**Table 8.** Comparative studies of state-of-the-art methods implemented on SVC2.



**Table 9.** Comparative studies of state-of-the-art methods implemented on MCYT.

Furthermore, it should be pointed out that the performance of a signature verification system is related to the number of samples used to build the model; some statistical models even need to be trained on both genuine and forged signatures. However, in many practical settings it is difficult to register a large number of signatures, which limits the real-world applicability of many otherwise excellent methods. Our signature verification system, by contrast, requires very few reference signatures or templates; even a single signature suffices. Moreover, among the known single-signature systems, ours performs best, and its performance is very close to that of multi-signature systems.
