## *3.1. Dataset and Evaluation Protocol*

The efficacy of the proposal is demonstrated on four publicly available datasets: SUSIG, SVC2004 Task1, SVC2004 Task2, and MCYT. The main differences among the four datasets lie in the acquisition protocol, device, and signers. The datasets used in this paper are briefly described below:


(4) MCYT-100 Subcorpus [36]: It is also composed of 100 signers, each with 25 genuine and 25 skilled forged signatures. For convenience, this subcorpus is referred to as MCYT in this paper. Each sample in MCYT consists of *X*, *Y*, pressure, azimuth, altitude, timestamp, and button status, collected at 100 Hz.

From each signer, one genuine signature is selected randomly as the reference sample (template); the remaining genuine signatures and all skilled forgeries serve as test samples. In total, our experiments verify 9480 reference and test signatures from 274 signers.

We adopt the equal error rate (EER), i.e., the error rate at which the false acceptance rate (FAR) equals the false rejection rate (FRR), as the measure of verification performance. To obtain reliable results on independent test data, the random selection of reference signatures and the subsequent performance evaluation are repeated ten times.
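The EER can be estimated from two sets of match scores by sweeping a decision threshold until FAR and FRR meet. The sketch below is our own illustration, not the paper's code; the function name and the convention that a higher score means greater similarity are assumptions.

```python
import numpy as np

def compute_eer(genuine_scores, forgery_scores):
    """Estimate the EER: the error rate at the threshold where
    FAR (forgeries accepted) and FRR (genuines rejected) are closest.
    Convention assumed here: a higher score means more similar."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine samples rejected at t
        far = np.mean(forgery_scores >= t)   # forgeries accepted at t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```

Repeating the random template selection ten times and averaging the per-run EERs would mirror the evaluation protocol described above.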

## *3.2. Parameter Determination for EC*

Take the template signature of Figure 5 as an example: it comes from SVC2004 Task1 signer #1 and contains 147 data points. Table 3 lists the total number of segmentations *K* for reference signatures with different numbers of data points.


**Table 3.** The proposed parameter *K* for different numbers of data points of the reference signature.

The optimal segmentation matching between the template signature and itself was computed; the similarity distance of each segmentation is theoretically zero. The parameters *S* and *Iterations* of the EC are determined by enumeration, as shown in Figure 9, where *S* = 4, 8, 12, 16, 20 and *Iterations* = 100, 200, 400, 800. Although the globally optimal parameters are not obtained in this self-matching calculation, every combination of *S* and *Iterations* quickly reaches a local minimum. Balancing speed and accuracy requirements, we choose *S* = 20 and *Iterations* = 400 as the control parameters of the EC.
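The enumeration behind Figure 9 can be organized as a simple grid search over the two EC parameters. In the sketch below, `self_match_distance` is a hypothetical callable standing in for the EC-based self-matching of the template (its residual distance is theoretically zero); the helper names and the cost heuristic `S * Iterations` are our assumptions, not the paper's method.

```python
import itertools

# Parameter grid from Figure 9.
S_VALUES = (4, 8, 12, 16, 20)
ITERATION_VALUES = (100, 200, 400, 800)

def enumerate_ec_parameters(self_match_distance):
    """Run the self-matching calculation for every (S, Iterations) pair.
    `self_match_distance(s, iterations)` is a hypothetical stand-in for
    matching the template against itself with the EC and returning the
    residual similarity distance (theoretically zero)."""
    return {
        (s, it): self_match_distance(s, it)
        for s, it in itertools.product(S_VALUES, ITERATION_VALUES)
    }

def pick_parameters(results, tolerance=1e-3):
    """Among settings whose residual distance falls within `tolerance`,
    prefer the cheapest one (smallest S * Iterations), mirroring the
    speed-versus-accuracy trade-off described in the text."""
    good = [k for k, d in results.items() if d <= tolerance]
    candidates = good if good else list(results)
    return min(candidates, key=lambda k: k[0] * k[1])
```

With a real EC in place of the stand-in, the selected pair would correspond to the *S* = 20, *Iterations* = 400 choice made above.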

**Figure 9.** The optimal segmentation matching calculation with different EC parameters.

## *3.3. Feature Validity Test*

We select 20 genuine and 20 skilled forged signatures of the first signer on SVC1. Each of the first 10 genuine signatures is taken as the template in turn; the remaining 19 genuine and 20 skilled forged signatures are matched against it, and *sx*, *sy*, *IVR*, *GSC*, *LSC*, and the final *Score* are then computed.

Figure 10a–d shows the distributions of similarity differences between genuine and skilled forged signatures. Signature verification can be regarded as a two-class problem. Figure 10a shows that, when only the similarities of the *X* and *Y* curves are used, the authenticity of the test signatures is difficult to distinguish. Figure 10b shows that *IVR* provides a certain degree of discrimination, but many confusing signatures remain indistinguishable. In contrast, Figure 10c clearly shows that the fused local feature *LSC* and the global feature *GSC* are highly discriminative: only a very small number of test signatures are misidentified. Formula (11) merges the above features, so the similarity of two curves is reduced to a one-dimensional index, and a discriminant threshold can then directly classify a signature as genuine or forged.

**Figure 10.** Comparison results when each of the first 10 genuine signatures is used as the template in turn and the remaining genuine and skilled forged signatures are used as test signatures. (**a**) Score *X* and score *Y* distributions; (**b**) *IVR* score distribution; (**c**) *GSC* and *LSC* distributions; (**d**) *Score* distribution for each of the 10 templates in turn.

Figure 10d also clearly shows that different template signatures yield different discriminant thresholds and different degrees of separation between genuine and forged signatures. With signatures #4, #5, and #7 as templates, genuine and forged signatures can be completely separated, and template #7 yields the largest degree of discrimination.
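When a template fully separates the two score populations, as templates #4, #5, and #7 do in Figure 10d, any threshold inside the gap distinguishes the classes, and the width of that gap quantifies the degree of discrimination. A minimal sketch of this idea follows; the helper is our own, and it assumes a higher fused *Score* (Formula (11)) indicates a genuine signature.

```python
def separable_threshold(genuine_scores, forgery_scores):
    """Return (threshold, margin) if the two score sets are fully
    separable, else (None, 0.0). Convention assumed: a higher fused
    Score indicates a genuine signature."""
    hi_forgery = max(forgery_scores)
    lo_genuine = min(genuine_scores)
    if hi_forgery >= lo_genuine:
        return None, 0.0                      # distributions overlap
    margin = lo_genuine - hi_forgery          # degree of discrimination
    return (lo_genuine + hi_forgery) / 2.0, margin
```

Under this reading, the template with the largest margin (here, #7) gives the most robust discriminant threshold.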
