Article
Peer-Review Record

Writer Identification Using Handwritten Cursive Texts and Single Character Words

Electronics 2019, 8(4), 391; https://doi.org/10.3390/electronics8040391
by Tobias Kutzner 1, Carlos F. Pazmiño-Zapatier 2, Matthias Gebhard 1, Ingrid Bönninger 1, Wolf-Dietrich Plath 1 and Carlos M. Travieso 2,3,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 December 2018 / Revised: 22 March 2019 / Accepted: 23 March 2019 / Published: 1 April 2019
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

The authors propose a method for user verification by analysing handwritten cursive text. The work is interesting, but it needs a thorough review of the English language. The text is in many parts unclear, and typos are found throughout.


In the Introduction, I'm missing clear motivations for the proposal of a new method for the problem at hand. A wide collection of existing approaches is analysed and commented upon, but a critical discussion is not present (or it does not stand out clearly to me, at least). What is missing in the existing approaches that is taken into account and possibly solved by this work?


Figure 4 is not informative.


As all the features are discussed in Section 3, Table 1 contains unnecessary information (and it does not add explanations to the text).


There are 'not found' references at lines 274, 292-293, 399, 420.


What value of k is used in the kNN algorithm? 


The kNN classifier shows high results on one dataset and bad results on the other. Could the authors provide an interpretation of these results?


In Table 7, results from different methods are compared. However, the results were obtained on different datasets, making the comparison unfair. The authors should directly compare results on the same data, and discuss methods that report results on different data only from a qualitative point of view. For the sake of clarity, the results should be reported in separate tables.



Author Response

**********************************************************************

Comment and Response


Reviewer 1: The authors propose a method for user verification by analysing handwritten cursive text.

 

 

**********************************************************************

Comment #1: The work is interesting, but it needs a thorough review of the English language. The text is in many parts unclear, and typos are found throughout.

 

Response: Thank you for your comment.

The authors have reviewed the English and improved the text throughout, as can be seen in the following comments and their answers.

 

**********************************************************************

Comment #2: In the Introduction, I'm missing clear motivations for the proposal of a new method for the problem at hand. A wide collection of existing approaches is analysed and commented upon, but a critical discussion is not present (or it does not stand out clearly to me, at least). What is missing in the existing approaches that is taken into account and possibly solved by this work?

 

Response: Thank you for your comment.

The authors have added two paragraphs to the "Introduction" section to present the motivation and a critical discussion.

 

**********************************************************************

Comment #3: Figure 4 is not informative.

 

Response: Thank you for your comment.

Figure 4 shows how the information is processed before it is used for feature extraction. The authors consider it informative because it shows how removing symbols simplifies the subsequent task.

 

**********************************************************************

Comment #4: As all the features are discussed in Section 3, Table 1 contains unnecessary information (and it does not add explanations to the text).

 

Response: Thank you for your comment.

The authors included Table 1 as a summary of all features. Without it, all of Section 3 must be read to know the full feature set; Table 1 gathers that information and assigns each feature a reference number, which eases the reading of Section 3. Moreover, the same format is used in Tables 6 and 8, which helps keep the information clear.

 

**********************************************************************

Comment #5: There are 'not found' references at lines 274, 292-293, 399, 420.

 

Response: Thank you for your comment.

The authors have added the missing references; they now resolve correctly.

 

**********************************************************************

Comment #6: What value of k is used in the kNN algorithm? 

 

Response: Thank you for your comment.

The value of k in the kNN algorithm is 1. We tested other values of k, but they yielded worse classification results, so k was set to 1. The authors have added this information to the document.
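As an illustration of the k = 1 setting described above, the following minimal sketch shows a 1-NN classifier that assigns each test sample the writer label of its single nearest training sample. The feature vectors, labels, and function names here are purely illustrative assumptions, not taken from the paper.

```python
# Hypothetical 1-NN writer classification sketch; data is invented.
import math

def knn_predict(train_X, train_y, x, k=1):
    """Return the majority label among the k nearest training samples."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    nearest = [label for _, label in dists[:k]]
    return max(set(nearest), key=nearest.count)

train_X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]
train_y = ["writer_A", "writer_B", "writer_A"]
print(knn_predict(train_X, train_y, [0.15, 0.15]))  # -> writer_A
```

With k = 1 the majority vote reduces to taking the label of the single closest sample, which matches the setting reported in the response.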

 

**********************************************************************

Comment #7: The kNN classifier shows high results on one dataset and bad results on the other. Could the authors provide an interpretation of these results?

 

Response: Thank you for your comment.

The value of k in the kNN classifier was not changed between the two datasets. After feature reduction, the kNN results also improved. We therefore assume that the worse kNN results strongly depend on the features used for the second dataset (the IAM Online Handwriting Database). Feature reduction using Info Gain Attribute Evaluation brought a significant improvement for the second dataset (IAM Online Handwriting Database; see Table 17), but the results remain worse than those obtained with the same feature reduction on the first dataset (Secure Password DB 150; see Table 9). In summary, both the feature reduction and the classification method have a strong influence on the results for both datasets.
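The information-gain attribute evaluation mentioned above ranks each feature by how much knowing its value reduces the entropy of the class labels. A minimal sketch, assuming discretised feature values (the toy data below is not from the paper):

```python
# Hedged sketch of information-gain attribute evaluation for feature
# reduction; feature values and labels are toy examples.
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """H(labels) minus the weighted entropy after splitting on the feature."""
    total = entropy(labels)
    n = len(labels)
    for value in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == value]
        total -= len(subset) / n * entropy(subset)
    return total

labels = ["A", "A", "B", "B"]
print(info_gain(["x", "x", "y", "y"], labels))  # 1.0: perfectly informative
print(info_gain(["x", "y", "x", "y"], labels))  # 0.0: uninformative
```

Features with the lowest information gain are the ones a reduction step would discard, which is why the selected subset can differ between the two datasets.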

 

**********************************************************************

Comment #8: In Table 7, results from different methods are compared. However, the results were obtained on different datasets, making the comparison unfair. The authors should directly compare results on the same data, and discuss methods that report results on different data only from a qualitative point of view. For the sake of clarity, the results should be reported in separate tables.

 

Response: Thank you for your comment.

The authors agree that a comparison across different datasets is not conclusive for assessing the behavior of this approach against the state of the art. However, it gives an idea, whether or not the dataset is the same, of how the state of the art performs. We have therefore inserted a dividing line in the new Table 18 (formerly Table 7) to separate the datasets, and added text explaining this.

 

**********************************************************************


Author Response File: Author Response.pdf

Reviewer 2 Report

The authors present a writer identification system based upon a set of 67 features.

The description of the whole method needs to be improved. At the moment, it is not possible for another scientist to reproduce the results presented by the authors:

1) Section 3, "Methods for feature extraction", needs to be rewritten. The majority of features are described neither in textual form nor in a formal way. My suggestion is to provide a description for each feature. Just for example, what are STAT_SEGMENTS and GEO_REG_ANGLE?

2) Even where a formal description is provided for some features, it is not clear. For example, feature SPHI: the authors define the feature in terms of segments and points. What are the segments they are writing about? It is not clear whether a segment is defined between two consecutive points or between a point and all the others. For example, in equation 2 it is not clear what xn and yn are. Equation 1 and Figure 5 are not clear.

3) It is not clearly declared if the features are computed on the whole image or on each connected component of the image.


Experimentation issues:

0) Line 305: "when the distance is too large we refuse the user". What is the meaning of "too large"? It is not proper for a scientific paper.

1) Line 306: "for calculating the ROC curve we alter the distance [...]" To compute the ROC curve you have to vary the decision threshold and not the distance between the point and the centroid! 

2) Lines 341-346: the method is evaluated only in a random forgery scenario. It is important to test the method even when the forger provides the same password registered by the user. Features vary for two reasons: the writer is different, or the password is different. We know that if both the writer and the password change, the best result is around 95%. What happens if the writer changes but the text of the password is the same as the one in the training set?


In the conclusions the authors state: "it is a innovation, a same feature set gives a good answer for different handwritten data".

This conclusion is not supported by the results. The authors obtain the best results on each dataset by using two different subsets of the starting feature set. The authors should apply to dataset 2 the best set of features found for dataset 1.


Finally, the introduction is good, but my suggestion is to enrich the part about signature stability, signature verification, and DTW in signature verification by citing some of the new papers published in the literature in 2017-2018.

 

There are many missing references. Please check lines 274, 293, 420, 464.

English must be improved in many parts of the document. Some examples: lines 263, 287, 341-347, 391. Instead of "own dataset", a better name would be "private dataset" or "non-public dataset".

Author Response

**********************************************************************

Comment and Response

 

Reviewer 2: The authors present a writer identification system based upon a set of 67 features.

 

**********************************************************************

Comment #1: The description of the whole method needs to be improved. At the moment, it is not possible for another scientist to reproduce the results presented by the authors:

 

Response: Thank you for your comment.

The authors have followed these indications in order to make the proposal reproducible.

 

**********************************************************************

Comment #2: 1) Section 3, "Methods for feature extraction", needs to be rewritten. The majority of features are described neither in textual form nor in a formal way. My suggestion is to provide a description for each feature. Just for example, what are STAT_SEGMENTS and GEO_REG_ANGLE?

 

Response: Thank you for your comment.

The authors have reworked Section 3; it now has a better structure and more details.

 

**********************************************************************

Comment #3: 2) Even where a formal description is provided for some features, it is not clear. For example, feature SPHI: the authors define the feature in terms of segments and points. What are the segments they are writing about? It is not clear whether a segment is defined between two consecutive points or between a point and all the others. For example, in equation 2 it is not clear what xn and yn are. Equation 1 and Figure 5 are not clear.

 

Response: Thank you for your comment.

In this paper, a segment denotes the set of all coordinates recorded from the moment the pen touches the surface to write until it is lifted. This process repeats over the entire signature, so a signature consists of at least one segment, with no upper bound on the number of segments. The authors have added a more detailed explanation in Section 3, "Methods for feature extraction".
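The segmentation described above can be sketched as cutting the sampled point stream at pen-up events. The event encoding (1 = pen touching, 0 = pen lifted) and the coordinate values below are assumptions for illustration only.

```python
# Illustrative sketch: split a (x, y, pen_down) point stream into
# segments, one per pen-down ... pen-up interval.

def split_into_segments(points):
    """points: list of (x, y, pen_down) samples -> list of segments."""
    segments, current = [], []
    for x, y, pen_down in points:
        if pen_down:
            current.append((x, y))
        elif current:           # pen lifted: close the open segment
            segments.append(current)
            current = []
    if current:                 # stream ended with the pen still down
        segments.append(current)
    return segments

stream = [(0, 0, 1), (1, 1, 1), (1, 2, 0), (3, 3, 1), (4, 3, 1)]
print(len(split_into_segments(stream)))  # -> 2 segments
```

Each resulting list of coordinates corresponds to one segment in the sense defined in the response.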

 

**********************************************************************

Comment #4: 3) It is not clearly declared if the features are computed on the whole image or on each connected component of the image.

 

Response: Thank you for your comment.

The features are computed from the coordinates, the segments, and the timing information, never from the image; they are derived from the information of the whole signature. The authors have added a more detailed explanation in Section 3, "Methods for feature extraction".

 

**********************************************************************

Comment #5: Experimentation issues:

0) Line 305: "when the distance is too large we refuse the user". What is the meaning of "too large"? It is not proper for a scientific paper.

Response: Thank you for your comment.

The authors mean that when the distance of a test instance is greater than the current decision threshold, the user is rejected. The authors have added a more detailed explanation to this paragraph.

 

**********************************************************************

Comment #6: 1) Line 306: "for calculating the ROC curve we alter the distance [...]" To compute the ROC curve you have to vary the decision threshold and not the distance between the point and the centroid! 

 

Response: Thank you for your comment.

The reviewer is right; the authors have now formulated this paragraph more clearly: For kNN we use the distance between an instance and the centroid of a class as the decision variable. To calculate the ROC curve, we determine the greatest distance over all test instances and then vary the threshold from 0 to this greatest value. When the distance of a test instance is greater than the current decision threshold, the user is rejected. For every threshold value, we calculate the TPR and FPR.
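The procedure restated above can be sketched as a threshold sweep over centroid distances. The distance values below are invented for illustration; only the sweep logic reflects the description in the response.

```python
# Minimal ROC sketch: accept when distance <= threshold; sweep the
# threshold from 0 to the greatest observed distance.

def roc_points(genuine_d, impostor_d, steps=100):
    """Return (FPR, TPR) pairs for evenly spaced thresholds."""
    d_max = max(genuine_d + impostor_d)
    points = []
    for i in range(steps + 1):
        thr = d_max * i / steps
        tpr = sum(d <= thr for d in genuine_d) / len(genuine_d)
        fpr = sum(d <= thr for d in impostor_d) / len(impostor_d)
        points.append((fpr, tpr))
    return points

pts = roc_points([0.2, 0.3, 0.4], [0.8, 0.9, 1.0])
print(pts[0], pts[-1])  # (0.0, 0.0) ... (1.0, 1.0)
```

At threshold 0 everything is rejected (FPR = TPR = 0); at the maximum distance everything is accepted (FPR = TPR = 1), so the sweep traces the full curve.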

 

**********************************************************************

Comment #7: 2) Lines 341-346: the method is evaluated only in a random forgery scenario. It is important to test the method even when the forger provides the same password registered by the user. Features vary for two reasons: the writer is different, or the password is different. We know that if both the writer and the password change, the best result is around 95%. What happens if the writer changes but the text of the password is the same as the one in the training set?

 

Response: Thank you for your comment.

The reviewer is right. For generating the ROC curve, the authors worked with a real forgery dataset, i.e., different writers producing the same password. We tested the method described in lines 341-346 for the automatic generation of impostors, but all such impostors were rejected, so we decided to work with real impostors from the dataset to test a realistic scenario and to generate the ROC curve. The authors have revised the paragraph accordingly.

 

**********************************************************************

Comment #8: In the conclusions the authors state: "it is a innovation, a same feature set gives a good answer for different handwritten data".

 

This conclusion is not supported by the results. The authors obtain the best results on each dataset by using two different subsets of the starting feature set. The authors should apply to dataset 2 the best set of features found for dataset 1.

 

Response: Thank you for your comment.

The authors mean that a new set of features, with its particular composition, has been implemented; that is the innovation of the proposal. Moreover, it was tested on two different datasets and the accuracies are good, so the authors consider it a robust proposal.

 

The comment would be entirely true if both datasets had the same structure, but they differ slightly. The IAM dataset was built from continuous writing, while Secure Password DB 150 was built from isolated characters (a password). The data generation differs, and a character written continuously changes when it is written in isolation.

 

The authors maintain that the feature set is good and that it is the innovation, but the feature reduction changes according to the type of writing. Through the experiments, the authors observed how the accuracy changes for different combinations of feature reduction and classifier, but in the end a good result is obtained based on the proposed feature set.

 

The authors have added text to the "Conclusion" section to make this point clearer.

 

**********************************************************************

Comment #9: Finally, the introduction is good, but my suggestion is to enrich the part about signature stability, signature verification, and DTW in signature verification by citing some of the new papers published in the literature in 2017-2018.

 

Response: Thank you for your comment.

The authors agree that this suggestion improves the "Introduction" section. The following references have been included:

 

Hafemann, L.G., Sabourin R., Oliveira, L.S. Offline handwritten signature verification — Literature review. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, 2017, pp. 1-8, 10.1109/IPTA.2017.8310112

 

Al-Hmouz, R., Pedrycz, W., Daqrouq, K. et al. Quantifying dynamic time warping distance using probabilistic model in verification of dynamic signatures. Soft Computing. 2019, 23, 2, 407-418. doi: 10.1007/s00500-017-2782-5

 

This has been included in “Introduction” and “References” sections.

 

**********************************************************************

Comment #10: There are many missing references. Please check lines 274, 293, 420, 464.

 

Response: Thank you for your comment.

The authors have added the missing references; they now resolve correctly.

 

**********************************************************************

Comment #11: English must be improved in many parts of the document. Some examples: lines 263, 287, 341-347, 391. Instead of "own dataset", a better name would be "private dataset" or "non-public dataset".

 

Response: Thank you for your comment.

The authors have reviewed the English and improved the text. We have also changed "own dataset" to "private dataset". The final document was additionally reviewed together with a native English speaker.

 

 

**********************************************************************

 


Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have only partially taken into account my previous suggestions. The quality of the paper and of the result presentation has improved, but the paper still lacks clarity. The main problems regard the quality of the text. Although it has been reviewed (slightly), there are still typos, grammatical errors, and incomplete sentences, which may not fulfill the quality standards required by Electronics.


Figure 4, although it explains a data conversion process, does not contain information necessary for the scientific understanding of the proposed method. It contains only an illustration of the XML format of the dataset and the data format used by the authors' software. It is a technicality, which could be included in an appendix, or only in a software package released together with the paper. Including such a figure in a scientific paper does not contribute to the scientific content and value of the work.



There are still typos and missing or incoherent sentences, which make the paper difficult to read. Some (non-exhaustive) examples are below:


The sentence at lines 61-63 is repeated at lines 67-69.

line 105. 'und'

line 118. The subject is missing? Anyway, the sentence is incomplete.

line 447. 'The first stage was checked the accuracy of each type of features, and finally, for all features' is unclear and grammatically wrong

line 448. 'The second stage was applied feature reduction methods in order to improve the accuracy.' similar to the previous comment

line 496. 'archives' should be 'achieves'


Author Response

**********************************************************************

Comment and Response

Reviewer 1

 

 

**********************************************************************

Comment #1: The authors have only partially taken into account my previous suggestions. The quality of the paper and of the result presentation has improved, but the paper still lacks clarity. The main problems regard the quality of the text. Although it has been reviewed (slightly), there are still typos, grammatical errors, and incomplete sentences, which may not fulfill the quality standards required by Electronics.

 

Response: Thank you for your comment.

The authors have reviewed the English and the whole document in order to improve it.

 

**********************************************************************

Comment #2: Figure 4, although it explains a data conversion process, does not contain information necessary for the scientific understanding of the proposed method. It contains only an illustration of the XML format of the dataset and the data format used by the authors' software. It is a technicality, which could be included in an appendix, or only in a software package released together with the paper. Including such a figure in a scientific paper does not contribute to the scientific content and value of the work.

 

 

Response: Thank you for your comment.

The authors have removed Figure 4 and renumbered the remaining figures.

 

**********************************************************************

Comment #3: There are still typos and missing or incoherent sentences, which make the paper difficult to read. Some (non-exhaustive) examples are below:

The sentence at lines 61-63 is repeated at lines 67-69.

line 105. 'und'

line 118. The subject is missing? Anyway, the sentence is incomplete.

line 447. 'The first stage was checked the accuracy of each type of features, and finally, for all features' is unclear and grammatically wrong

line 448. 'The second stage was applied feature reduction methods in order to improve the accuracy.' similar to the previous comment

line 496. 'archives' should be 'achieves'

 

Response: Thank you for your comment.

The authors have corrected all these mistakes, along with others.

 

**********************************************************************


Author Response File: Author Response.pdf

Reviewer 2 Report

The authors improved the paper as requested by the reviewers, but the paper is not ready for publication. English has to be improved; some sentences are really cryptic. Typos and table numbering have to be verified.

My suggestions for improving the paper:


a) line 257-260: a segment stands for the summary of all coordinates from the time when the pen is placed to write until lift the pen. 

The most frequently used term is "connected component", and in some cases "stroke". I think that "connected component" or "fragment" is better than "segment". It is only a suggestion.


b) lines 51-52: "The author's profile, can be identified from text samples, gender, age range, and handedness of the writer [1-2]." This sentence is not clear.


c) line 80: "Many recent studies have focused on writer identification used by the signature [5-7]." This sentence is not clear. Do you mean that many recent studies have focused on signature verification?


d) line 91: "Simple forgeries were detected with an Equal Error Rate (EER) of 0.85% and skilled forgeries with an EER of 2.13%."

It is not clear to the reader whether the results refer to the reference set with 5 or 10 signatures. Furthermore, the authors mixed up the results: 0.85% is obtained on MCYT with 10 references, while 2.13% is obtained on SUSIG with 10 references.

My suggestion is to cite the following papers:


d1) A. Parziale, M. Diaz, M. A. Ferrer, A. Marcelli, SM-DTW: Stability Modulated Dynamic Time Warping for signature verification, Pattern Recognition Letters, 2018, ISSN 0167-8655, https://doi.org/10.1016/j.patrec.2018.07.029.

d2) A. Sharma, S. Sundaram, On the exploration of information from the DTW cost matrix for online signature verification. IEEE Trans. Cybern., 48(2), 2017, pp. 611-624.


Both papers present top-performing results on the MCYT and Biosecure-ID datasets with only 5 references. Both papers use DTW, so you can cite them also at line 188.


Papers d1 and d3 exploit aspects related to handwriting generation models and the stability concept, so d1 and d3 should be cited at lines 153-154, too.

d3) A. Parziale, S. G. Fuschetto, and A. Marcelli. 2013. Exploiting stability regions for online signature verification. In New Trends in Image Analysis and Processing. Lecture Notes in Computer Science, Vol. 8158. Springer, pp. 112-121.


The following paper should be cited for signature verification and stability:

d4) Impedovo, Donato, Giuseppe Pirlo, and Rejean Plamondon. "Handwritten signature verification: New advancements and open issues." 2012 International Conference on Frontiers in Handwriting Recognition. IEEE, 2012.


e) Section 3.1 needs to be strongly improved. The description of the features is still not clear. I suggest introducing some parameters, for example: M is the number of segments of the password under analysis, N_k is the number of points of the k-th segment of the password, etc.


It is not clear whether some features are computed per password or per segment. If I understand correctly, you have 67 features per password. So, it is not clear how some features are computed. For example, eq. 5 computes the angle for ONE segment. How do you compute the feature POINT_ANGLE if you have more than one segment?


It would be better to describe the features in the same order in which they appear in the table.


lines 265 and 266: what is yn? It was never introduced before.


line 268: what is n? It was never introduced before.


line 283: I think equation 6 is wrong. I think the denominator is 1+|…|*|...| and not 1+|…|-|...|.


line 287: the last point of the segment is now represented with yn and xn, while it was represented with y2 and x2 before.

The numerator in eq. 7 is not the Euclidean distance of consecutive points, especially if xi and yi are the


line 290: the day-angle is not clear. Please say more.


line 320: it seems NUM_STROKES = SEGMENTS - 1. Is that right? It makes no sense to use both features.


line 340, eq. 15: what is the total speed? The maximum? The minimum? The average value?


line 344: x or y…


eq. 17: I think you should use "odd" instead of "uneven".


eq. 18: Tvmax is the feature MAX_VX; why do you use a double notation? Furthermore, what is v?


lines 363 and 347: It is not clear what the difference is between the feature TIME_V_MAX (line 363) and MAX_VX (line 347).


eqs. 21, 22, 23, 24, 25, 26: I don't understand why you need to use those formulas for computing these features. Please explain.


line 469: writers


line 471: The authors wrote "In this case impostors try to trick the system with a randomly written password"; this is in disagreement with the sentence at line 468. If all the writers wrote the same password, the password is not random: the password is the same, and the impostor is randomly selected. Please remark in Section 2 that the writers wrote the same password.


pages 22-23: check table numbers


line 615: 'aelect' instead of 'select'


page 24: The difference between the presented results and the state of the art is due not only to the size of the dataset but also to the adopted verification protocol. Just for example, if I am not mistaken, the 90.28% obtained by [52] relates to the classification of a paragraph, while yours is a word-level accuracy. So, please clarify this point.


Author Response

**********************************************************************

Comment and Response

**********************************************************************

 

Reviewer 2: The authors improved the paper as requested by the reviewers, but the paper is not ready for publication. English has to be improved; some sentences are really cryptic. Typos and table numbering have to be verified. My suggestions for improving the paper:

 

**********************************************************************

Comment #1: a) line 257-260: a segment stands for the summary of all coordinates from the time when the pen is placed to write until lift the pen. 

The most frequently used term is "connected component", and in some cases "stroke". I think that "connected component" or "fragment" is better than "segment". It is only a suggestion.

 

Response: Thank you for your suggestion.

It has been modified.

 

**********************************************************************

Comment #2: b) lines 51-52: "The author's profile, can be identified from text samples, gender, age range, and handedness of the writer [1-2]." This sentence is not clear.

 

Response: Thank you for your comment.

It has been modified.

 

**********************************************************************

Comment #3: c) line 80: "Many recent studies have focused on writer identification used by the signature [5-7]." This sentence is not clear. Do you mean that many recent studies have focused on signature verification?

 

Response: Thank you for your comment.

It has been modified.

 

**********************************************************************

Comment #4: d) line 91: "Simple forgeries were detected with an Equal Error Rate (EER) of 0.85% and skilled forgeries with an EER of 2.13%."

It is not clear to the reader whether the results refer to the reference set with 5 or 10 signatures. Furthermore, the authors mixed up the results: 0.85% is obtained on MCYT with 10 references, while 2.13% is obtained on SUSIG with 10 references.

My suggestion is to cite the following papers:

 

d1) A. Parziale, M. Diaz, M. A. Ferrer, A. Marcelli, SM-DTW: Stability Modulated Dynamic Time Warping for signature verification, Pattern Recognition Letters, 2018, ISSN 0167-8655, https://doi.org/10.1016/j.patrec.2018.07.029.

d2) A. Sharma, S. Sundaram, On the exploration of information from the DTW cost matrix for online signature verification. IEEE Trans. Cybern., 48(2), 2017, pp. 611-624.

 

Both papers present top-performing results on the MCYT and Biosecure-ID datasets with only 5 references. Both papers use DTW, so you can cite them also at line 188.

 

Papers d1 and d3 exploit aspects related to handwriting generation models and the stability concept, so d1 and d3 should be cited at lines 153-154, too.

d3) A. Parziale, S. G. Fuschetto, and A. Marcelli. 2013. Exploiting stability regions for online signature verification. In New Trends in Image Analysis and Processing. Lecture Notes in Computer Science, Vol. 8158. Springer, pp. 112-121.

 

The following paper should be cited for signature verification and stability:

d4) Impedovo, Donato, Giuseppe Pirlo, and Rejean Plamondon. "Handwritten signature verification: New advancements and open issues." 2012 International Conference on Frontiers in Handwriting Recognition. IEEE, 2012.

 

Response: Thank you for your comment.

The authors have added more details at line 91 in order to show the result clearly.

We have also included the four references suggested by the reviewer; they are now [7], [32], [33] and [51]. The reference numbering has been updated accordingly.

 

**********************************************************************

 

Comment #5: e) Section 3.1 needs to be strongly improved. The description of the features is still not clear. I suggest introducing some parameters, for example: M is the number of segments of the password under analysis, N_k is the number of points of the k-th segment of the password, etc.

 

Response: Thank you for your comment.

The authors have improved Section 3.1 and included more detailed descriptions for the features whose descriptions were still unclear.

 

**********************************************************************

 

Comment #6: It is not clear whether some features are computed per password or per segment. If I understand correctly, you have 67 features per password, so it is not clear how some features are computed. For example, Eq. 5 computes the angle for ONE segment. How do you compute the feature POINT_ANGLE if you have more than one segment?

 

 

Response: Thank you for your comment.

The features generated per segment are summed and divided by the number of segments of the entire signature. The authors have supplemented the description of all these features accordingly in Section 3.1 of the paper.
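The averaging described above can be sketched in a few lines; this is an illustrative reading of the response, not the paper's actual code, and `feature_fn` stands for any per-segment feature such as POINT_ANGLE:

```python
def average_over_segments(segments, feature_fn):
    """Average a per-segment feature over all segments of a signature.

    segments: list of point lists, one per pen-down stroke.
    feature_fn: function mapping one segment to a scalar feature value.
    """
    values = [feature_fn(seg) for seg in segments]
    # One global value per signature: sum of per-segment values / segment count
    return sum(values) / len(segments)
```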

 

**********************************************************************

 

Comment #7: It would be better to describe the features in the same order in which they appear in the table.

 

Response: Thank you for your comment.

The authors have reordered the feature descriptions in the paper according to the recommendation.

 

**********************************************************************

Comment #8: Lines 265 and 266: what is yn? It was never introduced before.

 

Response: Thank you for your comment.

Segments consist of coordinates (xi, yi), where np is the number of signature points, so y_np is the last y coordinate of the segment. The authors have included more details in Section 3.1.

 

**********************************************************************

Comment #9: Line 268: what is n? It was never introduced before.

 

Response: Thank you for your comment.

Signatures consist of coordinates (xi, yi), where n is the number of signature points. The authors have included more details in Section 3.1.

**********************************************************************

Comment #10: Line 283: I think Equation 6 is wrong. I think the denominator is 1+|…| *|…| and not 1+|…| -|…|.

 

Response: Thank you so much for your comment.

The reviewer is right; the authors have corrected the denominator of the formula accordingly.
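The corrected denominator matches the standard formula for the angle between two lines, tan θ = |m1 − m2| / |1 + m1·m2|, with a product rather than a difference. A hypothetical sketch under that assumption (the paper's exact Eq. 6 may use different symbols):

```python
import math

def angle_between_slopes(m1, m2):
    """Angle in degrees between two lines with slopes m1 and m2.

    Uses tan(theta) = |m1 - m2| / |1 + m1 * m2|: note the PRODUCT in the
    denominator, as in the corrected Eq. 6. When 1 + m1*m2 == 0 the lines
    are perpendicular and the angle is 90 degrees.
    """
    denom = abs(1 + m1 * m2)
    if denom == 0:
        return 90.0
    return math.degrees(math.atan(abs(m1 - m2) / denom))
```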

 

**********************************************************************

Comment #11: Line 287: the last point of the segment is now represented with yn and xn, while it was represented with y2 and x2 before.

 

Response: Thank you for your comment.

For better understanding and uniformity, we have now adjusted all the formulas and descriptions in Section 3.1 to use xn, yn.

 

**********************************************************************

Comment #12: The numerator in Eq. 7 is not the Euclidean distance of consecutive points, especially if xi and yi are the

 

Response: Thank you for your comment.

The authors have improved the formula in Eq. 7 and its description accordingly.
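For reference, the Euclidean distance summed over consecutive points, i.e. the path length of a segment, can be written as follows; this is an illustrative sketch of the quantity discussed for Eq. 7, and the paper's exact feature may normalise it differently:

```python
import math

def path_length(points):
    """Sum of Euclidean distances between consecutive points of a segment.

    points: list of (x, y) tuples sampled along the pen trajectory.
    """
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))
```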

 

**********************************************************************

Comment #13: Line 290: the day-angle is not clear. Please say more.

 

Response: Thank you for your comment.

There is no day-angle. If you mean HYP_ANGLE at line 290: it determines the angle between the first and last point of the segment with respect to the hypotenuse of the segment. The hypotenuse of a segment is its largest deflection, which is clearly visible in the description.

 

**********************************************************************

Comment #14: Line 320: it seems that NUM_STROKES = SEGMENTS - 1. Is that right? It makes no sense to use both features.

 

Response: Thank you for your comment.

That is right: NUM_STROKES = SEGMENTS - 1. For this publication we work with a tablet or smartphone to generate the signatures, and on the public dataset this feature seems to bring no added value, as can also be seen in the results of the feature reduction. However, for the entire system, which also works with other datasets and input devices such as signature pads and pens, this feature can improve the results, for example when a writer starts to write but writes nothing and sets the pen down again (the writer has the intention to write but writes nothing, with low pressure on the display). We have included the reason for this feature in Section 7 (Conclusion).

**********************************************************************

Comment #15: Line 340, Eq. 15: what is the total speed? The maximum? The minimum? The average value?

 

Response: Thank you for your comment.

The total speed is the average speed value. The authors have modified the description.
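One natural reading of "total speed as the average speed value" is the total path length divided by the total writing time. A hypothetical sketch under that assumption (input names and the exact normalisation are illustrative, not the paper's):

```python
import math

def average_speed(points, timestamps):
    """Average writing speed: total path length / total elapsed time.

    points: list of (x, y) pen positions.
    timestamps: capture time of each point, same length as points.
    """
    length = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    duration = timestamps[-1] - timestamps[0]
    return length / duration
```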

 

**********************************************************************

Comment #16: line 344: x or y…

 

Response: Thank you for your comment.

The authors have corrected this mistake.

**********************************************************************

Comment #17: Eq. 17: I think you should use "odd" instead of "uneven".

 

Response: Thank you for your comment.

The authors have corrected this mistake.

 

**********************************************************************

Comment #18: Eq. 18: Tvmax is the feature MAX_VX; why do you use a double notation? Furthermore, what is v?

 

Response: Thank you for your comment.

The authors have removed this formula and added a more detailed description.

**********************************************************************

Comment #19: Lines 363 and 347: it is not clear what the difference is between the feature TIME_V_MAX (line 363) and MAX_VX (line 347).

 

Response: Thank you for your comment.

The difference between them is shown by their definitions:

TIME_V_MAX = sum of total times over all maximum speeds.

MAX_VX, MAX_VY = sum of total times over all maximum speeds in the x or y direction. The authors have added a more detailed description of these features.
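One possible reading of this distinction, as a hypothetical sketch only (the paper's exact definitions may differ), is that TIME_V_MAX concerns the overall speed magnitude while MAX_VX/MAX_VY concern the directional velocity components:

```python
def times_of_max_speed(xs, ys, ts):
    """Timestamps of the maximum overall speed and the maximum x/y speeds.

    xs, ys: pen coordinates; ts: capture timestamps (strictly increasing).
    Illustrates the difference: |v| is the magnitude of the 2-D velocity,
    while vx and vy are its directional components.
    """
    vx = [(x2 - x1) / (t2 - t1) for x1, x2, t1, t2 in zip(xs, xs[1:], ts, ts[1:])]
    vy = [(y2 - y1) / (t2 - t1) for y1, y2, t1, t2 in zip(ys, ys[1:], ts, ts[1:])]
    v = [(a * a + b * b) ** 0.5 for a, b in zip(vx, vy)]
    t_vmax = ts[1 + v.index(max(v))]                  # time of max overall speed
    t_vxmax = ts[1 + vx.index(max(vx, key=abs))]      # time of max x-direction speed
    t_vymax = ts[1 + vy.index(max(vy, key=abs))]      # time of max y-direction speed
    return t_vmax, t_vxmax, t_vymax
```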

**********************************************************************

Comment #20: Eqs. 21-26: I don't understand why you need to use those formulas for computing these features. Please explain.

 

Response: Thank you for your comment.

The authors have improved these formulas for better understanding.

 

**********************************************************************

Comment #21: line 469: writers

 

Response: Thank you for your comment.

It has been modified.

**********************************************************************

Comment #22: Line 471: The authors wrote "In this case impostors try to trick the system with a randomly written password". This is in disagreement with the sentence at line 468. If all the writers wrote the same password, the password is not random: the password is the same, and the impostor is randomly selected. Please remark in Section 2 that the writers wrote the same password.

 

Response: Thank you for your comment.

Lines 487-488 have been modified: "Therefore, we have different writers for the same password. The password is randomly generated. The impostor knows the password."

 

**********************************************************************

Comment #23: page 22-23: check table numbers

 

Response: Thank you for your comment.

It has been checked, but we did not find any errors; the table numbers are correct.

 

**********************************************************************

Comment #24: line 615: aelect instead of select

 

Response: Thank you for your comment.

It has been modified.

 

**********************************************************************

Comment #25: Page 24: The difference between the presented results and the state of the art is due not only to the dimension of the dataset but also to the adopted verification protocol. Just as an example, if I am not mistaken, the 90.28% obtained by [52] relates to the classification of a paragraph, while yours is a word-level accuracy. Please clarify this point.

 

Response: Thank you for your comment.

That is true: for this specific application it is very difficult to find the same experiment, and there is no specific dataset for a clear comparison. For this reason we use the IAM dataset for validation, which is used for multiple applications and was the most similar one available. [52] used the classification of a paragraph, while in our case only one word is used. It was included in order to show the range of accuracies. We think that our case is more restrictive, and in the comparison [52] has better conditions than the present proposal, so the grade of identification is shown. We have added a sentence with this detail.

 

**********************************************************************


Author Response File: Author Response.pdf
