Peer-Review Record

Multilingual Handwritten Signature Recognition Based on High-Dimensional Feature Fusion

Information 2022, 13(10), 496; https://doi.org/10.3390/info13100496
by Aliya Rexit 1, Mahpirat Muhammat 2, Xuebin Xu 1, Wenxiong Kang 3, Alimjan Aysa 4,* and Kurban Ubul 1,4,*
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 9 July 2022 / Revised: 8 October 2022 / Accepted: 11 October 2022 / Published: 13 October 2022

Round 1

Reviewer 1 Report

The presented paper tackles the problem of offline signature identification. Signature datasets in three different languages were collected for the research.

The experiments were performed using extracted LOMO and HOG features with RF and kNN classifiers. The conclusion is that kNN yields slightly better performance.
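For context, the pipeline the review summarizes (HOG-style features fed to a kNN classifier) can be sketched as follows. This is a minimal illustration, not the authors' method: the cell size, bin count, image size, and the toy two-writer data are assumptions made here for demonstration.

```python
# Sketch of a HOG-feature + kNN signature-identification pipeline.
# All parameters below (cell=8, bins=9, 32x32 images) are illustrative
# assumptions, not the settings used in the reviewed paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientations."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # normalize
    return np.concatenate(feats)

def toy_signature(writer, rng):
    """Synthetic stand-in images: two 'writers' with different stroke scales."""
    if writer == 0:
        return rng.random((32, 32))                           # fine strokes
    return np.kron(rng.random((8, 8)), np.ones((4, 4)))       # coarse strokes

rng = np.random.default_rng(0)
X = np.array([hog_features(toy_signature(i % 2, rng)) for i in range(20)])
y = np.array([i % 2 for i in range(20)])

clf = KNeighborsClassifier(n_neighbors=3).fit(X[:16], y[:16])
print("feature dim:", X.shape[1])  # feature dim: 144 (16 cells x 9 bins)
print("predictions:", clf.predict(X[16:]))
```

A real evaluation would replace the synthetic images with the collected signature samples and report identification accuracy per language.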

 

The article has serious flaws, many important details are missing.

The experiments are not reproducible, nor is the dataset available for benchmark comparison of the results.

 

Below please find the comments and recommendations.

1. Introduction. There are important works that are not reviewed, e.g., SigNet. Also add recent works from 2022; for example, you can find articles from the DAS 2022 conference.

There are public signature datasets, e.g., BHSig-B and BHSig-H. Compare your dataset to other datasets in terms of the number of writers, the number of samples per writer, etc.

2. Section 2.1.
- The dataset should be made available.
- Please provide the details and an illustration of the form used.
- Line 119: "manual annotation". Why manual? It can be done automatically with careful planning of the signature collection.
- Line 121: Not clear what "at different times" means. Different years, different times of day? Details should be provided.
- Line 124: Why did you choose to collect 24 signature samples? Explain your choice.
- Line 129: What does "graph" mean in this context?

 

3. Section 2.2
- Explain the choice of the 386x96 size.
- Line 139: Provide the details of the "weighted average"; it is unclear.
- Figure 3: (C) does not appear to be a binary image. I don't see any difference between (A) and (B); please highlight the difference.

4. Section 3
- Line 153: It is not clear what "fused" means here. Add all the details necessary for the reader to reproduce the method.
- Line 156: What is the dimensionality of the resulting vector?
- Line 151: Add an appropriate citation for LOMO.
- Line 177: Add a brief description and an appropriate citation for SILTP.
- Eq. 2: How is the value of parameter t chosen?
- Figs. 6 and 9 are of poor quality.
- Fig. 7: What is the rationale behind gamma normalization? Why is gamma normalization not mentioned in the text? All the details should be explained in the text.

5. Section 5:
 Table 9 is hard to follow because results on different datasets are compared. I recommend providing separate tables for each dataset.

 

The text should undergo extensive editing; there are many typos and grammatical errors. This is a joint paper; however, in many places it is written in the first person ("I", "my").

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

 

Too many abbreviations, such as CEDAR, GPDS, and MCYT, are not defined before use. This reviewer strongly suggests the authors improve their English writing habits regarding the misuse of abbreviations.

In this paper, the problem of offline signature recognition for minority languages has been studied, a problem for which no publicly available dataset exists and for which there are few studies. A dataset including three languages (Uyghur, Kazakh, and Chinese) is built. The research fills a gap in the study of signature recognition for minority languages in China. Numerical experiments verify the method. This reviewer recommends that it be accepted for publication.

 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The field of signature recognition is a hot topic in the document image analysis field, and benchmark datasets of understudied scripts are welcome. 

 

However, the article has several flaws. 

The main contribution of the paper is the new dataset. However, the dataset is not released to the scientific community, and it is not clear whether and when it will be available.

The methods are not explained in full detail, making the exact reproduction impossible. 

 

1. Authors were given the recommendation to survey relevant papers from the DAS 2022 workshop, which is one of the leading workshops in the document analysis field. However, they chose not to follow the recommendation. Articles must exhibit originality and technical contribution and provide comparisons to recent studies.

2. There are numerous grammar errors and typos throughout the paper.

The article must undergo extensive editing by a qualified English expert. It is advised that authors provide a certificate that the appropriate proofreading has been done.

 

3. Since arXiv publications are not peer-reviewed, I recommend replacing arXiv papers with relevant articles from journals and high-ranked conferences.

 

4. Lines 190-191: the description is not clear. 

 

5. Figure 3d: The bottom image is grayscale, not a binary image.

 

6. A new benchmark dataset is always welcome. Since the main contribution of the paper is the new dataset, it should be made publicly available; otherwise, the dataset cannot contribute to the scientific community. Neither is the code provided. The details of the paper are not clearly explained, so the results are not reproducible.

 

7. Line 142: I still don't understand why "manual casting" is applied. It can and should be performed automatically when the signer signs in an appropriate box planned during the form's generation.

 

8. Use consistent terms. In some places, it is written "writers", in others "authors." 

 

9. Use formal language; e.g., line 189 should read "concatenation of features" and not "Concat".

Author Response

Please see the attachment.

Author Response File: Author Response.docx
