Chinese Fine-Grained Named Entity Recognition Based on BILTAR and GlobalPointer Modules
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This study enhances the precision of BILTAR and GlobalPointer-based fine-grained named entity recognition in Chinese. The BILTAR module is used to gather deep semantic information by extracting deep semantic features from the encoding output of pretrained language models.
1. In the introduction section, explain NER in terms of the GlobalPointer modules.
2. Add the structure of the paper at the end of the introduction.
3. Related work is too short, and many important references are missing. Add more up-to-date related work and compare the precision of the BILTAR and GlobalPointer-based fine-grained technique with other published works.
4. The model shown in Figure 1 needs to be explained with an example.
5. What are batch_size and seq_size in Equation 4? Kindly explain them, as well as Qatt, Katt, and Vatt.
6. Cluener2020 and Weibo need to be discussed before they appear in Table 5.
7. Rename the last section as Conclusions and Future Work.
Comments on the Quality of English Language
Moderate English corrections are required.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
Reviewer's Comments
The manuscript proposes a module for fine-grained entity recognition in the Chinese language. The module is built up by combining BILTAR, to extract the semantic features of the text, with GlobalPointer, to extract the entities. The proposed work is practical and may find applications in various sectors. The experimental outcomes look promising.
The presentation of the manuscript requires improvement. There are also places where the description of the module is not quite comprehensive. Perhaps further explanations and examples can be included. Below are some comments which might be helpful in its revision.
(1) “fine-grained named entity recognition” contains a grammar mistake and makes the sentence confusing. Find another name.
(2) On line 46, add a reference for BERT-CRF
(3) On line 51, change “re-search” to “research”
(4) Change “pa-per” to “paper”.
(5) CRF appeared in section 1 before its representation was presented on line 100.
(6) In the sentence "Finally, the model is used the entity boundary calculation method of the Global Pointer module to process all sub-sequences of each text and extracted all entities.", should "is used" be changed to "uses" (and "extracted" to "extract")?
(7) Change “intro-duces” to “introduces”.
(8) The following text needs to be re-organised: "With taking the 8-head self-attention mechanism module in this paper as an example, this paper intro-duces the working principle of the multi-head self-attention mechanism. is the input vector of the attention module, and the dimension of is (batch_size, seq_size, word_embedding_size)."
(9) Equation (5) is not an equation. Name it in a different way. Similarly for Equation (7).
(10) In Section 3.3.2, it's unclear to me how the TIME layer works. Further explanation is necessary.
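For reference on comments (8) and (9): the quoted passage describes an 8-head self-attention module whose input has dimension (batch_size, seq_size, word_embedding_size). The following is a minimal NumPy sketch of the shape flow under those stated dimensions; it is an illustration of standard multi-head self-attention, not the authors' actual implementation, and the names (Wq, split_heads, etc.) are hypothetical.

```python
import numpy as np

# Assumed illustrative sizes; the manuscript only fixes num_heads = 8.
batch_size, seq_size, emb = 2, 10, 64
num_heads = 8
head_dim = emb // num_heads

rng = np.random.default_rng(0)
X = rng.standard_normal((batch_size, seq_size, emb))   # input vector of the module
Wq = rng.standard_normal((emb, emb))                   # hypothetical projection weights
Wk = rng.standard_normal((emb, emb))
Wv = rng.standard_normal((emb, emb))

def split_heads(t):
    # (batch, seq, emb) -> (batch, heads, seq, head_dim)
    return t.reshape(batch_size, seq_size, num_heads, head_dim).transpose(0, 2, 1, 3)

Q_att = split_heads(X @ Wq)
K_att = split_heads(X @ Wk)
V_att = split_heads(X @ Wv)

# Scaled dot-product attention per head.
scores = Q_att @ K_att.transpose(0, 1, 3, 2) / np.sqrt(head_dim)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys

# Merge heads back: output shape matches the input shape.
out = (weights @ V_att).transpose(0, 2, 1, 3).reshape(batch_size, seq_size, emb)
print(out.shape)  # (2, 10, 64)
```

The point of the sketch is that Qatt, Katt, and Vatt are per-head projections of the same input, and the module's output keeps the (batch_size, seq_size, word_embedding_size) shape.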
Comments on the Quality of English Language
The presentation (including the linguistic quality) of the manuscript requires improvement.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The paper is written with clarity and in detail regarding how Chinese named entities can be recognized using two publicly available corpora, Cluener2020 and Weibo. The description of the modeling is sometimes too detailed (such as the description of precision, recall, and F-values), but the description of the corpora is rather limited. Throughout the paper, applying a method to these two corpora produced different degrees of increase or decrease in performance. It would be really nice if the authors gave a brief explanation of why the two corpora resulted in different degrees of performance gain or loss.
Comments on the Quality of English Language
There are numerous annoying instances of inappropriate separation of a word by an inappropriate use of hyphens (for example, rep-resented, com-prehensive) throughout the paper. These ill-formed words all need to be corrected. There are a few occasions where two spaces are used between words.
The sentences in the following locations need to be re-written: Line 46, Line 47, Line 123 (tertiary), Line 156, Line 163 (I don't get why 'using the BERT of powerful network structure' is there), Line 194 (one of the W needs to be subscripted with v, rather than q), Line 256, Lines 277-278 (I don't understand what the sentence wants to convey), Line 387 (why is 'really' there?), Lines 497-498.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors have improved the paper as per the given comments.
Comments on the Quality of English Language
Minor English corrections are required.
Reviewer 2 Report
Comments and Suggestions for Authors
The revision has addressed all recommendations made in the first-round review process. The manuscript can be published in its current form.