Article
Peer-Review Record

Dis-Cover AI Minds to Preserve Human Knowledge

Future Internet 2022, 14(1), 10; https://doi.org/10.3390/fi14010010
by Leonardo Ranaldi, Francesca Fallucchi and Fabio Massimo Zanzotto
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 17 October 2021 / Revised: 21 December 2021 / Accepted: 23 December 2021 / Published: 24 December 2021
(This article belongs to the Special Issue Computational Social Science and Natural Language Processing (NLP))

Round 1

Reviewer 1 Report

The authors introduce an extension of transformer networks accounting for syntactic-tree parsing and show enhancements in model performance, providing support for the importance of grammar in language parsing. The authors should be praised for their multidisciplinary framing, including key relevant references on the subject of knowledge modelling and AI. There are a few missing citations that could enrich the manuscript, and some minor adjustments in the presentation of results/interpretation would improve the already good quality of the manuscript.

For these reasons I recommend publication of this manuscript after minor revisions.

---

The Introduction is well written. I think the authors should unpack one statement that they only touch on marginally: transformer networks work well at the cost of requiring impressive amounts of data. Even though their original structure is not built on universal grammar, a key consequence of such massive training is that the network might actually encode various levels of the universal grammar itself, without a clear possibility for the experimenter to identify, account for, or discriminate against this. Partial evidence for this is the fact that different attention encodings can greatly influence performance in a variety of lexical decision tasks.

Another point to make in the Introduction is a third way between vectorial spaces and Chomsky’s UG, represented by cognitive networks (see for reference Siew et al. 2019, which focuses on semantic knowledge in the so-called mental lexicon, and Stella 2021, which focuses on syntactic parsing and sentiment tags in forma mentis networks). These networks provide interpretability to the empirical evidence supporting associative learning in language acquisition, and they shift the focus to links between concepts rather than latent features of individual concepts. These associations are available to everyone, even very young toddlers, and their interpretable, dynamical structure can be used to predict patterns of knowledge acquisition and prominence. I think the authors should address this third option briefly in the Introduction and then more extensively in the Discussion, as a future research direction. Why? Because what the authors realised here with KERMIT is, indeed, a network embedding of textual data that also includes syntactic networks with part-of-speech tagging, which is close to the work on cognitive network science highlighted above; a toy illustration of such a network is sketched below. In future expansions of the approach, the authors might enlarge K to include not only syntactic but also other types of semantic features, such as phonological similarities or free associations potentially contributing to language processing.
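To make the idea concrete, here is a minimal sketch, assuming networkx and a hand-tagged toy sentence, of the kind of concept network meant above; the adjacency links stand in for real dependency or association data and are purely illustrative, not the authors' method.

```python
# Illustrative sketch of a cognitive/syntactic network: words become
# nodes annotated with part-of-speech tags, and edges connect words
# that co-occur. Adjacency stands in for real syntactic relations here.
import networkx as nx

# Hand-tagged toy sentence; a real pipeline would use a POS tagger/parser.
tagged = [("toddlers", "NOUN"), ("learn", "VERB"),
          ("words", "NOUN"), ("through", "ADP"),
          ("associations", "NOUN")]

G = nx.Graph()
for word, pos in tagged:
    G.add_node(word, pos=pos)          # node carries its POS tag
for (w1, _), (w2, _) in zip(tagged, tagged[1:]):
    G.add_edge(w1, w2)                 # link co-occurring words

# Interpretable structure: degree as a crude proxy for lexical prominence.
print(sorted(G.degree, key=lambda kv: -kv[1]))
```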

In Table 1, which two results are tested with the sign test? The wording of the caption is slightly confusing.

Table 1 and the Figures provide only accuracy to test and compare models. However, accuracy only provides a partial picture in model estimation. Could the authors prepare another table with recall/precision values?

Base structure.. -> base structure.

KEMRIT is mentioned many times; is it a typo?

Classification task [?] – missing reference

Results and Discussion should be split into two different subsections

In the figure 2 -> in the Figure 2

intereptation pass -> interpretation pass

Author Response

Dear Reviewer,
Thank you for your comments; they were very helpful in producing a better paper.

The improvements made after the revision are listed below.

1) Thank you for your proposal to investigate the field of network science. We have added two very significant articles. In addition, we would like to investigate this field further, as it can provide a broader overview. (The changes can be seen on lines 53-54 and 467-476 in the attached PDF.)

2) In Table 1 – Which are the two results tested with the sign test? The wording of the caption is slightly confusing.

The sign-test notation in Table 1 was indeed quite confusing. The test was applied to verify that the differences between performances are genuine. We have clarified this by inserting additional distinct markers for each pair of compared measures.
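For readers unfamiliar with the statistic, here is a minimal sketch of a paired sign test between two classifiers, assuming SciPy (>= 1.7) and made-up per-example outcomes; the data and helper function are illustrative, not the authors' code.

```python
# Paired sign test: count the examples where the models disagree and
# test whether one model wins those disagreements more often than chance.
import numpy as np
from scipy.stats import binomtest

def sign_test(correct_a, correct_b):
    """Two-sided sign test on paired boolean correctness arrays.

    Ties (both models right, or both wrong) carry no information
    and are discarded, as is standard for the sign test.
    """
    a_wins = int(np.sum(correct_a & ~correct_b))
    b_wins = int(np.sum(~correct_a & correct_b))
    n = a_wins + b_wins
    if n == 0:
        return 1.0                      # the models never disagree
    # Under H0 (no real difference) each disagreement favours A with p = 0.5.
    return binomtest(a_wins, n, p=0.5).pvalue

# Toy usage with fabricated outcomes on ten examples.
gold = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
pred_a = gold.copy(); pred_a[:2] = 1 - pred_a[:2]    # model A errs twice
pred_b = gold.copy(); pred_b[:6] = 1 - pred_b[:6]    # model B errs six times
print(sign_test(pred_a == gold, pred_b == gold))
```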

3) Table 1 and the Figures provide only accuracy to test and compare models. However, accuracy only provides a partial picture in model estimation. Could the authors prepare another table with recall/precision values? 

In Table 1 we report accuracy because the datasets tend to be balanced, so F1 and accuracy are largely redundant. In addition, the new Table 2 reports the characteristics of the dataset samples that were used for the experiments. (These data can be seen in Table 2 of the attached PDF.)
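For reference, a minimal sketch, assuming scikit-learn and fabricated labels rather than the paper's data, of how the metrics relate on a balanced set:

```python
# On a balanced binary dataset with roughly symmetric errors, accuracy,
# precision, recall and macro-F1 all land in the same place.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

gold = [0, 0, 0, 0, 1, 1, 1, 1]          # balanced classes
pred = [0, 0, 0, 1, 1, 1, 1, 0]          # one error per class

acc = accuracy_score(gold, pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    gold, pred, average="macro", zero_division=0)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

In the single-label multiclass case, micro-averaged F1 coincides exactly with accuracy, which is the redundancy the response alludes to.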

4) KEMRIT is mentioned many times, is it a typo?

We have standardized all occurrences of the word KERMIT so that there can be no misunderstandings.

5) Results and Discussion should be split in 2 different subsections

The results and the discussion are two very important parts of our analysis, so we followed your advice and split them. The split also allowed us to address a very important topic in the discussion part; for now it is only a supposition, but we would like to investigate it in future work. (The changes can be observed in Subsections 4.2 and 4.3.)

Author Response File: Author Response.pdf

Reviewer 2 Report

Globally, the manuscript is very well written and organized.

Besides some minor English and typing/editing “bugs”, noted in the attached commented PDF file, I have no major concerns.

Comments for author File: Comments.pdf

Author Response

Dear Reviewer,

Thank you very much for your suggestions and corrections in the pdf you sent us.

We have also corrected some small bugs, such as the definition of i in the K function, which is now visible on line 262.

The same has been done for some missing spaces and a few superfluous 's' characters.

Thank you again for your suggestions,
Kind regards.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper presents a way to implement the Kernel-inspired Encoder with Recursive Mechanism for Interpretable Trees (KERMIT) to investigate the possible meeting point between various learning theories. The main aim is to derive a new vision of knowledge in AI models based on a combination of rules, learning and human knowledge.

This paper deals with an interesting subject and it is well written. However, more technical details regarding the components of the KERMIT architecture (Section 3.4), as well as the BERT architecture class (Section 4.1.3), should be provided. Also, the measures used to evaluate text classification performance should be defined.

Author Response

Dear Reviewer,

Thank you very much for your interest and advice.

As advised in the review, we have edited and enriched the following sections:

1) The architecture section. 

We worked on two fronts to extend the description:
Section "3.5 KERMIT as a meeting point", where we added more detail on both the visualizer and other technical aspects.
Section "4.1.1 KERMIT encoder", where we described the input encoding mechanism.

2) To extend the description of the BERT part (Section 4.1.2), we have contextualised the use of this Transformer and described how it is applied in text classification tasks; a toy sketch of this usage follows below.
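As an illustration only, here is a minimal sketch, assuming the Hugging Face transformers library and an off-the-shelf checkpoint, of how a BERT model is applied to text classification; the paper's actual configuration may differ.

```python
# BERT for text classification: encode the text, run it through BERT
# with a classification head, and read off class probabilities.
# The head below is freshly initialised, so the output is demo-only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

inputs = tokenizer("An example sentence to classify.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))            # per-class probabilities
```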

Finally, we improved the description of the experiments and of the data used, and we enriched the description of the results table.

We share the changes in the attached pdf. The changes are visible in blue.

Thank you very much for your suggestions.
Kindest regards,
The authors.

Author Response File: Author Response.pdf

Reviewer 4 Report

The article entitled "Dis-cover AI Minds to Preserve Human Knowledge" presents a new system (KERMIT). This article aims to demonstrate that the foundations of language learning are both innate and learned.

The introduction gives the general context, justifies the presented system and ends by stating the scientific contributions that the article makes.
However, it is not easy to understand what is at stake in the presented system. It would be relevant to illustrate (perhaps with an example) the type of results expected.

In section 2, the study of nativist and empiricist theories is briefly but effectively conducted. This study justifies the authors' hypothesis as to the relevance of establishing a "hybrid" solution combining the best of these two theories.

The outline of the following sections is provided at the end of section 2, which is very helpful to the reader.

Section 3 discusses the different points of view and justifies the authors' own position. It leads to the presentation of the proposed system.
This section is interesting, but the explanations on the proposed system are too brief to allow its reproduction, which prevents the system from being scientifically verified.
The explanations on the development and functioning of the proposed system should be significantly improved.
Finally, the experiments are well detailed and relevant. The conclusions drawn are interesting.

In summary, the article is overall of good quality. The introduction can be improved (for example, with examples), and section 3.5 should be significantly enriched (or even become a separate part).

Author Response

Dear Reviewer,

Thank you very much for your interest and advice.

In the second version we have improved and enriched the description of the KERMIT framework in two different places: the section "3.5. KERMIT as a meeting point", where we have described the visualizer in more detail, and the section "4.1.1. KERMIT", where we have further described the input encoding mechanism, which is very important.

We have chosen to enrich the two parts without splitting the sections, so as not to lose the thread.
Thank you for your advice.

(We share the changes in the attached pdf. The changes are visible in blue.)

Thank you very much for your suggestions.
Kindest regards,
The authors.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors addressed my comments and the manuscript is suitable for publication.

Author Response

Dear Reviewer,
Thank you very much for your advice.
Your comments helped us to improve our paper; we also revised some passages and corrected the style. The parts in blue correspond to the requests made by the editor.
Kind regards,
Authors.

Author Response File: Author Response.pdf

Reviewer 2 Report

All my previous concerns have been addressed and answered.

Author Response

Dear Reviewer,
Thank you very much for your advice.
Your comments helped us to improve our paper; we also revised some passages and corrected the style. The parts in blue correspond to the requests made by the editor.
Kind regards,
Authors.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors responded to all my concerns and suggestions. In my opinion, the paper can be published in its current version.

Author Response

Dear Reviewer,
Thank you very much for your advice.
Your comments helped us to improve our paper; we also revised some passages and corrected the style. The parts in blue correspond to the requests made by the editor.
Kind regards,
Authors.

Author Response File: Author Response.pdf

Reviewer 4 Report

The authors have improved the article in accordance with the reviews. Each section has been improved. The main improvements are in Subsections 4.1.1, 4.1.2 and 4.1.3, which allow a better understanding of the experiments and the results of the presented approach.

Author Response

Dear Reviewer,
Thank you very much for your advice.
They helped us to improve our paper, we also revised some passages and corrected the style. The parts in blue correspond to the requests made by the editor.
Kind regards,
Authors.

Author Response File: Author Response.pdf
