Article
Peer-Review Record

System for Semi-Automated Literature Review Based on Machine Learning

Electronics 2022, 11(24), 4124; https://doi.org/10.3390/electronics11244124
by Filip Bacinger, Ivica Boticki * and Danijel Mlinaric
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 24 October 2022 / Revised: 25 November 2022 / Accepted: 5 December 2022 / Published: 10 December 2022
(This article belongs to the Special Issue Advanced Web Applications)

Round 1

Reviewer 1 Report

The authors present a new method to generate automated literature reviews by means of artificial intelligence; the manuscript is well presented, and the methodology is coherent.

 

I have the following suggestions:

1-    In Section 3.2 it states: “All positive examples and an equal number of negative examples obtained by subsampling were used for training in both datasets.” I suggest creating a table presenting all the data used: how many articles were used to train the model, etc.
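The balancing step quoted above can be sketched as follows (a minimal illustration on toy data; the `label` field and function name are assumptions, not the authors' code):

```python
import random

def balance_by_subsampling(examples, seed=0):
    """Keep all positive examples and randomly subsample an equal
    number of negative examples, as described in Section 3.2."""
    positives = [e for e in examples if e["label"] == 1]
    negatives = [e for e in examples if e["label"] == 0]
    rng = random.Random(seed)
    balanced = positives + rng.sample(negatives, k=len(positives))
    rng.shuffle(balanced)
    return balanced

# Toy corpus: 2 relevant (positive) and 5 irrelevant (negative) articles.
corpus = [{"id": i, "label": 1 if i < 2 else 0} for i in range(7)]
balanced = balance_by_subsampling(corpus)
print(len(balanced))  # 4: all 2 positives plus 2 subsampled negatives
```

A table of exactly these counts per dataset (positives kept, negatives sampled, totals) is what the comment asks for.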

2-    Update all references; they are quite outdated.

3-    In Section 3.1 it states: “Two datasets, Mental Health (MH) and Explainable Artificial Intelligence (EXAI), were obtained from the results of manual literature reviews to train machine learning…” It is CRITICAL to show the details of the literature review process: which methodology did you use? What is the process? Is it validated? Etc.

A machine learning algorithm is only as good as the data used to train it. If the data generated by the “review process” is wrong or unvalidated, the algorithm will not work regardless of its measured performance; please explain and validate the process used to generate the input data.

4-    Results: please compute the AUC for all the models and discuss their low specificity.
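For reference, AUC can be computed directly from the Mann-Whitney statistic without any library support (a generic sketch with made-up scores, not the authors' code or data):

```python
def auc(labels, scores):
    """AUC = probability that a randomly chosen positive example is
    scored above a randomly chosen negative one (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0]            # ground truth: relevant vs. irrelevant
scores = [0.9, 0.4, 0.6, 0.3, 0.2]  # classifier confidence scores
print(round(auc(labels, scores), 3))  # 0.833
```

Unlike accuracy, this metric is threshold-free, which makes it a natural complement to the sensitivity/specificity discussion the comment requests.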

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript discusses an automated system for performing literature reviews using a combination of different machine learning approaches.

The following points are observed. 

1. The manuscript lacks a proper, detailed justification of the need for such a system. It is suggested to provide a detailed justification.

2. For every machine learning approach, training is very important. There is not much information provided on the training part of the proposed system. 

3. A description of the paper organization is missing. It is suggested to provide a paragraph outlining the organization of the paper.

4. The title is slightly misleading. What can be concluded from the manuscript is that only part of the literature review process is automated: only the selection of articles is performed automatically, and the rest is to be done manually. It is suggested to align the title with the exact work performed in the manuscript.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

In this work, the authors proposed a system for automating the literature review process using machine learning.

1. The method trained a series of machine learning models to classify whether an input paper belongs to the topic being explored as part of the review process.

2. A graphical user interface was developed for data queries and running the machine learning models.

Experimental results show that the best results in terms of sensitivity and accuracy for the automated literature review process are achieved by using a combined machine learning model.

While this work is interesting, there are some concerns the authors should address prior to publication.

 

Comments

 

1. In Table 8, the authors show the results of cross-testing, i.e., the model trained on MH data was tested on EXAI data, and the model trained on EXAI data was tested on MH data. Since the MH data and the EXAI data are about different topics, why can a classifier trained on one dataset be applied to the other? Could the authors provide more explanation?
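The cross-testing setup referred to here can be sketched generically (a hypothetical harness with a trivial majority-class model standing in for the actual classifiers; all names and toy data are assumptions):

```python
class MajorityClass:
    """Stand-in classifier: always predicts the most common training label."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
        return self
    def predict(self, X):
        return [self.label] * len(X)

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def cross_test(make_model, datasets, metric):
    """Train on each dataset and evaluate on every *other* dataset."""
    results = {}
    for train_name, (X_tr, y_tr) in datasets.items():
        model = make_model().fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in datasets.items():
            if test_name != train_name:
                results[(train_name, test_name)] = metric(y_te, model.predict(X_te))
    return results

# Toy stand-ins for the MH and EXAI datasets (features are unused here).
datasets = {
    "MH":   ([[0]] * 4, [1, 1, 1, 0]),
    "EXAI": ([[0]] * 4, [0, 0, 1, 0]),
}
print(cross_test(MajorityClass, datasets, accuracy))
```

Low cross-dataset scores from such a harness would indicate topic-specific models, which is exactly the point the comment asks the authors to clarify.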

 

2. Could the authors comment on how the noise/disagreement in labeling affects the performance of the model?
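One standard way to probe this is to flip a controlled fraction of the training labels and re-measure performance (a generic sketch simulating annotator disagreement; not part of the manuscript):

```python
import random

def flip_labels(y, fraction, seed=0):
    """Return a copy of y with the given fraction of binary labels flipped,
    simulating noise/disagreement in the gold-standard annotations."""
    rng = random.Random(seed)
    y_noisy = list(y)
    for i in rng.sample(range(len(y)), k=round(fraction * len(y))):
        y_noisy[i] = 1 - y_noisy[i]
    return y_noisy

y = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
y_noisy = flip_labels(y, 0.2)
print(sum(a != b for a, b in zip(y, y_noisy)))  # 2 labels flipped
```

Retraining on `y_noisy` at several noise fractions and plotting the resulting sensitivity/specificity would quantify the robustness the comment asks about.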

 

3. What is the training/inference time for the proposed model?

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
