Article
Peer-Review Record

One-Dimensional Convolutional Neural Network with Adaptive Moment Estimation for Modelling of the Sand Retention Test

Appl. Sci. 2021, 11(9), 3802; https://doi.org/10.3390/app11093802
by Nurul Nadhirah Abd Razak 1, Said Jadid Abdulkadir 1,2,*, Mohd Azuwan Maoinser 3, Siti Nur Amira Shaffee 4 and Mohammed Gamal Ragab 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 14 February 2021 / Revised: 29 March 2021 / Accepted: 3 April 2021 / Published: 22 April 2021

Round 1

Reviewer 1 Report

This study proposes a one-dimensional convolutional neural network with adaptive moment estimation to model sand retention tests, evaluating plugging and predicting sand production and retained permeability.

The title of the manuscript differs from the one in the citation box on the left. The latter may actually be the better title.

Lines 18 and 59: Why is deep learning needed in this study? No background is given to support this conclusion. You should review and include more references discussing the application of ML/AI and data-driven modelling in the various disciplines of petroleum engineering, and then also discuss why you chose this approach.

Lines 19-21: This text is copied from a reference without proper rephrasing or citation. Please fix this.

Lines 21-22: As discussed above, no solid background or reasons for using deep learning are given, so the conclusion that deep learning is therefore needed in this study does not follow. You should explain why you selected this approach and discuss its benefits.

The language must be improved; examples include lines 45-47, 57, 68, 96, 114-115, 150, 151, 156, 174, 181, 182, 188, 193-194, 202, 216, 235, 349, 351, 358, 369, 391, 465-466, 474, 498, 499, and 502.

Line 107, Section 2.2.1: The text in this section is very long and unstructured. Try to be more concise and present the main message related to the goals of this study. At present it lists assorted information from the literature without reaching any solid conclusion.

In this section, define what d10, d30, etc. are.

Figure 2: In tuning and iterating the hyperparameters, what are the criteria?

Line 132: What are the general flow properties?

Lines 160-162: Regarding the contradiction you refer to, the cited reference discusses only a particular system, not the general case.

Figure 3: Use the same colors for the bar charts in (a) and (b). What are the units of the different parameters? What do the screen acronyms in (b) stand for? Please state them.

Section 2.2.1: Try to explain the figures and trends more fully, with background and reasoning for your selections.

Line 287: Define what a p-value is.

Line 310: What is ∝?

All the equation numbers in the PDF appear as linked cross-references to the equations, which looks messy. This should be fixed.

Line 351: Regarding data partitioning, it is good practice to have training, calibration, testing, and also blind validation sets. How confident are you in your current model training?
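To illustrate the kind of partitioning meant here, a minimal sketch using scikit-learn is shown below; the split ratios, random seed, and placeholder data are assumptions for illustration only, not the authors' actual setup.

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Placeholder data standing in for the sand retention test features and targets.
    X = np.random.rand(200, 8)
    y = np.random.rand(200)

    # Hold out a blind validation set first, then split the remainder into train/validation/test.
    X_rest, X_blind, y_rest, y_blind = train_test_split(X, y, test_size=0.10, random_state=42)
    X_train, X_tmp, y_train, y_tmp = train_test_split(X_rest, y_rest, test_size=0.30, random_state=42)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=42)
    # X_blind / y_blind are never used during training or tuning; they serve only as a final blind check.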

Section 2.4: In general, this section needs better explanation. The hidden layers and the flatten layer should be explained in some detail.
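As an illustration of the kind of detail that could be given, a generic 1D CNN with convolutional hidden layers followed by a flatten layer might look like the Keras sketch below; the layer counts, sizes, and input shape are placeholders, not the architecture from the manuscript.

    import tensorflow as tf

    # Hypothetical 1D CNN: layer counts and sizes are illustrative placeholders only.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu", input_shape=(16, 1)),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
        # The flatten layer collapses the (timesteps, channels) feature maps into a single
        # vector so that fully connected (dense) hidden layers can follow.
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1)  # single regression output, e.g. retained permeability
    ])
    model.compile(optimizer="adam", loss="mse")
    model.summary()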

Line 446: Shown in what?

Table 3: The first and last rows are identical. The same comment applies to the text before and after the table.

Line 504: What is Section 2.5?

Equation 18: What is this equation for? It should be discussed with regard to Table 5. You need to give more explanation.

Figure 6: This figure should be improved; the graphs are not very clear. What is depth? What are the units?

No details about the ML models are provided. It is difficult to judge the performance. Please provide full details of all the models used. 

Both accuracy and speed are important in model development. No details are provided about the time needed to train and develop the CNN model, or about how it compares with other models (ML and perhaps also numerical models). Please comment on this.

I would like to know more about your model. What are the Adam parameters? How did you manage the overfitting issues?

Author Response

Attached file contains combined comments for reviewer 1 (highlighted in red) and reviewer 2 (highlighted in blue).

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have done a wonderful study, proposing a convolutional neural network approach to model screen retention tests. This study has implications for addressing sand production issues in oil and gas reservoirs, and their model helps to select screens with optimal characteristics. The results show that, given enough reliable data, the best parameters for the slurry and sand-pack tests can be selected through optimization of the hyperparameters. They also suggest that a one-dimensional convolutional neural network would outperform other machine learning approaches. The numerical and modelling workflow and algorithms are described in great detail, and the conclusions are sufficiently supported by the results.

Some minor clarifications and corrections are required to make this manuscript ready for publication. Please follow and address the comments made within the manuscript.

Comments for author File: Comments.pdf

Author Response

Attached file contains combined comments for reviewer 1 (highlighted in red) and reviewer 2 (highlighted in blue).

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Dear Authors,

Please take your time and do a revision regarding the following comments:

- The language must be improved; a few examples are lines 18-19, 24, 98, 119, 160, 227-228, 235, 1281.

- Fig. 2: How do you decide on yes/no for iterating and tuning? There must be some criterion, even with trial and error.
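One way to make such a criterion explicit, even with trial and error, is to keep iterating only while a validation metric still improves; the sketch below is purely hypothetical (the tolerance and the train_and_validate callback are assumptions, not the authors' procedure).

    # Hypothetical yes/no criterion for the tuning loop in Fig. 2: iterate ("yes") while the
    # validation error still improves by more than a tolerance, otherwise stop ("no").
    def tune(candidate_configs, train_and_validate, tol=1e-3):
        best_config, best_error = None, float("inf")
        for config in candidate_configs:
            error = train_and_validate(config)   # e.g. validation RMSE for this configuration
            if best_error - error > tol:         # "yes": meaningful improvement, keep tuning
                best_config, best_error = config, error
            else:                                # "no": improvement below tolerance, stop
                break
        return best_config, best_error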

- Section 2.4.1: Text is again missing. This revised version has even more missing text where the equations and cross-references should be. I suggest uploading the Word file or contacting the journal to find a way for the manuscript to be evaluated properly.

- Fig. 6: In the previous version the y-axis was named “depth”. In this revised version the y-axes are corrected, but please also label your x-axes, even though they are explained in the text.

In the figure caption, the word “model” is repeated several times.

- I repeat one of my earlier comments: no details about the ML models are provided, so it is difficult to judge their performance. Please provide full details of all the models used, not only an explanation of what they are, and give the descriptions in tables.

- Can you provide any performance comparison against numerical simulations (in terms of accuracy)?

- Adam is of course an algorithm for training and optimization, but the question was about the Adam parameters used for the training, such as the learning rate, the exponential decay rates for the first and second moment estimates, and the numerical stability constant. Please provide these details.
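For reference, these correspond to the four standard Adam hyperparameters; the minimal Keras example below shows the library's default values, which may well differ from the values actually used and which should therefore be reported explicitly.

    import tensorflow as tf

    # Keras defaults shown for illustration only; the values used in the manuscript may differ.
    optimizer = tf.keras.optimizers.Adam(
        learning_rate=0.001,  # step size
        beta_1=0.9,           # exponential decay rate for the 1st moment (mean) estimates
        beta_2=0.999,         # exponential decay rate for the 2nd moment (variance) estimates
        epsilon=1e-07,        # numerical stability constant
    )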

Author Response

Dear Reviewer,

I have attached a merged PDF file that contains the following:

  1. Response to reviewer
  2. Highlighted corrections
  3. Final paper without highlights

Thank you for your comments.

Regards

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Dear Authors,

Thank you for the revised version.

  • Figs. 6 and 7: The x-axis appears to be the number of observations; currently, the figure title is shown on the x-axis.
  • Section 3.4 can be removed (I did not ask for it). The comment asked whether you could provide a performance comparison against numerical simulations (in terms of accuracy), but I assume you have not done any numerical simulations.

This manuscript is a resubmission of an earlier submission.

