Peer-Review Record

How to Train an Artificial Neural Network to Predict Higher Heating Values of Biofuel

Energies 2022, 15(19), 7083; https://doi.org/10.3390/en15197083
by Anna Matveeva 1 and Aleksey Bychkov 1,2,*
Submission received: 20 August 2022 / Revised: 20 September 2022 / Accepted: 23 September 2022 / Published: 27 September 2022

Round 1

Reviewer 1 Report

In this paper, the authors used an ANN to predict higher heating values of biofuel. In my opinion, this paper is well written and well structured. I have several comments for the authors.

  1. Please provide more quantitative information about this paper in the abstract. The current version needs improvement.
  2. In the abstract, could you please add information regarding the model inputs and outputs?
  3. In this paper, I noticed the authors used both ANN and ANNs. Please use these terms consistently.
  4. Section 2.1, line 96, please change “artificial neural network” to “ANN”.
  5. In this paper, the authors performed an analysis regarding the input features. However, in my opinion, it would be more useful to perform an input-feature importance analysis to select which input parameters are important, rather than manual selection. Please consider doing that.
  6. It is not very clear to me what the input features in this study are. Please explain this information clearly, e.g., what are C, H, N…?
  7. For the ANN architecture tuning part, I am wondering about layers with different numbers of neurons.
  8. For the particular case study, what is the final suggestion for ANN structure, activation functions and training algorithm?
  9. Why did you consider the k-fold cross-validation method to select ANN parameters?
  10. What are the limitations of this work? Why consider an ANN rather than random forest?
  11. Please add more information to the conclusion section.

Author Response

Point 1. Please provide more quantitative information about this paper in the abstract. The current version needs improvement.

Response 1: Done. We have added several sentences. In particular, we have shown that 550 samples are sufficient to ensure convergence of the algorithm; that carbon and hydrogen contents are sufficient elemental-analysis data; and that volatile matter can be excluded from the proximate analysis. The minimal required complexity of the neural network is ~50 neurons.

Point 2. In the abstract, could you please add information regarding the model inputs and outputs?

Response 2: The model output is already indicated: it is the HHV value itself. We have added information about the inputs (line 19): "data of ultimate and proximate analysis".

Point 3. In this paper, I noticed the authors used both ANN and ANNs. Please use these terms consistently.

Response 3: Yes, we use the term "ANNs" at the start of the Introduction to emphasize the multitude of possible ANNs. Thereafter, only "ANN" is used, referring to one specific ANN.

 Point 4. Section 2.1, line 96, please change “artificial neural network” to “ANN”.

Response 4: Done, thank you.

Point 5. In this paper, the authors performed an analysis regarding the input features. However, in my opinion, it would be more useful to perform an input-feature importance analysis to select which input parameters are important, rather than manual selection. Please consider doing that.

Response 5: Yes, we also considered the physical importance of the chosen input parameters: the change in the oxidation state of carbon and hydrogen (from the ultimate analysis) is the main energy-releasing process during combustion, unlike nitrogen-related processes. Volatile matter was also excluded as a dependent parameter, as mentioned in the manuscript. We agree that other good input parameters might be found, for example, a specially processed photograph of the biomass of interest, but we work within the current paradigm of ultimate and proximate analysis data.

Point 6. It is not very clear to me what are the input features in this study. Please try to explain this information clearly, e.g., what are C, H, N…?

Response 6: We are sorry for the unclear introduction. C, H, and N are carbon, hydrogen, and nitrogen, respectively. These are the features of the ultimate analysis. We have changed the notation in Table 2 to make this clear.

Point 7. For the ANN architecture tuning part, I am wondering about layers with different numbers of neurons.

Response 7: As demonstrated in Figure 5, increasing the number of layers reduces stability. Therefore, we chose a one-layer architecture. Redistribution of neurons between layers was tested for the two-layer architecture (see Figure 6a); except for a few edge cases, redistribution has no effect there.

Point 8. For the particular case study, what is the final suggestion for ANN structure, activation functions and training algorithm?

Response 8: Thank you for the comment; we have added a summary in lines 358-360 of the Conclusion section: "Our final suggestion for the ANN structure is as follows: a perceptron with 100 neurons in the hidden layer, the rectified linear unit (ReLU) as the activation function, and adaptive moment estimation (Adam) as the training algorithm."
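For readers who want to reproduce this configuration, a minimal sketch is given below. It assumes a scikit-learn `MLPRegressor` (the library is our assumption; the authors' actual code is in the Supplementary files) and uses synthetic stand-in data rather than the paper's dataset.

```python
# Sketch of the suggested architecture: one hidden layer of 100 neurons,
# ReLU activation, Adam training algorithm (scikit-learn assumed;
# the data here are synthetic stand-ins, not the authors' dataset).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((550, 2))                  # e.g. carbon and hydrogen contents
y = X @ np.array([0.35, 1.18]) + 5.0      # placeholder for HHV values

model = MLPRegressor(hidden_layer_sizes=(100,), activation="relu",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X, y)
print(model.n_layers_)  # 3: input layer, one hidden layer, output layer
```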

Point 9. Why did you consider the k-fold cross-validation method to select ANN parameters?

Response 9: The k-fold cross-validation method was used to estimate the final ANN performance rather than to select its parameters.
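This use of cross-validation can be illustrated as follows: an already-chosen model is scored over k folds to estimate its generalization performance. The sketch assumes scikit-learn and synthetic stand-in data (not the authors' code or dataset).

```python
# K-fold cross-validation used to estimate the performance of a fixed,
# already-tuned model, not to search over its hyperparameters.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((550, 2))                  # synthetic stand-in features
y = X @ np.array([0.35, 1.18]) + 5.0      # synthetic stand-in target

model = MLPRegressor(hidden_layer_sizes=(100,), activation="relu",
                     solver="adam", max_iter=2000, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(len(scores))  # one R^2 score per fold, i.e. 5
```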

Point 10. What are the limitations of this work? Why consider an ANN rather than random forest?

Response 10: Like any other ANN, ours is guaranteed to work only within the range on which it was trained. However, here this range is very wide: the widest among those reported previously. We chose an ANN rather than any other machine-learning approach because we wanted to shift the focus of the discussion from small datasets and hidden hyperparameter optimization to a step-by-step demonstration of hyperparameter tuning, and an ANN was convenient for this purpose. Other machine-learning approaches were excluded due to reasonable restrictions on the scope of the article.

 Point 11. Please add more information to the conclusion section.

Response 11: Done. We have added the final suggestion for the ANN structure (Point 8) and an answer to Point 10; these indeed make our manuscript clearer.

Reviewer 2 Report

The only advice is to improve the explanation of the methodology used, also using logical patterns.

Author Response

Point 1. The only advice is to improve the explanation of the methodology used, also using logical patterns

Response 1: The methodology is described in the manuscript, and the algorithm we used is available in the Supplementary files. We hope that the changes have made the article more logically clear.

Reviewer 3 Report

The manuscript employs an artificial neural network to predict heating values of biofuel, but the paper merely compares the prediction results of neural networks and analyzes the influence of the model structure, activation function, and solver. No further theoretical or methodological innovation is proposed in this manuscript.

Author Response

Point 1. The manuscript employs an artificial neural network to predict heating values of biofuel, but the paper merely compares the prediction results of neural networks and analyzes the influence of the model structure, activation function, and solver. No further theoretical or methodological innovation is proposed in this manuscript.

Response 1: We did not set out to train yet another neural network to 99.99(9)% accuracy. Such works are numerous, and they differ only slightly from one another. Instead, we have introduced and described for the first time the method of ANN hyperparameter tuning. We hope that our insight into the application of neural networks will help researchers use this useful tool more accurately.

Round 2

Reviewer 3 Report

The manuscript is well written and deserves to be accepted for publication.
