Article
Peer-Review Record

Deep Reinforcement Learning-Based Multi-Hop State-Aware Routing Strategy for Wireless Sensor Networks

Appl. Sci. 2021, 11(10), 4436; https://doi.org/10.3390/app11104436
by Aiqi Zhang, Meiyi Sun, Jiaqi Wang, Zhiyi Li, Yanbo Cheng and Cheng Wang *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 19 April 2021 / Revised: 10 May 2021 / Accepted: 11 May 2021 / Published: 13 May 2021

Round 1

Reviewer 1 Report

The results seem to be sound. The title of the paper says ‘Deep Reinforcement Learning based…’, but the neural network models are not clearly described in the present paper.

The authors mention two key questions that they want to solve: traffic flow prediction, and how to make the enhanced nodes respond intelligently to changes in the network topology. For the first question, the authors propose to use an RNN combined with DDPG to do the prediction. There is no description of the RNN model: what are the input and output of the RNN? What are the hyperparameters used to produce the results?

Abbreviated phrases should be written in full the first time they are used; check line 90 (Double-Deep Q Network), line 97 (WSNS or WSNs?), and line 153 (RNN).

The English language should be carefully revised; please check throughout the paper and pay more attention to the singular and plural forms of the same word.

In line 35, “the typical feature of these nodes generally is limited storage, power and…” should be changed to “the typical features of these nodes generally are limited storage, power and…”.

Author Response

Response to Reviewer 1 Comments

 

Dear Editor and Reviewer,

We are writing in response to your comments on our paper, “Deep Reinforcement Learning-Based Multi-Hop State-Aware Routing Strategy for Wireless Sensor Networks”.

We are very grateful for the valuable questions and instructive comments that have been raised, and we greatly appreciate the time and effort spent reviewing this paper. We believe that we have addressed each of the comments. Our point-by-point responses to your questions and comments follow.

 

 

Point 1: The results seem to be sound. The title of the paper says ‘Deep Reinforcement Learning based…’, but the neural network models are not clearly described in the present paper. The authors mention two key questions that they want to solve: traffic flow prediction, and how to make the enhanced nodes respond intelligently to changes in the network topology. For the first question, the authors propose to use an RNN combined with DDPG to do the prediction. There is no description of the RNN model: what are the input and output of the RNN? What are the hyperparameters used to produce the results?

 

Response 1: As suggested, we have added a figure of the RNN structure and explained the input and output of the network. The hyperparameters of the RNN are similar to those in [23].

 

  [23] Ramakrishnan, N.; Soni, T. Network Traffic Prediction Using Recurrent Neural Networks. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 2018; pp. 187–193.
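For illustration only, a minimal sketch of such a recurrent traffic-flow predictor is given below, written with TensorFlow/Keras (the framework used in our experiments). It is not the exact network from the paper: the window length, layer width, optimizer settings, and the synthetic trace are placeholder assumptions chosen for demonstration.

# Minimal illustrative sketch (not the exact model from the paper): a simple
# recurrent traffic predictor in the spirit of [23], built with TensorFlow/Keras.
# Window length, layer width, optimizer, and the toy trace are assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 10   # assumed: number of past traffic samples fed to the RNN
HORIZON = 1   # assumed: predict the traffic volume of the next time step

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),   # input: window of past traffic volumes
    tf.keras.layers.SimpleRNN(32),              # recurrent hidden layer
    tf.keras.layers.Dense(HORIZON),             # output: predicted future traffic
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Toy data: sliding windows over a synthetic traffic trace.
trace = (np.sin(np.linspace(0.0, 20.0, 500)) + 1.0).astype("float32")
n = len(trace) - WINDOW - HORIZON
X = np.stack([trace[i:i + WINDOW] for i in range(n)])[..., None]   # shape (n, WINDOW, 1)
y = np.stack([trace[i + WINDOW] for i in range(n)])[..., None]     # shape (n, 1)
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_traffic = model.predict(X[-1:])   # estimated traffic for the next time step

In the actual model, the toy trace above is replaced by the traffic statistics collected in the network, and the prediction is used together with the DDPG-based routing described in the manuscript.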

 

Point 2: Abbreviated phrases should be written in full the first time they are used; check line 90 (Double-Deep Q Network), line 97 (WSNS or WSNs?), and line 153 (RNN).

 

Response 2: We have checked the full text to ensure that each abbreviation is written in full the first time it is used.

 

Point 3: The English language should be carefully revised; please check throughout the paper and pay more attention to the singular and plural forms of the same word.

 

Response 3: We have carefully checked the grammar throughout the paper.

 

Point 4: In line 35, “the typical feature of these nodes generally is limited storage, power and…” should be changed to “the typical features of these nodes generally are limited storage, power and…”.

 

Response 4: As suggested, we have rewritten the sentence.

 

 

Finally, thank you again for your suggestions.

 

Best wishes,

All Authors

 

Author Response File: Author Response.docx

Reviewer 2 Report

The presented research is current and fully described, and it has potential for further development. The paper is very well organized and highly readable.

For the sake of completeness, the experimental section should mention the devices on which the research was conducted (i.e., memory, processor, disk) and the software/languages used for the calculations.
Moreover, typos should be corrected, such as the unnecessary space in line 27.

Author Response

Response to Reviewer 2 Comments

 

Dear Editor and Reviewer,

We are writing in response to your comments on our paper, “Deep Reinforcement Learning-Based Multi-Hop State-Aware Routing Strategy for Wireless Sensor Networks”.

We are very grateful for the valuable questions and instructive comments that have been raised, and we greatly appreciate the time and effort spent reviewing this paper. We believe that we have addressed each of the comments. Our point-by-point responses to your questions and comments follow.

 

 

Point 1: For the sake of completeness, the experimental section should mention the devices on which the research was conducted (i.e., memory, processor, disk) and the software/languages used for the calculations.

 

Response 1: The input part of MHSA-TFF is implemented in Python, and the deep learning model is built with TensorFlow. The experimental environment is as follows: AMD Ryzen 5 CPU @ 3.5 GHz (4 cores), 16 GB RAM, and an NVIDIA GTX 1060 GPU with 6 GB of VRAM.
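As a minimal sketch (not a script from the paper), the reported software and hardware environment can be logged at runtime with standard platform and TensorFlow calls, for example:

# Minimal sketch: print the Python/TensorFlow versions and visible devices.
import platform
import tensorflow as tf

print("Python     :", platform.python_version())
print("Processor  :", platform.processor())
print("TensorFlow :", tf.__version__)
print("GPUs       :", tf.config.list_physical_devices("GPU"))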

 

Point 2: Moreover, typos should be corrected, such as the unnecessary space in line 27.

 

Response 2: As suggested, we have revised the text.

 

 

 

Finally, thank you again for your suggestions.

 

Best wishes,

All Authors

 

Author Response File: Author Response.docx
