Article
Peer-Review Record

Ship Network Traffic Engineering Based on Reinforcement Learning

Electronics 2024, 13(9), 1710; https://doi.org/10.3390/electronics13091710
by Xinduoji Yang 1,†, Minghui Liu 2,†, Xinxin Wang 2, Bingyu Hu 2, Meng Liu 2 and Xiaomin Wang 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Reviewer 5: Anonymous
Submission received: 1 April 2024 / Revised: 22 April 2024 / Accepted: 23 April 2024 / Published: 29 April 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Please see the attachment.

Comments for author File: Comments.pdf

Comments on the Quality of English Language

I think it is fit for a research journal, even though some mistakes have been found, which need to be fixed.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors
  1. Section 2.2 (Q-learning): Provide a more detailed explanation of the Q-learning update rule, including the role of the discount factor (γ) and the learning rate (α).
  2. Text in Section 3.1 (Traffic balancing): 'When receiving an instruction, we must select an action from the left and right paths as the reinforcement learning action.' - Clarify how the action space is defined in the reinforcement learning framework, particularly for more complex network topologies.
  3. This is an interesting proposed method; check how this can be extended to Security when applying Federated learning vs. Reinforcement learning - doi: 10.1109/HONET59747.2023.10374608.
  4. Section 3.2 (Importance of instructions): Provide more details on how the reward function prioritizes high-importance instructions while also considering network load balancing. Cross-ref with https://doi.org/10.1016/j.cose.2023.103578
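The update rule that comment 1 asks the authors to explain is the standard tabular Q-learning rule. As a hedged illustration only (a generic sketch of the roles of the learning rate α and discount factor γ, not the authors' actual implementation), the left/right path choice mentioned in comment 2 could be modeled as:

```python
from collections import defaultdict

# Tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# alpha (learning rate) controls how strongly new experience overrides the old estimate;
# gamma (discount factor) weights future reward against immediate reward.

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q[(next_state, a)] for a in actions)   # greedy value of next state
    td_target = reward + gamma * best_next                 # bootstrapped target
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])
    return Q[(state, action)]

# Hypothetical action space: choosing the "left" or "right" path, as in Section 3.1.
Q = defaultdict(float)
new_q = q_update(Q, state="s0", action="left", reward=1.0, next_state="s1",
                 actions=("left", "right"))
print(new_q)  # 0.1 on a fresh table: alpha * (1.0 + gamma * 0 - 0)
```

With an empty table the next-state values are all zero, so the first update moves Q(s0, left) by alpha times the immediate reward, which is exactly the trade-off the reviewer asks to see explained.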

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Apart from the paper title, there is no specification of the particularities of traffic in a ship network compared with traffic in a general data network. Section 2.1 mentions cables, optical cables, and satellite communications, but it is not clear how these subnetworks are interconnected in a ship network, nor what data types and traffic characteristics these subnetworks carry. The authors may consider addressing this in the paper.

The introduction cites some very old references (e.g., 8, 20), and other references do not contain information related to the paper's subject. Some examples: ref. 4 deals with triangle routing in mobile visited/home networks, ref. 6 is related to optical networks, refs. 5, 9, and 10 are from the medical literature, and the same is true for refs. 14, 18, 19, 21, and 22.

Fig. 2 is very similar to a figure in ref. 7.

The paper compares the Q-learning algorithm with OSPF. The Q-learning algorithm is described only briefly, using a single equation. How are the parameters (learning rate, discount factor) used in this equation set? What is the authors' contribution to the algorithm? A comparison with other ML algorithms could better highlight the advantages, if any.

Line 76 and Section 3.2 introduce instruction priority. What exactly is an instruction? A specific data packet? Something else?

Fig. 5 has no units on the y-axis.

Section 3.3 uses the Abilene dataset collected in March 2004. Does this dataset have statistical properties comparable to those of data specific to a ship network?

Fig. 7: Maximum utilization rate: how is this rate defined, and how was it computed? Under what conditions?

A global network utilization rate is less significant than the utilization rates of individual network segments; the latter parameter allows network bottlenecks to be identified. There are other parameters characterizing network performance, such as congestion, latency, and packet loss. How are those parameters affected or improved by the proposed algorithm?

Comments on the Quality of English Language

English is fine; however, the text should be carefully revised to correct some mistakes.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

For ship networks with limited bandwidth, unstable network connections, high latency, and command priority, this study presents solutions using a reinforcement learning method. To evaluate the performance of the reinforcement learning, the work presents experimental results using a simple structure and the Abilene dataset network diagram. It should first be noted that reinforcement learning is nothing new. Furthermore, it is difficult to confirm the legitimacy of the work, as it is a performance evaluation only within the proposed structure.

It should be mathematically proved that the method can provide an optimal solution in at least a few typical network structures. In particular, the practical aspects of the proposed methodology and the limitations of the research must be presented. Finally, a full revision of the introduction, main content, and conclusion is judged necessary.

Comments on the Quality of English Language

Additions and corrections are needed throughout.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 5 Report

Comments and Suggestions for Authors

The manuscript has an attractive title. However, it is not clear to me for what domain it was written. The authors use terms from ship or traffic engineering, but in their conception these belong to the IT field, not to the transport field. Moreover, in the bibliography I find references to articles from the naval or medical fields. For a logical understanding of the research, I recommend creating a summary table of terms shared with other fields, explaining their meaning in each field separately.

Also, the application component of the manuscript is reduced to concepts and the processing of some data whose source is unclear. Are the data real? Are they from the naval field? Are they from the field of networking? What is the motivation for using these models, and what is the authors' contribution?

I look forward to an improved, clearer, and more current version of the manuscript.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Some problems were addressed in the second version, but some minor corrections are still needed.

The paper compares the Q-learning algorithm only with OSPF. A comparison with other ML algorithms would have been useful to better highlight the authors' contribution.

References 8 and 18 are from the medical field. Perhaps the authors can cite similar references on machine learning from the manuscript's domain, that is, data networks.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

Nothing to describe.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 5 Report

Comments and Suggestions for Authors

The research presented in the manuscript targets a narrow research segment. The use of terms specific to other fields can cause misunderstandings. However, the authors define these terms in the text in an attempt to eliminate this inconvenience. I recommend publication of this manuscript if the other reviewers closer to this narrow field of research agree.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
