Peer-Review Record

A Systematic Review of Defensive and Offensive Cybersecurity with Machine Learning

Appl. Sci. 2020, 10(17), 5811; https://doi.org/10.3390/app10175811
by Imatitikua D. Aiyanyo 1, Hamman Samuel 2,* and Heuiseok Lim 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 10 August 2020 / Revised: 20 August 2020 / Accepted: 20 August 2020 / Published: 22 August 2020

Round 1

Reviewer 1 Report

This paper systematically synthesizes the knowledge base in the domain of cybersecurity with ML. It covers current challenges and future directions of ML in cybersecurity while surveying nearly two decades of research on applications of ML to security. By employing the Systematic Literature Review model, three research questions on ML techniques are addressed. As far as I can see, the paper is useful for researchers and practitioners; however, the presentation should be improved. It can be amended and improved in the ways I have commented on below. I will give my recommendation based on the quality of the revision.

1. Line 10, "several research papers" is not appropriate. This paper reviewed more than one hundred papers.
2. Line 35, some explanation is needed to justify why the security chain is only as strong as its weakest link. Could it be the case that it is stronger or weaker than the weakest link?
3. Line 98, the other databases the authors mention may have the same problem as Google Scholar; for example, IEEE Xplore can overlap with Scopus. How did you handle that?
4. The search strategy described in Section 2.2 is interesting. It was recently reported in 'Subgraph robustness of complex networks under attacks' that some sampling methods carry hidden biases and may lead to very different results in systematic science. In light of this, the authors could explain whether their data collection strategy has any hidden bias and justify that any potential error of the kind discussed in the above-mentioned work is minimal.
5. Line 143, can you give an example of when the unsupervised alternative may not give the best result? I think this would be more convincing.
6. The confusion matrix discussed in the last paragraph of Section 2 is important in AI-related research. However, its influence extends well beyond this area and is often overlooked by researchers. A visible survey paper like this should inform readers that false positives and false negatives have essential implications for general network systems; an example work is 'False positive and false negative effects on network attacks' (see the brief sketch after this list).
7. In the second paragraph of Section 3, some interesting observations were made. For example, the number of selected reviewed papers that used defensive approaches in 2012 was about three times the number of papers leveraging offensive approaches in that same year. Can you offer some insights or explanations for these observations?
8. Lines 263-264, this statement is inaccurate; key conditions are missing here.
9. Line 313, do you mean a specific campus (which one?) of the UC system, or the sum across several different campuses?
10. In Section 4.1, the essential difficulty posed by the high dimensionality of network data is not discussed very clearly.
11. In Section 4.2, it should be noted that the class overlap between threat and legitimate data can evolve as a function of time; that is, it is a temporal issue. In 'Hybrid consensus for averager-copier-voter networks with non-rational agents', a node can be malicious or normal at different times due to changes in its own performance and the performance of its neighbours. I feel this temporal aspect is overlooked in the discussion here.
12. Table 2 in the appendix is nice. As of what date were the citation counts collected? The date should be mentioned in the table caption.
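
On point 6 above, the following minimal Python sketch (an editorial illustration with hypothetical counts, not material from the reviewed paper) shows how false positives and false negatives from a binary confusion matrix translate into the standard rates relevant to network defence:

    # Illustrative sketch only: deriving standard rates from a binary confusion
    # matrix, e.g. for an intrusion-detection model. Counts are hypothetical.
    tp, fp, fn, tn = 90, 10, 5, 895  # alerts over 1000 monitored flows (made up)

    precision = tp / (tp + fp)            # fraction of alerts that are real threats
    recall = tp / (tp + fn)               # fraction of real threats that are caught
    false_positive_rate = fp / (fp + tn)  # benign traffic wrongly flagged
    false_negative_rate = fn / (fn + tp)  # threats that slip through undetected

    print(f"precision={precision:.2f} recall={recall:.2f} "
          f"FPR={false_positive_rate:.3f} FNR={false_negative_rate:.3f}")

With these example counts the detector looks accurate overall, yet the false negative rate (about 5%) still represents threats that go unnoticed, which is why both error types deserve explicit discussion in a security context.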

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The article has an overview character: the authors do not present the results of new research. However, the literature analysis performed provides a knowledge base for cybersecurity with Machine Learning.

Despite the review nature of the article, it is possible to draw conclusions from the literature analysis performed. The current content of the Conclusions section reads more like an abstract and should be corrected.

Regarding the editorial aspect of the article, most of the content is legible. Only the text in Figure 1 is difficult to read; it should be corrected before publication.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

The paper has been revised well. I have no further comments.
