Article
Peer-Review Record

Accelerating Event Detection with DGCNN and FPGAs

Electronics 2020, 9(10), 1666; https://doi.org/10.3390/electronics9101666
by Zhe Han, Jingfei Jiang *, Linbo Qiao, Yong Dou, Jinwei Xu and Zhigang Kan
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 August 2020 / Revised: 29 September 2020 / Accepted: 5 October 2020 / Published: 13 October 2020
(This article belongs to the Special Issue Advanced AI Hardware Designs Based on FPGAs)

Round 1

Reviewer 1 Report

The paper presents an application for event detection using a CNN with an FPGA implementation, which the authors claim is the first of its kind.

English correction: "productizing" should be "multiplying".
Remarks on the paper:

- ELMo, OpenAI GPT: please check all acronyms and explain them before use.
- Figure 5 must be better commented; describe the axes.
- Softmax: is it a hardware circuit, or software running on a CPU? Detail the architecture a little more and explain how it works.
- The conclusion is weak.
- Explain what the F1 score is, even though it is described on the Internet, and why it is a good measure for this application.
- Explain why latency is so critical that it must be reduced from a few hundred µs on CPUs or GPUs to 15 µs.
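For reference, the F1 score the reviewer asks about is the harmonic mean of precision and recall; it rewards only models that balance the two, which matters for event detection, where event triggers are rare and plain accuracy would be dominated by the "no event" class. A minimal sketch (the counts below are hypothetical, not from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)   # of the predicted events, how many were real
    recall = tp / (tp + fn)      # of the real events, how many were found
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts showing high precision (few FP) but low recall (many FN),
# the pattern Reviewer 2 also asks about below -- F1 exposes the imbalance.
print(round(f1_score(tp=60, fp=10, fn=40), 3))  # → 0.706
```

Note that the same counts give precision ≈ 0.857 but recall = 0.6; F1 sits between the two, closer to the weaker one.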

Author Response

Dear Reviewers:

Thank you for the comments concerning our manuscript. They are all valuable and very helpful for revising and improving our paper. We have studied the comments carefully and made corrections, which we hope meet with your approval.

Please see the attachment for point-by-point responses.

We look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have.

Sincerely,

Zhe Han

Author Response File: Author Response.pdf

Reviewer 2 Report

Good effort and a well-written manuscript with the needed flow. The authors should consider the following during revision:

For all models, I wonder why the recall values are much lower than the precision values; this should be discussed in depth with proper references. All abbreviations should be defined when first used, both in the manuscript and in the abstract, even though a list of abbreviations is provided.

Specific comments:

L27: What is the expensive process referring to? Clarify whether the cost is monetary or something else, such as energy or resources.

L42-43: Combine these two sentences (from "Collobert" to "engineering").

L56-60: Why did you place these three sentences, which relate to RNNs, after LSTM? They seem better located before LSTM, at line 53.

L66: Provide the definition of GOPs even though it appears in the list of abbreviations. This should be revised throughout the manuscript, which the authors failed to do.

L79: Change “dilate gated convolutional neural network (DGCNN)” to “DGCNN” as it was already defined in the previous sentence.

L117: The repeated use of "However" does not make for a smooth connection. Please check the context again.

L122: You should set the range or definition of k before using it.

L143-146: Need to provide reference(s).

Figure 2: Please provide detailed information for both convolutions, such as kernel size, stride, etc. Also correct the start points of the two arrows from the second to the third convolution in the dilated convolution.

L174: DGCNN was already defined in line 79.

L175: A period is missing after layer.

L179: "Y is the output result" should come first, as Y is the first term in Equation (2).

Table 1: Why doesn't BERT-CRF have precision and recall values in Table 1, even though it shows the highest F1-score?

Table 2: The low recall values compared to the high precision values indicate many FN (false negative) cases in Chinese event detection. The authors should discuss in depth why this happens, with proper references.

L238: Where is the following function?

Table 5: This should be located after Figure 4, as it is first referenced after Figure 4.

L243: Provide the definition of BRAM even though it appears in the list of abbreviations. Also, you define BRAM again later, in line 281. Revise accordingly.

L254: Do not use "as follows"; instead, cite the specific equation number.

L259: What kind of performance do you mean? F1-score? Please specify it.

L281: Block RAM (BRAM) -> BRAM

L328: Is 1-D convolution correct? I believe the authors claimed they used 2-D convolution.

L361: Please specify how you obtained the 12% improvement. Did it come from 72.7 to 84.6? In that case, the authors should write "12 percentage points" for the improvement from 72.7% to 84.6%, or report the relative improvement, (84.6 − 72.7) × 100 / 72.7 ≈ 16.4%.

L373: GPU. -> GPU, respectively.
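The distinction the reviewer draws in the L361 comment, percentage points versus relative percentage, can be checked in a few lines (the 72.7 and 84.6 figures are taken from the comment above; this is an illustrative sketch, not output from the paper):

```python
# Two conventions for reporting an improvement from 72.7% to 84.6% F1:
old, new = 72.7, 84.6

point_gain = new - old                   # absolute gain, in percentage points
relative_gain = (new - old) * 100 / old  # relative gain over the baseline, in %

print(f"{point_gain:.1f} percentage points")  # 11.9 percentage points
print(f"{relative_gain:.1f}% relative")       # 16.4% relative
```

Calling the first number "12%" conflates the two conventions, which is why the reviewer asks for the wording to be made explicit.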

-END-

Author Response

Dear Reviewers:

Thank you for the comments concerning our manuscript. They are all valuable and very helpful for revising and improving our paper. We have studied the comments carefully and made corrections, which we hope meet with your approval.

Please see the attachment for point-by-point responses.

We look forward to hearing from you regarding our submission. We would be glad to respond to any further questions and comments that you may have.

Sincerely,

Zhe Han

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The manuscript is good to proceed to the next stage.
