Review
Peer-Review Record

Model Compression for Deep Neural Networks: A Survey

by Zhuo Li 1, Hengyi Li 1 and Lin Meng 2,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 29 January 2023 / Revised: 28 February 2023 / Accepted: 1 March 2023 / Published: 12 March 2023
(This article belongs to the Special Issue Feature Papers in Computers 2023)

Round 1

Reviewer 1 Report

This manuscript presents a detailed survey on Model Compression and Acceleration for Deep Neural Networks. The topic looks interesting. I have several comments, as follows; hopefully, they will be of use in improving the manuscript.

 

1) The importance of the survey could be better discussed. In particular, what problem does this work attempt to solve?

2) As for the implications of your research, what changes should be implemented as a result of the findings of the work? How does this work add to the body of knowledge on this topic?

3) Full names should be given when abbreviations appear for the first time.

4) Both motivations and contributions are unclear.

5) The related work section should be organized into subsections according to the advances reviewed.

6) The figures' resolution needs improvement; high-quality figures are strongly suggested. Please consider using the EPS format.

7) More evaluation metrics, such as the G-mean, should be considered to make the experiments more comprehensive.
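For reference, the G-mean mentioned in point 7 is commonly defined as the geometric mean of the per-class recalls (in the binary case, the square root of sensitivity times specificity). A minimal sketch, assuming scikit-learn and NumPy are available; the function name and sample labels below are purely illustrative:

    import numpy as np
    from sklearn.metrics import recall_score

    def g_mean(y_true, y_pred):
        # Per-class recall (sensitivity of each class), then their geometric mean.
        recalls = recall_score(y_true, y_pred, average=None)
        return float(np.prod(recalls) ** (1.0 / len(recalls)))

    # Binary example: reduces to sqrt(sensitivity * specificity).
    print(g_mean([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))  # ~0.577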

8) The discussion of relevant literature could be further enhanced, which would help better motivate the current study and link it to existing work. The authors might consider the following recent work on applying machine learning techniques to better motivate the usefulness of machine learning approaches:

- Symmetry 2022, https://doi.org/10.3390/sym14101976;

- Scientific Reports 11, 1447 (2021), https://doi.org/10.1038/s41598-021-81216-5;

- arXiv (2022), https://doi.org/10.48550/arXiv.2210.04252

 

Hence, they should be briefly discussed in the related work section.

 

9) The authors might further elaborate on the limitations of the current work and the outlook for future studies.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This survey on deep neural networks is very meaningful, and the paper is well written. It can be modified according to the following suggestions:

(1) In the Abstract, the sentence “Currently, the rapid development of deep neural networks (DNNs), which have shown excellent performance in various computer vision tasks” is not a complete statement.

(2) Regarding Figure 2, the comparison of quantization-aware training (QAT, left) and post-training quantization (PTQ, right): have the authors considered connecting the left and right figures to make them more vivid?

(3) Please optimize Figure 5 (e.g., the background color, font, etc.).

(4) The text in Figure 6 is too small; please enlarge it appropriately.

(5) The authors could add a plan for future work after the conclusion.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Thank you for submitting your paper to our journal. This is a timely literature review on model compression and acceleration. However, I feel the two parts are not strongly correlated. Model compression is already a big topic, and many works have been developed in this direction. I like the brief summary that you have presented in the paper. Acceleration, in my understanding, is a different topic. For example, a widely studied area today is how to accelerate model training/inference on edge devices. It has something to do with model compression, because hardware constraints usually require models to be compressed before they can run on an edge device, but that direction is not what you focus on.

 

I suggest you modify the title and focus on model compression. You have already included much valuable content in this direction. At the end of the paper, you might include a discussion section on the applications of model compression (i.e., facilitating the deployment of models on edge devices), but the major part of the paper should be about model compression.

 

Anyway, I feel this literature review is well written. I also believe it can benefit readers conducting research on model compression.

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The revised manuscript is now suitable for publication.
