Article
Peer-Review Record

Optimization of Linear Quantization for General and Effective Low Bit-Width Network Compression

Algorithms 2023, 16(1), 31; https://doi.org/10.3390/a16010031
by Wenxin Yang 1,†, Xiaoli Zhi 2,*,† and Weiqin Tong 2,†
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 7 December 2022 / Revised: 23 December 2022 / Accepted: 27 December 2022 / Published: 4 January 2023

Round 1

Reviewer 1 Report


Comments for author File: Comments.pdf

Author Response

Dear Professor,

       Thank you very much for taking the time to review my paper. Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes an optimization of linear quantization using two methods: it first prunes the network by unstructured pruning and then quantizes the weights by two-stage quantization. An ablation study was carried out on several pre-trained deep-learning models such as ResNet-50, Inception-v3, and DenseNet-121. The proposed method reduces the bit width of the network with little accuracy loss. The study is interesting; however, several modifications can be made to improve the current presentation of the paper.
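For context, a minimal sketch of the general pipeline summarized above (magnitude-based unstructured pruning followed by symmetric linear quantization of the weights) is shown below. This is an illustrative NumPy simplification, not the authors' exact two-stage quantization scheme; the function names, the 50% sparsity level, and the 4-bit width are assumptions chosen only for demonstration.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def linear_quantize(weights: np.ndarray, bit_width: int = 4):
    """Symmetric linear quantization to signed integers of the given bit width."""
    qmax = 2 ** (bit_width - 1) - 1
    scale = np.max(np.abs(weights)) / qmax if np.any(weights) else 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# Toy example: prune 50% of a random weight matrix, then quantize to 4 bits.
w = np.random.randn(8, 8).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = linear_quantize(w_pruned, bit_width=4)
w_dequant = q.astype(np.float32) * scale
print("mean abs reconstruction error:", np.abs(w_pruned - w_dequant).mean())
```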

- Please rewrite the abstract (simplifying the content) by introducing the importance and problems of the study in one or two sentences. The proposed method can then be described in two or three sentences. Finally, the dataset, performance metrics, and results can be presented in four sentences, and the expected implication of the study can be mentioned in a one-sentence closing line.

- This paper could gain added value from related work on network pruning of artificial neural networks: https://doi.org/10.3390/info13100488

- Please update all figures by increasing the font size and quality. Consider a font size of 16 and a figure quality of at least 300 DPI (dots per inch).

- Figures 5 and 7 should be moved to the Results and Discussion section.

- I did not catch the meaning of Figure 8. Why is there a device in the figure? Please clarify.

- Which dataset(s) are used in this study? They should be discussed and presented as clearly as possible.

- How were 'total sparsity', 'top-1', and 'top-5' calculated? Please describe them.

- References for the original deep-learning models should be included in Tables 2 and 3 (citations are needed here).

- What are the limitations and future work of this study? Please describe them in the Conclusion section.

Author Response

Dear Professor,

      Thank you very much for taking the time to review my paper. Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Accept after the second revision.

Reviewer 2 Report

Most of the comments have been addressed by the authors, and there are no further concerns about the current manuscript.
