Article
Peer-Review Record

Deep Learning-Based Automatic Detection of ASPECTS in Acute Ischemic Stroke: Improving Stroke Assessment on CT Scans

J. Clin. Med. 2022, 11(17), 5159; https://doi.org/10.3390/jcm11175159
by Pi-Ling Chiang 1,†, Shih-Yen Lin 2,†, Meng-Hsiang Chen 1, Yueh-Sheng Chen 1, Cheng-Kang Wang 1, Min-Chen Wu 3, Yii-Ting Huang 4, Meng-Yang Lee 5, Yong-Sheng Chen 2,* and Wei-Che Lin 1,*
Reviewer 1:
Reviewer 2:
Submission received: 10 August 2022 / Revised: 24 August 2022 / Accepted: 29 August 2022 / Published: 31 August 2022
(This article belongs to the Section Nuclear Medicine & Radiology)

Round 1

Reviewer 1 Report

- Improve the quality of the images, graphs, and tables, including their captions and abbreviations. Moreover, Figure 2 could be supplemented with a scatter diagram to better describe the results of both the human and CNN diagnoses.

- Separate the Limitations and Conclusion paragraphs.

- Insert some brain images to illustrate the burden of the methods used to discern the ictus (stroke).

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The authors developed a deep learning-based automatic detection (DLAD) algorithm for ASPECTS in Acute Ischemic Stroke. I have several concerns:

-> There are no recent studies cited in the manuscript. Is this because the field is no longer receiving attention?

-> A larger sample size would increase the impact of the paper.

-> The deep learning model training should be explained in detail, for example, any data augmentation techniques applied, the model architecture, the loss function, and the selected hyperparameters (batch size, learning rate).
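
As an illustration of the kind of training details being requested, here is a minimal sketch in PyTorch; the augmentation choices, backbone, loss, and hyperparameter values are hypothetical placeholders, not the authors' actual configuration:

```python
# Hypothetical illustration of the requested training details
# (augmentation, architecture, loss, hyperparameters); values are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Example data augmentation applied to CT slices
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])

# Example architecture: a ResNet-18 backbone with a single output per ASPECTS region
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 1)

# Loss function and hyperparameters (batch size, learning rate) that should be reported
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
batch_size = 16
num_epochs = 50
```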

-> DL models usually suffer from limited generalizability: a model trained on one institution's dataset performs poorly on images from another institution. Is this the case for your model? Are there any public datasets available to demonstrate that your model is generalizable? An additional dataset from another source (public or from other institutions) would greatly improve the significance of the paper.

-> The authors state that the annotations were defined by experts. However, there could be inter-reader variability among expert annotations. Do you think such variability affects the model performance?

-> Which method was employed for the kappa score calculation?
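
For reference, a minimal sketch of two common kappa variants, assuming scikit-learn is available; the rating arrays below are hypothetical and do not come from the study:

```python
# Minimal sketch of two common ways a kappa score can be computed.
from sklearn.metrics import cohen_kappa_score

reader_a = [10, 8, 9, 7, 10, 6]   # hypothetical ASPECTS ratings from one reader
reader_b = [10, 7, 9, 8, 10, 6]   # hypothetical ratings from a second reader (or the model)

# Unweighted Cohen's kappa treats all disagreements equally
kappa_unweighted = cohen_kappa_score(reader_a, reader_b)

# Quadratically weighted kappa penalizes larger rating differences more heavily,
# which is often preferred for ordinal scores such as ASPECTS
kappa_weighted = cohen_kappa_score(reader_a, reader_b, weights="quadratic")

print(kappa_unweighted, kappa_weighted)
```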

-> The structure of the manuscript should be reorganized; some parts of the Results section actually belong in the Methodology.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Dear Authors,

you did a good job with the revision.

Reviewer 2 Report

The authors addressed all of my concerns. Thank you for this valuable study. 
