Article
Peer-Review Record

Detection of Ki67 Hot-Spots of Invasive Breast Cancer Based on Convolutional Neural Networks Applied to Mutual Information of H&E and Ki67 Whole Slide Images†

Appl. Sci. 2020, 10(21), 7761; https://doi.org/10.3390/app10217761
by Zaneta Swiderska-Chadaj 1, Jaime Gallego 2, Lucia Gonzalez-Lopez 3 and Gloria Bueno 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 1 September 2020 / Revised: 26 October 2020 / Accepted: 27 October 2020 / Published: 2 November 2020
(This article belongs to the Special Issue Recent Advances in Biomedical Image Processing)

Round 1

Reviewer 1 Report

The authors apply three DL CNN classification methods to a set of 50 whole slide images sourced from the AIDPATH breast cancer database. The third method is the most novel which combines Ki67 and H&E WSIs by means of color deconvolution. Overall, I think this is interesting work and the approach is robust. With that said, there are some items that need to be addressed prior to publication. I also recommend thorough English editing.
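The color-deconvolution step mentioned above can be illustrated with a minimal NumPy sketch of Ruifrok & Johnston-style stain separation. This is not the authors' actual pipeline; the stain vectors below are the standard textbook values, and the input tile is a random stand-in.

```python
import numpy as np

# Standard Ruifrok & Johnston optical-density vectors for H&E + residual.
# Textbook values, not the ones used in the paper under review.
STAINS = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                   [0.07, 0.99, 0.11],   # eosin
                   [0.27, 0.57, 0.78]])  # residual

def separate_stains(rgb):
    """Convert an RGB tile (floats in [0, 1]) to per-stain concentrations."""
    m = STAINS / np.linalg.norm(STAINS, axis=1, keepdims=True)
    od = -np.log(np.maximum(rgb, 1e-6))       # Beer-Lambert optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(m)
    return conc.reshape(rgb.shape)            # channel 0 = hematoxylin

tile = np.random.rand(64, 64, 3)              # stand-in for an H&E image tile
hematoxylin = separate_stains(tile)[:, :, 0]  # nuclear (hematoxylin) channel
```

The hematoxylin channel obtained this way is what makes nuclei from the H&E slide comparable with the Ki67 stain in a combined analysis.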

fields of view (FOV)

whole slide images (WSI)

Specific Comments:

14: A space is needed between the words ‘Index;Proliferation’.

42: The word ‘occurred’ should be ‘occurring’.

62: ‘H%E’ should read ‘H&E’. Also, ‘Hematoxylin and Eosin’ should be used here, the first time it is written.

87: ‘Wsis’ should read ‘WSI’.

125, 130: Please clarify ‘flip up’. Do you mean ‘flip vertically’?

238: ‘area’ should read ‘areas’.

Figure 8: Difficult to read. Quality of chart needs to be improved.

General Research Comments:

The Otsu method has a few different implementations. Can you be more specific with the exact thresholding method used?

Author Response

The authors would like to thank this reviewer for the useful comments, which have been addressed in detail. Please, see the attached document with a detailed answer to all comments. 

Author Response File: Author Response.pdf

Reviewer 2 Report

This manuscript uses CNNs to extract image tiles that show tumor areas in H&E and Ki67 staining. This is an important study, but there are several problems in the manuscript.

First of all, the manuscript has many typos and grammatical errors. For example,

Line 62: H%E -> H&E

Line 207: important do not count -> important not to count

Line 209: inmunopositive -> immunopositive

Figure 8: Numbers are written with comma in A and C, while period (dot) in B.

The manuscript must be carefully edited by the authors or, if needed, professional editors before submission to the peer-review process. I strongly recommend this because careless mistakes would give reviewers the impression that the entire research was carelessly conducted, and would lead to underestimation of the manuscript's content.

By the way, the main task of the current research is image segmentation. Why was a 'classification' network used instead of a 'segmentation' network such as U-Net (mentioned in Line 75)?

AlexNet was used. It is quite famous, but it is also the oldest CNN architecture. These days AlexNet is rarely used; instead, deeper networks such as Inception and ResNet are frequently used. I suggest that the authors try them, too.

Introduction is too long. It should be concise.

Table 1 can be omitted. It is not a review article but an original article.

The hardware and software used must be described. In Line 302 the processing time is given, but what PC was used? Was a GPU used? If so, how many? What framework was used: TensorFlow, Keras, or PyTorch?

Author Response

The authors would like to thank this reviewer for the useful comments, which have been addressed in detail. Please, see the attached document with a detailed answer to all comments. 

Author Response File: Author Response.pdf

Reviewer 3 Report

In this paper, the authors present three different deep learning-based approaches to automatically detect and quantify Ki67 hot-spot areas in invasive breast cancer.

The paper is overall well written and well presented, scientifically sound, and very interesting. However, I have some remarks that need to be addressed to make the work worthy of publication:

1. Section 3 does not present the reasons that led the authors to develop three different methods, based on different data sources (Ki67 and/or H&E) and different architectures, to address the problem. Without these reasons, it may seem that several attempts have been made, and the three best ones have been chosen "a posteriori". I would suggest instead to clarify the preliminary reasons that led to proposing these three different solutions;

2. The paper is a bit vague about the configuration of the CNN parameters. In particular, the authors claim to have fine-tuned the networks, using parameters and network configurations transferred from a different domain. However, it should be made explicit from which context these parameters were taken, whether an additional preliminary parameter exploration was done, and above all that the final dataset used for the validation was not somehow used for the optimization of the parameters. In addition, a small table with the main parameters used would be welcome;

3. As the paper basically addresses a multi-class classification problem, it would be important to specify the distribution of tiles between classes (and therefore the level of balance of the dataset);

4. It is not clear how the training-test split was carried out (in particular, whether the testing subset used is the same or not for the three methods). Furthermore, how were the confusion matrices obtained? With a one-shot experiment, or through cross-validation or hold-out validation? Please clarify;

5. There are some small typos in the text (e.g.: fine-tuning appears more than once with two 'n's);

6. In the introductory part, the authors should specify that deep learning and CNNs are currently used in many research fields, not exclusively the medical field, adding some references such as (non-exhaustively):

1) pedestrian recognition: Zhang, S., Wen, L., Bian, X., Lei, Z., & Li, S. Z. (2018). Occlusion-aware R-CNN: detecting pedestrians in a crowd. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 637-653);

2) surveillance: Muhammad, Khan, Tanveer Hussain, and Sung Wook Baik. "Efficient CNN based summarization of surveillance videos for resource-constrained devices." Pattern Recognition Letters 130 (2020): 370-375;

3) market forecasting: Barra, S., Carta, S. M., Corriga, A., Podda, A. S., & Recupero, D. R. (2020). Deep learning and time series-to-image encoding for financial forecasting. IEEE/CAA Journal of Automatica Sinica, 7(3), 683-692.
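On point 4 above, a common way to rule out tile-level leakage, sketched here with hypothetical slide IDs since the manuscript does not spell out its split, is to partition at the slide level so that all tiles from one WSI land in the same subset:

```python
import numpy as np

def slide_level_split(slide_ids, test_fraction=0.2, seed=0):
    """Boolean train/test masks over tiles, keeping each slide in one subset."""
    rng = np.random.default_rng(seed)
    slides = np.unique(slide_ids)
    rng.shuffle(slides)
    n_test = max(1, int(round(len(slides) * test_fraction)))
    test_slides = set(slides[:n_test].tolist())
    test_mask = np.array([s in test_slides for s in slide_ids])
    return ~test_mask, test_mask

# Hypothetical setup: 10 slides with 5 tiles each (not the paper's real data).
slide_ids = np.repeat(np.arange(10), 5)
train_mask, test_mask = slide_level_split(slide_ids)
```

Splitting by slide rather than by tile prevents near-duplicate tiles from the same WSI appearing on both sides of the split, which would inflate the confusion-matrix figures.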

On the whole, however, I very much appreciated this work and I believe that once the above problems have been tackled, it will be worthy of acceptance in this prestigious journal.

Author Response

The authors would like to thank this reviewer for the useful comments, which have been addressed in detail. Please, see the attached document with a detailed answer to all comments. 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Unfortunately, I could not find that the authors improved the manuscript in response to my suggestion.

Author Response

Thank you for your review. Please find a detailed answer to all your questions in the attached .pdf document.

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors addressed the problems and limitations I had reported in a timely manner. In the light of the changes made, I believe that this paper is now worthy of publication in this prestigious journal.

Author Response

Thank you for the review. 

Round 3

Reviewer 2 Report

I understand that the authors did not use segmentation networks (U-Net, etc.), although I suggested it. In addition, they did not use newer networks but the conventional AlexNet (2012), although I suggested that as well. I think the novelty of the current manuscript is limited.

However, I also understand that the authors corrected many points from the first version. So, could you please consider citing the following papers?

 

Kitao et al. 2019, Annals of Nuclear Medicine, reported Ki-67 prediction using PET imaging.

https://pubmed.ncbi.nlm.nih.gov/30196378/

 

Kawauchi et al. 2020, BMC Cancer, reported the usefulness of a convolutional neural network for PET diagnosis.

https://pubmed.ncbi.nlm.nih.gov/32183748/

Author Response

Dear reviewer,

Thank you for your review.

The suggested papers were cited (see references [4] and [5] of the revised manuscript).

We tested U-Net using a previous network built by the authors ([39]), trained on Ki67 hot-spots, but we did not obtain better results, which is why we did not include it. The paper's main contribution is to test the usefulness of using the Ki67 stain together with H&E information. This has been clarified in the revised manuscript.

We hope the manuscript has been improved and that you find it suitable for publication.

Best regards, The authors.
