Technical Note
Peer-Review Record

A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation

NeuroSci 2024, 5(3), 265-275; https://doi.org/10.3390/neurosci5030021
by Raunak Jain 1, Faith Lee 1, Nianhe Luo 1, Harpreet Hyare 2 and Anand S. Pandit 3,4,*
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 25 June 2024 / Revised: 30 July 2024 / Accepted: 31 July 2024 / Published: 2 August 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

It is a great pleasure to read this manuscript and watch the supplementary videos. I think the authors did a great job in guiding trainees to use ITK-SNAP to segment meningioma, glioblastoma multiforme (GBM) and subarachnoid haemorrhage (SAH). The manuscript is well written in English and comfortable to read.

The part of the manuscript I would suggest improving concerns quality control. I recommend the authors specify the accuracy of semi-automated segmentation by a trainee. How long does a trainee need to pass the semi-automated segmentation training? What percentage of the image segmentations are accurate after review by experts? Providing these numbers will convince people that this is a great platform for training neurosurgical trainees and researchers.

Author Response

The part of the manuscript I would suggest improving concerns quality control. I recommend the authors specify the accuracy of semi-automated segmentation by a trainee. How long does a trainee need to pass the semi-automated segmentation training? What percentage of the image segmentations are accurate after review by experts? Providing these numbers will convince people that this is a great platform for training neurosurgical trainees and researchers.

We have addressed your questions to a significant degree in Section 1.3. There are few references that compare methods as you have suggested; those that exist have been added to the manuscript. We would be happy to add more, but were writing within the constraints of the word limit and the scope of the article.

Reviewer 2 Report

Comments and Suggestions for Authors

A brief summary:

The authors optimized manual and semi-automated segmentation of three neurosurgical lesions: meningioma, GBM, and SAH, using MRIcron, MATLAB, and ITK-SNAP. However, the conclusion emphasizing this enhancement should be reworded, as data on efficiency and reproducibility are lacking.

General concept comments:

1. The authors claimed that their pipeline improved efficiency and reproducibility; however, no data were shown to support this key conclusion. In 1.2.7.2 and Figure 5, the formulas for the performance metrics, the Dice similarity coefficient and the Jaccard index, were included. However, the results of the comparison with previous methods, or of manual versus semi-automated segmentation, were noticeably absent.

2. While the patient scans are anonymized, providing the demographics and inclusion criteria would be beneficial. This information, such as the number of patients in each disease category, would significantly enhance readers' understanding of and confidence in the study's statistical power.

3. Figure 1 illustrates the pipeline. However, only images from PACS are included. Integrating the scans from the MICCAI Multimodal Brain Tumor Segmentation Challenge (BRATS) into one pipeline would cover all the scans used in this study. If the radiomics and error metrics in Figure 1 were checked, why were no data shown? If not, why are they presented in the pipeline?

Specific comments:

1. In 1.2.2, the authors said, “To perform a full segmentation, including manual analysis of the images, 20 minutes were required for GBM, 15 minutes for meningioma, and 30 minutes for SAH.” Do those times refer to manual segmentation?

2. In 1.2.3, only ITK-SNAP was mentioned; MRIcron and MATLAB are missing. Specifying the version of each software package would help others follow the guidelines and faithfully reproduce the pipeline.

3. In 1.2.6, manual segmentation was used for meningiomas due to their complex morphology. What about the quality of automated pipelines for this category?

4. Nowadays, AI is excellent at image recognition; was any algorithm proposed to segment the scans automatically? If so, there should be some discussion of the future of manual segmentation performed by expert surgeons and radiologists.


Reviewer 3 Report

Comments and Suggestions for Authors

The authors provided a guide for performing both manual and semi-automated image segmentation of neurosurgical cranial lesions, and attached very detailed videos demonstrating the segmentation using ITK-SNAP. This approach can potentially increase efficiency compared to manual analysis. Although this is a very practical and well-drafted manuscript, some concerns should be addressed.

1. To better compare the semi-automated and manual segmentation, quality control needs to be better addressed. There should be a main figure showing a comparison of the rigidity and precision of the semi-automated segmentation.

2. The authors should insert scale bars so readers can better estimate the size of the different lesions.

3. It is unnecessary to include Figure 5 in this manuscript; it does not provide readers with any new information. If this Venn diagram needs to be kept, the authors should add more information to the figure to better relate it to their own studies/examples.

4. Regarding Figure 5 and the error metrics (1.2.7.2) in the methods section: the Jaccard index penalizes the differences between two objects more strongly, whereas Dice more strongly weighs the commonalities. Also, Jaccard does not consider the total volume of the object (the size of the lesion), whereas Dice does. Therefore, in this specific case the Dice similarity coefficient would be the better choice for evaluating the segmentations.
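For reference, a minimal sketch of the two overlap metrics contrasted in comment 4, assuming binary NumPy masks; the function names and toy masks are illustrative only and not taken from the authors' pipeline. Writing D for Dice (2|A ∩ B| / (|A| + |B|)) and J for Jaccard (|A ∩ B| / |A ∪ B|), the identity D = 2J / (1 + J) makes the reviewer's point concrete: J is never larger than D for the same pair of masks, so it penalizes disagreement more heavily.

import numpy as np

def dice_coefficient(a, b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard_index(a, b):
    # Jaccard index: |A ∩ B| / |A ∪ B|
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Two toy 10 x 10 "segmentations" of 25 voxels each, overlapping in 9 voxels.
manual = np.zeros((10, 10), dtype=bool)
manual[2:7, 2:7] = True
semi_automated = np.zeros((10, 10), dtype=bool)
semi_automated[4:9, 4:9] = True

d = dice_coefficient(manual, semi_automated)   # 2 * 9 / 50 = 0.36
j = jaccard_index(manual, semi_automated)      # 9 / 41 ≈ 0.22
assert abs(d - 2 * j / (1 + j)) < 1e-12        # D = 2J / (1 + J)
print(f"Dice = {d:.3f}, Jaccard = {j:.3f}")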

Author Response

  

1. To better compare the semi-automated and manual segmentation, quality control needs to be better addressed. There should be a main figure showing a comparison of the rigidity and precision of the semi-automated segmentation.

Due to the scope of the article, namely a technical note on how semi-automated segmentation can be performed rather than a head-to-head trial and comparison, this figure has not been added. We have, however, added a new section comparing manual and semi-automated segmentation in 1.3, in line with your comments and those of the previous reviewer.

2. The authors should insert scale bars so readers can better estimate the size of the different lesions.

All relevant figures have been edited to add scale bars.

3. It is unnecessary to include Figure 5 in this manuscript; it does not provide readers with any new information. If this Venn diagram needs to be kept, the authors should add more information to the figure to better relate it to their own studies/examples.

Thank you for pointing this out; Figure 5 has been removed.

4. Regarding Figure 5 and the error metrics (1.2.7.2) in the methods section: the Jaccard index penalizes the differences between two objects more strongly, whereas Dice more strongly weighs the commonalities. Also, Jaccard does not consider the total volume of the object (the size of the lesion), whereas Dice does. Therefore, in this specific case the Dice similarity coefficient would be the better choice for evaluating the segmentations.

The information above regarding the Jaccard index and the Dice coefficient has been added to 1.2.7.2 as suggested.

 

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

General concept comments:

1. The authors claimed that their pipeline improved efficiency and reproducibility; however, no data were shown to support this key conclusion. In 1.2.7.2 and Figure 5, the formulas for the performance metrics, the Dice similarity coefficient and the Jaccard index, were included. However, the results of the comparison with previous methods, or of manual versus semi-automated segmentation, were noticeably absent.

2. While the patient scans are anonymized, providing the demographics and inclusion criteria would be beneficial. This information, such as the number of patients in each disease category, would significantly enhance readers' understanding of and confidence in the study's statistical power.

3. Figure 1 illustrates the pipeline. However, only images from PACS are included. Integrating the scans from the MICCAI Multimodal Brain Tumor Segmentation Challenge (BRATS) into one pipeline would cover all the scans used in this study. If the radiomics and error metrics in Figure 1 were checked, why were no data shown? If not, why are they presented in the pipeline?

Specific comments:

1. In 1.2.2, the authors said, “To perform a full segmentation, including manual analysis of the images, 20 minutes were required for GBM, 15 minutes for meningioma, and 30 minutes for SAH.” Do those times refer to manual segmentation?

2. In 1.2.3, only ITK-SNAP was mentioned; MRIcron and MATLAB are missing. Specifying the version of each software package would help others follow the guidelines and faithfully reproduce the pipeline.

3. In 1.2.6, manual segmentation was used for meningiomas due to their complex morphology. What about the quality of automated pipelines for this category?

4. Nowadays, AI is excellent at image recognition; was any algorithm proposed to segment the scans automatically? If so, there should be some discussion of the future of expert surgeons and radiologists performing manual segmentation.

Author Response

1. The authors claimed that their pipeline improved efficiency and reproducibility; however, no data were shown to support this key conclusion. In 1.2.7.2 and Figure 5, the formulas for the performance metrics, the Dice similarity coefficient and the Jaccard index, were included. However, the results of the comparison with previous methods, or of manual versus semi-automated segmentation, were noticeably absent.

A section comparing manual and semi-automated segmentation has been added to 1.3. We have toned down the conclusion about efficiency and reproducibility to reflect the level of evidence available in the current literature.

2. While the patient scans are anonymized, providing the demographics and inclusion criteria would be beneficial. This information, such as the number of patients in each disease category, would significantly enhance readers' understanding of and confidence in the study's statistical power.

Inclusion criteria and demographics for the BRATS database, which consists of glioblastoma scans, have been provided in 1.2.4. For the GBM and meningioma scans, the patients' age and gender have been provided in 1.2.4.

3. Figure 1 illustrates the pipeline. However, only images from PACS are included. Integrating the scans from the MICCAI Multimodal Brain Tumor Segmentation Challenge (BRATS) into one pipeline would cover all the scans used in this study. If the radiomics and error metrics in Figure 1 were checked, why were no data shown? If not, why are they presented in the pipeline?

The radiomics and error metrics have been removed from Figure 1.

Specific comments:

1. In 1.2.2, the authors said, “To perform a full segmentation, including manual analysis of the images, 20 minutes were required for GBM, 15 minutes for meningioma, and 30 minutes for SAH.” Do those times refer to manual segmentation?

The confusion regarding the descriptions in 1.2.2 has been resolved and the description has been simplified.

2. In 1.2.3, only ITK-SNAP was mentioned; MRIcron and MATLAB are missing. Specifying the version of each software package would help others follow the guidelines and faithfully reproduce the pipeline.

The additional details regarding MRIcron and MATLAB have been added to 1.2.3.

3. In 1.2.6, manual segmentation was used for meningiomas due to their complex morphology. What about the quality of automated pipelines for this category?

Information regarding automated pipelines for meningioma segmentation has been added to 1.3.

4. Nowadays, AI is excellent at image recognition; was any algorithm proposed to segment the scans automatically? If so, there should be some discussion of the future of manual segmentation performed by expert surgeons and radiologists.

Discussion of the future of manual segmentation performed by expert surgeons and radiologists has been added to 1.3. We fully agree with the reviewer's point and would be happy to extend this, but were constrained by the scope and word limit of the article.

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

Most of my concerns were addressed.
