Article

Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning

1 CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
2 Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
3 Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to the work.
Cancers 2022, 14(15), 3581; https://doi.org/10.3390/cancers14153581
Submission received: 2 June 2022 / Revised: 8 July 2022 / Accepted: 20 July 2022 / Published: 22 July 2022
(This article belongs to the Special Issue Radiation Therapy for Breast Cancers)

Simple Summary

We investigated the contouring data of organs at risk from 40 patients with breast cancer who underwent radiotherapy. The performance of denoising-based auto-segmentation was compared with manual segmentation and conventional deep-learning-based auto-segmentation without denoising. Denoising-based auto-segmentation achieved superior segmentation accuracy on the liver compared with AccuContourTM-based auto-segmentation. This denoising-based auto-segmentation method could provide more precise contour delineation of the liver and reduce the clinical workload.

Abstract

Objective: This study aimed to investigate the segmentation accuracy of organs at risk (OARs) when denoised computed tomography (CT) images are used as input data for a deep-learning-based auto-segmentation framework. Methods: We used non-contrast enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input data for the AccuContourTM segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. The accuracy of the segmentation was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC outcomes were higher than 0.80 for all OARs except the esophagus. AccuContourTM-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but statistically significant, yielding a better DSC than AccuContourTM-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.

1. Introduction

In radiotherapy planning, organs at risk (OARs) are manually delineated by physicians on computed tomography (CT) scans. Accurate contouring of OARs is essential for precise radiotherapy, but manual delineation and careful review are time-consuming and add to the clinical workload; segmenting the OARs by hand can take an hour per patient owing to the large number of axial slices. To alleviate this labor-intensive task, atlas-based and convolutional-neural-network-based deep-learning auto-segmentation algorithms have been developed [1,2,3,4,5,6,7]. Machine learning approaches, especially deep learning with multi-layered neural networks, have been actively applied to treatment planning in radiotherapy [8,9,10,11,12,13,14,15,16,17,18,19]. Many studies have investigated the deep-learning-based auto-segmentation of OARs for various disease sites [1,2,20,21,22,23,24,25,26,27], and deep-learning algorithms based on convolutional neural networks have proven effective, showing high performance in delineating OARs [12,13,14,15,16,17,18,19,20,21,22]. Several commercially available deep-learning contouring products are used in clinical practice, including AccuContourTM (Manteia Medical Technologies Co. Ltd., Xiamen, China).
Contrast-enhanced planning CT is used for the delineation of target volumes and OARs, and intravenous CT contrast can enhance normal tissue visualization and delineation [28,29]. Due to safety concerns associated with CT contrast, contrast-enhanced planning CT cannot be performed for all patients [30]. Therefore, non-contrast CT images have been used, but the boundary between the OARs and neighboring structures may be indistinguishable due to the suboptimal image quality of non-contrast CT [28,29]. In real-world clinical practice, portions of automatically generated contours in non-contrast CT require manual corrections to make them clinically acceptable. The noise in non-contrast CT images disturbs the visualization of structures, which increases the uncertainty of image segmentation. Deep-learning-based auto-segmentation algorithms have to overcome these image-related problems.
An image processing technique with improved performance, such as a denoising algorithm that removes noise while preserving edge structure, is needed to obtain high-contrast CT images [31,32,33,34,35]. In a previous study, we implemented an anisotropic total variation (ATV) denoiser to enhance the image quality of low-dose cone-beam CT [36]. In this study, we postulated that segmentation accuracy could be improved by applying this denoising technique to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. We therefore investigated the accuracy of segmentation when denoised non-contrast CT images were used as input for the deep-learning-based auto-segmentation of OARs, and compared this approach with manual segmentation and with conventional deep-learning-based auto-segmentation without denoising.

2. Materials and Methods

2.1. Data and Delineation

Ethical approval for this study was obtained from the Institutional Review Board (IRB) of Yonsei University Health System, Gangnam Severance Hospital (Approval No.: 3-2021-0276). All methods were performed in accordance with the relevant guidelines and regulations. Due to the retrospective nature of this study, informed consent was waived by the IRB of Gangnam Severance Hospital. We used non-contrast planning CT scans of female patients with breast cancer who underwent modified radical mastectomy or breast-conserving surgery and received postoperative radiotherapy between 2019 and 2020 [37]. Forty patients were randomly chosen. The median age was 49 years (range, 30–77 years), and the median body mass index was 22 kg/m2 (range, 17–32 kg/m2). There were 22 patients with left breast cancer and 18 with right breast cancer. None of the patients had previously undergone surgery involving the lungs, heart, esophagus, spine, or upper abdominal organs at the time of non-contrast planning CT. The CT images were acquired on a Siemens Sensation Open scanner (Siemens, Forchheim, Germany) using a scan voltage of 120 kVp and a slice thickness of 3 mm. Scans were conducted with tube current modulation, an adaptive method in which the tube current changes as the gantry rotates. We obtained 81–123 slices per patient. All patients were scanned in the supine position with a customized arm support using a breast board. The OARs in this study included the heart, right and left lungs, esophagus, spinal cord, and liver. The contours were manually delineated by two experienced radiation oncologists, each of whom was blinded to the other's delineations.

2.2. Deep-Learning-Based Auto-Segmentation

Recently, various deep-learning-based auto-segmentation methods have been developed to assist with image segmentation tasks. Satisfactory organ segmentation results have been reported [1], and some commercial products have been implemented in clinics for CT-based automatic segmentation. In this study, the commercially available deep-learning contouring software AccuContourTM (Manteia Medical Technologies Co. Ltd., Xiamen, China) was used to generate the information required for treatment planning. It automatically segments OARs in the head and neck, thorax, abdomen, and pelvis for both males and females. AccuContourTM is based on the U-net model [38] pre-trained by the vendor. U-net is a fully convolutional network (FCN)-based, end-to-end model proposed for image segmentation [38,39,40]. In U-net, a contracting path that captures the overall context of the image and an expanding path that enables accurate localization are symmetrically configured. By extending the FCN concept with up-sampling and skip connections, the U-net architecture has demonstrated superior performance in several image segmentation problems while requiring only a small amount of training data, aided by data augmentation; a minimal sketch of this pattern is given below.
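The network used in AccuContourTM is proprietary and pre-trained by the vendor; the following PyTorch sketch illustrates only the generic encoder-decoder-with-skip-connections pattern described above. The depth, channel sizes, and the assumption of seven output classes (six OARs plus background) are ours, not the vendor's configuration.

```python
# Minimal 2D U-net sketch (PyTorch). Depth and channel sizes are illustrative
# assumptions, not the configuration of the vendor's pre-trained model.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=7):  # assumed: 6 OARs + background
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)   # 128 (upsampled) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)    # 64 (upsampled) + 64 (skip)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        e3 = self.enc3(self.pool(e2))         # 1/4 resolution (bottleneck)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                  # per-pixel class logits

logits = UNet()(torch.randn(1, 1, 256, 256))  # -> shape (1, 7, 256, 256)
```

The skip connections concatenate high-resolution encoder features onto the decoder path, which is what allows accurate localization despite the downsampling in the contracting path.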
The model was trained on data collected from multiple centers, with data cleaning performed beforehand. Initial contours generated by the deep-learning model were then corrected by post-processing with graph-based models, and the accuracy was further improved by combining local and global information from the image and the initial segmentation results. With these procedures, the contouring workload was reduced from hours to less than a minute per patient. This segmentation technique was applied to delineate the OARs.
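The vendor's graph-based post-processing is not publicly documented; as a deliberately simple stand-in that conveys the idea of correcting an initial deep-learning mask, the sketch below keeps only the largest connected component and fills interior holes using SciPy.

```python
# Simplified mask post-processing sketch (NOT the vendor's graph-based method):
# keep the largest connected component and fill interior holes.
import numpy as np
from scipy import ndimage

def clean_mask(mask: np.ndarray) -> np.ndarray:
    labels, n = ndimage.label(mask)               # label connected components
    if n == 0:
        return mask.astype(bool)                  # nothing segmented
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)    # drop spurious islands
    return ndimage.binary_fill_holes(largest)     # fill interior holes
```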

2.3. Anisotropic Total Variation Denoiser-Based Auto-Segmentation

An ATV denoiser [36] was applied to the CT images to augment the intensity difference between salient features and unwanted noise, incorporating the conduction coefficient used in the anisotropic diffusion filter [41]. Minimizing the ATV objective function preserves edges with high contrast relative to their surroundings while smoothing noisy voxels with low contrast [42].
The ATV objective function, R(V), can be expressed as follows:
$$R(V) = \sum_{j} R(V_j) = \sum_{j} w_j\, D(V_j) \tag{1}$$
where w_j is the anisotropic penalty, which assigns different weights to neighbors at the same distance, and D(V_j) is the discrete gradient transform with backward differences at the j-th voxel of the CT image:
$$D(V_j) = D(V(x,y)) = \sqrt{\left(V(x,y) - V(x-1,y)\right)^2 + \left(V(x,y) - V(x,y-1)\right)^2} \tag{2}$$
$$w_j = \sum_{m \in N_j} \exp\!\left[-\left(\frac{V_j - V_m}{\delta}\right)^2\right] \tag{3}$$
where j indexes the voxel elements of the CT image, and V(x, y) is the voxel value at the 2D position (x, y). N_j denotes the set of neighbors of the j-th voxel; only the four first-order neighbors were considered in this study. Empirically, the most meaningful results were obtained with the parameter δ set to the value at 80% of the cumulative distribution function of the voxel-wise gradient magnitudes of the CT image.
The ATV objective function in Equation (1) was minimized using the steepest gradient descent method with an adaptive step size, expressed as follows:
$$V_j^{t+1} = V_j^{t} - \lambda\, \nabla R(V_j^{t}) \,/\, \left|\nabla R(V^{t})\right| \tag{4}$$
$$\lambda = \gamma \sqrt{\sum_j \left(V_j^{t}\right)^2} \tag{5}$$
where λ is an adaptive parameter that reduces the smoothing degree as the iterations progress [27,31]. The square root over all voxel elements updated in each step gradually changes λ to smaller values as the number of iterations increases. A scaling parameter γ was used to escape local minimization due to sudden changes; it starts at 1.0 and decreases by multiplication with a constant value (0.8) whenever R(V) in the current iteration step is greater than that in the preceding step. ∇R(V_j) is the gradient of the objective function R(V) at the j-th voxel [32], and the root-sum-square of the gradients over all voxels, |∇R(V)|, is required to normalize the gradient [32]. The number of iterations is a tuning parameter of the gradient descent optimizer; in this study, the optimal number of iterations was set to 20. The parameters used to optimize the denoising method were determined by manual adjustment. The pseudo-code of the ATV denoiser is presented in Appendix A.
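For concreteness, the following NumPy sketch shows one possible reading of Equations (1)-(5) and the Appendix A pseudo-code: four-neighbor anisotropic weights, δ taken from the 80th percentile of the gradient CDF, and normalized gradient descent with backtracking. The boundary handling (periodic, via np.roll) and the treatment of w_j as locally constant when forming the gradient are our simplifications, not the published implementation.

```python
# Illustrative NumPy sketch of the ATV denoiser of Equations (1)-(5).
# Periodic boundaries (np.roll) and a locally-constant w_j in the gradient
# are simplifying assumptions made for brevity.
import numpy as np

def atv_denoise(img, n_iter=20, gamma=1.0, r_red=0.8, eps=1e-8):
    V = img.astype(np.float64).copy()

    def grads(V):
        # Backward differences, Equation (2); eps avoids division by zero.
        dx = V - np.roll(V, 1, axis=1)
        dy = V - np.roll(V, 1, axis=0)
        return dx, dy, np.sqrt(dx**2 + dy**2 + eps)

    _, _, D0 = grads(V)
    delta = np.quantile(D0, 0.80)  # value at 80% of the gradient CDF

    def weights(V):
        # Equation (3): contributions from the four first-order neighbors.
        w = np.zeros_like(V)
        for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
            w += np.exp(-((V - np.roll(V, sh, axis=ax)) / delta) ** 2)
        return w

    def objective(V):
        _, _, D = grads(V)
        return np.sum(weights(V) * D)  # Equation (1)

    r = 1.0  # scaling parameter gamma decays via r when R increases
    for _ in range(n_iter):
        dx, dy, D = grads(V)
        w = weights(V)
        # Gradient of R w.r.t. each voxel, treating w as locally constant.
        g = (w * (dx + dy) / D
             - np.roll(w * dx / D, -1, axis=1)
             - np.roll(w * dy / D, -1, axis=0))
        g_norm = np.sqrt(np.sum(g**2)) + eps
        lam = gamma * r * np.sqrt(np.sum(V**2))   # adaptive step, Equation (5)
        R_old = objective(V)
        V_new = V - lam * g / g_norm              # update, Equation (4)
        while objective(V_new) > R_old:           # backtracking (Appendix A)
            r *= r_red
            lam *= r_red
            V_new = V - lam * g / g_norm
        V = V_new
    return V
```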
The proposed image processing pipeline comprises three steps. In the first step, denoising, the ATV denoiser was applied to each image set to smooth noisy pixels while preserving edge intensities. In the second step, the denoised CT images were used as input to the AccuContourTM segmentation module based on the U-net model pre-trained by the vendor. In the third step, contour generation, six auto-generated contour sets (heart, left lung, right lung, esophagus, spinal cord, and liver) were produced for each CT image using the deep-learning-based auto-segmentation framework (Figure 1); a hypothetical end-to-end sketch follows.
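The glue code below chains the atv_denoise and UNet sketches defined above, both of which are our illustrative stand-ins; in the actual study, the denoised image series were loaded into the AccuContourTM software rather than passed to a local model.

```python
# Hypothetical three-step pipeline: denoise -> segment -> per-voxel labels.
# Assumes the atv_denoise() and UNet sketches above; slice dimensions must be
# divisible by 4 for the two-level UNet sketch.
import numpy as np
import torch

def segment_denoised(ct_slices, model):
    masks = []
    for sl in ct_slices:                           # step 1: denoise each slice
        den = atv_denoise(sl)
        x = torch.from_numpy(den[None, None]).float()
        with torch.no_grad():                      # step 2: run segmentation
            logits = model(x)
        masks.append(logits.argmax(1)[0].numpy())  # step 3: class per voxel
    return np.stack(masks)                         # label volume for 6 OARs
```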

2.4. Quantitative Analysis

The noise power spectrum (NPS) was calculated using open-source software (imQuest, Duke University, Durham, NC, USA), which implements the methodology described in AAPM Task Group 233 [43,44], to assess the image quality with and without the ATV denoiser. In-plane noise was evaluated using the two-dimensional NPS, which for CT images can be determined in homogeneous regions of a structure. In this study, the liver was selected for the NPS calculations, and the overall frequency content was compared using 1D profiles.
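As an illustration of the underlying computation, the following sketch estimates a 2D NPS from homogeneous ROIs and collapses it to a 1D radial profile. It subtracts only the ROI mean rather than the 2D polynomial detrending used by imQuest, so it is a simplification of the TG-233 procedure.

```python
# Simplified 2D NPS estimate from homogeneous ROIs (mean-subtracted only;
# the study used the full TG-233 procedure as implemented in imQuest).
import numpy as np

def nps_2d(rois, pixel_mm):
    # rois: (n, N, N) array of homogeneous-region HU patches.
    n, N, _ = rois.shape
    det = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove DC component
    spectra = np.abs(np.fft.fft2(det)) ** 2
    # Ensemble average, scaled to HU^2 mm^2 (TG-233 normalization).
    return spectra.mean(axis=0) * (pixel_mm ** 2) / (N * N)

def radial_profile(nps, pixel_mm, n_bins=32):
    # Collapse the 2D NPS to a 1D profile over radial frequency (mm^-1).
    N = nps.shape[0]
    f = np.fft.fftfreq(N, d=pixel_mm)
    fr = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2).ravel()
    bins = np.linspace(0.0, fr.max(), n_bins + 1)
    idx = np.clip(np.digitize(fr, bins) - 1, 0, n_bins - 1)
    prof = np.bincount(idx, weights=nps.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, prof / np.maximum(counts, 1)          # peak of this profile
                                                          # gives the NPS peak freq.
```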
The manual contours drawn by the two radiation oncologists were considered the ground truth, against which the AccuContourTM-based and denoising-based auto-segmentations were compared. The Dice similarity coefficient (DSC), which quantifies the overlap between two volumes, was used to evaluate the accuracy of both auto-segmentation approaches. The DSC is calculated according to the following equation:
$$\mathrm{DSC} = \frac{2\left|A \cap B\right|}{\left|A\right| + \left|B\right|} \tag{6}$$
where A is the manual segmentation volume and B is the auto-segmentation volume (AccuContourTM-based or denoising-based). The DSC ranges from 0 to 1, where 1 indicates complete overlap. We considered a DSC of 0.80 or higher an acceptable match [45].
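A minimal sketch of the DSC computation for two binary masks rasterized on the same grid; the mask shapes below are arbitrary examples, not study data.

```python
# Dice similarity coefficient between two binary masks, Equation (6).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()       # |A intersect B|
    total = a.sum() + b.sum()                # |A| + |B|
    return 2.0 * inter / total if total else 1.0  # convention: two empty masks

# Example: manual vs. auto mask on a toy 10x10x10 grid.
manual = np.zeros((10, 10, 10), bool); manual[2:8, 2:8, 2:8] = True
auto = np.zeros_like(manual); auto[3:9, 2:8, 2:8] = True
print(round(dice(manual, auto), 3))          # -> 0.833
```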
The Wilcoxon matched-pairs signed-rank test was used to evaluate differences in the DSC results, with statistical significance defined as p < 0.05.
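The paired test can be reproduced with SciPy; the per-case DSC values below are illustrative placeholders, not values from Tables 1 and 2.

```python
# Paired comparison of per-case DSC values with the Wilcoxon signed-rank test.
from scipy import stats

dsc_manteia = [0.953, 0.936, 0.957, 0.937, 0.948]  # placeholder liver DSCs
dsc_denoise = [0.954, 0.939, 0.963, 0.938, 0.952]
stat, p = stats.wilcoxon(dsc_manteia, dsc_denoise)
print(f"p = {p:.3f}")  # two-sided p-value; compare against 0.05
```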

3. Results

Figure 2 shows the NPS curves without and with the ATV denoiser using the planning CT images from the 40 patients with breast cancer. Three square ROIs were placed at different positions within homogeneous regions of the liver, as shown in Figure 2a, and were extended to five adjacent consecutive slices contained within the liver. The average NPS peak frequency was 0.127 mm−1 without the denoiser and 0.035 mm−1 with the ATV denoiser. The NPS peaks ranged from 209 to 957 HU2 mm2 without the denoiser and from 66 to 481 HU2 mm2 with the ATV denoiser; the NPS peak was therefore lower on average with the ATV denoiser. The peak spatial frequencies of the NPS also shifted to lower values with the ATV denoiser: the average NPS spatial frequency was 0.142 mm−1 with the ATV denoiser versus 0.295 mm−1 without a denoiser. Images processed with the ATV denoiser were smoothed with a lower noise amplitude, as indicated by the average frequencies of the NPS curves, resulting in a more uniform noise texture.
The DSC results versus the manual contours of radiation oncologists 1 and 2 are shown in Table 1 and Table 2, respectively. The average DSC outcomes were higher than 0.80 for all OARs except the esophagus; for the esophagus, both AccuContourTM-based and denoising-based auto-segmentation fell below the acceptability threshold. Comparing the two approaches, the differences were not statistically significant for the lungs, esophagus, or spinal cord (p > 0.05). Denoising-based auto-segmentation achieved superior segmentation accuracy for the liver and inferior accuracy for the heart compared with AccuContourTM-based auto-segmentation (p < 0.05).

4. Discussion

In this study, we compared the auto-contouring results for five organ structures obtained with the commercial deep-learning contouring program AccuContourTM against those obtained when an anisotropic total variation denoiser was applied first. Both the AccuContourTM-based and denoising-based auto-segmentations yielded acceptable accuracy for the heart, lungs, spinal cord, and liver, but both showed limited performance for the esophagus.
Deep-learning algorithms based on convolutional neural networks, including AccuContourTM, have yielded satisfactory performance for the automatic segmentation of OARs. However, some parts of the automatic liver segmentation in non-contrast CT required manual correction to be clinically acceptable (Figure 3). In non-contrast CT images, the fuzzy boundaries between the liver and adjacent organs can be difficult to delineate owing to the low soft-tissue contrast between the liver and its surroundings. In this study, the DSC improved significantly when denoising-based auto-segmentation of the liver was used instead of AccuContourTM-based auto-segmentation. An ATV denoiser can enhance CT image quality by removing noise, which may lead to improved segmentation boundaries. Figure 3 shows that parts of the surrounding gallbladder, pancreas, duodenum, large vessels, or kidney were included in the AccuContourTM-based auto-segmentation of the liver, whereas denoising-based auto-segmentation delineated the liver accurately by distinguishing the surrounding organs. These results indicate that denoising-based auto-segmentation outperforms AccuContourTM-based auto-segmentation for the liver. All deep-learning-based auto-segmentations should nevertheless be carefully reviewed and approved by radiation oncologists before use in a treatment plan; in some CT slices, major or minor errors in the deep-learning-based auto-contours are present and require correction. Denoising-based auto-segmentation might convert the "major errors" of conventional deep-learning-based auto-segmentation into "minor errors" (requiring only a small amount of editing) or "no correction needed", making it a practical tool for reducing the clinical workload of radiotherapy planning.
The esophagus is one of the most challenging OARs in thoracic organ auto-segmentation. In this study, the performance of AccuContourTM-based and denoising-based auto-segmentation was below a satisfactory level for the esophagus. Previous studies have reported that the DSCs of deep-learning-based auto-segmentation do not exceed 0.8 for the esophagus [40,46,47,48,49]. Due to the absence of a consistent intensity contrast between the esophagus and neighboring tissues in non-contrast CT images, the boundaries between the esophagus and surrounding soft tissues are not well-defined. Figure 4 shows that some parts of the surrounding pulmonary vessel were included in AccuContourTM-based and denoising-based auto-segmentation of the esophagus. In addition, the appearance of the esophagus varies depending on whether it is filled with air or not. Figure 5 shows that the air-filled regions of the esophagus were not included in AccuContourTM-based and denoising-based auto-segmentations. The segmentation results for the esophagus obtained from the denoising-based auto-segmentation may still be inaccurate and unsatisfactory.
It has been demonstrated that auto-segmentations of the heart and lungs yield high DSCs, averaging over 0.9 [45,48,49,50,51]. This study likewise showed average DSC outcomes above 0.9 for the heart and lungs. The high-contrast edges and distinct structural boundaries of the heart and lungs were detected easily by both the AccuContourTM-based and denoising-based auto-segmentation, so either approach can be used for these organs without major adjustments. For the heart, however, denoising-based auto-segmentation achieved inferior accuracy compared with AccuContourTM-based auto-segmentation: the denoising-based volumes were slightly larger than the manual contours in some cases (Figure 6A,D) and smaller in others (Figure 6B,C). Denoising therefore did not improve the accuracy of heart auto-segmentation.
This study had several limitations. Although 40 patients were randomly selected, selection bias in the CT samples may be present; because the results were generated with only one proprietary software package and a single CT scanner, such bias may have affected our findings. The contouring bias of the physicians may also have influenced the results. Therefore, further external validation studies involving multiple experts, multiple hospitals, and a larger sample size are needed, and denoising-based auto-segmentation should also be compared using software other than AccuContourTM.
However, in several deep-learning-based automatic segmentation studies, a single experienced radiation oncologist delineated the organs at risk or the clinical target volume [10,11,14,50]. In this study, two radiation oncologists delineated the organs at risk because inter-observer variability was anticipated. The Dice similarity coefficients of the five organs at risk relative to radiation oncologists 1 and 2 were consistent, so the results of this study are expected to be relatively reliable.

5. Conclusions

CT images are subject to noise that can obscure the boundaries between adjacent organs, limiting contrast. Reducing the noise level in CT images improves the visualization of structures, which increases the accuracy of image segmentation.
By incorporating the denoising algorithm into the deep-learning-based auto-segmentation framework, the denoising-based auto-segmentation results for the liver from non-contrast CT scans were slightly superior to those of conventional commercial deep-learning-based auto-segmentation and agreed more closely with the ground truth. Denoising-based auto-segmentation could therefore provide more precise contour delineation of the liver and reduce the clinical workload. These results require validation in further studies with larger sample sizes that compare denoising-based auto-segmentation using software other than AccuContourTM.

Author Contributions

Conceptualization, H.L. and I.J.L.; methodology, J.H.I. and H.L.; software, H.L.; validation, I.J.L. and H.L.; formal analysis, J.H.I.; investigation, H.L. and J.H.I.; data acquisition, Y.C., J.S. and J.S.H.; data curation, Y.C.; writing-original draft preparation, J.H.I. and I.J.L.; writing-review and editing, H.L.; supervision, I.J.L. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1A2C2011556) and a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI) funded by the Ministry of Health & Welfare, Republic of Korea (HI19C1330).

Institutional Review Board Statement

Ethical approval for this study was obtained from the Institutional Review Board of Yonsei University Health System, Gangnam Severance Hospital.

Informed Consent Statement

The need for patient consent was waived due to the retrospective nature of the study design.

Data Availability Statement

All data generated or analyzed during this study are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

For n ← 1 to all images do
  For j ← 1 to all voxels do
    Calculate D(V_j) using Equation (2)
    Accumulate D(V_j) into the gradient CDF histogram
  End For
  δ ← value at 80% of the gradient CDF
End For
For n ← 1 to all images do
  R(V) ← 0, r ← 1, r_red ← 0.8
  For j ← 1 to all voxels do
    w_j ← 0
    For m ← 1 to N_j do
      w_j ← w_j + exp[−((V_j − V_m)/δ)²]
    End For
    Calculate D(V_j) using Equation (2)
    R(V_j) ← w_j · D(V_j)
    R(V) ← R(V) + R(V_j)
  End For
  For t ← 1 to 20 do
    |∇R(V)| ← 0
    λ ← √(Σ_j V_j²)
    λ ← λ × r
    For j ← 1 to all voxels do
      Calculate ∇R(V_j)
      |∇R(V)| ← |∇R(V)| + (∇R(V_j))²
    End For
    |∇R(V)| ← √(|∇R(V)|)
    For j ← 1 to all voxels do
      V′_j ← V_j − λ ∇R(V_j)/|∇R(V)|
      Calculate D(V′_j) using Equation (2)
    End For
    Calculate R(V′) using w_j and D(V′_j)
    While R(V′) > R(V) do
      r ← r × r_red
      λ ← λ × r
      For j ← 1 to all voxels do
        V′_j ← V_j − λ ∇R(V_j)/|∇R(V)|
        Calculate D(V′_j) using Equation (2)
      End For
      Calculate R(V′)
    End While
    V_j ← V′_j for all voxels
  End For
End For

References

1. Cardenas, C.E.; Yang, J.; Anderson, B.M.; Court, L.E.; Brock, K.B. Advances in auto-segmentation. In Seminars in Radiation Oncology; Elsevier: Amsterdam, The Netherlands, 2019; pp. 185–197.
2. Kosmin, M.; Ledsam, J.; Romera-Paredes, B.; Mendes, R.; Moinuddin, S.; de Souza, D.; Gunn, L.; Kelly, C.; Hughes, C.; Karthikesalingam, A. Rapid advances in auto-segmentation of organs at risk and target volumes in head and neck cancer. Radiother. Oncol. 2019, 135, 130–140.
3. Ayyalusamy, A.; Vellaiyan, S.; Subramanian, S.; Ilamurugu, A.; Satpathy, S.; Nauman, M.; Katta, G.; Madineni, A. Auto-segmentation of head and neck organs at risk in radiotherapy and its dependence on anatomic similarity. Radiat. Oncol. J. 2019, 37, 134.
4. Hwee, J.; Louie, A.V.; Gaede, S.; Bauman, G.; D'Souza, D.; Sexton, T.; Lock, M.; Ahmad, B.; Rodrigues, G. Technology assessment of automated atlas based segmentation in prostate bed contouring. Radiat. Oncol. 2011, 6, 1–9.
5. Xu, Y.; Xu, C.; Kuang, X.; Wang, H.; Chang, E.I.C.; Huang, W.; Fan, Y. 3D-SIFT-flow for atlas-based CT liver image segmentation. Med. Phys. 2016, 43, 2229–2241.
6. Daisne, J.-F.; Blumhofer, A. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: A clinical validation. Radiat. Oncol. 2013, 8, 1–11.
7. Tong, N.; Gou, S.; Yang, S.; Ruan, D.; Sheng, K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med. Phys. 2018, 45, 4558–4567.
8. Kiljunen, T.; Akram, S.; Niemelä, J.; Löyttyniemi, E.; Seppälä, J.; Heikkilä, J.; Vuolukka, K.; Kääriäinen, O.-S.; Heikkilä, V.-P.; Lehtiö, K. A deep learning-based automated CT segmentation of prostate cancer anatomy for radiation therapy planning-a retrospective multicenter study. Diagnostics 2020, 10, 959.
9. Windisch, P.; Koechli, C.; Rogers, S.; Schröder, C.; Förster, R.; Zwahlen, D.R.; Bodis, S. Machine learning for the detection and segmentation of benign tumors of the central nervous system: A systematic review. Cancers 2022, 14, 2676.
10. Kim, N.; Chun, J.; Chang, J.S.; Lee, C.G.; Keum, K.C.; Kim, J.S. Feasibility of continual deep learning-based segmentation for personalized adaptive radiation therapy in head and neck area. Cancers 2021, 13, 702.
11. Yoo, S.K.; Kim, T.H.; Chun, J.; Choi, B.S.; Kim, H.; Yang, S.; Yoon, H.I.; Kim, J.S. Deep-learning-based automatic detection and segmentation of brain metastases with small volume for stereotactic ablative radiotherapy. Cancers 2022, 14, 2555.
12. Comelli, A.; Dahiya, N.; Stefano, A.; Vernuccio, F.; Portoghese, M.; Cutaia, G.; Bruno, A.; Salvaggio, G.; Yezzi, A. Deep learning-based methods for prostate segmentation in magnetic resonance imaging. Appl. Sci. 2021, 11, 782.
13. Liu, X.; Li, K.-W.; Yang, R.; Geng, L.-S. Review of deep learning based automatic segmentation for lung cancer radiotherapy. Front. Oncol. 2021, 11, 2599.
14. Jin, X.; Thomas, M.A.; Dise, J.; Kavanaugh, J.; Hilliard, J.; Zoberi, I.; Robinson, C.G.; Hugo, G.D. Robustness of deep learning segmentation of cardiac substructures in noncontrast computed tomography for breast cancer radiotherapy. Med. Phys. 2021, 48, 7172–7188.
15. van Rooij, W.; Dahele, M.; Brandao, H.R.; Delaney, A.R.; Slotman, B.J.; Verbakel, W.F. Deep learning-based delineation of head and neck organs at risk: Geometric and dosimetric evaluation. Int. J. Radiat. Oncol. Biol. Phys. 2019, 104, 677–684.
16. Van der Veen, J.; Willems, S.; Deschuymer, S.; Robben, D.; Crijns, W.; Maes, F.; Nuyts, S. Benefits of deep learning for delineation of organs at risk in head and neck cancer. Radiother. Oncol. 2019, 138, 68–74.
17. Elguindi, S.; Zelefsky, M.J.; Jiang, J.; Veeraraghavan, H.; Deasy, J.O.; Hunt, M.A.; Tyagi, N. Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy. Phys. Imaging Radiat. Oncol. 2019, 12, 80–86.
18. Savenije, M.H.; Maspero, M.; Sikkes, G.G.; van der Voort van Zyp, J.; Kotte, T.; Alexis, N.; Bol, G.H.; van den Berg, T.; Cornelis, A. Clinical implementation of MRI-based organs-at-risk auto-segmentation with convolutional networks for prostate radiotherapy. Radiat. Oncol. 2020, 15, 1–12.
19. Zabel, W.J.; Conway, J.L.; Gladwish, A.; Skliarenko, J.; Didiodato, G.; Goorts-Matthews, L.; Michalak, A.; Reistetter, S.; King, J.; Nakonechny, K. Clinical evaluation of deep learning and atlas-based auto-contouring of bladder and rectum for prostate radiation therapy. Pract. Radiat. Oncol. 2021, 11, e80–e89.
20. Diniz, J.O.B.; Ferreira, J.L.; Diniz, P.H.B.; Silva, A.C.; de Paiva, A.C. Esophagus segmentation from planning CT images using an atlas-based deep learning approach. Comput. Methods Programs Biomed. 2020, 197, 105685.
21. Liu, Y.; Lei, Y.; Fu, Y.; Wang, T.; Tang, X.; Jiang, X.; Curran, W.J.; Liu, T.; Patel, P.; Yang, X. CT-based multi-organ segmentation using a 3D self-attention U-net network for pancreatic radiotherapy. Med. Phys. 2020, 47, 4316–4324.
22. Zhu, J.; Chen, X.; Yang, B.; Bi, N.; Zhang, T.; Men, K.; Dai, J. Evaluation of automatic segmentation model with dosimetric metrics for radiotherapy of esophageal cancer. Front. Oncol. 2020, 10, 1843.
23. van der Heyden, B.; Wohlfahrt, P.; Eekers, D.B.; Richter, C.; Terhaag, K.; Troost, E.G.; Verhaegen, F. Dual-energy CT for automatic organs-at-risk segmentation in brain-tumor patients using a multi-atlas and deep-learning approach. Sci. Rep. 2019, 9, 1–9.
24. Song, Y.; Hu, J.; Wu, Q.; Xu, F.; Nie, S.; Zhao, Y.; Bai, S.; Yi, Z. Automatic delineation of the clinical target volume and organs at risk by deep learning for rectal cancer postoperative radiotherapy. Radiother. Oncol. 2020, 145, 186–192.
25. Liu, Z.; Liu, X.; Guan, H.; Zhen, H.; Sun, Y.; Chen, Q.; Chen, Y.; Wang, S.; Qiu, J. Development and validation of a deep learning algorithm for auto-delineation of clinical target volume and organs at risk in cervical cancer radiotherapy. Radiother. Oncol. 2020, 153, 172–179.
26. Kim, N.; Chang, J.S.; Kim, Y.B.; Kim, J.S. Atlas-based auto-segmentation for postoperative radiotherapy planning in endometrial and cervical cancers. Radiat. Oncol. 2020, 15, 1–9.
27. Biratu, E.S.; Schwenker, F.; Ayano, Y.M.; Debelee, T.G. A survey of brain tumor segmentation and classification algorithms. J. Imaging 2021, 7, 179.
28. Minogue, S.; Gillham, C.; Kearney, M.; Mullaney, L. Intravenous contrast media in radiation therapy planning computed tomography scans–current practice in Ireland. Tech. Innov. Patient Support Radiat. Oncol. 2019, 12, 3–15.
29. Spampinato, M.V.; Abid, A.; Matheus, M.G. Current radiographic iodinated contrast agents. Magn. Reson. Imaging Clin. 2017, 25, 697–704.
30. Huynh, K.; Baghdanian, A.H.; Baghdanian, A.A.; Sun, D.S.; Kolli, K.P.; Zagoria, R.J. Updated guidelines for intravenous contrast use for CT and MRI. Emerg. Radiol. 2020, 27, 115–126.
31. Lee, H.; Park, J.; Choi, Y.; Park, K.R.; Min, B.J.; Lee, I.J. Low-dose CBCT reconstruction via joint non-local total variation denoising and cubic B-spline interpolation. Sci. Rep. 2021, 11, 1–15.
32. Lee, H.; Sung, J.; Choi, Y.; Kim, J.W.; Lee, I.J. Mutual information-based non-local total variation denoiser for low-dose cone-beam computed tomography. Front. Oncol. 2021, 11, 751057.
33. Kollem, S.; Reddy, K.R.L.; Rao, D.S. A review of image denoising and segmentation methods based on medical images. Int. J. Mach. Learn. Comput. 2019, 9, 288–295.
34. Lee, H.; Fahimian, B.P.; Xing, L. Binary moving-blocker-based scatter correction in cone-beam computed tomography with width-truncated projections: Proof of concept. Phys. Med. Biol. 2017, 62, 2176.
35. Lee, H.; Xing, L.; Lee, R.; Fahimian, B.P. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information. Med. Phys. 2012, 39, 2386–2395.
36. Lee, H.; Yoon, J.; Lee, E. Anisotropic total variation denoising technique for low-dose cone-beam computed tomography imaging. Prog. Med. Phys. 2018, 29, 150–156.
37. Yang, G.; Chang, J.S.; Shin, K.H.; Kim, J.H.; Park, W.; Kim, H.; Kim, K.; Lee, I.J.; Yoon, W.S.; Cha, J. Post-mastectomy radiation therapy in breast reconstruction: A patterns of care study of the Korean Radiation Oncology Group. Radiat. Oncol. J. 2020, 38, 236.
38. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
39. Ibtehaz, N.; Rahman, M.S. MultiResUNet: Rethinking the U-net architecture for multimodal biomedical image segmentation. Neural Netw. 2020, 121, 74–87.
40. Dong, X.; Lei, Y.; Wang, T.; Thomas, M.; Tang, L.; Curran, W.J.; Liu, T.; Yang, X. Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med. Phys. 2019, 46, 2157–2168.
41. Perona, P.; Malik, J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 629–639.
42. Wang, J.; Li, T.; Xing, L. Iterative image reconstruction for CBCT using edge-preserving prior. Med. Phys. 2009, 36, 252–260.
43. Greffier, J.; Hamard, A.; Pereira, F.; Barrau, C.; Pasquier, H.; Beregi, J.P.; Frandon, J. Image quality and dose reduction opportunity of deep learning image reconstruction algorithm for CT: A phantom study. Eur. Radiol. 2020, 30, 3951–3959.
44. Samei, E.; Bakalyar, D.; Boedeker, K.L.; Brady, S.; Fan, J.; Leng, S.; Myers, K.J.; Popescu, L.M.; Ramirez Giraldo, J.C.; Ranallo, F. Performance evaluation of computed tomography systems: Summary of AAPM Task Group 233. Med. Phys. 2019, 46, e735–e756.
45. Chung, S.Y.; Chang, J.S.; Choi, M.S.; Chang, Y.; Choi, B.S.; Chun, J.; Keum, K.C.; Kim, J.S.; Kim, Y.B. Clinical feasibility of deep learning-based auto-segmentation of target volumes and organs-at-risk in breast cancer patients after breast-conserving surgery. Radiat. Oncol. 2021, 16, 1–10.
46. Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834.
47. Vu, C.C.; Siddiqui, Z.A.; Zamdborg, L.; Thompson, A.B.; Quinn, T.J.; Castillo, E.; Guerrero, T.M. Deep convolutional neural networks for automatic segmentation of thoracic organs-at-risk in radiation oncology–use of non-domain transfer learning. J. Appl. Clin. Med. Phys. 2020, 21, 108–113.
48. Zhu, J.; Zhang, J.; Qiu, B.; Liu, Y.; Liu, X.; Chen, L. Comparison of the automatic segmentation of multiple organs at risk in CT images of lung cancer between deep convolutional neural network-based and atlas-based techniques. Acta Oncol. 2019, 58, 257–264.
49. Men, K.; Geng, H.; Biswas, T.; Liao, Z.; Xiao, Y. Automated quality assurance of OAR contouring for lung cancer based on segmentation with deep active learning. Front. Oncol. 2020, 10, 986.
50. Choi, M.S.; Choi, B.S.; Chung, S.Y.; Kim, N.; Chun, J.; Kim, Y.B.; Chang, J.S.; Kim, J.S. Clinical evaluation of atlas- and deep learning-based automatic segmentation of multiple organs and clinical target volumes for breast cancer. Radiother. Oncol. 2020, 153, 139–145.
51. Feng, X.; Qing, K.; Tustison, N.J.; Meyer, C.H.; Chen, Q. Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images. Med. Phys. 2019, 46, 2169–2180.
Figure 1. Deep-learning-based auto-segmentation framework.
Figure 2. Noise power spectrum (NPS) for evaluating the noise texture and magnitude. (a) Examples of ROIs selected for NPS calculation on each slice. NPS curves using 40 patient data: (b) 1 to 20 cases without denoiser, (c) 1 to 20 cases with ATV, (d) 21 to 40 cases without denoiser, and (e) 21 to 40 cases with ATV.
Figure 3. Example cases showing radiation oncologist 1's manual contour (red), radiation oncologist 2's manual contour (blue), AccuContourTM-based auto-segmentation (green), and denoising-based auto-segmentation (yellow) for the liver. AccuContourTM-based auto-segmentation over-contoured (A) gallbladder, (B) pancreas and portal vein, (C) pancreas and duodenum, (D) duodenum, (E) pancreas, duodenum, and kidney, and (F) gallbladder.
Figure 4. Example cases showing (A) radiation oncologist 1's manual contour (red), (B) radiation oncologist 2's manual contour (blue), (C) AccuContourTM-based auto-segmentation (green), and (D) denoising-based auto-segmentation (yellow) when the boundaries between the esophagus and surrounding soft tissues are not well-defined.
Figure 5. Example cases showing (A) radiation oncologist 1's manual contour (red), (B) radiation oncologist 2's manual contour (blue), (C) denoising-based auto-segmentation (yellow), and (D) AccuContourTM-based auto-segmentation (green) for the esophagus.
Figure 6. Example cases (A-D) showing radiation oncologist 1's manual contour (red), radiation oncologist 2's manual contour (blue), AccuContourTM-based auto-segmentation (green), and denoising-based auto-segmentation (yellow) for the heart.
Table 1. Comparison of the Dice similarity coefficient (DSC) results generated from AccuContourTM-based and denoising-based auto-segmentations, using radiation oncologist 1's manual contours as reference. Each cell lists Manteia * / Denoiser † values.

Case | Heart | Right Lung | Left Lung | Esophagus | Spinal Cord | Liver
Case 1 | 0.964 / 0.960 | 0.982 / 0.983 | 0.981 / 0.981 | 0.685 / 0.742 | 0.777 / 0.824 | 0.953 / 0.954
Case 2 | 0.951 / 0.947 | 0.980 / 0.980 | 0.979 / 0.979 | 0.670 / 0.723 | 0.891 / 0.873 | 0.936 / 0.939
Case 3 | 0.958 / 0.952 | 0.982 / 0.982 | 0.979 / 0.979 | 0.660 / 0.602 | 0.881 / 0.884 | 0.957 / 0.963
Case 4 | 0.931 / 0.928 | 0.971 / 0.970 | 0.975 / 0.975 | 0.744 / 0.719 | 0.866 / 0.858 | 0.937 / 0.938
Case 5 | 0.908 / 0.894 | 0.971 / 0.971 | 0.975 / 0.975 | 0.745 / 0.672 | 0.877 / 0.877 | 0.948 / 0.952
Case 6 | 0.930 / 0.929 | 0.982 / 0.982 | 0.973 / 0.974 | 0.573 / 0.691 | 0.877 / 0.877 | 0.890 / 0.926
Case 7 | 0.978 / 0.977 | 0.951 / 0.951 | 0.954 / 0.955 | 0.735 / 0.730 | 0.876 / 0.870 | 0.874 / 0.864
Case 8 | 0.929 / 0.926 | 0.971 / 0.972 | 0.970 / 0.970 | 0.669 / 0.662 | 0.859 / 0.856 | 0.964 / 0.963
Case 9 | 0.938 / 0.936 | 0.978 / 0.978 | 0.972 / 0.972 | 0.664 / 0.732 | 0.814 / 0.819 | 0.959 / 0.960
Case 10 | 0.952 / 0.947 | 0.982 / 0.982 | 0.981 / 0.981 | 0.765 / 0.786 | 0.883 / 0.877 | 0.959 / 0.963
Case 11 | 0.945 / 0.944 | 0.979 / 0.979 | 0.977 / 0.978 | 0.746 / 0.737 | 0.838 / 0.831 | 0.920 / 0.924
Case 12 | 0.946 / 0.941 | 0.962 / 0.963 | 0.964 / 0.966 | 0.680 / 0.634 | 0.868 / 0.848 | 0.933 / 0.933
Case 13 | 0.952 / 0.950 | 0.984 / 0.984 | 0.981 / 0.982 | 0.681 / 0.695 | 0.865 / 0.854 | 0.926 / 0.932
Case 14 | 0.934 / 0.930 | 0.981 / 0.981 | 0.981 / 0.981 | 0.570 / 0.633 | 0.858 / 0.854 | 0.909 / 0.933
Case 15 | 0.930 / 0.926 | 0.975 / 0.976 | 0.973 / 0.973 | 0.702 / 0.744 | 0.859 / 0.865 | 0.893 / 0.931
Case 16 | 0.906 / 0.898 | 0.934 / 0.934 | 0.940 / 0.940 | 0.691 / 0.637 | 0.870 / 0.870 | 0.956 / 0.959
Case 17 | 0.939 / 0.932 | 0.977 / 0.977 | 0.972 / 0.972 | 0.697 / 0.697 | 0.857 / 0.863 | 0.954 / 0.955
Case 18 | 0.950 / 0.948 | 0.976 / 0.976 | 0.979 / 0.979 | 0.725 / 0.697 | 0.878 / 0.882 | 0.939 / 0.942
Case 19 | 0.910 / 0.887 | 0.981 / 0.981 | 0.981 / 0.979 | 0.653 / 0.617 | 0.853 / 0.859 | 0.873 / 0.885
Case 20 | 0.877 / 0.875 | 0.983 / 0.984 | 0.975 / 0.975 | 0.730 / 0.775 | 0.853 / 0.846 | 0.954 / 0.956
Case 21 | 0.919 / 0.921 | 0.981 / 0.981 | 0.978 / 0.978 | 0.727 / 0.719 | 0.819 / 0.801 | 0.940 / 0.941
Case 22 | 0.999 / 0.968 | 0.998 / 0.992 | 0.998 / 0.992 | 0.990 / 0.797 | 0.805 / 0.807 | 0.935 / 0.951
Case 23 | 0.956 / 0.954 | 0.977 / 0.977 | 0.980 / 0.979 | 0.625 / 0.636 | 0.878 / 0.872 | 0.954 / 0.957
Case 24 | 0.963 / 0.962 | 0.986 / 0.986 | 0.979 / 0.980 | 0.728 / 0.725 | 0.872 / 0.857 | 0.955 / 0.956
Case 25 | 0.919 / 0.920 | 0.987 / 0.987 | 0.979 / 0.978 | 0.767 / 0.763 | 0.872 / 0.874 | 0.964 / 0.967
Case 26 | 0.966 / 0.966 | 0.990 / 0.989 | 0.986 / 0.987 | 0.813 / 0.821 | 0.892 / 0.876 | 0.964 / 0.966
Case 27 | 0.963 / 0.961 | 0.992 / 0.992 | 0.989 / 0.989 | 0.650 / 0.638 | 0.797 / 0.815 | 0.959 / 0.960
Case 28 | 0.963 / 0.963 | 0.991 / 0.991 | 0.984 / 0.985 | 0.785 / 0.789 | 0.853 / 0.861 | 0.959 / 0.960
Case 29 | 0.967 / 0.967 | 0.985 / 0.985 | 0.976 / 0.978 | 0.765 / 0.768 | 0.823 / 0.836 | 0.934 / 0.937
Case 30 | 0.951 / 0.951 | 0.985 / 0.985 | 0.980 / 0.980 | 0.661 / 0.671 | 0.818 / 0.835 | 0.927 / 0.930
Case 31 | 0.950 / 0.950 | 0.986 / 0.986 | 0.984 / 0.984 | 0.668 / 0.650 | 0.841 / 0.844 | 0.896 / 0.899
Case 32 | 0.947 / 0.946 | 0.986 / 0.986 | 0.980 / 0.980 | 0.791 / 0.778 | 0.865 / 0.863 | 0.911 / 0.910
Case 33 | 0.956 / 0.956 | 0.962 / 0.962 | 0.975 / 0.975 | 0.771 / 0.780 | 0.831 / 0.846 | 0.948 / 0.950
Case 34 | 0.950 / 0.950 | 0.978 / 0.978 | 0.979 / 0.980 | 0.740 / 0.725 | 0.861 / 0.857 | 0.943 / 0.946
Case 35 | 0.947 / 0.949 | 0.977 / 0.977 | 0.981 / 0.981 | 0.749 / 0.739 | 0.887 / 0.880 | 0.956 / 0.959
Case 36 | 0.937 / 0.938 | 0.985 / 0.985 | 0.977 / 0.978 | 0.788 / 0.782 | 0.881 / 0.883 | 0.950 / 0.952
Case 37 | 0.945 / 0.945 | 0.984 / 0.984 | 0.980 / 0.981 | 0.730 / 0.727 | 0.879 / 0.882 | 0.939 / 0.938
Case 38 | 0.933 / 0.934 | 0.987 / 0.987 | 0.983 / 0.982 | 0.809 / 0.819 | 0.872 / 0.878 | 0.947 / 0.949
Case 39 | 0.945 / 0.945 | 0.968 / 0.962 | 0.974 / 0.975 | 0.759 / 0.743 | 0.887 / 0.894 | 0.952 / 0.953
Case 40 | 0.911 / 0.911 | 0.982 / 0.982 | 0.977 / 0.978 | 0.664 / 0.672 | 0.876 / 0.868 | 0.947 / 0.947
Average | 0.943 / 0.940 | 0.979 / 0.978 | 0.977 / 0.977 | 0.719 / 0.717 | 0.858 / 0.868 | 0.938 / 0.943
p-value | 0.000 | 0.091 | 0.095 | 0.705 | 0.762 | 0.000

* AccuContourTM (Manteia Medical Technologies Co. Ltd., Xiamen, China)-based auto-segmentation, † Denoising-based auto-segmentation.
Table 2. Comparison of the Dice similarity coefficient (DSC) results generated from AccuContourTM-based and denoising-based auto-segmentations, using radiation oncologist 2's manual contours as reference. Each cell lists Manteia * / Denoiser † values.

Case | Heart | Right Lung | Left Lung | Esophagus | Spinal Cord | Liver
Case 1 | 0.964 / 0.960 | 0.981 / 0.981 | 0.980 / 0.980 | 0.685 / 0.742 | 0.749 / 0.785 | 0.953 / 0.955
Case 2 | 0.937 / 0.934 | 0.985 / 0.985 | 0.982 / 0.982 | 0.696 / 0.730 | 0.872 / 0.844 | 0.932 / 0.934
Case 3 | 0.964 / 0.962 | 0.984 / 0.984 | 0.980 / 0.981 | 0.638 / 0.597 | 0.865 / 0.867 | 0.949 / 0.956
Case 4 | 0.912 / 0.909 | 0.973 / 0.972 | 0.979 / 0.979 | 0.741 / 0.697 | 0.857 / 0.864 | 0.927 / 0.929
Case 5 | 0.907 / 0.895 | 0.977 / 0.977 | 0.977 / 0.976 | 0.721 / 0.625 | 0.848 / 0.848 | 0.941 / 0.947
Case 6 | 0.927 / 0.927 | 0.987 / 0.987 | 0.980 / 0.980 | 0.577 / 0.693 | 0.887 / 0.892 | 0.884 / 0.920
Case 7 | 0.986 / 0.989 | 0.996 / 0.997 | 0.995 / 0.996 | 0.808 / 0.839 | 0.870 / 0.873 | 0.849 / 0.859
Case 8 | 0.925 / 0.922 | 0.964 / 0.963 | 0.973 / 0.973 | 0.658 / 0.689 | 0.872 / 0.851 | 0.954 / 0.953
Case 9 | 0.934 / 0.933 | 0.979 / 0.978 | 0.974 / 0.974 | 0.702 / 0.747 | 0.851 / 0.852 | 0.952 / 0.952
Case 10 | 0.916 / 0.909 | 0.984 / 0.984 | 0.984 / 0.983 | 0.732 / 0.756 | 0.869 / 0.852 | 0.957 / 0.959
Case 11 | 0.938 / 0.939 | 0.986 / 0.986 | 0.974 / 0.974 | 0.683 / 0.683 | 0.786 / 0.749 | 0.912 / 0.915
Case 12 | 0.949 / 0.943 | 0.977 / 0.977 | 0.967 / 0.969 | 0.707 / 0.655 | 0.778 / 0.747 | 0.925 / 0.925
Case 13 | 0.935 / 0.935 | 0.985 / 0.985 | 0.982 / 0.982 | 0.665 / 0.692 | 0.844 / 0.823 | 0.921 / 0.925
Case 14 | 0.935 / 0.933 | 0.984 / 0.984 | 0.979 / 0.979 | 0.633 / 0.695 | 0.823 / 0.790 | 0.902 / 0.927
Case 15 | 0.907 / 0.909 | 0.983 / 0.983 | 0.976 / 0.976 | 0.660 / 0.648 | 0.835 / 0.841 | 0.921 / 0.880
Case 16 | 0.926 / 0.936 | 0.985 / 0.985 | 0.979 / 0.979 | 0.681 / 0.637 | 0.781 / 0.787 | 0.953 / 0.956
Case 17 | 0.939 / 0.932 | 0.977 / 0.977 | 0.972 / 0.972 | 0.697 / 0.697 | 0.804 / 0.822 | 0.949 / 0.949
Case 18 | 0.950 / 0.951 | 0.982 / 0.982 | 0.982 / 0.982 | 0.734 / 0.755 | 0.870 / 0.880 | 0.932 / 0.936
Case 19 | 0.890 / 0.867 | 0.985 / 0.985 | 0.983 / 0.981 | 0.630 / 0.604 | 0.860 / 0.865 | 0.857 / 0.869
Case 20 | 0.850 / 0.848 | 0.986 / 0.986 | 0.979 / 0.978 | 0.738 / 0.743 | 0.885 / 0.860 | 0.949 / 0.950
Case 21 | 0.954 / 0.955 | 0.991 / 0.991 | 0.986 / 0.986 | 0.730 / 0.731 | 0.853 / 0.835 | 0.946 / 0.947
Case 22 | 0.999 / 0.968 | 0.995 / 0.990 | 0.995 / 0.989 | 0.902 / 0.752 | 0.790 / 0.827 | 0.941 / 0.952
Case 23 | 0.956 / 0.954 | 0.975 / 0.975 | 0.978 / 0.978 | 0.622 / 0.632 | 0.838 / 0.816 | 0.953 / 0.955
Case 24 | 0.963 / 0.962 | 0.984 / 0.984 | 0.977 / 0.978 | 0.726 / 0.724 | 0.838 / 0.820 | 0.956 / 0.958
Case 25 | 0.924 / 0.925 | 0.986 / 0.985 | 0.978 / 0.980 | 0.767 / 0.763 | 0.860 / 0.864 | 0.962 / 0.965
Case 26 | 0.966 / 0.966 | 0.986 / 0.986 | 0.983 / 0.984 | 0.813 / 0.821 | 0.868 / 0.861 | 0.960 / 0.961
Case 27 | 0.963 / 0.961 | 0.990 / 0.990 | 0.987 / 0.988 | 0.636 / 0.627 | 0.840 / 0.837 | 0.959 / 0.959
Case 28 | 0.963 / 0.963 | 0.991 / 0.991 | 0.984 / 0.985 | 0.785 / 0.790 | 0.853 / 0.862 | 0.951 / 0.951
Case 29 | 0.967 / 0.967 | 0.984 / 0.983 | 0.974 / 0.976 | 0.765 / 0.768 | 0.847 / 0.827 | 0.934 / 0.936
Case 30 | 0.951 / 0.951 | 0.984 / 0.984 | 0.979 / 0.980 | 0.661 / 0.671 | 0.857 / 0.840 | 0.938 / 0.939
Case 31 | 0.932 / 0.932 | 0.986 / 0.986 | 0.983 / 0.983 | 0.668 / 0.650 | 0.852 / 0.843 | 0.895 / 0.897
Case 32 | 0.939 / 0.938 | 0.985 / 0.985 | 0.978 / 0.978 | 0.791 / 0.778 | 0.857 / 0.855 | 0.912 / 0.911
Case 33 | 0.956 / 0.956 | 0.960 / 0.960 | 0.975 / 0.975 | 0.710 / 0.709 | 0.857 / 0.839 | 0.948 / 0.950
Case 34 | 0.949 / 0.948 | 0.976 / 0.976 | 0.976 / 0.977 | 0.739 / 0.723 | 0.876 / 0.861 | 0.947 / 0.949
Case 35 | 0.935 / 0.934 | 0.982 / 0.982 | 0.980 / 0.980 | 0.749 / 0.739 | 0.880 / 0.874 | 0.953 / 0.955
Case 36 | 0.936 / 0.936 | 0.984 / 0.984 | 0.976 / 0.976 | 0.793 / 0.786 | 0.860 / 0.840 | 0.951 / 0.952
Case 37 | 0.945 / 0.945 | 0.982 / 0.982 | 0.978 / 0.979 | 0.669 / 0.662 | 0.812 / 0.821 | 0.938 / 0.936
Case 38 | 0.944 / 0.945 | 0.986 / 0.986 | 0.981 / 0.981 | 0.809 / 0.819 | 0.870 / 0.874 | 0.950 / 0.951
Case 39 | 0.955 / 0.955 | 0.965 / 0.960 | 0.974 / 0.975 | 0.682 / 0.663 | 0.884 / 0.892 | 0.953 / 0.953
Case 40 | 0.911 / 0.911 | 0.980 / 0.980 | 0.977 / 0.977 | 0.624 / 0.630 | 0.874 / 0.875 | 0.948 / 0.948
Average | 0.940 / 0.938 | 0.982 / 0.982 | 0.979 / 0.980 | 0.711 / 0.709 | 0.847 / 0.841 | 0.935 / 0.938
p-value | 0.008 | 0.091 | 0.097 | 0.984 | 0.082 | 0.000

* AccuContourTM (Manteia Medical Technologies Co. Ltd., Xiamen, China)-based auto-segmentation, † Denoising-based auto-segmentation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
