Article

Positron Emission Tomography Image Segmentation Based on Atanassov’s Intuitionistic Fuzzy Sets

1 CITAB—Centre for the Research and Technology of Agro-Environmental and Biological Sciences, UTAD University, Quinta de Prados, 5001-801 Vila Real, Portugal
2 Departamento de Ciências da Comunicação e Tecnologias da Informação, ISMAI—Universidade da Maia, Avenida Carlos de Oliveira Campos—Castêlo da Maia, 4475-690 Maia, Portugal
3 Departamento de Automatica y Computacion, Campus Arrosadia s/n, Universidad Publica de Navarra, 31006 Pamplona, Spain
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(10), 4865; https://doi.org/10.3390/app12104865
Submission received: 26 March 2022 / Revised: 7 May 2022 / Accepted: 8 May 2022 / Published: 11 May 2022

Abstract: In this paper, we present an approach to fully automate tumor delineation in positron emission tomography (PET) images. PET plays a major role in in vivo imaging in oncology: oncology patients are evaluated by detecting the photons emitted from a radiotracer that localizes in abnormal cells. Tumor delineation in PET images plays a vital role in both the pre- and post-treatment stages. The low spatial resolution and high noise characteristics of PET images make PET image segmentation challenging, and despite the known limitations, several segmentation approaches have been proposed. This paper introduces a new unsupervised approach to tumor delineation in PET images using Atanassov's intuitionistic fuzzy sets (A-IFSs) and restricted dissimilarity functions. The implementation of this methodology is presented and tested against existing methodologies. The proposed algorithm increases the accuracy of tumor delineation in PET images, and the experimental results show that it outperformed all of the methods tested.

1. Introduction

Segmentation of digital images is the procedure of partitioning an image into disjoint parts, regions, classes, or subsets such that each part satisfies a distinct, well-defined property or attribute. Image segmentation is an essential step towards the analysis of image information. It plays an important role in digital image processing and is used in almost every field of science, for example, satellite imaging, computer vision, biometrics, and medical imaging [1,2,3,4,5,6,7].
In this work, we use image segmentation to detect and delineate tumors in PET images.
Positron emission tomography, or PET, is a technique for imaging physiological processes in humans. The technique maps the distribution of a radioactive emitter monitored by surrounding detectors; with the aid of mathematical algorithms, the image is reconstructed from the distribution of the marker. PET has become an indispensable tool for more accurate treatment of patients and early diagnosis in the treatment of cancer [8,9,10,11,12,13,14,15,16,17]. PET is heavily used in medicine, biology, neurology, and pharmaceutical research, for instance to study brain activity, blood flow, or glucose metabolism.
PET images are known for their high sensitivity and low spatial resolution. Furthermore, they have low signal-to-noise ratios and suffer from noise caused by random and scattered coincidences. Under these conditions, successful image segmentation becomes considerably harder [9,18,19,20,21,22,23,24,25]. The delineation of tumors in PET images is a crucial step: the delineated region should be kept as small as possible to minimize damage to healthy tissue by future treatments, yet the boundary must ensure the inclusion of the entire extent of the diseased tissue. Different methods have been proposed using several segmentation techniques [26,27,28,29,30]. Fixed threshold methods do not find a consensus value to correctly delineate the tumor [3,11]. Adaptive threshold methods give better results, but most of the time they depend on standardized uptake values (SUV) [20,31,32,33] or source-to-background ratio (SBR) values [23,34,35], which are not always available. Iterative threshold methods depend on knowing ground-truth metrics or on limiting the number of iterations, which can change the output values [23,36]. The best region-based methods are region of interest (ROI)- or seed-dependent, and edge-based methods perform poorly because of the faded image edges caused by the noisy character of PET images. Stochastic methods, built on very different approaches, have produced mixed results [22,37,38,39,40]. The use of deep learning-based methods is limited both by the problem's nature (precision of the delineation) and by the number of labeled samples needed; nevertheless, some deep learning-based methods have recently been proposed [41,42,43].
We introduce a new approach to PET image segmentation, using an iterative thresholding method based on Atanassov's intuitionistic fuzzy sets (A-IFSs). The presented algorithm finds and delineates tumors in PET images in an unsupervised way. The method is invariant to image size, seed position, region shape, and SUV or SBR values, resulting in a better and more efficient tumor delineation procedure.

2. Fuzzy Logic-Based Image Thresholding Using A-IFSs

Atanassov's intuitionistic fuzzy sets (A-IFSs) have been successfully used to determine the optimal threshold value for gray-level image segmentation. Atanassov's intuitionistic fuzzy index values represent the hesitation of an expert in determining whether a pixel of the image belongs to the background or to the object of the image [44]; here, we use these values to determine whether a pixel belongs to non-healthy or healthy tissue.
Melo-Pinto et al. [44] proposed the following membership functions to represent the relationship of each pixel to the background Q̃_B(t) and to the object Q̃_O(t):

$$\mu_{\tilde{Q}_B(t)}(q) = F\left(d\left(\frac{q}{L-1}, \frac{m_B(t)}{L-1}\right)\right)$$

$$\mu_{\tilde{Q}_O(t)}(q) = F\left(d\left(\frac{q}{L-1}, \frac{m_O(t)}{L-1}\right)\right)$$
For each t, with h(q) the number of pixels of the image with intensity q, F(x) = 1 − 0.5x, and the restricted dissimilarity function d(x, y) = |x − y| constructed from the automorphisms φ1(x) = φ2(x) = x for all x ∈ [0, 1], the mean intensity m_B(t) of the pixels belonging to the background and the mean intensity m_O(t) of the pixels belonging to the object are given by the following expressions:

$$m_B(t) = \frac{\sum_{q=0}^{t} q\, h(q)}{\sum_{q=0}^{t} h(q)}$$

$$m_O(t) = \frac{\sum_{q=t+1}^{L-1} q\, h(q)}{\sum_{q=t+1}^{L-1} h(q)}$$
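As an illustration of these definitions, the membership values can be computed directly from the image histogram. The following is a minimal NumPy sketch under the definitions above; the function and variable names are ours, not from the original implementation:

```python
import numpy as np

def membership_functions(hist, t, L=256):
    """Membership of each gray level q to the background and the object
    for a candidate threshold t, with F(x) = 1 - 0.5x and the restricted
    dissimilarity d(x, y) = |x - y| on intensities normalized by L - 1."""
    q = np.arange(L, dtype=float)
    # Mean gray level of the background [0, t] and the object [t+1, L-1].
    m_b = (q[:t + 1] * hist[:t + 1]).sum() / max(hist[:t + 1].sum(), 1e-12)
    m_o = (q[t + 1:] * hist[t + 1:]).sum() / max(hist[t + 1:].sum(), 1e-12)
    f = lambda x: 1.0 - 0.5 * x
    mu_b = f(np.abs(q - m_b) / (L - 1))   # F(d(q/(L-1), m_B(t)/(L-1)))
    mu_o = f(np.abs(q - m_o) / (L - 1))
    return mu_b, mu_o
```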
In order to split the object from the background, it is essential to accurately determine the property that must be fulfilled by the pixels belonging to the object. This property establishes the form of the membership function associated with the set that represents the object. Usually, this property is not known with certainty, and the selection of the membership function is conditioned by the missing knowledge/ignorance of the expert who constructs it.
Considering that we need a multi-thresholding algorithm to refine the threshold selection for tumor delineation in PET images, we use the following expression to calculate Atanassov's intuitionistic index π:

$$\pi(q) = \min\left(1 - \mu_{\tilde{Q}_B(t)}(q),\; 1 - \mu_{\tilde{Q}_O(t)}(q)\right)$$
Atanassov’s intuitionistic index π represents the missing knowledge/ignorance of the expert in determining the membership value of a specific pixel to the background or the object of the image.
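The thresholding step described in Section 3 picks, for each image, the candidate threshold with the smallest entropy. As a sketch of how the index π can drive that selection, we score each candidate t by the histogram-weighted mean hesitancy; this particular score is our assumption rather than a formula stated in the text:

```python
def aifs_threshold(hist, L=256):
    """Return the candidate threshold with the smallest score and the
    per-candidate score curve eps (reused later for the amplitude C).
    The score -- histogram-weighted mean hesitancy -- is an assumption."""
    eps = np.full(L, np.inf)
    n = max(hist.sum(), 1e-12)
    for t in range(1, L - 1):
        mu_b, mu_o = membership_functions(hist, t, L)
        pi = np.minimum(1.0 - mu_b, 1.0 - mu_o)   # hesitancy per gray level
        eps[t] = (hist * pi).sum() / n
    return int(np.argmin(eps)), eps
```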

3. PET Image Segmentation with Iterative Thresholding Using A-IFSs

In this section, we present a general A-IFSs-based multi-level image thresholding method that, besides its low computational cost, autonomously determines the thresholds based on the homogeneity of the image pixels' gray levels. The method iteratively selects, using the image's characteristics, the best threshold value to identify the damaged tissues in PET images.
In order to compute the threshold, the proposed algorithm is applied an unspecified number of times to the image using a divide-and-conquer strategy. First, the algorithm is applied to the original image Q with its pixels' gray levels in [0, L−1], determining the threshold value t_i corresponding to the smallest entropy of the image Q. This threshold value is then used to create two sub-images: the sub-image with intensities lower than t_i, with gray levels in [0, t_i − 1], and the sub-image with intensities greater than t_i, with gray levels in [t_i + 1, L−1]. Finally, the algorithm is applied to the sub-image with the higher amplitude between gray-level entropy values. Each processed sub-image is marked and cannot be processed again.
In Figure 1, we represent the computational process resulting from the application of the algorithm to a given image as a binary tree, where each node contains the gray-level range of the corresponding image and the threshold value obtained by applying the algorithm to that image.
The proposed algorithm consecutively divides the resulting sub-images by means of the threshold value t_i obtained by applying the general algorithm to each of them. To give the algorithm the capability to self-stop the process of determining new thresholds, and consequently of sub-dividing the images, we use the region homogeneity measure introduced in the following subsection.

A New Approach to Perform Tumor Delineation in PET Images with Iterative Thresholding Using A-IFSs

Tumor delineation in PET images is a crucial step because the boundary has to ensure the inclusion of all of the damaged cells, but if the boundary is too large, it will endanger healthy tissues during future treatments [11]. A-IFSs find the best threshold value to split the object from the background; therefore, to refine the threshold to match the small tumor region, we apply the A-IFSs to a selected sub-image until the desired homogeneity of the pixel gray-level intensities is obtained.
In our iterative threshold algorithm, we use the homogeneity (H) value of the sub-image to determine when the algorithm stops searching for another threshold. With p(i, j) the normalized gray-level co-occurrence matrix of the sub-image, we define H as:

$$H = \sum_{i,j} \frac{p(i,j)}{1 + |i - j|}$$
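This is the classical homogeneity measure over the gray-level co-occurrence matrix (GLCM). A direct NumPy sketch, assuming co-occurrence over horizontally adjacent pixel pairs (the offset is not specified in the text) and an integer-valued image:

```python
def homogeneity(img, levels=256):
    """GLCM homogeneity H = sum_ij p(i, j) / (1 + |i - j|) over
    horizontally adjacent pixel pairs of an integer-valued image."""
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1.0)
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    return float((p / (1.0 + np.abs(i - j))).sum())
```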
We also define the entropy amplitude C, whose value determines which sub-image will be processed in the next iteration. To determine the value of C, we sub-divide the image Q_i into two sub-images, Q_{i+1} with gray levels q(x, y) ∈ [Low_i, t_i − 1] and Q_{i+2} with gray levels q(x, y) ∈ [t_i + 1, High_i], and check whether the image Q_i has already been processed, to prevent reprocessing the same image. Then, for the sub-images Q_{i+1} and Q_{i+2}, the value of C is calculated as:

$$C_{Q_i} = MAX_{Q_i} - MIN_{Q_i}$$

with

$$MAX_{Q_i} = \max\{\varepsilon_T(q)\}, \quad q(x, y) \in [Low_i, High_i]$$

and

$$MIN_{Q_i} = \min\{\varepsilon_T(q)\}, \quad q(x, y) \in [Low_i, High_i].$$

The sub-image with the highest C_{Q_i} is selected for further processing.
The algorithm stops when H achieves the desired value. For tumor delineation in PET images, the algorithm stops when H is greater than or equal to 0.999. This value was obtained experimentally. We tested several numbers of iterations and from H 0.999 there were no significant improvements in the sub-image homogeneity (Figure 2). The PET 1–4 images issued in Figure 2 correspond to the PET 1–4 breast cancer images presented in Figure 3. These images were selected due to the different tumor-adjacent tissue characteristics they present, resulting in significant homogeneity differences after the first iteration. Despite these differences, after some iterations the method converges for all images and, since we used the maximum entropy amplitude to select the sub-image to be processed, the algorithm stops, and no other sub-image needs to be processed.
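Putting the pieces together, the overall iteration can be sketched as follows. This driver builds on the sketches above; restricting the threshold search to the current gray-level range and the exact construction of the sub-image passed to the homogeneity test are our assumptions about details the text leaves open:

```python
def amplitude(eps, a, b):
    """Entropy amplitude C = max - min of eps over the gray levels [a, b]."""
    vals = eps[max(a, 0):b + 1]
    vals = vals[np.isfinite(vals)]
    return float(vals.max() - vals.min()) if vals.size else -np.inf

def iterative_delineation(img, L=256, h_stop=0.999):
    """Iterative A-IFSs thresholding sketch: split the current gray-level
    range at the best threshold, descend into the child range with the
    larger amplitude C, and stop once homogeneity reaches h_stop."""
    low, high = 0, L - 1
    while high - low > 1:
        in_range = (img >= low) & (img <= high)
        hist = np.bincount(img[in_range].ravel(), minlength=L).astype(float)
        _, eps = aifs_threshold(hist, L)
        t = low + 1 + int(np.argmin(eps[low + 1:high]))  # best t in (low, high)
        # Keep the child range with the larger entropy amplitude C.
        if amplitude(eps, t + 1, high) >= amplitude(eps, low, t - 1):
            low = t + 1
        else:
            high = t - 1
        sub = np.where((img >= low) & (img <= high), img, 0)
        if homogeneity(sub) >= h_stop:
            break
    return (img >= low) & (img <= high)   # binary tumor delineation
```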

4. Performance Evaluation

In order to test the performance of the proposed methodology, we applied the proposed algorithms to a set of images available from The Cancer Imaging Archive (TCIA). TCIA is a service that de-identifies and hosts an extensive archive of medical images of cancer, accessible for public download. The images are available on the TCIA website: http://www.cancerimagingarchive.net/, accessed on 1 March 2019. The data are organized as "Collections", typically patients related by a common disease (e.g., lung cancer), image modality (MRI, CT, PET, etc.), or research focus. In this work, we used 40 gray-scale images of 512 × 512 pixels each. Some examples are shown in Figure 4. Other examples, together with the respective ground-truth results obtained by medical experts, are shown in Figure 5.
To evaluate the different segmentation methods, we use the intersection over union ( I o U ) and pixel accuracy measures. The I o U measures the intersection over the union of the labeled segments and reports the average. The I o U value can be calculated as follows:
$$IoU = \frac{|target \cap prediction|}{|target \cup prediction|}$$
Pixel accuracy is the ratio of the correctly classified elements over all available elements and can be calculated as follows:
$$accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$
Although the accuracy measure is more prone to misleadingly high values due to class imbalance (tumor vs. background), it was used to expose how the evaluation differs from the one obtained with IoU.
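Both measures are straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(pred, target).sum()
    inter = np.logical_and(pred, target).sum()
    return float(inter) / float(union) if union else 1.0

def pixel_accuracy(pred, target):
    """(TP + TN) / (TP + TN + FP + FN) for boolean masks."""
    return float((pred == target).mean())
```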

5. Results

The A-IFSs algorithm was tested with the data set introduced previously. Figure 3 shows examples of some original images, the corresponding results produced by the proposed A-IFSs-based methodology, and the corresponding ground truth. Using the IoU and pixel accuracy values to evaluate the segmentation results, we confirmed that using the homogeneity value with H ≥ 0.999 to obtain the threshold value resulted in the best segmentation for each image.

5.1. Fixed Threshold Comparison

Over the last years, several studies by different authors have confirmed that the most widely accepted global threshold values for lesion delineation, and the most suitable for segmenting lesions, lie between 34% and 50% of the image maximum intensity; to these we add the 58% threshold, the best value from the fixed-threshold experimental evaluation performed on our data set. The comparison of different fixed threshold values against the ground truth shows that the best fixed threshold value is image-dependent (Table 1). Therefore, we used the values most commonly adopted in the literature (40% and 50%) in our comparison studies [19,24,25,45,46,47,48,49].
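For reference, such a fixed threshold reduces to a one-line rule (a minimal sketch; the function name is ours):

```python
def fixed_threshold_mask(img, fraction=0.40):
    """Binary mask: pixels at or above a fixed fraction of the image maximum."""
    return img >= fraction * img.max()
```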

5.2. Comparison with Different Segmentation Algorithms

In this section, the proposed algorithm is compared to existing algorithms using the IoU and pixel accuracy measures.
We evaluated our algorithm against existing similar methodologies: the fuzzy C-means (FCM), K-means, and affinity propagation (AP) algorithms. Since our algorithm detects and delineates the tumor, producing a binary image as its final result, we adapted the existing algorithms to produce comparable results. For each method, we also tested different parameter values in order to choose the best-performing setup.
The FCM and K-means algorithms require a seed and a number of clusters to perform the segmentation. In order to obtain a binary image as a result, we set the number of clusters to 2 and handcrafted the seed position to the maximum intensity of the image (assuming the best possible seed position). To achieve a binary image using AP, we tested different weighting parameters, m and n, achieving the best result with m = 1 and n = 6.
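As an illustration of the K-means setup, the following scikit-learn sketch seeds the two clusters at the image minimum and maximum, which is our reading of "the best possible seed position"; the paper does not state which implementation was used:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_mask(img):
    """Two-cluster K-means on pixel intensities; the cluster seeded at
    the image maximum is taken as the tumor, yielding a binary mask."""
    seeds = np.array([[float(img.min())], [float(img.max())]])
    km = KMeans(n_clusters=2, init=seeds, n_init=1).fit(
        img.reshape(-1, 1).astype(float))
    return km.labels_.reshape(img.shape) == 1  # cluster 1 = max-intensity seed
```

FCM is analogous but is not part of scikit-learn; third-party packages such as scikit-fuzzy provide an implementation.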
Figure 6 shows the results obtained with the I o U method, and Figure 7 shows the pixel accuracy. Each algorithm output result is compared to the ground truth.
The obtained results show that the fixed threshold t50% performed better than the fixed threshold t40% under both evaluation measures, and that t40% had the worst segmentation results of the tested methods. The fixed threshold t58% obtained the second-best IoU and accuracy values (Table 2 and Table 3); this is expected, because this threshold was tuned on our data set. According to both measures, FCM had the widest distribution of values (as can be seen in Figure 6) and the lowest IoU values. The K-means and AP methods showed similar performance, with AP achieving better IoU scores despite a wider distribution of accuracy results. The proposed algorithm outperformed all of the other methodologies, and the result is even more noticeable (at least around a 10% increase in performance) under an evaluation measure (IoU) that is not biased by the numerous background pixels, with a narrow distribution of accuracy values. Since our methodology is unsupervised, some of the 40 images produced a wide range of IoU values; nevertheless, it is the method that obtained the best global results. The observed results indicate that A-IFSs offers more precision and repeatability.

6. Discussion

In this paper, a new A-IFSs-based segmentation algorithm is proposed to increase the accuracy of tumor delineation in PET images. This is a crucial step in order to minimize the damage caused by future treatments while optimally including the entire extent of the diseased tissue. In the proposed methodology, we introduced an A-IFSs-based segmentation approach that, unlike existing methodologies, works without any previous processing or human interaction to define a ROI, a seed position, or an SBR value. Moreover, being an A-IFSs-based methodology, and considering that A-IFSs have been proven to deal efficiently with image uncertainties, the proposed methodology was better able to handle the uncertain tumor boundaries of PET images. Atanassov's intuitionistic fuzzy index values are used to represent the hesitation of an expert in determining whether a pixel of the image belongs to non-healthy or healthy tissue. In order to verify its effectiveness, five existing representative methods were used for comparison [25,45,50,51,52]. Considering solely the fixed threshold methods, the threshold t58%, having been experimentally selected for our data set, obtained the second-best IoU and accuracy values. Regarding the other tested methods, FCM had the widest distribution of values and the lowest IoU values, while the K-means and AP methods showed similar performance, with AP achieving better IoU scores despite a wider distribution of accuracy results. Overall, the experimental results show that, despite not using prior information or human interaction, the proposed method achieved the best performance among all methods tested, showing more precision and repeatability. Notably, the overlap between the target area and the predicted area increases significantly with this method, outperforming all of the other methodologies tested for tumor delineation in PET images. The hierarchical partition of the pixel intensity classes, together with a non-binary definition of class membership, seems to be the main reason for the superior performance over the other methods. However, since the method relies on image intensity clustering, this may be understood as a limitation, and future work should include spatial feature information.

7. Conclusions

In this work, a new A-IFSs-based segmentation algorithm is proposed for tumor delineation in PET breast cancer images. The proposed methodology is a low-computational-cost, fully unsupervised, and effective tool for tumor delineation in PET images. Future work will address tumor delineation in other human cancer types.

Author Contributions

Conceptualization, P.C., H.B. and P.M.-P.; methodology, P.C., T.B., H.B. and P.M.-P.; validation, P.C. and P.M.-P.; investigation, P.C., T.B., H.B. and P.M.-P.; data curation, P.C. and T.B.; writing—original draft preparation, T.B.; writing—review and editing, P.C., H.B. and P.M.-P.; supervision, P.C. and P.M.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Funds through FCT—Portuguese Foundation for Science and Technology, under the project UIDB/04033/2020.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheriet, M.; Said, J.; Suen, C. A recursive thresholding technique for image segmentation. IEEE Trans. Image Process. 1998, 7, 918–920. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Pal, N.; Pal, S. A review on image segmentation techniques. Pattern Recognit. 1993, 26, 1277–1294. [Google Scholar] [CrossRef]
  3. Seerha, G.K.; Kaur, R. Review on recent image segmentation techniques. Int. J. Comput. Sci. Eng. 2013, 5, 109–112. [Google Scholar]
  4. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146–165. [Google Scholar]
  5. Vidhya, K.; Revathi, S.; Ashwini, S.S.; Vanitha, S. Review on digital image segmentation techniques. Int. Res. J. Eng. Technol. 2016, 3, 618–619. [Google Scholar]
  6. Zhang, Y. A survey on evaluation methods for image segmentation. Pattern Recognit. 1996, 29, 1335–1346. [Google Scholar] [CrossRef] [Green Version]
  7. Zhang, Y. A review of recent evaluation methods for image segmentation. In Proceedings of the Sixth International Symposium on Signal Processing and its Applications, Kuala Lumpur, Malaysia, 13–16 August 2001; Volume 1, pp. 148–151. [Google Scholar]
  8. Delbeke, D.; Martin, W.H. Pet and pet-ct for evaluation of colorectal carcinoma. Semin. Nucl. Med. 2004, 34, 209–223. [Google Scholar] [CrossRef] [Green Version]
  9. Drever, L.; Roa, W.; McEwan, A.; Robinson, D. Iterative threshold segmentation for pet target volume delineation. Med. Phys. 2007, 34, 1253–1265. [Google Scholar] [CrossRef]
  10. Fahey, F.H.; Kinahan, P.E.; Doot, R.K.; Kocak, M.; Thurston, H.; Poussaint, T.Y. Variability in pet quantitation within a multicenter consortium. Med. Phys. 2010, 37, 3660–3666. [Google Scholar] [CrossRef]
  11. Foster, B.; Bagci, U.; Mansoor, A.; Xu, Z.; Mollura, D.J. A review on segmentation of positron emission tomography images. Comput. Biol. Med. 2014, 50, 76–96. [Google Scholar] [CrossRef] [Green Version]
  12. Syed, R.; Bomanji, J.B.; Nagabhushan, N.; Hughes, S.; Kayani, I.; Groves, A.; Gacinovic, S.; Hydes, N.; Visvikis, D.; Copland, C.; et al. Impact of combined 18F-FDG PET/CT in head and neck tumours. Br. J. Cancer 2005, 92, 1046–1050. [Google Scholar] [CrossRef] [Green Version]
  13. Paulino, A.C.; Koshy, M.; Howell, R.; Schuster, D.; Davis, L.W. Comparison of ct- and fdg-pet-defined gross tumor volume in intensity-modulated radiotherapy for head-and-neck cancer. Int. J. Radiat. Oncol. Biol. Phys. 2005, 61, 1385–1392. [Google Scholar] [CrossRef] [PubMed]
  14. Schöder, H.; Larson, S.M.; Yeung, H.W.D. Pet/ct in oncology: Integration into clinical management of lymphoma, melanoma, and gastrointestinal malignancies. J. Nucl. Med. 2004, 45, 72S–81S. [Google Scholar] [PubMed]
  15. Townsend, D.W. Basic science of pet and pet/ct. In PET Clin; Delbeke, D., Bailey, D.L., Townsend, D.W., Maisey, M.N., Eds.; Springer: London, UK, 2006; pp. 1–16. [Google Scholar]
  16. Faso, E.A.; Gambino, O.; Pirrone, R. Head–Neck Cancer Delineation. Appl. Sci. 2021, 11, 2721. [Google Scholar] [CrossRef]
  17. Tamal, M. A Phantom Study to Investigate Robustness and Reproducibility of Grey Level Co-Occurrence Matrix (GLCM)-Based Radiomics Features for PET. Appl. Sci. 2021, 11, 535. [Google Scholar] [CrossRef]
  18. Berthon, B.; Häggström, I.; Apte, A.; Beattie, B.J.; Kirov, A.S.; Humm, J.L.; Marshall, C.; Spezi, E.; Larsson, A.; Schmidtlein, C.R. Petstep: Generation of synthetic pet lesions for fast evaluation of segmentation methods. Phys. Med. 2015, 31, 969–980. [Google Scholar] [CrossRef] [Green Version]
  19. Biehl, K.J.; Kong, F.-M.; Dehdashti, F.; Jin, J.-Y.; Mutic, S.; El Naqa, I.; Siegel, B.A.; Bradley, J.D. 18f-fdg pet definition of gross tumor volume for radiotherapy of non–small cell lung cancer: Is a single standardized uptake value threshold approach appropriate? J. Nucl. Med. 2006, 47, 1808–1812. [Google Scholar]
  20. Drever, L.; Robinson, D.M.; McEwan, A.; Roa, W. A local contrast based approach to threshold segmentation for pet target volume delineation. Med. Phys. 2006, 33, 1583–1594. [Google Scholar] [CrossRef]
  21. Hatt, M.; Laurent, B.; Ouahabi, A.; Fayad, H.; Tan, S.; Li, L.; Lu, W.; Jaouen, V.; Tauber, C.; Czakon, J.; et al. The first miccai challenge on pet tumor segmentation. Med. Image Anal. 2018, 44, 177–195. [Google Scholar] [CrossRef] [Green Version]
  22. Hatt, M.; Cheze le Rest, C.; Turzo, A.; Roux, C.; Visvikis, D. A fuzzy locally adaptive bayesian segmentation approach for volume determination in pet. IEEE Trans. Med. Imaging 2009, 28, 881–893. [Google Scholar] [CrossRef] [Green Version]
  23. Jentzen, W.; Freudenberg, L.; Eising, E.G.; Heinze, M.; Brandau, W.; Bockisch, A. Segmentation of pet volumes by iterative image thresholding. J. Nucl. Med. 2007, 48, 108–114. [Google Scholar] [PubMed]
  24. Schinagl, D.A.X.; Vogel, W.V.; Hoffmann, A.L.; Van Dalen, J.A.; Oyen, W.J.; Kaanders, J.H.A.M. Comparison of five segmentation tools for 18f-fluoro-deoxy-glucose–positron emission tomography–based target volume definition in head and neck cancer. Int. J. Radiat. Oncol. Biol. Phys. 2007, 69, 1282–1289. [Google Scholar] [CrossRef] [PubMed]
  25. Vees, H.; Senthamizhchelvan, S.; Miralbell, R.; Weber, D.C.; Ratib, O.; Zaidi, H. Assessment of various strategies for 18f-fet pet-guided delineation of target volumes in high-grade glioma patients. Eur. J. Nucl. Med. Mol. Imaging 2009, 36, 182–193. [Google Scholar] [CrossRef] [PubMed]
  26. Gu, Y.; Kumar, V.; Hall, L.; Goldgof, D.; Li, C.; Korn, R.; Bendtsen, C.; Velazquez, E.; Dekker, A.; Aerts, H.; et al. Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach. Pattern Recognit. 2013, 46, 692–702. [Google Scholar] [CrossRef] [Green Version]
  27. Gu, Y.; Feng, Y.; Sun, J.; Zhang, N.; Lin, W.; Sa, Y.; Wang, P. Automatic lung tumor segmentation on pet/ct images using fuzzy markov random field model. Comput. Math. Methods Med. 2014, 2014, 401201. [Google Scholar] [CrossRef] [PubMed]
  28. Ju, W.; Xiang, D.; Wang, L.; Kopriva, I.; Chen, X. Random walk and graph cut for co-segmentation of lung tumor on pet-ct images. IEEE Trans. Image Process. 2015, 24, 5854–5867. [Google Scholar] [CrossRef] [Green Version]
  29. Preethi, S.; Aishwarya, P. An efficient wavelet-based image fusion for brain tumor detection and segmentation over pet and mri image. Multimed. Tools. Appl. 2021, 80, 14789–14806. [Google Scholar] [CrossRef]
  30. Rubinstein, E.; Salhov, M.; Nidam-Leshem, M.; White, V.; Golan, S.; Baniel, J.; Bernstein, H.; Groshar, D.; Averbuch, A. Unsupervised tumor detection in dynamic pet/ct imaging of the prostate. Med. Image Anal. 2019, 55, 27–40. [Google Scholar] [CrossRef]
  31. Baba, S.; Isoda, T.; Maruoka, Y.; Kitamura, Y.; Sasaki, M.; Yoshida, T.; Honda, H. Diagnostic and prognostic value of pretreatment suv in 18F-FDG/PET in breast cancer: Comparison with apparent diffusion coefficient from diffusion-weighted mr imaging. J. Nucl. Med. 2014, 55, 736–742. [Google Scholar] [CrossRef] [Green Version]
  32. Nestle, U.; Kremp, S.; Schaefer-Schuler, A.; Sebastian-Welsch, C.; Hellwig, D.; Rübe, C.; Kirsch, C.-M. Comparison of different methods for delineation of 18F-FDG PET–positive tissue for target volume definition in radiotherapy of patients with non–small cell lung cancer. J. Nucl. Med. 2005, 46, 1342–1348. [Google Scholar]
  33. Schaefer, A.; Kremp, S.; Hellwig, D.; Rübe, C.; Kirsch, C.-M.; Nestle, U. A contrast-oriented algorithm for FDG-PET-based delineation of tumour volumes for the radiotherapy of lung cancer: Derivation from phantom measurements and validation in patient data. Eur. J. Nucl. Med. Mol. Imaging 2008, 35, 1989–1999. [Google Scholar] [CrossRef] [PubMed]
  34. Matheoud, R.; Della Monica, P.; Secco, C.; Loi, G.; Krengli, M.; Inglese, E.; Brambilla, M. Influence of different contributions of scatter and attenuation on the threshold values in contrast-based algorithms for volume segmentation. Phys. Med. 2011, 27, 44–51. [Google Scholar] [CrossRef] [PubMed]
  35. Riegel, A.C.; Bucci, M.K.; Mawlawi, O.R.; Johnson, V.; Ahmad, M.; Sun, X.; Luo, D.; Chandler, A.G.; Pan, T. Target definition of moving lung tumors in positron emission tomography: Correlation of optimal activity concentration thresholds with object size, motion extent, and source-to-background ratio. Med. Phys. 2010, 37, 1742–1752. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Lopes, N.V.; Couto, P.A.M.; Bustince, H.; Melo-Pinto, P. Automatic histogram threshold using fuzzy measures. IEEE Trans. Image Process. 2010, 19, 199–204. [Google Scholar] [CrossRef] [PubMed]
  37. Bagci, U.; Yao, J.; Caban, J.; Turkbey, E.; Aras, O.; Mollura, D.J. A graph-theoretic approach for segmentation of PET images. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; Volume 1, pp. 8479–8482. [Google Scholar]
  38. Belhassen, S.; Zaidi, H. A novel fuzzy C-means algorithm for unsupervised heterogeneous tumor quantification in PET. Med. Phys. 2010, 37, 1309–1324. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Dunn, J.C. A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J. Cybern. 1973, 32–57. [Google Scholar] [CrossRef]
  40. Foster, B.; Bagci, U.; Luna, B.; Dey, B.; Bishai, W.; Jain, S.; Xu, Z.; Mollura, D.J. Robust segmentation and accurate target definition for positron emission tomography images using Affinity Propagation. In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013; Volume 1, pp. 1461–1464. [Google Scholar]
  41. Ding, Y.; Gong, L.; Zhang, M.; Li, C.; Qin, Z. A multi-path adaptive fusion network for multimodal brain tumor segmentation. Neurocomputing 2020, 412, 19–30. [Google Scholar] [CrossRef]
  42. Schwyzer, M.; Ferraro, D.; Muehlematter, U.; Curioni-Fontecedro, I.; Huellner, M.; Schulthess, G.; Kaufmann, P.; Burger, I.; Messerli, M. Automated detection of lung cancer at ultralow dose pet/ct by deep neural networks—Initial results. Lung Cancer 2018, 126, 170–173. [Google Scholar] [CrossRef]
  43. Zhang, R.; Cheng, C.; Zhao, X.; Li, X. Multiscale mask r-cnn-based lung tumor detection using pet imaging. Mol. Imaging 2019, 18, 1–8. [Google Scholar] [CrossRef] [Green Version]
  44. Melo-Pinto, P.; Couto, P.; Bustince, H.; Barrenechea, E.; Pagola, M.; Fernandez, J. Image segmentation using atanassov’s intuitionistic fuzzy sets. Expert. Syst. Appl. 2013, 40, 15–26. [Google Scholar] [CrossRef]
  45. Day, E.; Betler, J.; Parda, D.; Reitz, B.; Kirichenko, A.; Mohammadi, S.; Miften, M. A region growing method for tumor volume segmentation on pet images for rectal and anal cancer patients. Med. Phys. 2009, 36, 4349–4358. [Google Scholar] [CrossRef] [PubMed]
  46. Erdi, Y.E.; Mawlawi, O.; Larson, S.M.; Imbriaco, M.; Yeung, H.; Finn, R.; Humm, J.L. Segmentation of lung lesion volume by adaptive positron emission tomography image thresholding. Cancer 1997, 80, 2505–2509. [Google Scholar] [CrossRef]
  47. Hatt, M.; Cheze Le Rest, C.; Albarghach, N.; Pradier, O.; Visvikis, D. Pet functional volume delineation: A robustness and repeatability study. Eur. J. Nucl. Med. Mol. Imaging 2011, 38, 663–672. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Wanet, M.; Lee, J.A.; Weynand, B.; De Bast, M.; Poncelet, A.; Lacroix, V.; Coche, E.; Grégoire, V.; Geets, X. Gradient-based delineation of the primary gtv on fdg-pet in non-small cell lung cancer: A comparison with threshold-based approaches, ct and surgical specimens. Radiother. Oncol. 2011, 98, 117–125. [Google Scholar] [CrossRef]
  49. Yu, W.; Fu, X.-L.L.; Zhang, Y.-J.J.; Xiang, J.-Q.Q.; Shen, L.; Jiang, G.-L.L.; Chang, J.Y. Gtv spatial conformity between different delineation methods by 18fdg pet/ct and pathology in esophageal cancer. Radiother. Oncol. 2009, 93, 441–446. [Google Scholar] [CrossRef]
  50. Mohan, D.; Ulagamuthalvi, V.; Joseph, N. Performance Comparison of Brain Tumor Segmentation Algorithms. In Advances in Computational Intelligence and Communication Technology; Lecture Notes in Networks and Systems; Springer: Singapore, 2022; Volume 399, pp. 243–249. [Google Scholar]
  51. Zhou, S.; Xu, Z. Automatic grayscale image segmentation based on affinity propagation clustering. Pattern Anal. Appl. 2020, 23, 331–348. [Google Scholar] [CrossRef]
  52. Bal, A.; Banerjee, M.; Chakrabarti, A.; Sharma, P. MRI Brain Tumor Segmentation and Analysis using Rough-Fuzzy C-Means and Shape Based Properties. J. King Saud Univ. Sci. 2022, 34, 115–133. [Google Scholar] [CrossRef]
Figure 1. Computational process.
Figure 2. Evolution of homogeneity through the iterations in four examples of PET images.
Figure 3. Examples of original images, the A-IFSs result, and the ground truth.
Figure 4. Examples of images (a–d) from different patients.
Figure 5. (a–c) Examples of original images; (d–f) corresponding experts' delineation.
Figure 6. Algorithm comparison with IoU.
Figure 7. Algorithm comparison with pixel accuracy.
Table 1. Fixed threshold IoU results.

T Max   PET 1     PET 2     PET 3     PET 4     ...   Average
30%     0.33777   0.82292   0.21323   0.72679   ...   0.29735
32%     0.39137   0.87486   0.24328   0.76301   ...   0.32694
34%     0.47370   0.92075   0.2837    0.8021    ...   0.36072
36%     0.57546   0.96107   0.33012   0.84333   ...   0.40531
38%     0.65011   0.99367   0.38675   0.87995   ...   0.46184
40%     0.72352   0.92911   0.45873   0.93108   ...   0.52214
42%     0.77205   0.89241   0.49902   0.97316   ...   0.56530
44%     0.79987   0.84684   0.53415   0.98113   ...   0.60077
46%     0.82646   0.80127   0.57098   0.93904   ...   0.63505
48%     0.86802   0.76456   0.60362   0.88824   ...   0.66256
50%     0.90863   0.72911   0.63573   0.84615   ...   0.68626
52%     0.95322   0.67975   0.66405   0.80697   ...   0.69683
54%     0.99361   0.64684   0.68339   0.76633   ...   0.70175
56%     0.96299   0.59494   0.70585   0.73295   ...   0.70384
58%     0.91231   0.54937   0.73068   0.69231   ...   0.70475
60%     0.86323   0.47722   0.75642   0.64586   ...   0.70115
Table 2. IoU algorithm comparison results (number of images: 40).

Method    Min       Average   Max
t40%      0.16199   0.52214   0.96322
t50%      0.28319   0.68626   0.98023
t58%      0.31577   0.70475   0.98688
K-Means   0.04167   0.65334   0.98688
FCM       0.02005   0.46292   0.98586
AP        0.28319   0.69397   1
A-IFSs    0.09229   0.76492   1
Table 3. Pixel accuracy algorithm comparison results (number of images: 40).

Method    Min       Average   Max
t40%      0.97498   0.99223   0.99988
t50%      0.98842   0.99693   0.99995
t58%      0.99693   0.99777   0.99997
K-Means   0.99092   0.99732   0.99997
FCM       0.86800   0.97388   0.99996
AP        0.98847   0.99696   1
A-IFSs    0.99235   0.99843   1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
