Color as a High-Value Quantitative Tool for PET/CT Imaging
Abstract
1. Introduction
2. Materials and Methods
2.1. Colormap Properties
2.2. Method 1: Colormap Information Transform (CIT)
- Step 1: Channel Steps Determination
- Step 2: Determine Channel Mapping Shades
2.3. Method 2: Folding Colormap
- Rule 1—Axis of Grays crossing zero: The Black RGB (0,0,0) and White RGB (255,255,255) vertices must occupy 3D-inverted positions, i.e., if the cubic coordinates of Black are (a,a,a), then the coordinates of White must be (−a,−a,−a).
- Rule 2—Color change density symmetry: Vertices that share a common edge must differ by the inversion of exactly one color channel. The White vertex RGB (255,255,255), like every vertex, lies on three edges; for each edge starting at the White vertex, the paired vertex at its other end must be obtained by inverting R, G, or B, regardless of the final layout of the vertex coordinates, as shown on the cube of Figure 5c. The general folding colormap method generates a strip of RGB values that starts at one vertex point, P1 (i1,j1,k1), and ends at another vertex point, P2 (i2,j2,k2), by folding the surface of the cube.
- Rule 3—Continuity: The coordinates of consecutive strip RGB values must move by only one step along a single chosen axis (X, Y, or Z).
- Rule 4—Values separation: The strip of RGB coordinates must not cross itself at any point on the cube's surface (a minimal sketch that checks Rules 3 and 4 is given after this list).
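The two purely geometric constraints, Rule 3 (continuity) and Rule 4 (values separation), can be sanity-checked programmatically. The Python sketch below is an illustration only, not the authors' implementation: it builds a simple candidate strip of RGB triples along three edges of the RGB cube (black to red to yellow to white, a path chosen purely for demonstration, not the folded layout used in the paper) and verifies that every move is a single unit step along one axis and that the strip never revisits a coordinate. Function names and the example path are assumptions for illustration.

```python
# Minimal sketch (assumption: not the authors' code). Builds an example strip
# of RGB triples along cube edges and checks Rule 3 and Rule 4 from Section 2.3.

def build_edge_strip():
    """Walk black -> red -> yellow -> white, one unit step per move (example path only)."""
    strip = [(0, 0, 0)]
    for _ in range(255):                      # raise R: black -> red
        r, g, b = strip[-1]
        strip.append((r + 1, g, b))
    for _ in range(255):                      # raise G: red -> yellow
        r, g, b = strip[-1]
        strip.append((r, g + 1, b))
    for _ in range(255):                      # raise B: yellow -> white
        r, g, b = strip[-1]
        strip.append((r, g, b + 1))
    return strip

def check_continuity(strip):
    """Rule 3: consecutive entries differ by exactly 1 in exactly one channel."""
    for p, q in zip(strip, strip[1:]):
        diffs = [abs(a - b) for a, b in zip(p, q)]
        if sorted(diffs) != [0, 0, 1]:
            return False
    return True

def check_separation(strip):
    """Rule 4: the strip never revisits (crosses) the same RGB coordinate."""
    return len(set(strip)) == len(strip)

if __name__ == "__main__":
    strip = build_edge_strip()
    print(len(strip), check_continuity(strip), check_separation(strip))
    # Expected output: 766 True True
```

Any strip produced by folding the cube surface as described in Section 2.3 should pass these same two checks; Rules 1 and 2 constrain the placement of the Black and White vertices and are not encoded in this sketch.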
2.4. Pilot Study Database
2.5. Implementation Tools
3. Results
3.1. CIT Application
3.2. Colormap Folding Application
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Townsend, D.W.; Beyer, T.; Blodgett, T.M. PET/CT scanners: A hardware approach to image fusion. Semin. Nucl. Med. 2003, 33, 193–204.
- Torigian, D.A.; Huang, S.S.; Houseni, M.; Alavi, A. Functional Imaging of Cancer with Emphasis on Molecular Techniques. CA Cancer J. Clin. 2007, 57, 206–224.
- Jadvar, H.; Parker, J.A. Clinical PET and PET/CT; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
- Hess, S.; Blomberg, B.A.; Zhu, H.J.; Høilund-Carlsen, P.F.; Alavi, A. The pivotal role of FDG-PET/CT in modern medicine. Acad. Radiol. 2014, 21, 232–249.
- Freeman, L.M.; Blaufox, M.D. (Eds.) Brain Imaging Update. Semin. Nucl. Med. 2012, 42, 353.
- Wang, T.; Xing, H.; Wang, S.; Liu, L.; Li, F.; Jing, H. Deep learning-based automated segmentation of eight brain anatomical regions using head CT images in PET/CT. BMC Med. Imaging 2022, 22, 99.
- Tremblay, P.; Dick, A.S. Broca and Wernicke are dead, or moving past the classic model of language neurobiology. Brain Lang. 2016, 162, 60–71.
- Hofman, M.S.; Hicks, R.J. How We Read Oncologic FDG PET/CT. Cancer Imaging 2016, 16, 35.
- Sadaghiani, M.S.; Rowe, S.P.; Sheikhbahaei, S. Applications of artificial intelligence in oncologic 18F-FDG PET/CT imaging: A systematic review. Ann. Transl. Med. 2021, 9, 823.
- Foster, B.; Bagci, U.; Mansoor, A.; Xu, Z.; Mollura, D.J. A review on segmentation of positron emission tomography images. Comput. Biol. Med. 2014, 50, 76–96.
- Lyra, V.; Parissis, J.; Kallergi, M.; Rizos, E.; Filippatos, G.; Kremastinos, D.; Chatziioannou, S. 18F-FDG PET/CT brain glucose metabolism as a marker of different types of depression comorbidity in chronic heart failure patients with impaired systolic function. Eur. J. Heart Fail. 2020, 22, 2138–2146.
- Porter, E.; Fuentes, P.; Siddiqui, Z.; Thompson, A.; Levitin, R.; Solis, D.; Myziuk, N.; Guerrero, T. Hippocampus segmentation on noncontrast CT using deep learning. Med. Phys. 2020, 47, 2950–2961.
- Carlson, M.L.; DiGiacomo, P.S.; Fan, A.P.; Goubran, M.; Khalighi, M.M.; Chao, S.Z.; Vasanawala, M.; Wintermark, M.; Mormino, E.; Zaharchuk, G.; et al. Simultaneous FDG-PET/MRI detects hippocampal subfield metabolic differences in AD/MCI. Sci. Rep. 2020, 10, 12064.
- Zhang, Y.; Zhang, D.; Chen, Z.; Wang, H.; Miao, W.; Zhu, W. Clinical evaluation of a novel atlas-based PET/CT brain image segmentation and quantification method for epilepsy. Quant. Imaging Med. Surg. 2022, 12, 4538–4548.
- Alongi, P.; Laudicella, R.; Panasiti, F.; Stefano, A.; Comelli, A.; Giaccone, P.; Arnone, A.; Minutoli, F.; Quartuccio, N.; Cupidi, C.; et al. Radiomics Analysis of Brain [18F]FDG PET/CT to Predict Alzheimer’s Disease in Patients with Amyloid PET Positivity: A Preliminary Report on the Application of SPM Cortical Segmentation, Pyradiomics, and Machine-Learning Analysis. Diagnostics 2022, 12, 933.
- Bhateja, V.; Srivastava, A.; Moin, A.; Lay-Ekuakille, A. Multispectral medical image fusion scheme based on hybrid contourlet and shearlet transform domains. Rev. Sci. Instrum. 2018, 89, 084301.
- Jun, S.; Park, J.G.; Seo, Y. Accurate FDG PET tumor segmentation using the peritumoral halo layer method: A study in patients with esophageal squamous cell carcinoma. Cancer Imaging 2018, 18, 35.
- NEMA. Digital Imaging and Communications in Medicine (DICOM), Part 14: Grayscale Standard Display Function, Vol. PS 3.14; National Electrical Manufacturers Association: Washington, DC, USA, 2001.
- Badano, A.; Revie, C.; Casertano, A.; Cheng, W.C.; Green, P.; Kimpe, T.; Krupinski, E.; Sisson, C.; Skrøvseth, S.; Treanor, D. Consistency and Standardization of Color in Medical Imaging: A Consensus Report. J. Digit. Imaging 2015, 28, 41–52.
- International Color Consortium. Visualization of Medical Content on Color Display Systems. White Paper #44. April 2016, Revised October 2023. Available online: https://color.org/whitepapers/ICC_White_Paper44_Visualization_of_colour_on_medical_displays-v2.pdf (accessed on 20 January 2025).
- Zhang, Z.; Shang, X.; Li, G.; Wang, G. Just Noticeable Difference Model for Images with Color Sensitivity. Sensors 2023, 23, 2634.
- Xue, Z.; Antani, S.; Long, L.R.; Demner-Fushman, D.; Thoma, G.R. Window Classification of Brain CT Images in Biomedical Articles. AMIA Annu. Symp. Proc. 2012, 2012, 1023–1029.
- International Telecommunication Union. ITU-R Recommendation BT.709. Available online: https://www.itu.int/rec/R-REC-BT.709-6-201506-I/en (accessed on 20 January 2025).
- Yarlagadda, R.; Hershey, J.E. Signal Processing, General. In Encyclopedia of Physical Science and Technology, 3rd ed.; Meyers, R.A., Ed.; Academic Press: Cambridge, MA, USA, 2003; pp. 761–779.
- Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine Learning for Medical Imaging. Radiographics 2017, 37, 505–515.
- DenOtter, T.D.; Schubert, J. Hounsfield Unit. 2023 Mar 6. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2024.
- Mishra, D.; Ghimire, R.K.; Chand, R.B.; Thapa, N.; Panta, O.B. Evaluation of Hounsfield Unit in adult brain structures by CT. J. Inst. Med. 2016, 38, 70–75.
- Samala, R.K.; Drukker, K.; Shukla-Dave, A.; Chan, H.P.; Sahiner, B.; Petrick, N.; Greenspan, H.; Mahmood, U.; Summers, R.M.; Tourassi, G.; et al. AI and machine learning in medical imaging: Key points from development to translation. BJR/Artif. Intell. 2024, 1, ubae006.
- Mall, P.K.; Singh, P.K.; Srivastav, S.; Narayan, V.; Paprzycki, M.; Jaworska, T.; Ganzha, M. A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities. Healthc. Anal. 2023, 4, 100216.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).