Advance in CT Imaging Using Deep Learning

A special issue of Tomography (ISSN 2379-139X).

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 8342

Special Issue Editor


Dr. Kenny H. Cha
Guest Editor
Division of Imaging, Diagnostics and Software Reliability, OSEL/CDRH/FDA, Silver Spring, MD 20993, USA
Interests: machine learning; deep learning; computer-aided diagnosis; radiomics; study design; performance assessment

Special Issue Information

Dear Colleagues,

Advances in deep learning have significantly changed the field of medical image analysis. A multitude of tasks, including segmentation, abnormality detection and localization, patient risk score calculation, and synthetic image generation, have benefited from the potential of deep learning for fast and accurate results. CT imaging, in particular, is an area where deep learning stands to make a significant impact. Within the diagnostic uses of CT, many different tasks have been attempted across the various anatomical areas, with research leading directly to clinical implementation. Novel medical devices incorporating deep learning into CT are emerging for clinical use, including devices that automatically perform organ and lesion segmentation, provide information to aid in finding and classifying images and diseases, and perform image reconstruction and denoising. This Special Issue will focus on research papers, perspectives, and reviews informing readers about advances in CT imaging using deep learning. We seek manuscripts that describe methods for image analysis and image generation using CT images. Methods for deep learning-based image denoising and reconstruction that yield diagnostically useful images while also showing potential improvements in clinical measures, such as time savings and dose reduction, are also welcome. Innovative methods that combine CT imaging with deep learning are encouraged, as are manuscript submissions describing the current state of the field and those offering promising directions for its future.

Dr. Kenny H. Cha
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Tomography is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • CT
  • deep learning
  • image reconstruction
  • denoising
  • computer-aided diagnosis
  • quantitative imaging

Published Papers (3 papers)

Research

24 pages, 8146 KiB  
Article
On the Simulation of Ultra-Sparse-View and Ultra-Low-Dose Computed Tomography with Maximum a Posteriori Reconstruction Using a Progressive Flow-Based Deep Generative Model
by Hisaichi Shibata, Shouhei Hanaoka, Yukihiro Nomura, Takahiro Nakao, Tomomi Takenaga, Naoto Hayashi and Osamu Abe
Tomography 2022, 8(5), 2129-2152; https://doi.org/10.3390/tomography8050179 - 24 Aug 2022
Cited by 3 | Viewed by 2189
Abstract
Ultra-sparse-view computed tomography (CT) algorithms can reduce radiation exposure for patients, but these algorithms lack an explicit cycle consistency loss minimization and an explicit log-likelihood maximization at test time. Here, we propose X2CT-FLOW for the maximum a posteriori (MAP) reconstruction of a three-dimensional (3D) chest CT image from a single or a few two-dimensional (2D) projection images using a progressive flow-based deep generative model, especially for ultra-low-dose protocols. The MAP reconstruction can simultaneously optimize the cycle consistency loss and the log-likelihood. We applied X2CT-FLOW to the reconstruction of 3D chest CT images from biplanar projection images without noise contamination (assuming a standard-dose protocol) and with strong noise contamination (assuming a simulated ultra-low-dose protocol). With the standard-dose protocol, the images reconstructed from 2D projection images showed good agreement with the 3D ground-truth CT images in terms of structural similarity (SSIM, 0.7675 on average), peak signal-to-noise ratio (PSNR, 25.89 dB on average), mean absolute error (MAE, 0.02364 on average), and normalized root mean square error (NRMSE, 0.05731 on average). With the ultra-low-dose protocol, the reconstructed images likewise showed good agreement with the 3D ground-truth CT images in terms of SSIM (0.7008 on average), PSNR (23.58 dB on average), MAE (0.02991 on average), and NRMSE (0.07349 on average).
(This article belongs to the Special Issue Advance in CT Imaging Using Deep Learning)
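
A rough illustration of the MAP idea described in this abstract: optimize a latent code under a flow-based prior while enforcing consistency with the observed projections. The sketch below is not the authors' X2CT-FLOW implementation; the tiny affine "flow", the linear projection operator, and all names and shapes are hypothetical stand-ins.

```python
import torch

torch.manual_seed(0)
D = 64                                   # flattened toy volume size

# Toy invertible affine "flow" standing in for a trained flow-based prior:
# x = flow_forward(z), with z drawn from a standard normal.
A = torch.eye(D) + 0.05 * torch.randn(D, D)

def flow_forward(z):
    return z @ A.T

def log_prior(z):
    return -0.5 * (z ** 2).sum()         # N(0, I) log-density up to a constant

# Toy linear projection operator standing in for biplanar X-ray projection.
P = torch.randn(8, D)
x_true = flow_forward(torch.randn(D))
y = x_true @ P.T                         # observed 2D projections

# MAP reconstruction: jointly maximize the log-prior and minimize the
# (cycle/data-)consistency loss by gradient descent in latent space.
z = torch.zeros(D, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
lam = 50.0                               # weight on the consistency term
for _ in range(2000):
    opt.zero_grad()
    consistency = ((flow_forward(z) @ P.T - y) ** 2).sum()
    loss = lam * consistency - log_prior(z)
    loss.backward()
    opt.step()

x_map = flow_forward(z).detach()
print(f"data misfit after optimization: {((x_map @ P.T - y) ** 2).sum().item():.4f}")
```

The structure carries over directly to a real model: flow_forward becomes the trained invertible network, log_prior its exact latent log-density, and P the physical projection geometry.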

14 pages, 1119 KiB  
Article
Verte-Box: A Novel Convolutional Neural Network for Fully Automatic Segmentation of Vertebrae in CT Image
by Bing Li, Chuang Liu, Shaoyong Wu and Guangqing Li
Tomography 2022, 8(1), 45-58; https://doi.org/10.3390/tomography8010005 - 1 Jan 2022
Cited by 8 | Viewed by 2441
Abstract
Due to the complex shape of the vertebrae and a background containing substantial interfering information, it is difficult to accurately segment the vertebrae from a computed tomography (CT) volume manually. This paper proposes a convolutional neural network for vertebra segmentation, named Verte-Box. First, to enhance feature representation and suppress interfering information, a robust attention mechanism is placed in the network's central processing unit, comprising a channel attention module and a dual attention module. The channel attention module explores and emphasizes the interdependence between channel maps of low-level features. The dual attention module enhances features along the location and channel dimensions. Second, we add a multi-scale convolution block to the network, which makes full use of different combinations of receptive field sizes and significantly improves the network's perception of the shape and size of the vertebrae. In addition, we connect the rough segmentation prediction maps generated by each feature in the feature box to produce the final fine prediction result, so that the deeply supervised network can effectively capture vertebra information. We evaluated our method on the publicly available dataset of the CSI 2014 Vertebral Segmentation Challenge and achieved a mean Dice similarity coefficient of 92.18 ± 0.45%, an intersection over union of 87.29 ± 0.58%, and a 95% Hausdorff distance of 7.7107 ± 0.5958, outperforming other algorithms.
(This article belongs to the Special Issue Advance in CT Imaging Using Deep Learning)
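
For readers who want to reproduce the metrics quoted in this abstract (Dice similarity coefficient, intersection over union, and 95% Hausdorff distance), a minimal NumPy/SciPy sketch on binary masks follows; the cube-shaped toy masks are illustrative only, not data from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def hd95(pred, gt):
    # Surface voxels = mask minus its erosion.
    surf_p = pred & ~binary_erosion(pred)
    surf_g = gt & ~binary_erosion(gt)
    # Distance from each surface voxel to the nearest surface voxel of the
    # other mask; distances here are in voxel units (pass sampling= to
    # distance_transform_edt for anisotropic voxel spacing).
    d_to_g = distance_transform_edt(~surf_g)[surf_p]
    d_to_p = distance_transform_edt(~surf_p)[surf_g]
    return np.percentile(np.hstack([d_to_g, d_to_p]), 95)

# Toy masks: two offset cubes.
pred = np.zeros((64, 64, 64), dtype=bool)
gt = np.zeros_like(pred)
pred[20:44, 20:44, 20:44] = True
gt[22:46, 22:46, 22:46] = True
print(f"Dice={dice(pred, gt):.4f}  IoU={iou(pred, gt):.4f}  HD95={hd95(pred, gt):.2f}")
```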

18 pages, 5133 KiB  
Article
Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction
by Chang Sun, Yitong Liu and Hongwen Yang
Tomography 2021, 7(4), 932-949; https://doi.org/10.3390/tomography7040077 - 9 Dec 2021
Cited by 3 | Viewed by 2948
Abstract
Sparse-view CT reconstruction is a fundamental task in computed tomography aimed at overcoming undesired artifacts and recovering the details of textural structure in degraded CT images. Recently, many deep learning-based networks have achieved desirable performance compared with iterative reconstruction algorithms. However, the performance of these methods may deteriorate severely when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so simply extending the training dataset with multiple degraded images is not effective either. Although training a separate model for each degradation level can mitigate this problem, it involves extensive parameter storage. Accordingly, in this paper, we focus on sparse-view CT reconstruction across multiple degradation levels. We propose a single degradation-aware deep learning framework that predicts clear CT images by modeling the disparity of degradation in both the frequency domain and the image domain. The dual-domain procedure can perform operations tailored to each degradation level during frequency component recovery and spatial detail reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual results demonstrate that our method outperforms classical deep learning-based reconstruction methods in terms of effectiveness and scalability.
(This article belongs to the Special Issue Advance in CT Imaging Using Deep Learning)
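
A structural sketch of the dual-domain, degradation-aware idea described in this abstract, not the authors' network: a frequency-domain branch operates on the slice's FFT and an image-domain branch refines the result, with both branches conditioned on a scalar degradation level. The two small conv nets and all shapes are hypothetical.

```python
import torch
import torch.nn as nn

class DualDomainNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Frequency branch sees the FFT's real/imaginary parts plus a map of
        # the degradation level, and predicts recovered real/imaginary parts.
        self.freq_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))
        # Image branch refines the inverse-FFT result plus the level map.
        self.img_net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x, level):
        lvl = torch.full_like(x, level)               # degradation level as a map
        spec = torch.fft.fft2(x.squeeze(1))           # frequency domain
        spec_in = torch.cat(
            [spec.real.unsqueeze(1), spec.imag.unsqueeze(1), lvl], dim=1)
        spec_out = self.freq_net(spec_in)             # frequency component recovery
        spec_c = torch.complex(spec_out[:, :1], spec_out[:, 1:]).squeeze(1)
        x_freq = torch.fft.ifft2(spec_c).real.unsqueeze(1)
        return self.img_net(torch.cat([x_freq, lvl], dim=1))  # spatial refinement

net = DualDomainNet()
sparse_view_slice = torch.randn(1, 1, 64, 64)         # toy degraded CT slice
out = net(sparse_view_slice, level=0.5)
print(out.shape)                                      # torch.Size([1, 1, 64, 64])
```

Conditioning both branches on the level map is what lets a single set of weights adapt its behavior across degradation levels instead of requiring one trained model per level.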
