Article

Sparse-View Artifact Correction of High-Pixel-Number Synchrotron Radiation CT

1 Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(8), 3397; https://doi.org/10.3390/app14083397
Submission received: 19 February 2024 / Revised: 9 April 2024 / Accepted: 12 April 2024 / Published: 17 April 2024

Abstract

High-pixel-number synchrotron radiation computed tomography (CT) has the advantages of high sensitivity, high resolution, and a large field of view. It has been widely used in biomedicine, cultural heritage research, non-destructive testing, and other fields. According to the Nyquist sampling theorem, increasing the number of detector pixels per row requires more CT projections, which lengthens the CT scan time and increases radiation damage. Sparse-view CT can significantly reduce radiation damage and improve the projection data acquisition speed. However, the sparse projection data are insufficient, and the reconstructed slices show aliasing artifacts. Currently, aliasing artifact correction is mostly applied to medical CT images, which have a small number of pixels (mainly 512 × 512). This paper presents a deep learning-based aliasing artifact correction algorithm for synchrotron radiation CT with a high pixel number (1728 × 1728 pixels). The method crops high-pixel-number CT images with aliasing artifacts into patches with overlapping features. During network training, a convolutional neural network is used to enhance the details of the patches, after which the patches are reintegrated into a new CT slice. The network parameters are then updated so that the new CT slice closely approximates the full-view slice. To align with practical application requirements, the neural network is trained using only three samples and applied successfully to untrained samples for aliasing artifact correction. Comparative analysis with typical deep learning aliasing artifact correction algorithms demonstrates the superior ability of our method to correct aliasing artifacts while preserving image details more effectively. Furthermore, the effect of aliasing artifact correction at varying levels of projection sparsity is investigated, revealing a positive correlation between image quality after deep learning processing and the number of projections. However, the trade-off between rapid experimentation and artifact correction remains a critical consideration.

1. Introduction

Sparse-view CT is an imaging technique that collects a small number of projection images within the scanning angle range and then reconstructs the CT slices [1]. It can effectively reduce radiation damage, CT scanning time, and data storage volume in high-pixel-number synchrotron radiation CT. However, the projection data collected by sparse-view CT do not satisfy the Nyquist sampling theorem. Reconstruction is therefore an ill-posed inverse problem, and aliasing artifacts appear in the reconstructed slices. Figure 1 displays the aliasing artifacts in the mouse brain data used in this study under different sparse views. Figure 1a–e shows the slice images reconstructed using filtered back projection (FBP) at 100 views, 200 views, 300 views, 400 views, and 1200 views (label), respectively. The images (a1–e1) and (a2–e2) are magnified views of the blue and red boxed areas in (a–e), respectively. The magnified images clearly reveal numerous streak artifacts and noise.
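The effect can be reproduced with a simple parallel-beam simulation. The following sketch is not from the paper; it uses scikit-image's radon/iradon on a test phantom as a stand-in for the synchrotron geometry to show how reducing the number of views introduces streak artifacts in the FBP reconstruction.

```python
# Minimal sketch: sparse-view FBP reconstruction produces streak (aliasing) artifacts.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()  # 400 x 400 test image

def fbp_reconstruction(image, n_views):
    """Simulate n_views projections over 180 degrees and reconstruct with FBP (ramp filter)."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)
    return iradon(sinogram, theta=angles)

full_view = fbp_reconstruction(phantom, 1200)   # dense sampling: nearly artifact-free
sparse_view = fbp_reconstruction(phantom, 100)  # sparse sampling: strong streak artifacts
```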
To correct aliasing artifacts, researchers have proposed some iterative correction methods based on compressed sensing [2,3,4,5]. These methods introduce constraints to the iterative reconstruction process, greatly improving the quality of the reconstructed image. However, to solve for the optimal values of the objective function iteratively, the iterative approach involves constructing a mathematical model that requires multiple projections and back projections. This process demands substantial computational resources, rendering it impractical for high-pixel-number CT images.
The rapid development of deep learning in recent years in the fields of noise reduction [6,7], segmentation [8,9], and super-resolution [10,11,12] has also provided new ideas for the correction of aliasing artifacts in sparse-view medical CT (512 × 512 pixels). Deep learning methods for correcting aliasing artifacts in sparse-view CT fall broadly into three categories: sinogram domain methods [13,14], slice domain methods [15,16,17,18,19,20,21], and methods that fuse the sinogram and slice domains [22,23,24,25,26,27]. Lee et al. [13] proposed segmenting sinograms into patches and then using a convolutional network to supplement the missing data in the sinogram. Okamoto et al. [14] suggested employing deep learning to enlarge a sinogram vertically and interpolate the data to create more projections; their proposed band patch has the same horizontal size as the sinogram and is cropped only vertically. Within the slice domain, Jin et al. [15] proposed the FBPConvNet network to learn the residuals between low-quality inputs and labels. Lee et al. [17] combined the image wavelet transform with a convolutional network, using the wavelet transform in place of the pooling operation of U-Net [28]. Zhang et al. [19] proposed DD-Net, a slice domain network model with an encoder–decoder structure. In the fusion domain, Lee et al. [22] suggested using a two-dimensional wavelet transform instead of the down-sampling approach of conventional networks, employing two U-Net models in the sinogram and image domains. Dong et al. [27] proposed supplementing missing data with linear interpolation in the sinogram and using neural networks to correct artifacts in the reconstructed images within the slice domain.
In synchrotron CT imaging, a 2 k × 2 k pixel detector is typically used [29,30], so both the acquired projection data and the reconstructed CT image are 2 k × 2 k pixels in size. However, the increase in the number of pixels means that more projection data are needed to satisfy the Nyquist sampling theorem, which makes sparse projection more advantageous but also increases the difficulty of correcting the resulting artifacts. In this paper, we present an artifact correction algorithm for high-pixel-number sparse-view CT. We performed CT imaging using a synchrotron radiation light source and acquired 1200 projections over a 180° angular range, using their CT reconstruction slices as labels. We obtained sparsely sampled projections and used their reconstructed CT slices as input to the neural network. This image post-processing strategy effectively conserves computational resources. We cropped the input image into small patches and computed the loss between these patches and the corresponding labeled patches to optimize the network parameters. Additionally, we proposed a global constraint: the output patches of the network are reintegrated into full CT slices, which are used to compute the structural similarity with the labeled CT slices to further optimize the network parameters. Experiments demonstrate that, compared with typical deep learning methods (FBPConvNet [15], DD-Net [19]), our proposed artifact correction method shows excellent results in both qualitative evaluation and quantitative metrics. We believe that this artifact correction method for high-pixel-number CT can be applied to other fields as well.
The rest of this paper is organized as follows. Section 2 presents the details of the proposed method, Section 3 gives the experimental and research results, and Section 4 summarizes the paper.

2. Methods and Data

2.1. Method

The method proposed in this paper is shown in Figure 2. First, the sparse-view slices are reconstructed with filtered back projection (FBP). The slices with aliasing artifacts, of 1728 × 1728 pixels, are cropped with overlap into patches of 480 × 480 pixels. Cropping with pixel overlap avoids pixel discontinuities when the patches are later stitched back together. These patches are then fed into the neural network for training, with the corresponding artifact-free patches serving as labels. Once all the patches of a slice have been processed, they are reintegrated into a complete CT slice, and we further require this complete CT slice to be as close as possible to the full-view CT slice in terms of feature structure. In other words, we add a global loss term, Loss_global, at this stage (see Figure 2).
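The overlap-crop and re-stitch step can be illustrated as follows. The 480 × 480 patch size on a 1728 × 1728 slice is taken from the text; the specific stride (416 pixels, giving a 4 × 4 grid with 64-pixel overlap) and the averaging of overlapped pixels are assumptions made for this sketch, not details given by the authors.

```python
import numpy as np

def crop_with_overlap(slice_img, patch=480, stride=416):
    """Crop a 1728x1728 slice into overlapping 480x480 patches (4x4 grid with this stride)."""
    h, w = slice_img.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(slice_img[y:y + patch, x:x + patch])
            coords.append((y, x))
    return patches, coords

def reintegrate(patches, coords, shape=(1728, 1728)):
    """Re-assemble patches into a full slice, averaging pixels in overlapped regions."""
    out = np.zeros(shape, dtype=np.float32)
    weight = np.zeros(shape, dtype=np.float32)
    for p, (y, x) in zip(patches, coords):
        out[y:y + p.shape[0], x:x + p.shape[1]] += p
        weight[y:y + p.shape[0], x:x + p.shape[1]] += 1.0
    return out / np.maximum(weight, 1.0)
```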
Because it incorporates an attention mechanism in its network architecture (see Figure 3), the Attention U-Net [31] model is better able to extract detailed image features. Therefore, in this paper, the Attention U-Net is employed as the network model for artifact correction. In the left part of the network, two convolutions are first applied at each level to extract image features. After four max-pooling down-sampling steps, the receptive field of the network increases progressively. The right part of the network expands the up-sampled feature maps to restore the image size. The Attention-Gate module allows the model to focus on the target information, reduce the weight of irrelevant information, and improve the network's ability to retain details. After the Attention-Gate module, the gated features are concatenated with the up-sampled features along the feature channel, and the image resolution is recovered. The network model is implemented using the PyTorch framework, running on an Intel Xeon Gold 6240 CPU and an Nvidia V100 GPU. The Adam optimizer was used during training, with a total of 60 epochs and an initial learning rate of 0.0001. Every 10 epochs, the learning rate was reduced to 0.8 times its previous value. The images in this dataset are all of 8-bit depth. Before training the network, we normalize their gray values by dividing by 255, scaling them to the range [0, 1].
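For illustration, the following sketch shows an additive attention gate in the spirit of Attention U-Net [31] (a hedged re-implementation, not the authors' code; it assumes the gating signal and the skip-connection features already share the same spatial size), together with the quoted optimizer settings expressed with PyTorch's Adam and StepLR.

```python
from torch import nn, optim

class AttentionGate(nn.Module):
    """Additive attention gate (after Oktay et al. [31]): the gating signal g re-weights
    the skip-connection features x via attention coefficients in [0, 1]."""
    def __init__(self, ch_g, ch_x, ch_int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(ch_g, ch_int, kernel_size=1), nn.BatchNorm2d(ch_int))
        self.w_x = nn.Sequential(nn.Conv2d(ch_x, ch_int, kernel_size=1), nn.BatchNorm2d(ch_int))
        self.psi = nn.Sequential(nn.Conv2d(ch_int, 1, kernel_size=1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # attention map, one value per pixel
        return x * alpha                                         # suppress irrelevant skip features

# Optimizer settings quoted above: Adam, initial lr 1e-4, multiplied by 0.8 every 10 epochs.
# `model` is a placeholder standing in for the full Attention U-Net.
model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)
```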
During training, the patch part of the loss uses the mean square error:

$$Loss_{local}(X, X') = \left\| X - X' \right\|^{2} \qquad (1)$$

where $X$ is the neural network's output patch and $X'$ is the corresponding label patch. In addition, the structural similarity index [32] integrates the luminance $l(Y, Y')$, contrast $c(Y, Y')$, and structure $s(Y, Y')$ of two images, and can be used to evaluate the similarity between $Y$ (the reintegrated CT slice) and the label image $Y'$ (the full-view CT slice):

$$l(Y, Y') = \frac{2 m_{Y} m_{Y'} + c_{1}}{m_{Y}^{2} + m_{Y'}^{2} + c_{1}} \qquad (2)$$

$$c(Y, Y') = \frac{2 \sigma_{Y} \sigma_{Y'} + c_{2}}{\sigma_{Y}^{2} + \sigma_{Y'}^{2} + c_{2}} \qquad (3)$$

$$s(Y, Y') = \frac{\sigma_{YY'} + c_{3}}{\sigma_{Y} \sigma_{Y'} + c_{3}} \qquad (4)$$

$$SSIM(Y, Y') = \frac{\left(2 m_{Y} m_{Y'} + c_{1}\right)\left(2 \sigma_{YY'} + c_{2}\right)}{\left(m_{Y}^{2} + m_{Y'}^{2} + c_{1}\right)\left(\sigma_{Y}^{2} + \sigma_{Y'}^{2} + c_{2}\right)} \qquad (5)$$

where $m$ denotes the mean, $\sigma$ the standard deviation, $\sigma_{YY'}$ the covariance, and $c_{1}$, $c_{2}$, $c_{3}$ are constants. We use $SSIM(Y, Y')$ to further constrain the direction of network optimization:

$$Loss_{global}(Y, Y') = \lambda \cdot \left( 1 - SSIM(Y, Y') \right) \qquad (6)$$

In this paper, $\lambda = 0.1$. In summary, the loss function for network training consists of two parts:

$$Loss = Loss_{local}(X, X') + Loss_{global}(Y, Y') \qquad (7)$$
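A compact sketch of this two-part loss is given below. The SSIM term follows Equations (2)–(5) with global image statistics (a sliding-window SSIM would be an equally valid implementation), and the constants c1 and c2 use common SSIM defaults for images scaled to [0, 1]; these defaults are our assumption, not values stated by the authors.

```python
import torch

def ssim_global(y, y_ref, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two slices with gray values in [0, 1] (Equation (5))."""
    m_y, m_r = y.mean(), y_ref.mean()
    s_y, s_r = y.std(), y_ref.std()
    s_yr = ((y - m_y) * (y_ref - m_r)).mean()                  # covariance term
    return ((2 * m_y * m_r + c1) * (2 * s_yr + c2)) / \
           ((m_y ** 2 + m_r ** 2 + c1) * (s_y ** 2 + s_r ** 2 + c2))

def total_loss(out_patches, label_patches, stitched_slice, full_view_slice, lam=0.1):
    loss_local = torch.mean((out_patches - label_patches) ** 2)               # Equation (1)
    loss_global = lam * (1.0 - ssim_global(stitched_slice, full_view_slice))  # Equation (6)
    return loss_local + loss_global                                           # Equation (7)
```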

2.2. Data Preparation and Image Quality Assessment

Synchrotron radiation CT is an ideal method for high-resolution 3D non-destructive imaging of mouse brains, and radiation damage can be further reduced by the sparse-view CT method. We imaged alcohol-dehydrated mouse brains under 12 keV monochromatic light generated by a double-crystal monochromator at the X-ray Imaging and Biomedical Application Beamline (BL13W1) of the Shanghai Synchrotron Radiation Facility (SSRF). The distance between the sample and the detector was 20 cm, and the pixel size of the detector was 6.5 μm × 6.5 μm. A 2× objective lens was used, so each pixel of the CT slice corresponds to 3.25 μm. In total, 1200 projections were collected over a 180° angular range, and their reconstructed slices were used as labels for network training. From all projections, 400 views (or 300, 200, or 100 views) were sampled at uniform intervals, and the reconstructed slices were used as input to the network. The artifacts are shown in Figure 1: sparse sampling causes the reconstructed slices to contain aliasing artifacts. It should be noted that the slices also contain ring artifacts. However, our focus is on the correction of aliasing artifacts; therefore, ring artifacts were not corrected prior to training but were treated as part of the sample structure. Furthermore, we used five mouse brain samples, of which three (2708 slices) were used for training, one (474 slices) was used as the validation set, and one (865 slices) was used as the test set. It is worth noting that since the pixel size of the image is 3.25 μm, the thickness of each layer in the reconstructed CT slices is also 3.25 μm. This implies that the feature differences between neighboring slices are very small, so the feature diversity in the dataset is limited. Such characteristics of synchrotron CT data pose a challenge for deep learning training.
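The uniform-interval sub-sampling described above amounts to keeping every k-th projection of the full 1200-view scan. The following sketch uses a small placeholder array in place of the real projection stack (the real projections are 1728 pixels wide).

```python
import numpy as np

full_projections = np.random.rand(1200, 64, 64).astype(np.float32)  # placeholder projection stack

def sparse_sample(projections, n_views):
    """Keep every k-th projection so that n_views remain, uniformly spaced in angle."""
    step = len(projections) // n_views   # 1200/400 = 3, 1200/100 = 12, ...
    return projections[::step]

proj_400 = sparse_sample(full_projections, 400)   # 1/3 of the projections
proj_100 = sparse_sample(full_projections, 100)   # 1/12 of the projections
```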
There are two ways to evaluate image quality: qualitative and quantitative. The performance of the proposed method is qualitatively evaluated by observing the differences in features between the images after artifact correction and the reference (label) images. For quantitative evaluation, there are full-reference and no-reference metrics [33]. Given the availability of label images, we use full-reference metrics (PSNR and MS-SSIM [34]) to evaluate the performance of the proposed artifact correction method. The larger the peak signal-to-noise ratio, the better the quality of the network output image. The MS-SSIM index evaluates similarity at M scales (M = 5 here), down-sampling the image between scales, to represent the similarity of the network output image and the label at different scales; the closer the value is to 1, the closer the network output is to the label. Their expressions are as follows:
$$PSNR(Y, Y') = 10 \log_{10} \frac{MAX^{2}}{MSE(Y, Y')} \qquad (8)$$

$$MS\text{-}SSIM(Y, Y') = \left[ l_{M}(Y, Y') \right]^{\alpha_{M}} \prod_{j=1}^{M} \left[ c_{j}(Y, Y') \right]^{\beta_{j}} \left[ s_{j}(Y, Y') \right]^{\gamma_{j}} \qquad (9)$$

where $MAX$ represents the maximum value of the true (label) image and $MSE$ is the mean squared error. $\alpha$, $\beta$, $\gamma$ are weight parameters used to adjust the relative importance of the luminance $l$, contrast $c$, and structure $s$ components, with $\beta_{1} = \gamma_{1} = 0.0448$, $\beta_{2} = \gamma_{2} = 0.2856$, $\beta_{3} = \gamma_{3} = 0.3001$, $\beta_{4} = \gamma_{4} = 0.2363$, $\alpha_{5} = \beta_{5} = \gamma_{5} = 0.1333$; $l$, $c$, and $s$ correspond to Equations (2)–(4) above.
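For reference, the two metrics can be computed as in the following sketch. scikit-image provides PSNR directly, and MS-SSIM with the standard five-scale weights is available in the third-party pytorch-msssim package; neither package is named by the authors, and images are assumed to be scaled to [0, 1].

```python
import torch
from skimage.metrics import peak_signal_noise_ratio
from pytorch_msssim import ms_ssim   # third-party implementation of Equation (9)

def evaluate(output, label):
    """output, label: 2-D float numpy arrays with gray values scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(label, output, data_range=1.0)   # Equation (8)
    out_t = torch.from_numpy(output).float()[None, None]            # shape (1, 1, H, W)
    lab_t = torch.from_numpy(label).float()[None, None]
    msssim = ms_ssim(out_t, lab_t, data_range=1.0).item()           # default 5-scale weights
    return psnr, msssim
```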

3. Results

3.1. Comparison of Artifact Correction Effects of Different Methods

To verify the performance of our proposed method, we compare it with the typical deep learning methods FBPConvNet [15] and DD-Net [19] under 200 views and 400 views. Figure 4 shows the artifact correction effects of the different methods under 200 views. Figure 4(a1–e1) are enlarged views of the red boxed areas in Figure 4a–e. Figure 4a is a slice reconstructed directly using FBP, with image details destroyed by artifacts. After applying the DD-Net (Figure 4(b1)) and FBPConvNet (Figure 4(c1)) algorithms, the image features appear blurred and the internal structure is indiscernible. In contrast, our proposed method retains more information than the other methods while removing the artifacts (Figure 4(d1)). Figure 4(a2–d2) depicts the absolute error images of Figure 4a–d relative to the full-view slice (Figure 4e), with brighter colors indicating larger errors. Our proposed method shows the smallest absolute error, followed by FBPConvNet and then DD-Net. The numbers at the top of (a2–d2) are the mean values of the absolute error images.
Figure 5 depicts the artifact correction effects of the various methods at 400 views. Figure 5a–e presents, in order, the slice directly reconstructed by FBP, the DD-Net result, the FBPConvNet result, the result of our proposed method, and the slice reconstructed from the full set of views (1200 views). Figure 5(a1–e1) provides zoomed views of the red boxed areas in Figure 5a–e. The input image of the network (Figure 5(a1)) exhibits a low signal-to-noise ratio and noticeable noise, which can impact subsequent image analysis. The DD-Net method still introduces excessive smoothing (as indicated by the blue arrow in Figure 5(b1)). In the region highlighted by the red arrow in Figure 5(c1), it is evident that aliasing artifacts persist in the image corrected by the FBPConvNet method. In contrast, our proposed method recovers fine details and closely approximates the label image. This is further supported by the absolute error images in Figure 5(a2–d2).
Table 1 and Table 2 list the quantitative evaluation results of the artifact correction methods on the test dataset under 200 and 400 views. Both the MS-SSIM and PSNR of our proposed method are the best among the compared methods, indicating that the proposed method can recover image details with high quality while removing artifacts.

3.2. Artifact Correction Effects on Material Samples

Figure 6 demonstrates the aliasing artifact correction effect of the proposed method when applied to material samples. These samples are composed of tungsten (W), silicon carbide (SiC), and titanium (Ti) (referred to as TiW). The bright areas on the CT slices represent W, the ring structures surrounding W are made of SiC, and the remaining structures consist of Ti. There are four samples in total, of which two were used for training (3930 slices), one for validation (1200 slices), and one for testing (1500 slices). The inputs to the network are slices reconstructed from 200 projection images, and the labels are slices reconstructed from 1200 projection images. The network model and parameter settings are consistent with those used for the mouse brain data. Figure 6a shows the slice reconstructed directly using FBP, Figure 6b shows the artifact correction result of the algorithm proposed in this paper, and Figure 6c is the label image. After applying the proposed algorithm, the aliasing artifacts are corrected and the image contrast is improved. Figure 6(a2,b2) are the absolute error images between Figure 6a,b and the label image. The deviation between the slice after artifact correction and the label decreases, as illustrated in Figure 6(b2). This result demonstrates that the proposed method can be used for aliasing artifact correction on different types of data.

3.3. Selection of the Optimal Sparsity Ratio

To enable the application of the method to actual CT imaging, we focus on establishing a projection sparsity ratio that achieves high-quality artifact correction while expediting experiments with reduced projection data. Accordingly, we conducted artifact correction at 100, 200, 300, and 400 views, as depicted in Figure 7. Figure 7(a1–e1) shows local zoomed-in views of the red boxed regions in Figure 7a–e. The experimental results demonstrate that a higher number of projections leads to richer image information after algorithmic correction, becoming closer to the ground truth. Additionally, Figure 7f,g presents quantitative image quality evaluations using PSNR and MS-SSIM for varying sparsity levels in the test set. It is noteworthy that although the image quality after applying the algorithm improves with an increasing number of projections, the rate of improvement diminishes. Meanwhile, the experiment time and radiation damage to the samples also increase, indicating diminishing returns. The trade-off between rapid experimentation and artifact correction is therefore a critical consideration. We believe that the choice of sparsity ratio should depend on the experimental task. For example, if it is only necessary to observe a salient target (e.g., the red arrow in Figure 7), 100 views is appropriate, which saves 11/12 of the experiment time; after processing with the deep learning method, the signal-to-noise ratio of the image is improved, which benefits subsequent data processing operations such as feature segmentation. If, instead, the focus is on restoring the detailed information of the slice (such as the blue arrows in Figure 7), 400 views is a suitable choice, which still saves 2/3 of the projection data.

4. Conclusions

This paper proposes a deep learning method for artifact correction in synchrotron sparse-view CT. In contrast to artifact correction for medical CT images, synchrotron CT imaging commonly uses a 2 k × 2 k pixel detector, and the reconstructed slices have the same dimensions. Therefore, we propose an aliasing artifact correction method for high-pixel-number images (1728 × 1728 pixels) to better fit practical applications. Specifically, local details are addressed by cropping CT slices with aliasing artifacts into overlapping patches, and the artifacts are corrected through neural network optimization. The local patches are then reintegrated into the global CT slice, and the stitched global slice is further constrained to be consistent with the high-quality label in feature structure. In practical scenarios, obtaining a sufficient number of samples is challenging. Therefore, this study used only three samples as the training set, one sample as the validation set to verify the network training effect, and the reconstructed slices of another sample as the test set. The experiments demonstrate that our proposed method can successfully correct aliasing artifacts, indicating the feasibility of using sparse-view CT imaging to achieve fast experiments and reduce radiation damage in practical CT imaging. We believe that the proposed method can also be applied to other fields.
Meanwhile, we investigated the artifact correction effect of the proposed method at different sparsity levels. The results indicate that the quality of the neural network-processed images improves as the number of acquired projections increases, but the rate of improvement gradually diminishes. Moreover, the rise in the number of projections implies a longer experiment time and higher radiation damage, leading to diminishing returns that need to be weighed. Based on this, we recommend using a sparsity ratio of 1/12 or 1/6 if the sample's complexity is similar to that of the mouse brain and the researcher only wishes to observe salient features. If high-quality CT images are needed, a larger sampling ratio should be considered. The deployment of the proposed method on the TiW dataset eliminated the aliasing artifacts and enhanced the image quality, further substantiating the effectiveness of the proposed technique. The neural network model used in this study is the Attention U-Net. In subsequent research, we plan to explore other neural network architectures to achieve superior artifact correction results. Furthermore, our future work will also focus on developing artifact correction methods capable of simultaneously addressing multiple types of samples.

Author Contributions

Conceptualization, M.H., G.L. and J.Z.; methodology, M.H.; validation, M.H.; investigation, M.H.; resources, R.S., Z.W. and Y.W.; software, T.D. and R.S.; writing—original draft preparation, M.H.; writing—review and editing, M.H. and G.L.; visualization, M.H., J.Z. and B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key Research and Development Program of China (2023YFA1609200), National Natural Science Foundation of China “Ye Qisun” Science Fund project (No. U2241283), the Equipment Development Project of the Chinese Academy of Sciences (YJKYYQ20190060), and National Natural Science Foundation of China (11627901).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

We would like to thank Beijing Synchrotron Radiation Facility 3W1 and Shanghai Synchrotron Radiation Facility BL13W1 for providing synchrotron radiation imaging beamtime. We would like to thank Ting Li from Beijing University of Chinese Medicine for providing the mouse brain samples. We would like to thank Hao Huang, AECC Beijing Institute of Aeronautical Materials for providing the TiW samples.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT       Computed tomography
FBP      Filtered back projection
PSNR     Peak signal-to-noise ratio
MS-SSIM  Multi-scale structural similarity index

References

  1. Kudo, H.; Suzuki, T.; Rashed, E.A. Image reconstruction for sparse-view CT and interior CT—Introduction to compressed sensing and differentiated backprojection. Quant. Imaging Med. Surg. 2013, 3, 147. [Google Scholar] [PubMed]
  2. Huang, J.; Ma, J.; Liu, N.; Zhang, H.; Bian, Z.; Feng, Y.; Feng, Q.; Chen, W. Sparse angular CT reconstruction using non-local means based iterative-correction POCS. Comput. Biol. Med. 2011, 41, 195–205. [Google Scholar] [CrossRef] [PubMed]
  3. Sidky, E.Y.; Pan, X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys. Med. Biol. 2008, 53, 4777–4807. [Google Scholar] [CrossRef] [PubMed]
  4. Sidky, E.Y.; Kao, C.M.; Pan, X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J. X-ray Sci. Technol. 2006, 14, 119–139. [Google Scholar]
  5. Li, H.; Chen, X.; Wang, Y.; Zhou, Z.; Zhu, Q.; Yu, D. Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV). Biomed. Eng. Online 2014, 13, 92. [Google Scholar] [CrossRef] [PubMed]
  6. Lou, S.; Deng, J.; Lyu, S. Chaotic signal denoising based on simplified convolutional denoising auto-encoder. Chaos Solitons Fractals 2022, 161, 112333. [Google Scholar] [CrossRef]
  7. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.W. Deep learning on image denoising: An overview. Neural Netw. 2020, 131, 251–275. [Google Scholar] [CrossRef] [PubMed]
  8. Du, W.; Shen, H.; Zhang, G.; Yao, X.; Fu, J. Interactive defect segmentation in X-ray images based on deep learning. Expert Syst. Appl. 2022, 198, 116692. [Google Scholar] [CrossRef]
  9. Al Arif, S.M.M.R.; Knapp, K.; Slabaugh, G. Fully automatic cervical vertebrae segmentation framework for X-ray images. Comput. Methods Programs Biomed. 2018, 157, 95–111. [Google Scholar] [CrossRef]
  10. Chaudhari, A.S.; Fang, Z.; Kogan, F.; Wood, J.; Stevens, K.J.; Gibbons, E.K.; Lee, J.H.; Gold, G.E.; Hargreaves, B.A. Super-resolution musculoskeletal MRI using deep learning. Magn. Reson. Med. 2018, 80, 2139–2154. [Google Scholar] [CrossRef]
  11. Zhong, X.; Liang, N.; Cai, A.; Yu, X.; Li, L.; Yan, B. Super-resolution image reconstruction from sparsity regularization and deep residual-learned priors. J. X-ray Sci. Technol. 2023, 31, 319–336. [Google Scholar] [CrossRef] [PubMed]
  12. Karamov, R.; Breite, C.; Lomov, S.V.; Sergeichev, I.; Swolfs, Y. Super-Resolution Processing of Synchrotron CT Images for Automated Fibre Break Analysis of Unidirectional Composites. Polymers 2023, 15, 2206. [Google Scholar] [CrossRef] [PubMed]
  13. Lee, H.; Lee, J.; Kim, H.; Cho, B.; Cho, S. Deep-neural-network-based sinogram synthesis for sparse-view CT image reconstruction. IEEE Trans. Radiat. Plasma Med. Sci. 2019, 3, 109–119. [Google Scholar] [CrossRef]
  14. Okamoto, T.; Ohnishi, T.; Haneishi, H. Artifact reduction for sparse-view CT using deep learning with band patch. IEEE Trans. Radiat. Plasma Med. Sci. 2022, 6, 859–873. [Google Scholar] [CrossRef]
  15. Jin, K.H.; McCann, M.T.; Froustey, E.; Unser, M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 2017, 26, 4509–4522. [Google Scholar] [CrossRef] [PubMed]
  16. Xie, S.; Zheng, X.; Chen, Y.; Xie, L.; Liu, J.; Zhang, Y.; Yan, J.; Zhu, H.; Hu, Y. Artifact removal using improved GoogLeNet for sparse-view CT reconstruction. Sci. Rep. 2018, 8, 6700. [Google Scholar] [CrossRef] [PubMed]
  17. Lee, M.; Kim, H.; Kim, H.J. Sparse-view CT reconstruction based on multi-level wavelet convolution neural network. Phys. Medica 2020, 80, 352–362. [Google Scholar] [CrossRef] [PubMed]
  18. Nakai, H.; Nishio, M.; Yamashita, R.; Ono, A.; Nakao, K.K.; Fujimoto, K.; Togashi, K. Quantitative and qualitative evaluation of convolutional neural networks with a deeper u-net for sparse-view computed tomography reconstruction. Acad. Radiol. 2020, 27, 563–574. [Google Scholar] [CrossRef]
  19. Zhang, Z.; Liang, X.; Dong, X.; Xie, Y.; Cao, G. A sparse-view CT reconstruction method based on combination of DenseNet and deconvolution. IEEE Trans. Med. Imaging 2018, 37, 1407–1417. [Google Scholar] [CrossRef]
  20. Han, Y.; Ye, J.C. Framing U-Net via deep convolutional framelets: Application to sparse-view CT. IEEE Trans. Med. Imaging 2018, 37, 1418–1429. [Google Scholar] [CrossRef]
  21. Xie, S.; Yang, T. Artifact removal in sparse-angle CT based on feature fusion residual network. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 261–271. [Google Scholar] [CrossRef]
  22. Lee, D.; Choi, S.; Kim, H.J. High quality imaging from sparsely sampled computed tomography data with deep learning and wavelet transform in various domains. Med Phys. 2019, 46, 104–115. [Google Scholar] [CrossRef] [PubMed]
  23. Wu, W.; Hu, D.; Niu, C.; Yu, H.; Vardhanabhuti, V.; Wang, G. DRONE: Dual-domain residual-based optimization network for sparse-view CT reconstruction. IEEE Trans. Med Imaging 2021, 40, 3002–3014. [Google Scholar] [CrossRef] [PubMed]
  24. Hu, D.; Liu, J.; Lv, T.; Zhao, Q.; Zhang, Y.; Quan, G.; Feng, J.; Chen, Y.; Luo, L. Hybrid-domain neural network processing for sparse-view CT reconstruction. IEEE Trans. Radiat. Plasma Med Sci. 2020, 5, 88–98. [Google Scholar] [CrossRef]
  25. Li, R.; Li, Q.; Wang, H.; Li, S.; Zhao, J.; Yan, Q.; Wang, L. DDPTransformer: Dual-Domain With Parallel Transformer Network for Sparse View CT Image Reconstruction. IEEE Trans. Comput. Imaging 2022, 8, 1101–1116. [Google Scholar] [CrossRef]
  26. Cheslerean-Boghiu, T.; Hofmann, F.C.; Schultheiß, M.; Pfeiffer, F.; Pfeiffer, D.; Lasser, T. Wnet: A data-driven dual-domain denoising model for sparse-view computed tomography with a trainable reconstruction layer. IEEE Trans. Comput. Imaging 2023, 9, 120–132. [Google Scholar] [CrossRef]
  27. Dong, X.; Vekhande, S.; Cao, G. Sinogram interpolation for sparse-view micro-CT with deep learning neural network. In Proceedings of the Medical Imaging 2019: Physics of Medical Imaging; SPIE: Toronto, ON, Canada, 2019; Volume 10948, pp. 692–698. [Google Scholar]
  28. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin, Germany, 2015; pp. 234–241. [Google Scholar]
  29. Yagi, N.; Inoue, K.; Oka, T. CCD-based X-ray area detector for time-resolved diffraction experiments. J. Synchrotron Radiat. 2004, 11, 456–461. [Google Scholar] [CrossRef] [PubMed]
  30. Stampanoni, M.; Groso, A.; Isenegger, A.; Mikuljan, G.; Chen, Q.; Bertrand, A.; Henein, S.; Betemps, R.; Frommherz, U.; Böhler, P.; et al. Trends in synchrotron-based tomographic imaging: The SLS experience. In Proceedings of the Developments in X-ray Tomography V; SPIE: Toronto, ON, Canada, 2006; Volume 6318, pp. 193–206. [Google Scholar]
  31. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  32. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  33. Rukundo, O. Normalized Weighting Schemes for Image Interpolation Algorithms. Appl. Sci. 2023, 13, 1741. [Google Scholar] [CrossRef]
  34. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003; IEEE: Piscataway, NJ, USA, 2003; Volume 2, pp. 1398–1402. [Google Scholar]
Figure 1. Aliasing artifacts in the mouse brain data used in this study under different sparse views. (a–e) Slice images reconstructed using FBP at 100 views, 200 views, 300 views, 400 views, and 1200 views (label), respectively. The images (a1–e1) and (a2–e2) are magnified views of the blue and red boxed areas in (a–e), respectively.
Figure 2. Flow chart of the aliasing artifact correction method.
Figure 3. Attention U-Net network structure.
Figure 4. Artifact correction effects of different algorithms under 200 views. (a) Slice reconstructed by FBP, (b) slice corrected by DD-Net, (c) slice corrected by FBPConvNet, (d) slice corrected by the proposed method, (e) full-view CT slice. (a1–e1) Enlarged images of the red boxed areas in (a–e). (a2–d2) Absolute error images between (a–d) and (e). The display window is [0, 0.22] for the absolute error images.
Figure 5. Artifact correction effects of different algorithms under 400 views. (a) Slice reconstructed by FBP, (b) slice corrected by DD-Net, (c) slice corrected by FBPConvNet, (d) slice corrected by the proposed method, (e) full-view CT slice. (a1–e1) Enlarged images of the red boxed areas in (a–e). (a2–d2) Absolute error images between (a–d) and (e). The display window is [0, 0.12] for the absolute error images.
Figure 6. Correction of aliasing artifacts in the TiW dataset under 200 views. (a) FBP reconstruction slice, (b) slice processed using the proposed method, (c) reconstructed slice at 1200 views. (a1–c1) Magnified images of the areas within the red boxes in (a–c). (a2,b2) Absolute error images between (a,b) and the label image. The display window is [0, 0.06] for the absolute error images.
Figure 7. Qualitative and quantitative comparison of network artifact correction ability under different sparse views: (a) 100 views, (b) 200 views, (c) 300 views, (d) 400 views, (e) full-view CT slice. (a1–e1) Enlarged images of the red boxed areas in (a–e). (f) Quantitative evaluation of MS-SSIM. (g) Quantitative evaluation of PSNR.
Table 1. Quantitative evaluation of different artifact correction methods at 200 views.

Methods            200 Views (PSNR)    200 Views (MS-SSIM)
FBP                27.3812             0.7672
DD-Net             28.5358             0.9261
FBPConvNet         32.3808             0.9263
Proposed method    33.6259             0.9673
Table 2. Quantitative evaluation of different artifact correction methods at 400 views.

Methods            400 Views (PSNR)    400 Views (MS-SSIM)
FBP                32.1820             0.9218
DD-Net             33.7385             0.9589
FBPConvNet         35.4875             0.9671
Proposed method    36.4845             0.9800
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
