Article

Sparse-View Spectral CT Reconstruction Based on Tensor Decomposition and Total Generalized Variation

School of Software, Shanxi Agricultural University, Jinzhong 030800, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(10), 1868; https://doi.org/10.3390/electronics13101868
Submission received: 25 March 2024 / Revised: 3 May 2024 / Accepted: 7 May 2024 / Published: 10 May 2024
(This article belongs to the Special Issue Pattern Recognition and Machine Learning Applications, 2nd Edition)

Abstract

Spectral computed tomography (CT)-reconstructed images often exhibit severe noise and artifacts, which compromise the practical application of spectral CT imaging technology. Methods based on tensor dictionary learning (TDL) have shown superior performance, but a high-quality pre-trained global tensor dictionary is difficult to obtain in practice. To resolve this problem, this paper develops an algorithm called tensor decomposition with total generalized variation (TGV) for sparse-view spectral CT reconstruction. In constructing tensor volumes, the proposed algorithm utilizes the non-local similarity of images to build fourth-order tensor volumes and uses Canonical Polyadic (CP) tensor decomposition instead of pre-trained tensor dictionaries to further explore the inter-channel correlation of images. Simultaneously, a TGV regularization term is introduced to characterize spatial sparsity; its use of higher-order derivatives can better adapt to different image structures and noise levels. The proposed objective minimization model is solved using the split-Bregman algorithm. To assess the performance of the proposed algorithm, several numerical simulations and an actual preclinical mouse dataset are studied. The results demonstrate that the proposed algorithm substantially improves the quality of spectral CT images compared with several existing competing algorithms.

1. Introduction

X-ray CT is a non-destructive imaging technique that provides information about the internal tissue structures of organs and has found widespread applications in biomedical imaging, security inspection, industrial non-destructive testing, and materials science [1,2]. However, conventional CT technology still cannot meet some practical needs because of its limitations, such as the loss of energy-related information and strong beam-hardening artifacts in the images [3,4]. Additionally, the need for multiple scans to obtain multiple energy projections increases the radiation risk [5,6]. To mitigate these limitations, spectral CT based on photon-counting detectors has gained significant attention because of its capability to provide spectral information [7]. However, single-channel projections often suffer from serious quantum noise because of the limited number of photons in the corresponding energy channels, which greatly degrades image quality [8]. Consequently, enhancing spectral CT image quality has emerged as a popular research topic.
In order to improve the image quality from noisy projection, numerous algorithms have been developed for the reconstruction of spectral CT. Initially, some traditional CT reconstruction methods were used. Xu et al. incorporated a total variation (TV) regularization term into the reconstruction model and constrained the CT images at each energy level, enhancing the performance of spectral CT imaging [9]. In 2013, Zhao et al. designed a tight frame-based iterative reconstruction algorithm for spectral CT called TFIR, which found application in breast spectral CT imaging and achieved better reconstruction results with fewer projection data [10]. In 2016, Zeng et al. proposed a novel algorithm that combines penalized weighted least squares (PWLS) with structural tensor total variation (STV) regularization and employed an alternating optimization algorithm to solve the objective function, resulting in higher-quality spectral CT images [11]. Subsequently, more reconstruction algorithms based on single-energy channel regularization constraints have been proposed, all of which have achieved satisfactory reconstruction results [12,13,14,15,16]. However, these algorithms only process CT images at each energy channel separately during the image reconstruction stage, focusing solely on the correlation between single-channel images.
To fully exploit the correlations among images at different energy channels, an increasing number of algorithms transform multi-spectral CT images into tensor models, leveraging prior information such as the sparsity and low-rankness inherent in tensors to improve spectral CT image quality. Gao et al. proposed the PRISM (prior rank, intensity, and sparsity model) reconstruction model based on features such as low-rankness, intensity, and sparsity, and employed the split-Bregman algorithm to rapidly solve the objective function [17]. Chu et al. combined TV regularization with low-rank constraints to capture the sparsity of spectral CT tensors and verified the performance of their algorithm using simulated data [18]. In 2014, Li et al. further extended the PRISM reconstruction model by generalizing it to a tensor mode, making full use of the similarities across the energy dimension [19]. Later, Li et al. improved upon this algorithm by introducing an adaptive thresholding technique for spectral CT reconstruction, achieving excellent results [20]. Holt et al. proposed a novel regularization model called total nuclear variation (TNV), which significantly improves upon the performance of the TV algorithm [21]. Rigie et al. employed TNV regularization to constrain both spectral CT projections and images, which effectively preserved image edges and achieved superior results [22]. He et al. proposed a new spectral CT reconstruction algorithm that incorporates the nuclear norm and bilateral weighted relative total variation (BRTV) to represent inter-channel correlations and extract intra-channel structures, obtaining promising reconstruction results [23]. Subsequently, the tensor dictionary learning (TDL) algorithm has demonstrated immense potential in spectral CT image reconstruction, as it can more fully explore the correlations between energy channels, effectively preserving image structure while suppressing noise. In 2016, Zhang et al. proposed a TDL-based method for spectral CT reconstruction, which combines tensor operations with dictionaries to achieve sparse representation, effectively suppressing noise and recovering image details [24]. To enhance the constraint in the image domain, Wu et al. introduced the image gradient L0 norm within the tensor dictionary learning framework, efficiently preserving image edges and details while reconstructing from low-dose and sparse-view projection data [25]. Li et al. proposed an enhanced sparsity-constrained tensor dictionary learning algorithm for spectral CT reconstruction, which incorporated the image gradient L0 norm and full-spectrum reconstructed images with the TDL framework to constrain the correlations among images, achieving better reconstruction results [26]. Despite the promising results achieved by these tensor dictionary learning-based algorithms, there is an inevitable issue: the final image quality heavily relies on the quality of the training tensor dictionary datasets. In practical scenarios, noise is unavoidable in the training datasets, making it difficult to obtain a high-quality pre-trained global tensor dictionary.
Tensor decomposition is an effective method for image representation and analysis that can extract good sparsity from complex signals or data matrices [27]. By controlling the sparse representation of high-dimensional tensor data, noise can be effectively suppressed and artifacts reduced [28,29]. It has been widely applied in areas such as signal processing, video processing, computer vision, and hyperspectral image denoising [30,31,32]. Because spectral CT images resemble other multi-dimensional data, the idea of tensor decomposition can be borrowed for spectral CT reconstruction. Zhang et al. proposed a spectral CT image denoising method based on tensor decomposition and non-local means (TDNLM), which recovered more fine structures in spectral CT images [33]. Wu et al. introduced a weighted bilateral image gradient with an L0 norm constraint into the tensor decomposition model, reducing artifacts and noise in spectral CT images [34]. Chen et al. proposed a fourth-order non-local tensor decomposition reconstruction algorithm, which introduced a weighted kernel norm and a TV norm to constrain each fourth-order tensor unit and efficiently removed noise and artifacts from the image [35]. Compared to algorithms based on tensor dictionary learning, tensor decomposition does not require pre-training a global tensor dictionary, thereby avoiding the instability caused by the dictionary's dependency on the quality of the training datasets.
In the tensor approach, only the low-rankness and sparsity of the spatial-spectral domain are considered; the sparsity of each individual image in the spatial domain is neglected, which can lead to artifacts in the images. To further strengthen the sparsity constraint in the image domain, several common regularization terms have been introduced into the reconstruction model, such as TV [9], adaptive weighted TV [36], and the image gradient L0 norm [25]. However, TV tends to cause staircasing artifacts. Compared to TV, the image gradient L0 norm performs better, as it enhances the image's ability to preserve edges and suppress noise. Its effectiveness, however, assumes that the image is sparse in the gradient domain, whereas spectral CT images may contain streaking artifacts such as beam-hardening and sparse-view artifacts, which weaken the sparsity of the image in the gradient domain and degrade image quality [37]. TGV uses higher-order derivatives and introduces two penalty terms: one for smoothing large-scale variations in the image and the other for preserving small-scale details. The TGV regularization term can therefore better adapt to different image structures and noise levels, giving the reconstruction algorithm stronger noise and artifact suppression capabilities [38]. As an edge-preserving constraint, TGV has improved the performance of iterative CT reconstruction algorithms in the presence of noise or sparse-view projections [39].
Based on the above considerations, to overcome TDL’s limitations and enhance the sparsity constraint in the image spatial domain in spectral CT reconstruction, we developed a tensor decomposition-based spectral CT image reconstruction model with TGV constraint (TDTGV). Firstly, the algorithm utilizes the non-local similarity features of the image to build a fourth-order tensor volume. Then Canonical Polyadic (CP) tensor decomposition is used to explore the spectral image correlations in the different channels, replacing the pre-trained tensor dictionary. Furthermore, in order to enhance the constraints in the image domain and improve image quality, the TGV regularization term is introduced to represent the spatial sparsity features, as the use of higher-order derivatives can better adapt to different image structures and noise levels, enhancing the ability to suppress noise and artifacts.
This paper's main contribution is threefold. First, the proposed algorithm uses tensor decomposition in place of pre-trained tensor dictionaries to explore the inherent relationships within the image, addressing the instability caused by the dependency of tensor dictionaries on the quality of training datasets. Second, a high-order TGV regularization term is introduced to strengthen the constraints in the image domain; compared with TV constraints and the L0 norm of image gradients, TGV suppresses artifacts and enhances the quality of the reconstructed image more effectively. Finally, the proposed reconstruction model is solved by an efficient split-Bregman algorithm.
The paper is arranged as follows: Section 2 briefly reviews the associated mathematical foundations. Section 3 presents the mathematical model and the solution of the proposed algorithm. Section 4 reports on both numerically simulated and preclinical dataset experiments. Finally, Section 5 presents the conclusion and discussion.

2. Mathematical Fundamental Theory

2.1. Canonical Polyadic Tensor Decomposition

A tensor is a multidimensional data array. An $N$-order tensor can be denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_k \times \cdots \times I_N}$, in which $I_k$ represents the length of the $k$-th dimension ($k = 1, 2, \ldots, N$). Elements of $\mathcal{X}$ are denoted as $x_{i_1 i_2 \cdots i_N}$ ($1 \le i_k \le I_k$, $k = 1, 2, \ldots, N$). A tensor can be multiplied by a vector or a matrix. The mode-$k$ product ($k = 1, 2, \ldots, N$) of a tensor $\mathcal{X}$ with a matrix $P \in \mathbb{R}^{J \times I_k}$ is denoted by $\mathcal{X} \times_k P \in \mathbb{R}^{I_1 \times \cdots \times I_{k-1} \times J \times I_{k+1} \times \cdots \times I_N}$, whose element at position $(i_1, \ldots, i_{k-1}, j, i_{k+1}, \ldots, i_N)$ is computed by $\sum_{i_k=1}^{I_k} x_{i_1 i_2 \cdots i_N}\, p_{j i_k}$.
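The mode-$k$ product can be checked numerically. The NumPy sketch below is an illustrative helper (not code from the paper): it contracts the $k$-th axis of a tensor with a matrix and verifies the result against the elementwise definition.

```python
import numpy as np

def mode_k_product(X, P, k):
    # Contract the k-th axis of tensor X with matrix P of shape (J, I_k),
    # then move the new axis of length J back into position k.
    Xk = np.tensordot(P, X, axes=([1], [k]))
    return np.moveaxis(Xk, 0, k)

# Check against the elementwise definition: sum over i_k of x_{...i_k...} p_{j i_k}.
X = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
P = np.ones((5, 3))            # J = 5 acting on mode k = 1 (I_2 = 3)
Y = mode_k_product(X, P, k=1)
assert Y.shape == (2, 5, 4)
# With P all ones, each output fiber is the sum over the contracted mode.
assert np.allclose(Y[0, 0, :], X[0, :, :].sum(axis=0))
```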
CP tensor decomposition is a decomposition method for higher-order tensors, commonly used for dimensionality reduction and pattern analysis of multi-dimensional data, and has become a powerful tool for handling high-order data [40]. In CP decomposition, a higher-order tensor can be decomposed into a sum of several low-rank tensors, which can be represented as follows:
$$\mathcal{X} \approx \sum_{j=1}^{J} \mu_j \, x_1^j \circ x_2^j \circ \cdots \circ x_n^j \circ \cdots \circ x_N^j = [\![\mu; X_1, X_2, \ldots, X_n, \ldots, X_N]\!] \quad (1)$$
where $\mu_j$ represents the column normalization coefficients, $J$ is a positive integer denoting the number of rank-one terms in the decomposition, which controls the sparsity level of the representation, $\mu = [\mu_1, \mu_2, \ldots, \mu_J] \in \mathbb{R}^J$, and $X_n = [x_n^1, x_n^2, \ldots, x_n^J] \in \mathbb{R}^{I_n \times J}$. The goal of CP decomposition is to find suitable factor vectors $x_n^j$ and corresponding weights such that the tensor obtained through this decomposition approximates the original tensor $\mathcal{X}$. This decomposition helps in understanding the underlying structures and patterns in the data, making it widely applicable in fields such as signal processing, image processing, and recommendation systems.
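As a concrete illustration of Equation (1), the following NumPy sketch (with hypothetical sizes) rebuilds a third-order tensor from CP factor matrices and weights, and checks a single rank-one term against an explicit outer product.

```python
import numpy as np

def cp_to_tensor(mu, factors):
    # Sum of J rank-one terms: mu_j * (x_1^j ∘ x_2^j ∘ x_3^j), as in Eq. (1).
    return np.einsum('j,aj,bj,cj->abc', mu, *factors)

rng = np.random.default_rng(0)
J = 2
factors = [rng.standard_normal((In, J)) for In in (4, 5, 6)]   # X_1, X_2, X_3
mu = np.array([2.0, 0.5])                                      # weights mu_j
X = cp_to_tensor(mu, factors)
assert X.shape == (4, 5, 6)
# One rank-one term equals the explicit outer product of its factor columns.
one = np.einsum('a,b,c->abc', factors[0][:, 0], factors[1][:, 0], factors[2][:, 0])
assert np.allclose(cp_to_tensor(np.array([1.0, 0.0]), factors), one)
```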

2.2. Image Restoration Based on Canonical Polyadic Tensor Decomposition

Similar to tensor dictionary learning, tensor decomposition can also be applied to image restoration, which can be denoted as follows:
$$\min_{\mathcal{X}^*} \left\| \mathcal{X} - \mathcal{X}^* \right\|_F^2 \quad \text{s.t.} \quad \mathcal{X}^* = [\![\mu; X_1, X_2, \ldots, X_n, \ldots, X_N]\!] \quad (2)$$
where $\mathcal{X}$ and $\mathcal{X}^*$ are both image tensors, representing the noisy image and the image to be restored, respectively. The optimization aims to minimize the error between $\mathcal{X}$ and its estimate $\mathcal{X}^*$. The recovery of $\mathcal{X}^*$ can be carried out using the alternating least squares (ALS) method [27], which alternately solves for each $X_n$ while keeping the other factor matrices fixed; the optimization stops when a termination condition is met. To elaborate further, we introduce two identities:
$$X_{(n)} = X_n Q_n^T \quad (3)$$
$$\left( X_1 \odot X_2 \right)^{\dagger} = \left( X_1^T X_1 \ast X_2^T X_2 \right)^{\dagger} \left( X_1 \odot X_2 \right)^T \quad (4)$$
where $X_{(n)}$ represents the mode-$n$ unfolding of tensor $\mathcal{X}$, $Q_n = X_N \odot \cdots \odot X_{n+1} \odot X_{n-1} \odot \cdots \odot X_1$, $\odot$ denotes the Khatri–Rao product, $\ast$ denotes the Hadamard (element-wise) product, and $\dagger$ indicates the Moore–Penrose pseudoinverse operator. Considering Equation (3), we can reformulate the update of $X_n^*$ in Equation (2) as follows:
$$\min_{X_n^*} \left\| X_{(n)} - X_n^* Q_n^T \right\|_F^2, \quad \text{s.t.} \quad X_n^* = X_n \operatorname{diag}(\mu), \quad n = 1, \ldots, N \quad (5)$$
Equation (5) represents a linear least squares problem, which can be iteratively solved using Equation (6):
$$X_n^* = X_{(n)} \left( Q_n^T \right)^{\dagger} \quad (6)$$
To reduce the computational cost of computing the pseudoinverse, $X_n^*$ can be updated using Equation (7) [27]:
$$X_n^* = X_{(n)} \left( X_N \odot \cdots \odot X_{n+1} \odot X_{n-1} \odot \cdots \odot X_1 \right) \left( X_1^T X_1 \ast \cdots \ast X_{n-1}^T X_{n-1} \ast X_{n+1}^T X_{n+1} \ast \cdots \ast X_N^T X_N \right)^{\dagger} \quad (7)$$
Subsequently, normalize the columns of $X_n^*$ to obtain $X_n$, and compute $\mu_j$ using Equation (8):
$$\mu_j = \left\| x_n^{*j} \right\|_2 \quad (8)$$
After updating $\{X_n^*\}_{n=1}^{N}$, the solution for $\mathcal{X}$ can be obtained according to Equation (1). Algorithm 1 summarizes the implementation of CP decomposition using the ALS method.
Algorithm 1: CP decomposition using ALS algorithm implementation process.
Input: CP decomposition parameters L, ε, K; $\mathcal{X}$
Initialize: $X_n = [x_n^1, x_n^2, \ldots, x_n^J] \in \mathbb{R}^{I_n \times J}$, $n = 1, \ldots, N$
While the termination condition is not satisfied, execute the loop:
  for n = 1: N
  (1) Update $Y_n \leftarrow X_1^T X_1 \ast \cdots \ast X_{n-1}^T X_{n-1} \ast X_{n+1}^T X_{n+1} \ast \cdots \ast X_N^T X_N$
  (2) Compute $X_n^*$ according to Equation (7)
  (3) Obtain $X_n$ by normalizing the columns of $X_n^*$
  (4) Compute $\mu_j$ according to Equation (8)
  end for
end while
Output: $\mathcal{X}^* = [\![\mu; X_1, X_2, \ldots, X_n, \ldots, X_N]\!]$
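Algorithm 1 can be sketched in a few lines of NumPy for the third-order case. The helpers below follow the standard mode-$n$ unfolding and Khatri–Rao conventions (under which Equation (3) holds), and the ALS loop implements the updates of Equations (7) and (8); sizes, rank, and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def unfold(X, n):
    # Mode-n unfolding: rows indexed by mode n, remaining modes flattened
    # in Fortran order (the convention under which Equation (3) holds).
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def khatri_rao(A, B):
    # Column-wise Kronecker product: result[:, j] = kron(A[:, j], B[:, j]).
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

def cp_als(X, J, n_iter=100, seed=0):
    # ALS for a third-order CP decomposition, mirroring Algorithm 1:
    # each factor is updated via Equation (7), then normalized per Equation (8).
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((I, J)) for I in X.shape]
    mu = np.ones(J)
    for _ in range(n_iter):
        for n in range(3):
            a, b = [F[m] for m in range(3) if m != n][::-1]  # reversed mode order
            Q = khatri_rao(a, b)
            V = np.ones((J, J))          # Hadamard product of the Gram matrices
            for m in range(3):
                if m != n:
                    V = V * (F[m].T @ F[m])
            Fn = unfold(X, n) @ Q @ np.linalg.pinv(V)
            mu = np.linalg.norm(Fn, axis=0)                  # column norms, Eq. (8)
            F[n] = Fn / mu
    return mu, F

# Sanity checks on a synthetic exact rank-2 tensor.
rng = np.random.default_rng(42)
G = [rng.standard_normal((I, 2)) for I in (4, 5, 6)]
X = np.einsum('aj,bj,cj->abc', *G)
# Equation (3): X_(1) = X_1 (X_3 kr X_2)^T
assert np.allclose(unfold(X, 0), G[0] @ khatri_rao(G[2], G[1]).T)
mu, F = cp_als(X, J=2)
Xhat = np.einsum('j,aj,bj,cj->abc', mu, *F)
rel_err = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
```

On an exact low-rank tensor such as this one, ALS typically drives the relative reconstruction error close to machine precision within a few dozen sweeps.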

2.3. Total Generalized Variation

Total generalized variation (TGV) is an algorithm for image denoising, which was first proposed by Bredies et al. in 2010 [41]. Unlike TV, which only considers first-order derivatives, TGV has an order greater than or equal to two, involving higher-order derivatives. This allows TGV to represent image edges, textures, and other detailed information more effectively, resulting in better performance in image denoising tasks.
Mathematically, the TGV of an image u can be defined as follows:
$$TGV_\alpha^n(u) = \sup\left\{ \int_\Omega u \,\operatorname{div}^n \nu \, dx \;\middle|\; \nu \in C_c^n\!\left(\Omega, \operatorname{Sym}^n(\mathbb{R}^l)\right),\; \left\|\operatorname{div}^q \nu\right\|_\infty \le \alpha_q,\; q = 0, \ldots, n-1 \right\} \quad (9)$$
where $\Omega \subset \mathbb{R}^l$ represents a bounded domain, $\operatorname{div}$ denotes the divergence operator, $n$ represents the order of TGV, $\nu$ indicates the dual variable, $\alpha = (\alpha_0, \alpha_1, \ldots, \alpha_{n-1})$ represents the positive weights of TGV, and $\operatorname{Sym}^n(\mathbb{R}^l)$ represents the symmetric tensor space. For each multi-index $\beta \in M_{n-q}$, the $q$-divergence on the symmetric tensor space can be expressed as follows:
$$\left( \operatorname{div}^q \nu \right)_\beta = \sum_{\gamma \in M_q} \frac{q!}{\gamma!} \frac{\partial^q \nu_{\beta+\gamma}}{\partial x^\gamma} \quad (10)$$
where $M_n = \left\{ \beta \in \mathbb{N}^l \;\middle|\; \sum_{i=1}^{l} \beta_i = n \right\}$ represents the set of $n$-th order multi-indices. The $\infty$-norm of $\nu$ is defined as follows:
$$\left\| \nu \right\|_\infty = \sup_{x \in \Omega} \left( \sum_{\beta \in M_n} \frac{n!}{\beta!} \left| \nu_\beta(x) \right|^2 \right)^{1/2} \quad (11)$$
In this study, we focus on the second-order TGV, which can be expressed as follows:
$$TGV_\alpha^2(u) = \sup\left\{ \int_\Omega u \,\operatorname{div}^2 v \, dx \;\middle|\; v \in C_c^2\!\left(\Omega, S^{l \times l}\right),\; \|v\|_\infty \le \alpha_0,\; \|\operatorname{div} v\|_\infty \le \alpha_1 \right\} \quad (12)$$
where $S^{l \times l}$ represents the space of symmetric $l \times l$ matrices, $\alpha = (\alpha_0, \alpha_1)$ is a pair of positive constants, and the first- and second-order divergence formulas on $S^{l \times l}$ are expressed as follows:
$$\left( \operatorname{div} v \right)_i = \sum_{j=1}^{l} \frac{\partial v_{ij}}{\partial x_j}, \qquad \operatorname{div}^2 v = \sum_{i=1}^{l} \frac{\partial^2 v_{ii}}{\partial x_i^2} + 2 \sum_{i<j} \frac{\partial^2 v_{ij}}{\partial x_i \partial x_j} \quad (13)$$
Similarly to Equation (11), the $\infty$-norms in Equation (12) can be expressed as follows:
$$\left\| v \right\|_\infty = \sup_{x \in \Omega} \left( \sum_{i=1}^{l} \left| v_{ii}(x) \right|^2 + 2 \sum_{i<j} \left| v_{ij}(x) \right|^2 \right)^{1/2}, \qquad \left\| \operatorname{div} v \right\|_\infty = \sup_{x \in \Omega} \left( \sum_{i=1}^{l} \left| \sum_{j=1}^{l} \frac{\partial v_{ij}}{\partial x_j}(x) \right|^2 \right)^{1/2} \quad (14)$$
Furthermore, the second-order TGV can be reformulated as follows:
$$TGV_\alpha^2(u) = \min_{w} \; \alpha_1 \int_\Omega \left| \nabla u - w \right| dx + \alpha_2 \int_\Omega \left| \varepsilon(w) \right| dx \quad (15)$$
The minimum is taken over all vector fields $w$ on $\Omega$, and $\varepsilon(w) = \left( \nabla w + \nabla w^T \right)/2$ is the weakly symmetric derivative. From the definition of the second-order TGV in Equation (15), it can be observed that for an image $u$, the second derivative $\nabla^2 u$ is small in smooth regions, where the functional is minimized by setting $w = \nabla u$. Since $\nabla^2 u$ is large near edges, setting $w = 0$ in those regions works well for minimization. Thus, $TGV_\alpha^2$ can explicitly describe gradient information in edge regions through the first derivative. Although this argument is intuitively valid, the actual minimizer $w$ can lie anywhere between $0$ and $\nabla u$ [42]. Additionally, the first- and second-derivative terms can be balanced using the weights $\alpha_1$ and $\alpha_2$, and no significant staircasing artifacts are introduced by the second-derivative term.
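This balancing behavior can be seen numerically. The sketch below is an illustrative discretization (not the paper's code) of the minimization form in Equation (15) using forward differences: for a linear ramp image, choosing $w = \nabla u$ makes the objective nearly zero (only boundary terms remain), while $w = 0$ reduces to plain TV and gives a much larger value.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (last difference set to 0).
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def tgv2(u, w1, w2, a1=1.0, a2=1.0):
    # Discrete objective of Equation (15): a1*|grad(u) - w| + a2*|eps(w)|.
    gx, gy = grad(u)
    d1w1, d2w1 = grad(w1)
    d1w2, d2w2 = grad(w2)
    off = 0.5 * (d2w1 + d1w2)                    # off-diagonal entry of eps(w)
    term1 = np.sum(np.hypot(gx - w1, gy - w2))
    term2 = np.sum(np.sqrt(d1w1**2 + d2w2**2 + 2 * off**2))
    return a1 * term1 + a2 * term2

u = np.outer(np.arange(16.0), np.ones(16))       # linear ramp: constant gradient
gx, gy = grad(u)
zero = np.zeros_like(u)
# w = grad(u) nearly zeroes the objective; w = 0 falls back to plain TV of u.
assert tgv2(u, gx, gy) < tgv2(u, zero, zero)
```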

3. Methods

3.1. Reconstruction Model

In spectral CT reconstruction, TDL-based algorithms have improved image quality, but they still face a challenge: the performance of tensor dictionaries depends heavily on the quality of the training samples. Acquiring high-quality training samples is difficult in practice, which can compromise the accuracy of the global tensor dictionaries, degrade the quality of the reconstructed images, and cause the loss of fine image structures. To address this issue, this paper proposes a tensor decomposition-based spectral CT image reconstruction model with a TGV constraint. This algorithm employs CP tensor decomposition instead of pre-trained tensor dictionaries, thereby avoiding the instability caused by the dependence of tensor dictionaries on the quality of training datasets. This study considers a third-order tensor $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times C}$. In the construction of the tensor volume, the non-local similarity features of the image are exploited to cluster all similar tensors into groups of fourth-order tensors; the process of grouping tensor volumes is shown in Figure 1.
Specifically, we first extract overlapping small tensors of size $N_W \times N_H \times C$ from the image tensor $\mathcal{X}$ ($N_W = N_H = 8$; $C$ stands for the number of energy channels, which is equal to 8 in this paper). This process yields a set of tensor blocks denoted by $\mathcal{Y} \in \mathbb{R}^{N_W \times N_H \times C \times P}$, where $P$ represents the total number of extracted tensor blocks. Next, these tensor blocks are divided into $Q$ groups. Each group comprises $N_q$ small tensor blocks denoted as $T_q\mathcal{X} \in \mathbb{R}^{N_W \times N_H \times C \times N_q}$, where $T_q$ denotes the block extraction operation for group $q$. To ensure the similarity of tensors within each group, we employ the k-means algorithm, which iteratively searches for tensors that exhibit similar characteristics and assigns them to the corresponding group. Once the tensor blocks are grouped, CP tensor decomposition is applied to the fourth-order tensor of each group to explore the intrinsic correlations and latent structures within the image.
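The block extraction and grouping step can be sketched as follows; the block size, stride, group count, and toy image stack are illustrative assumptions, and a minimal hand-rolled k-means stands in for whatever clustering implementation the authors used. Each resulting group of $N_q$ similar blocks would then be stacked into a fourth-order tensor of size $N_W \times N_H \times C \times N_q$.

```python
import numpy as np

def extract_blocks(X, bw=8, bh=8, step=4):
    # Slide an overlapping bw x bh x C window over the spectral image stack.
    N1, N2, C = X.shape
    blocks = [X[i:i + bw, j:j + bh, :]
              for i in range(0, N1 - bw + 1, step)
              for j in range(0, N2 - bh + 1, step)]
    return np.stack(blocks)                      # shape (P, bw, bh, C)

def kmeans_group(blocks, Q=4, n_iter=20, seed=0):
    # Tiny k-means on flattened blocks, assigning each block to one of Q groups.
    P = blocks.shape[0]
    V = blocks.reshape(P, -1)
    rng = np.random.default_rng(seed)
    centers = V[rng.choice(P, Q, replace=False)]
    for _ in range(n_iter):
        d = ((V[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for q in range(Q):
            if np.any(labels == q):
                centers[q] = V[labels == q].mean(0)
    return labels

rng = np.random.default_rng(0)
X = rng.random((32, 32, 8))                      # toy 8-channel image stack
blocks = extract_blocks(X)
labels = kmeans_group(blocks)
assert blocks.shape == (49, 8, 8, 8)             # 7 x 7 window positions
assert labels.shape == (49,) and labels.max() <= 3
```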
To further improve the ability to preserve image structure and better suppress artifacts and noise, this paper introduces a TGV regularization term in the single energy channel. The specific reconstruction model constructed in this paper is given by Equation (16):
$$\arg\min_{\mathcal{X}, \{T_q^* \mathcal{X}\}_{q=1}^{Q}} \; \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \sum_{c=1}^{C} TGV_\alpha^2(x_c) + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X} \right\|_F^2 \quad \text{s.t.} \quad T_q^* \mathcal{X} = [\![\lambda_q; X_1, X_2, X_3, X_4]\!] \quad (16)$$
where xc and pc are the vectorized images and projections for the c-th channel, respectively, and A is the system matrix. The TGV regularization term uses a second-order discrete form, which can be expressed as follows:
$$TGV_\alpha^2(u) = \min_{w} \; \alpha_1 \left\| D u - w \right\|_1 + \alpha_2 \left\| \varepsilon(w) \right\|_1 \quad (17)$$
where $\alpha_1$ and $\alpha_2$ are two positive regularization factors used to balance the minimization process, $u$ represents the image, and $D = (D_1, D_2)$ represents the first-order differential gradient operators in the $x$ and $y$ directions of the image. $w = (w_1, w_2)$ represents a two-dimensional tensor function, and $\varepsilon$ is the symmetric gradient operator, which can be expressed as follows:
$$\varepsilon(w) = \begin{pmatrix} D_1 w_1 & \dfrac{D_2 w_1 + D_1 w_2}{2} \\[6pt] \dfrac{D_2 w_1 + D_1 w_2}{2} & D_2 w_2 \end{pmatrix} \quad (18)$$
Substituting Equation (17) into Equation (16), we can obtain the final algorithmic reconstruction model, which can be expressed as follows:
$$\arg\min_{\mathcal{X}, \{T_q^* \mathcal{X}\}_{q=1}^{Q}} \; \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \sum_{c=1}^{C} \left( \alpha_1 \left\| D x_c - w \right\|_1 + \alpha_2 \left\| \varepsilon(w) \right\|_1 \right) + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X} \right\|_F^2 \quad \text{s.t.} \quad T_q^* \mathcal{X} = [\![\lambda_q; X_1, X_2, X_3, X_4]\!] \quad (19)$$
where $\mathcal{X} \in \mathbb{R}^{N_1 \times N_2 \times C}$ is the third-order tensor of the spectral CT image to be reconstructed, with $N_1$ and $N_2$ representing the width and height of the reconstructed image, respectively. The corresponding projection data tensor is denoted as $\mathcal{P} \in \mathbb{R}^{J_1 \times J_2 \times C}$, with $J_1$ representing the number of detectors and $J_2$ representing the number of projection views. The first term is the data fidelity term; the second term is the second-order TGV regularization constraint, which is used to suppress noise and artifacts while restoring image details; and the third term is the tensor decomposition term, which aims to preserve the structure of the reconstructed image. $\alpha_1$, $\alpha_2$, and $\lambda$ are parameters that balance the data fidelity term and the regularization terms, while $q$ and $T_q$ represent the group index and the block extraction operation, respectively.
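The symmetric gradient of Equation (18) can be realized with discrete difference operators. The sketch below is illustrative (forward differences with Neumann boundary handling are an assumption): it builds the per-pixel $2 \times 2$ matrix $\varepsilon(w)$ and checks that it is symmetric, as the matrix form requires.

```python
import numpy as np

def D1(u):
    # Forward difference along axis 0 (x direction), Neumann boundary.
    d = np.zeros_like(u); d[:-1, :] = u[1:, :] - u[:-1, :]; return d

def D2(u):
    # Forward difference along axis 1 (y direction), Neumann boundary.
    d = np.zeros_like(u); d[:, :-1] = u[:, 1:] - u[:, :-1]; return d

def sym_grad(w1, w2):
    # Per-pixel 2x2 matrix of Eq. (18); the two off-diagonal entries are shared.
    off = 0.5 * (D2(w1) + D1(w2))
    return np.stack([np.stack([D1(w1), off], -1),
                     np.stack([off, D2(w2)], -1)], -2)   # shape (N1, N2, 2, 2)

rng = np.random.default_rng(0)
w1, w2 = rng.random((2, 6, 6))
E = sym_grad(w1, w2)
assert E.shape == (6, 6, 2, 2)
assert np.allclose(E, np.swapaxes(E, -1, -2))            # symmetric at every pixel
```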

3.2. Solution

To further solve Equation (19), we decompose the problem and alternatingly solve the following subproblems:
Subproblem 1:
$$\arg\min_{\mathcal{X}} \; \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \sum_{c=1}^{C} \left( \alpha_1 \left\| D x_c - w \right\|_1 + \alpha_2 \left\| \varepsilon(w) \right\|_1 \right) + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X}^k \right\|_F^2 \quad (20)$$
Subproblem 2:
$$\arg\min_{\{T_q^* \mathcal{X}\}_{q=1}^{Q}} \; \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X}^{k+1} - T_q^* \mathcal{X} \right\|_F^2 \quad \text{s.t.} \quad T_q^* \mathcal{X} = [\![\lambda_q; X_1, X_2, X_3, X_4]\!] \quad (21)$$
where k represents the current iteration number, and we will solve each subproblem separately.
To effectively solve subproblem 1, the split-Bregman algorithm can be used [43]. First, we introduce two auxiliary variables y and z, and Equation (20) can be rewritten as a constrained optimization model, which can be expressed as follows:
$$\arg\min_{\mathcal{X}, y_c, z_c} \; \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \sum_{c=1}^{C} \left( \alpha_1 \left\| y_c \right\|_1 + \alpha_2 \left\| z_c \right\|_1 \right) + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X}^k \right\|_F^2 \quad \text{s.t.} \quad y_c = D x_c - w_c, \; z_c = \varepsilon(w_c), \; c = 1, \ldots, C \quad (22)$$
where $y_c$ and $z_c$ are the auxiliary matrices corresponding to each channel ($c = 1, \ldots, C$), with $y_c \in \mathbb{R}^{N_1 \times N_2}$ and $z_c \in \mathbb{R}^{N_1 \times N_2}$. Using the augmented Lagrangian function [44], Equation (22) can be transformed into the following unconstrained mathematical model:
$$L\left( \mathcal{X}, w_c, z_c, y_c, t_c, b_c \right) = \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \sum_{c=1}^{C} \left( \alpha_1 \left\| y_c \right\|_1 + \alpha_2 \left\| z_c \right\|_1 \right) + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X}^k \right\|_F^2 + \frac{\mu_1}{2} \sum_{c=1}^{C} \left\| y_c - \left( D x_c - w_c \right) - t_c \right\|_F^2 + \frac{\mu_2}{2} \sum_{c=1}^{C} \left\| z_c - \varepsilon(w_c) - b_c \right\|_F^2 \quad (23)$$
where $t_c$ and $b_c$ are the error feedback matrices corresponding to each channel ($c = 1, \ldots, C$), with $t_c \in \mathbb{R}^{N_1 \times N_2}$ and $b_c \in \mathbb{R}^{N_1 \times N_2}$, and $\mu_1$ and $\mu_2$ are penalty parameters. Applying the alternating direction method of multipliers (ADMM) to Equation (23) [45], we can alternately solve the following subproblems:
$$\begin{cases} \mathcal{X}^{k+1} = \arg\min_{\mathcal{X}} L\left( \mathcal{X}, w_c^k, z_c^k, y_c^k, t_c^k, b_c^k \right) \\ w_c^{k+1} = \arg\min_{w_c} L\left( \mathcal{X}^{k+1}, w_c, z_c^k, y_c^k, t_c^k, b_c^k \right) \\ z_c^{k+1} = \arg\min_{z_c} L\left( \mathcal{X}^{k+1}, w_c^{k+1}, z_c, y_c^k, t_c^k, b_c^k \right) \\ y_c^{k+1} = \arg\min_{y_c} L\left( \mathcal{X}^{k+1}, w_c^{k+1}, z_c^{k+1}, y_c, t_c^k, b_c^k \right) \\ t_c^{k+1} = t_c^k + \frac{1}{\mu_1} \left( y_c^{k+1} - D x_c^{k+1} + w_c^{k+1} \right) \\ b_c^{k+1} = b_c^k + \frac{1}{\mu_2} \left( z_c^{k+1} - \varepsilon\!\left( w_c^{k+1} \right) \right) \end{cases} \quad (24)$$
where k represents the iteration number. The aforementioned minimization model is a multivariate function that needs to be optimized, and it can be minimized using an alternating approach.
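The alternating scheme of Equation (24) follows the usual split-Bregman pattern: a quadratic update for the image variable, shrinkage updates for the auxiliary variables, and additive updates for the error-feedback terms. As a self-contained analogue (a much simpler problem than Equation (23), with illustrative parameters), the sketch below applies the same pattern to 1-D TV denoising.

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding: closed-form minimizer of t*|y| + 0.5*(y - v)^2.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_sb(p, lam=0.5, mu=5.0, n_iter=200):
    # Split-Bregman for min_x 0.5*||x - p||^2 + lam*||D x||_1, with split y = D x.
    n = p.size
    D = np.diff(np.eye(n), axis=0)               # forward-difference matrix
    x = p.copy()
    y = np.zeros(n - 1); t = np.zeros(n - 1)
    M = np.eye(n) + mu * (D.T @ D)               # quadratic x-subproblem matrix
    for _ in range(n_iter):
        x = np.linalg.solve(M, p + mu * D.T @ (y - t))   # quadratic update
        y = shrink(D @ x + t, lam / mu)                  # shrinkage update
        t = t + D @ x - y                                # Bregman feedback update
    return x

# Denoising a noisy step signal: the TV solution should land closer to the
# clean step than the noisy input does.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(32), np.ones(32)])
noisy = clean + 0.2 * rng.standard_normal(64)
x = tv_denoise_sb(noisy)
assert np.linalg.norm(x - clean) < np.linalg.norm(noisy - clean)
```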
The subproblem for X can be written as follows:
$$\mathcal{X}^{k+1} = \arg\min_{\mathcal{X}} \; \frac{1}{2} \sum_{c=1}^{C} \left\| A x_c - p_c \right\|_2^2 + \frac{\lambda}{2} \sum_{q=1}^{Q} \left\| T_q \mathcal{X} - T_q^* \mathcal{X}^k \right\|_F^2 + \frac{\mu_1}{2} \sum_{c=1}^{C} \left\| y_c^k - \left( D x_c - w_c^k \right) - t_c^k \right\|_F^2 \quad (25)$$
The solution to Equation (25) can be obtained using Equation (26):
$$x_{ijc}^{k+1} = x_{ijc}^{k} - \frac{\left[ A^T \left( A x_c^k - p_c \right) \right]_{ij} + \mu_1 \left[ \sum_{m=1}^{2} D_m^T \left( D_m x_c^k - w_c^k - y_c^k + t_c^k \right) \right]_{ij} + \lambda \left[ \sum_{q} T_q^T \left( T_q \mathcal{X}^k - T_q^* \mathcal{X}^k \right) \right]_{ijc}}{\left[ A^T A \right]_{ij} + \lambda \left[ \sum_{q} T_q^T T_q \right]_{ijc} + \mu_1 \left[ \sum_{m=1}^{2} D_m^T D_m \right]_{ij}} \quad (26)$$
Adjusting the values of λ and μ1 can balance the data fidelity term and the sparse representation regularization term. Since the TGV regularization term constrains the image domain separately, the optimization models for the subproblems wc, yc, zc, tc, and bc can be solved for a specific channel c. The detailed solution process for wc, yc, zc, tc, and bc can be found in Appendix A.
Next, we will solve subproblem 2. Since each group is independent during the optimization process, Equation (21) can be written as follows:
$$\arg\min_{\{T_q^* \mathcal{X}\}_{q=1}^{Q}} \; \sum_{q=1}^{Q} \left\| T_q \mathcal{X}^{k+1} - T_q^* \mathcal{X} \right\|_F^2 \quad \text{s.t.} \quad T_q^* \mathcal{X} = [\![\lambda_q; X_1, X_2, X_3, X_4]\!] \quad (27)$$
The solution can be obtained with the CP decomposition ALS algorithm introduced in Section 2.2, yielding $T_q^* \mathcal{X}^{k+1}$. Finally, the complete TDTGV algorithm is presented in Algorithm 2.
Algorithm 2: Process of the proposed TDTGV algorithm
Input: Parameters ε, K, L, Q, α1, α2, μ1, μ2, λ
Initialization: $\mathcal{X}^{(0)}$, $w$, $y$, $z$, $t$, $b$
while not satisfy the stopping criteria
do
  Normalize the projection data;
  Update the tensor image $\mathcal{X}^{k+1}$ using Equation (26);
  Update $w_{c1}^{k+1}$, $w_{c2}^{k+1}$ using Equation (A3);
  Update $y_c^{k+1}$ using Equation (A5);
  Update $z_c^{k+1}$ using Equation (A8);
  Update $t_c^{k+1}$ and $b_c^{k+1}$ using Equation (24);
  Update $\{T_q^* \mathcal{X}^{k+1}\}_{q=1}^{Q}$ using Algorithm 1;
  Apply non-negativity constraints to the tensor image $\mathcal{X}^{k+1}$;
end while
Output: Reconstructed spectral CT image X

4. Results

To verify the feasibility and effectiveness of the proposed algorithm, experiments were conducted on both simulated data and real datasets. The comparison methods include the SART algorithm, TVM algorithm, the L0TDL algorithm [25], and the ESC-TDL algorithm [26]. All programming was conducted using MATLAB (2019a), and the computer hardware specifications were 32 GB RAM, Intel(R) Core (TM) i7 9800X @ 3.8 GHz CPU. For all iterative algorithms, the initial image was set to 0. The number of iterations for both numerical simulation and real data experiments was set to 100.

4.1. Study of Numerical Simulation

For the numerical simulation experiment, we utilized a digital thoracic model of a mouse with a 1.2% iodine contrast agent added to the blood, as depicted in Figure 2. The phantom mainly consists of three parts: soft tissue, bone, and iodine. The X-ray source had a tube voltage of 50 kVp, and the energy spectrum was divided into eight channels: [16, 22) keV, [22, 25) keV, [25, 28) keV, [28, 31) keV, [31, 34) keV, [34, 37) keV, [37, 41) keV, and [41, 50) keV, as depicted in Figure 3. The experiment was conducted using an equidistant fan-beam scan; the specific simulation parameters are listed in Table 1. A total of 640 projections were collected in a complete 360° full-view scan, and the number of photons per X-ray path was set to 5000.
To evaluate the performance of the proposed algorithm, images were reconstructed at 160 and 80 projection views for all algorithms. The ground truth image was reconstructed from noise-free full-scan projections using the FBP algorithm. For simplicity, this work only displays three representative energy channels: channels 1, 4, and 8. Selecting algorithm parameters is a challenging task in iterative algorithms; in this experiment, the parameters were selected based on extensive experimental experience. For the 160- and 80-view reconstructions, the fixed parameters were L = 32, K = 50, and ε = 10−4, and the remaining reconstruction parameters are given in Table 2.
Figure 4 and Figure 5 show the images reconstructed using SART, TVM, L0TDL, ESC-TDL, and the proposed algorithm TDTGV at 160 and 80 views, respectively. In the figures, the three rows represent the reconstruction results of various methods for channels 1, 4, and 8. It can be observed from the figures that the proposed algorithm can achieve higher-quality images. Specifically, under the simultaneous sparse view and low-dose conditions, the reconstructed images obtained by the SART algorithm are contaminated by significant noise and obvious artifacts, resulting in the worst reconstruction performance, as shown in Figure 4(b1–b3) and Figure 5(b1–b3). The TVM algorithm improves the reconstruction results, but there is still a problem of image blurring, especially in high-energy channels, as shown in Figure 4(c1–c3) and Figure 5(c1–c3). The L0TDL and ESC-TDL algorithms improve the edge protection ability of the images to some extent; however, due to the instability of tensor dictionary training samples, the recovery ability of fine image structures needs to be improved, as shown in Figure 4(d1–d3,e1–e3) and Figure 5(d1–d3,e1–e3). Compared to the previous algorithms, it can be seen from Figure 4(f1–f3) and Figure 5(f1–f3) that the proposed TDTGV algorithm performs better in terms of image structure protection and detail recovery.
To enable a more detailed comparison of the reconstructed images, we selected two regions of interest (ROIs) in each of Figure 4 and Figure 5 and enlarged them. The ROIs are marked by red boxes in Figure 4(a1) and Figure 5(a1) and labeled regions A, B, C, and D, respectively; the enlarged results are displayed in the red boxes in Figure 4 and Figure 5. Comparing the reconstruction results of each algorithm in the enlarged ROIs, it is evident that the proposed algorithm recovers more small image structures, as indicated by the arrows, further verifying its advantages.
To verify the accuracy of the reconstructed images, grayscale profiles were plotted along two lines in Figure 2a: horizontal (red line) and vertical (yellow line). The reconstruction results at 160 projection views were selected for this analysis, and the profiles for channel 1 and channel 8 are shown in Figure 6 and Figure 7, respectively. The profiles show that the results reconstructed by the TVM algorithm exhibit large oscillations, particularly in channel 8. The other algorithms perform better than TVM, but a comparison of the profiles along both lines shows that the proposed algorithm attains the highest reconstruction accuracy, with results closest to the ground truth.
To further demonstrate the accuracy of the proposed TDTGV algorithm in reconstructing images globally, Figure 8 shows the absolute difference images between the reconstructed images and the ground truth image. As can be observed from Figure 8, the difference between the images reconstructed by the SART algorithm and the reference image is the largest, followed by TVM, L0TDL, and ESC-TDL. Compared to the above algorithms, the TDTGV algorithm can achieve results closer to the original image in the global domain, further demonstrating the advantage of the TDTGV algorithm.
Next, three commonly used metrics, PSNR (peak signal-to-noise ratio), RMSE (root-mean-square error), and SSIM (structural similarity), are used to compare the performance of the algorithms quantitatively. In general, a smaller RMSE together with larger SSIM and PSNR values indicates a reconstruction closer to the ground truth image. The computed values are listed in Table 3. The proposed TDTGV algorithm achieves the best values for all three metrics, further demonstrating its superiority.
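These metrics can be computed with a few lines of NumPy. The sketch below uses a single-window (global) form of SSIM for brevity; published results such as those in Table 3 are typically computed with the local windowed SSIM, so treat this as an illustration of the definitions rather than the paper's exact evaluation code.

```python
import numpy as np

def rmse(x, ref):
    """Root-mean-square error between a reconstruction and the reference."""
    return np.sqrt(np.mean((x - ref) ** 2))

def psnr(x, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the reference's dynamic range."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    err = rmse(x, ref)
    return np.inf if err == 0 else 20.0 * np.log10(data_range / err)

def ssim_global(x, ref, data_range=None):
    """Global (single-window) SSIM; the windowed form averages this over local patches."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, mr = x.mean(), ref.mean()
    vx, vr = x.var(), ref.var()
    cov = np.mean((x - mx) * (ref - mr))
    return ((2 * mx * mr + c1) * (2 * cov + c2)) / ((mx ** 2 + mr ** 2 + c1) * (vx + vr + c2))
```

For identical images, RMSE is 0, PSNR is infinite, and SSIM is 1; a reconstruction closer to the reference scores a higher PSNR.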
Figure 9 shows the average attenuation coefficients and relative deviations of bone, soft tissue, and iodine contrast agents in each channel reconstructed with 160 projection views to verify the reconstruction performance of the algorithms. The first row shows the relative deviations of three basic materials, and the second row shows the average attenuation coefficients. The reference mean values were obtained by reconstructing the noise-free projection data using the FBP algorithm.
The figure shows that TDTGV performs better in both the average attenuation coefficients and the relative deviations of the three basic materials. Specifically, for the decomposition of bone, as can be observed in Figure 9a, the TVM algorithm yields the highest relative deviation (about 8% in channel 8), followed by L0TDL and ESC-TDL. The proposed TDTGV algorithm keeps the relative deviation below 1.5% in all channels, yielding the most accurate mean values. For the decomposition of soft tissue, as shown in Figure 9b, the relative deviations of the four algorithms are all below 1%; nevertheless, the proposed TDTGV algorithm achieves the smallest decomposition error in every channel. Regarding the iodine contrast agent, as shown in Figure 9c, the relative deviations of the four algorithms do not exceed 3%, while the proposed algorithm keeps the relative deviation for iodine below 1.8% in each channel. Overall, the proposed algorithm achieves accurate reconstruction results in all channels.
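The relative deviation used here is the per-channel discrepancy of the mean attenuation coefficient inside a material region, |mean_recon − mean_ref| / mean_ref × 100%. A minimal sketch, assuming a boolean ROI mask that selects one material (the array and mask names are illustrative, not from the paper):

```python
import numpy as np

def relative_deviation(recon, ref, roi_mask):
    """Per-channel relative deviation (%) of the mean attenuation inside an ROI.

    recon, ref : arrays of shape (H, W, C) -- reconstructed / reference channel images
    roi_mask   : boolean array of shape (H, W) selecting one material region
    """
    mean_recon = recon[roi_mask].mean(axis=0)  # shape (C,): ROI mean per channel
    mean_ref = ref[roi_mask].mean(axis=0)
    return np.abs(mean_recon - mean_ref) / mean_ref * 100.0
```

Applying this to the bone, soft-tissue, and iodine masks for each algorithm reproduces the kind of per-channel deviation curves plotted in Figure 9.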
In addition, to further compare the capabilities of different algorithms in material component characterization, the spectral CT images reconstructed by SART, TVM, L0TDL, ESC-TDL, and TDTGV are characterized into the three basis materials. Taking the reconstruction results with 160 projection views as an example, Figure 10 shows the final results and the color representation images of the three basis materials. From the figure, it can be observed that in the bone region, the proposed method and the L0TDL algorithm achieve high characterization accuracy; for the iodine contrast agent region, SART, TVM, L0TDL, and ESC-TDL all mistakenly classify some bone pixels as iodine components to some extent, while the proposed algorithm achieves relatively accurate characterization; for soft tissue, as can be seen from the second column, the proposed algorithm has high characterization accuracy (indicated by the red arrow), outperforming the other algorithms.
Table 4 presents the quantitative analysis results of RMSE for the characterization of the three basis material components. It can be seen from the table that the proposed TDTGV algorithm achieves the lowest RMSE values in all three basis material component characterizations, further verifying the accuracy of the proposed algorithm in material component characterization.
To compare the convergence of the reconstruction algorithms, Figure 11 plots the RMSE values against the number of iterations. The figure shows that the proposed algorithm converges while simultaneously achieving the smallest RMSE value.

4.2. Study of Actual Clinical Mice

In order to further verify the advantage of the TDTGV algorithm, experiments were conducted on actual clinical mice. A mouse injected with 0.2 mL of 15 nm AuroVist II gold nanoparticles (GNPs) (Nanoprobes, Yaphank, NY, USA) was used (mouse data provided by the MARS team in New Zealand). The projection parameters for the real data are listed in Table 5. In a full scan, 371 projections were uniformly acquired. The X-ray source operated at 120 kVp, and the detected spectrum was divided into 13 energy channels. The spectral CT images are third-order tensors of size 256 × 256 × 13, covering an area of 18.41 × 18.41 mm².
In this experiment, images were reconstructed at 120 and 60 projection views to verify the algorithm's performance under sparse views. In the following experiments, only three typical energy channels (channels 1, 7, and 13) are displayed. The algorithm parameters are as follows: the fixed parameters are L = 64, K = 50, and ε = 10⁻⁴, and the remaining reconstruction parameters are shown in Table 6.
Figure 12 and Figure 13 show the images reconstructed using SART, TVM, L0TDL, ESC-TDL, and the proposed TDTGV algorithm at 120 and 60 views, respectively. The figures show that the SART algorithm reconstructs images with severe noise and artifacts, resulting in the loss of image details. The TVM algorithm exhibits image blurring in soft-tissue regions, as shown in Figure 12(b1–b3) and Figure 13(b1–b3). The L0TDL and ESC-TDL algorithms improve edge preservation to some extent but still lose some fine image structures, as illustrated in Figure 12(c1–c3,d1–d3) and Figure 13(c1–c3,d1–d3). Compared with these algorithms, the proposed TDTGV algorithm performs best in terms of reconstructed image quality, recovering more fine structures, as shown in Figure 12(e1–e3) and Figure 13(e1–e3). At the same time, the enlarged views of regions B and D show that the TDTGV algorithm can effectively suppress the artifacts that appear near bones, as indicated by arrows "1", "2", "3", and "4".
In order to further verify the effectiveness of the algorithm proposed in this paper, regions of interest A and C were extracted from Figure 12 and Figure 13 and enlarged for separate display in Figure 14. From the enlarged images of the regions of interest, it can be observed that the image quality obtained by reconstructing with SART and TVM algorithms is relatively low, making it difficult to distinguish the image structures indicated by the red arrows. For the image structure indicated by red arrow “6”, the reconstruction using the L0TDL algorithm appears blurred, while the ESC-TDL algorithm and the TDTGV algorithm yield clearer image structures. However, the ESC-TDL algorithm also exhibits some blurriness for the image structure shown by red arrow “5”. For the bone structure indicated by arrow “7”, all compared algorithms exhibit varying degrees of blurriness. In comparison, the TDTGV algorithm yields a clearer structure, further verifying the effectiveness of the proposed algorithm.
To demonstrate the ability of algorithms in material component characterization, the spectral CT images reconstructed using SART, TVM, L0TDL, ESC-TDL, and the proposed TDTGV algorithm were characterized into three basis materials. Taking the images reconstructed with 120 projection views as an example, the results and corresponding color images are displayed in Figure 15.
Red arrows “8”, “9”, and “10” indicate some detailed parts. From the bone region, it can be observed that the ESC-TDL and TDTGV algorithms identify clearer bone images. However, in the soft tissue region, the TDTGV algorithm produces fewer artifacts in the decomposition image compared to other algorithms. Regarding the GNP component, the ESC-TDL and TDTGV algorithms demonstrate higher accuracy in characterizing the GNP region. Overall, from the fused color image, the proposed algorithm in this paper outperforms other reconstruction algorithms in terms of material component characterization, resulting in sharper image boundaries.

4.3. Parameter Analysis

The two regularization terms in the objective function in Equation (17) require several parameters to be tuned. The parameters of the CP tensor decomposition regularization term mainly include the accuracy level ε, the sparsity level L, the number of atoms K, the group number Q, and the regularization factor λ. As in tensor dictionary learning, a smaller ε may lead to noise artifacts, while a larger ε may destroy structural details. For L, a lower value results in blurred edges. Following the settings in [24], the fixed parameters are set as L = 32, K = 50, and ε = 10⁻⁴. Here, we focus on the group number Q, the regularization parameter λ, and the parameters of the second regularization term: α1, α2, μ1, and μ2. These parameters have a significant impact on image quality, and different values lead to reconstructions of varying quality. During parameter selection, only one or two free parameters are relaxed while the others are fixed, and the change in image quality with respect to each parameter is observed experimentally, using RMSE and SSIM as the selection metrics. Figure 16 shows the RMSE and SSIM values for different parameter values of the algorithm.
From Figure 16, it can be observed that the RMSE values for Q = 128 and Q = 256 are similar, but the SSIM value is higher for Q = 128. The parameter λ regulates the correlation between channels: as λ increases, the correlation becomes stronger, but the image is more easily over-smoothed. When λ exceeds 2.7 × 10³, the image becomes overly smooth and quality decreases. α1 and α2 are the coefficients of the two penalty terms in TGV and affect image quality significantly: values that are too low may leave noise in the image, while values that are too high may over-smooth the image edges. Figure 16 shows that with appropriate values, both the RMSE and SSIM reach their optimal levels. The same conclusion holds for μ1 and μ2: as their values rise, the RMSE decreases and the SSIM increases, but if the values keep rising, the results become unsatisfactory and the corresponding image quality deteriorates.
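The one-at-a-time selection strategy described above can be illustrated with a toy sweep: fix all parameters but one, reconstruct for several candidate values, and keep the value that minimizes RMSE (or maximizes SSIM). The `smooth` function below is only a stand-in for a regularized reconstruction so that the sketch is self-contained; it is not the paper's solver, and all names here are illustrative.

```python
import numpy as np

def smooth(signal, lam, n_pass=1):
    """Stand-in 'reconstruction': lam in [0, 1] controls neighbor-averaging strength."""
    out = signal.copy()
    for _ in range(int(n_pass)):
        out = (1 - lam) * out + lam * 0.5 * (np.roll(out, 1) + np.roll(out, -1))
    return out

def sweep_parameter(values, recon_fn, ref, metric):
    """One-at-a-time sweep: score each candidate value, return the best one and all scores."""
    scores = [metric(recon_fn(v), ref) for v in values]
    return values[int(np.argmin(scores))], scores

# Toy data: a smooth ground truth corrupted by noise.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = truth + 0.3 * rng.standard_normal(256)

rmse = lambda x, ref: np.sqrt(np.mean((x - ref) ** 2))
candidates = [0.0, 0.2, 0.4, 0.6, 0.8]
best_lam, scores = sweep_parameter(
    candidates, lambda lam: smooth(noisy, lam, n_pass=10), truth, rmse)
```

The same loop applies to Q, λ, α1, α2, μ1, or μ2 in turn, with the reconstruction algorithm in place of `smooth`.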

5. Discussion and Conclusions

To address the limited image quality of tensor dictionary-based spectral CT algorithms, we propose a reconstruction algorithm that combines total generalized variation and tensor decomposition. First, we analyze the characteristics of tensor decomposition for image recovery and introduce it into the CT reconstruction model, overcoming the instability of tensor dictionaries, which depend on the quality of the training dataset. Then, we use the k-means clustering algorithm to group the extracted tensor blocks and explore the intrinsic relationships within each group's fourth-order tensor volume by CP tensor decomposition. In the single-channel image domain, we employ a high-order TGV regularization term to better adapt to different image structures and noise levels, as well as to effectively suppress artifacts. The subproblems are solved efficiently with the ADMM algorithm. Finally, experimental results validate that the proposed algorithm improves the preservation of image structures and the suppression of noise artifacts.
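To make the CP step concrete, the following sketch computes a rank-R CP decomposition of a NumPy tensor by alternating least squares (ALS). It is a minimal illustration of CP decomposition itself, not the paper's solver, which applies the decomposition to grouped fourth-order tensor volumes under a sparsity constraint.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization (C-order over the remaining axes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    """Column-wise Khatri-Rao product; the first matrix's index varies slowest."""
    out = mats[0]
    for m in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, m).reshape(-1, out.shape[1])
    return out

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of tensor T by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [factors[d] for d in range(T.ndim) if d != mode]
            kr = khatri_rao(others)              # ordering matches the C-order unfolding
            gram = np.ones((rank, rank))
            for f in others:
                gram *= f.T @ f                  # Hadamard product of Gram matrices
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors

def cp_reconstruct(factors, shape):
    """Rebuild the full tensor from its CP factor matrices."""
    return (factors[0] @ khatri_rao(factors[1:]).T).reshape(shape)
```

`cp_reconstruct(cp_als(T, R), T.shape)` then yields the low-rank approximation that, in the proposed method, plays the role a pre-trained dictionary plays in TDL-style algorithms.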
Although the method yields improved outcomes, some challenges remain. Firstly, many parameters need to be determined; in this paper, they are chosen empirically through extensive experimentation, so automating parameter optimization remains an open issue that demands further exploration. Secondly, the small size (8 × 8) of the spatial-spectral block tensors used to construct the fourth-order tensor volume is insufficient to accurately describe the sparsity and low-rankness of the tensor volume. Lastly, in practical applications, operations on fourth-order tensor volumes are often difficult, with high computational costs and memory requirements. We compared the runtime costs of the experiments under the same conditions. The back-projection reconstruction times are identical for all algorithms, so we compare only the regularization-constraint times. Table 7 lists the runtime per iteration for each algorithm. The TDTGV algorithm takes the longest computation time because of operations such as tensor-block similarity extraction and clustering, as well as CP tensor decomposition on the fourth-order tensor volume. These issues require further investigation and are the focus of our future research.
In conclusion, we propose a spectral CT reconstruction model based on total generalized variation and tensor decomposition. The proposed algorithm employs CP tensor decomposition instead of pre-trained tensor dictionaries, overcoming the instability of tensor dictionaries that depend on the quality of the training dataset while preserving more image structures. In the construction of the tensor volume, we utilize the non-local similarity features of the image to build a fourth-order tensor volume and further explore the internal image relationships by CP tensor decomposition. Meanwhile, to enhance the constraints in the image domain, the concept of TGV regularization is used. The introduction of high-order derivatives allows for better adaptation to different image structures and noise levels, as well as effective suppression of artifacts, further improving the accuracy of image reconstruction.

Author Contributions

Conceptualization, X.L.; methodology, X.L.; software, X.L. and K.W.; validation, X.L., K.W., and X.X.; formal analysis, X.L. and X.X.; data curation, X.L.; writing—original draft preparation, X.L. and X.X.; writing—review and editing, X.L. and K.W.; supervision, X.L. and F.L.; project administration, X.L.; funding acquisition, X.L. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Fundamental Applied Research of Shanxi Province (202203021212455, 202303021222067) and Shanxi Agricultural University Youth Science and Technology Innovation Project (2019018, 2019021).

Data Availability Statement

The data underlying the results presented in this paper are available from the authors upon reasonable request.

Acknowledgments

The authors wish to thank the MARS team in New Zealand for providing the real mouse data. The authors express their gratitude to Yanbo Zhang of Siemens, USA, for his extensive discussions and valuable suggestions. Additionally, the authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Solutions for w_c, y_c, z_c, t_c, and b_c

The w_c subproblem can be written as follows:

$$ w_c^{k+1} = \arg\min_{w_c} \frac{\mu_1}{2} \left\| y_c^k - \left( D x_c^{k+1} - w_c \right) - t_c^k \right\|_F^2 + \frac{\mu_2}{2} \left\| z_c^k - \varepsilon(w_c) - b_c^k \right\|_F^2 \tag{A1} $$
Separating w_c1 and w_c2, we obtain Equation (A2):

$$ \begin{aligned} w_{c1}^{k+1} &= \arg\min_{w_{c1}} \frac{\mu_1}{2} \left\| y_{c1}^k - \left( D_1 x_c^{k+1} - w_{c1} \right) - t_{c1}^k \right\|_F^2 + \frac{\mu_2}{2} \left\| z_{c1}^k - D_1 w_{c1} - b_{c1}^k \right\|_F^2 + \mu_2 \left\| z_{c3}^k - \tfrac{1}{2} \left( D_1 w_{c2} + D_2 w_{c1} \right) - b_{c3}^k \right\|_F^2 \\ w_{c2}^{k+1} &= \arg\min_{w_{c2}} \frac{\mu_1}{2} \left\| y_{c2}^k - \left( D_2 x_c^{k+1} - w_{c2} \right) - t_{c2}^k \right\|_F^2 + \frac{\mu_2}{2} \left\| z_{c2}^k - D_2 w_{c2} - b_{c2}^k \right\|_F^2 + \mu_2 \left\| z_{c3}^k - \tfrac{1}{2} \left( D_1 w_{c2} + D_2 w_{c1} \right) - b_{c3}^k \right\|_F^2 \end{aligned} \tag{A2} $$

where the off-diagonal term carries the combined weight μ2 because it appears twice in the Frobenius norm of the symmetric 2 × 2 tensor ε(w_c).
Taking the partial derivatives with respect to w_c1 and w_c2, setting them to zero, and rearranging, we obtain the solutions in Equation (A3):

$$ \begin{aligned} w_{c1}^{k+1} &= \frac{\mu_1 \left( D_1 x_c^{k+1} - y_{c1}^k + t_{c1}^k \right) + \mu_2 \left[ D_1^T \left( z_{c1}^k - b_{c1}^k \right) + D_2^T \left( z_{c3}^k - b_{c3}^k \right) \right] - \frac{\mu_2}{2} D_2^T D_1 w_{c2}^{k}}{\mu_1 + \mu_2 \left( D_1^T D_1 + \tfrac{1}{2} D_2^T D_2 \right)} \\ w_{c2}^{k+1} &= \frac{\mu_1 \left( D_2 x_c^{k+1} - y_{c2}^k + t_{c2}^k \right) + \mu_2 \left[ D_2^T \left( z_{c2}^k - b_{c2}^k \right) + D_1^T \left( z_{c3}^k - b_{c3}^k \right) \right] - \frac{\mu_2}{2} D_1^T D_2 w_{c1}^{k+1}}{\mu_1 + \mu_2 \left( D_2^T D_2 + \tfrac{1}{2} D_1^T D_1 \right)} \end{aligned} \tag{A3} $$
The y_c subproblem can be written as follows:

$$ y_c^{k+1} = \arg\min_{y_c} \alpha_1 \left\| y_c \right\|_1 + \frac{\mu_1}{2} \left\| y_c - \left( D x_c - w_c \right) - t_c \right\|_F^2 \tag{A4} $$
For Equation (A4), the soft-thresholding algorithm can be employed to obtain the solution:

$$ y_c^{k+1} = \operatorname{shrink}_2 \left( D x_c^{k+1} - w_c^{k+1} + t_c^k, \frac{\alpha_1}{\mu_1} \right) \tag{A5} $$
The shrink_2 operator is defined as follows:

$$ \operatorname{shrink}_2(a, \mu) = \begin{cases} 0, & a = 0 \\ \max\left( \left\| a \right\|_2 - \mu, 0 \right) \dfrac{a}{\left\| a \right\|_2}, & a \neq 0 \end{cases} \tag{A6} $$
The z_c subproblem can be written as follows:

$$ z_c^{k+1} = \arg\min_{z_c} \alpha_2 \left\| z_c \right\|_1 + \frac{\mu_2}{2} \left\| z_c - \varepsilon(w_c^{k+1}) - b_c^k \right\|_F^2 \tag{A7} $$
Similar to y_c, the solution for z_c is obtained by soft thresholding:

$$ z_c^{k+1} = \operatorname{shrink}_F \left( \varepsilon(w_c^{k+1}) + b_c^k, \frac{\alpha_2}{\mu_2} \right) \tag{A8} $$
The shrink_F operator is defined as follows:

$$ \operatorname{shrink}_F(b, \mu) = \begin{cases} 0, & b = 0 \\ \max\left( \left\| b \right\|_F - \mu, 0 \right) \dfrac{b}{\left\| b \right\|_F}, & b \neq 0 \end{cases} \tag{A9} $$
where 0 denotes the 2 × 2 zero matrix and ‖·‖_F denotes the Frobenius norm of a matrix.
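For reference, the two soft-thresholding operators above can be written directly in NumPy. The sketch applies them to a single gradient vector or symmetric 2 × 2 matrix; in the reconstruction they are applied pixel-wise over the whole field.

```python
import numpy as np

def shrink2(a, mu):
    """Vector soft-thresholding: shrinks the 2-norm of `a` toward zero by `mu`."""
    norm = np.linalg.norm(a)
    if norm == 0:
        return np.zeros_like(a)
    return max(norm - mu, 0.0) / norm * a

def shrinkF(b, mu):
    """Matrix soft-thresholding: shrinks the Frobenius norm of `b` toward zero by `mu`."""
    norm = np.linalg.norm(b)  # Frobenius norm for a 2-D array
    if norm == 0:
        return np.zeros_like(b)
    return max(norm - mu, 0.0) / norm * b
```

An input whose norm is below the threshold μ is mapped exactly to zero, which is what produces the sparsity promoted by the two ℓ1 penalties.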

References

  1. Wang, G.; Yu, H.; De Man, B. An outlook on x-ray CT research and development. Med. Phys. 2008, 35, 1051–1064.
  2. Nakano, M.; Haga, A.; Kotoku, J.; Magome, T.; Masutani, Y.; Hanaoka, S.; Kida, S.; Nakagawa, K. Cone-beam CT reconstruction for non-periodic organ motion using time-ordered chain graph model. Radiat. Oncol. 2017, 12, 145.
  3. Brooks, R.A.; Di Chiro, G. Beam hardening in X-ray reconstructive tomography. Phys. Med. Biol. 1976, 21, 390–398.
  4. Zhao, W.; Li, D.; Niu, K.; Qin, W.; Peng, H.; Niu, T. Robust Beam Hardening Artifacts Reduction for Computed Tomography Using Spectrum Modeling. IEEE Trans. Comput. Imaging 2018, 5, 333–342.
  5. Brenner, D.J.; Hall, E.J. Computed tomography-an increasing source of radiation exposure. N. Engl. J. Med. 2007, 357, 2277–2284.
  6. Nikzad, S.; Pourkaveh, M.; Vesal, N.J.; Gharekhanloo, F. Cumulative radiation dose and cancer risk estimation in common diagnostic radiology procedures. Iran. J. Radiol. 2018, 15, e60955.
  7. Greffier, J.; Frandon, J. Spectral photon-counting CT system: Toward improved image quality performance in conventional and spectral CT imaging. Diagn. Interv. Imaging 2021, 102, 271–272.
  8. Xi, Y.; Chen, Y.; Tang, R.; Sun, J.; Zhao, J. United Iterative Reconstruction for Spectral Computed Tomography. IEEE Trans. Med. Imaging 2015, 34, 769–778.
  9. Xu, Q.; Yu, H.; Bennett, J.; He, P.; Zainon, R.; Doesburg, R.; Opie, A.; Walsh, M.; Shen, H.; Butler, A.; et al. Image Reconstruction for Hybrid True-Color Micro-CT. IEEE Trans. Biomed. Eng. 2012, 59, 1711–1719.
  10. Zhao, B.; Gao, H.; Ding, H.; Molloi, S. Tight-frame based iterative image reconstruction for spectral breast CT. Med. Phys. 2013, 40, 031905.
  11. Zeng, D.; Gao, Y.; Huang, L.; Bian, Z.; Zhang, H.; Lu, L.; Ma, J. Penalized weighted least-squares approach for multienergy computed tomography image reconstruction via structure tensor total variation regularization. Comput. Med. Imaging Graph. 2016, 53, 19–29.
  12. Wang, Q.; Salehjahromi, M.; Yu, H. Refined Locally Linear Transform-Based Spectral-Domain Gradient Sparsity and Its Applications in Spectral CT Reconstruction. IEEE Access 2021, 9, 58537–58548.
  13. Xu, Q.; Yu, H.Y.; Mou, X.Q.; Zhang, L.; Hsieh, J.; Wang, G. Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans. Med. Imaging 2012, 31, 1682–1697.
  14. Zhao, B.; Ding, H.; Lu, Y.; Wang, G.; Zhao, J.; Molloi, S. Dual-dictionary learning-based iterative image reconstruction for spectral computed tomography application. Phys. Med. Biol. 2012, 57, 8217.
  15. Xu, Q.; Liu, H.; Yu, H.; Wang, G.; Xing, L. Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT. Med. Phys. 2016, 43, 3701.
  16. Wang, S.; Wu, W.; Cai, A.; Xu, Y.; Vardhanabhuti, V.; Liu, F.; Yu, H. Image-spectral decomposition extended-learning assisted by sparsity for multi-energy computed tomography reconstruction. Quant. Imaging Med. Surg. 2023, 13, 610–630.
  17. Gao, H.; Yu, H.; Osher, S.; Wang, G. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM). Inverse Probl. 2011, 27, 115012.
  18. Chu, J.; Li, L.; Chen, Z.; Wang, G.; Gao, H. Multi-energy CT reconstruction based on Low Rank and Sparsity with the Split-Bregman Method (MLRSS). In Proceedings of the 2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC), Anaheim, CA, USA, 29 October–3 November 2012; pp. 2411–2414.
  19. Li, L.; Chen, Z.; Wang, G.; Chu, J.; Gao, H. A tensor PRISM algorithm for multi-energy CT reconstruction and comparative studies. J. X-ray Sci. Technol. 2014, 22, 147–163.
  20. Li, L.; Chen, Z.; Cong, W.; Wang, G. Spectral CT Modeling and Reconstruction With Hybrid Detectors in Dynamic-Threshold-Based Counting and Integrating Modes. IEEE Trans. Med. Imaging 2015, 34, 716–728.
  21. Holt, K.M. Total Nuclear Variation and Jacobian Extensions of Total Variation for Vector Fields. IEEE Trans. Image Process. 2014, 23, 3975–3989.
  22. Rigie, D.S.; La Riviere, P.J. Joint reconstruction of multi-channel, spectral CT data via constrained total nuclear variation minimization. Phys. Med. Biol. 2015, 60, 1741–1762.
  23. He, Y.; Zeng, L.; Xu, Q.; Wang, Z.; Yu, H.; Shen, Z.; Yang, Z.; Zhou, R. Spectral CT reconstruction via low-rank representation and structure preserving regularization. Phys. Med. Biol. 2023, 68, 025011.
  24. Zhang, Y.; Mou, X.; Wang, G.; Yu, H. Tensor-Based Dictionary Learning for Spectral CT Reconstruction. IEEE Trans. Med. Imaging 2017, 36, 142–154.
  25. Wu, W.; Zhang, Y.; Wang, Q.; Liu, F.; Chen, P.; Yu, H. Low-dose spectral CT reconstruction using image gradient ℓ0–norm and tensor dictionary. Appl. Math. Model. 2018, 63, 538–557.
  26. Li, X.; Sun, X.; Zhang, Y.; Pan, J.; Chen, P. Tensor Dictionary Learning with an Enhanced Sparsity Constraint for Sparse-View Spectral CT Reconstruction. Photonics 2022, 9, 35.
  27. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500.
  28. Wang, M.; Hong, D.; Han, Z.; Li, J.; Yao, J.; Gao, L.; Zhang, B.; Chanussot, J. Tensor Decompositions for Hyperspectral Data Processing in Remote Sensing: A comprehensive review. IEEE Geosci. Remote Sens. Mag. 2023, 11, 26–72.
  29. Lin, J.; Huang, T.-Z.; Zhao, X.-L.; Ji, T.-Y.; Zhao, Q. Tensor Robust Kernel PCA for Multidimensional Data. IEEE Trans. Neural Netw. Learn. Syst. 2024, 1–13.
  30. Osipov, D.; Chow, J.H. PMU Missing Data Recovery Using Tensor Decomposition. IEEE Trans. Power Syst. 2020, 35, 4554–4563.
  31. Zhang, S.; Guo, X.; Xu, X.; Li, L.; Chang, C.-C. A video watermark algorithm based on tensor decomposition. Math. Biosci. Eng. 2019, 16, 3435–3449.
  32. Peng, Y.; Meng, D.; Xu, Z.; Gao, C.; Yang, Y.; Zhang, B. Decomposable Nonlocal Tensor Dictionary Learning for Multispectral Image Denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2949–2956.
  33. Zhang, Y.; Salehjahromi, M.; Yu, H. Tensor decomposition and non-local means based spectral CT image denoising. J. X-ray Sci. Technol. 2019, 27, 397–416.
  34. Wu, W.; Yu, H.; Liu, F.; Zhang, J.; Vardhanabhuti, V. Spectral CT reconstruction via Spectral-Image Tensor and Bidirectional Image-gradient minimization. Comput. Biol. Med. 2022, 151, 106080.
  35. Chen, X.; Xia, W.J.; Liu, Y.; Chen, H.; Zhou, J.L.; Zha, Z.Y.; Wen, B.H.; Zhang, Y. FONT-SIR: Fourth-Order Nonlocal Tensor Decomposition Model for Spectral CT Image Reconstruction. IEEE Trans. Med. Imaging 2022, 41, 2144–2156.
  36. Wu, W.; Hu, D.; An, K.; Wang, S.; Luo, F. A High-Quality Photon-Counting CT Technique Based on Weight Adaptive Total-Variation and Image-Spectral Tensor Factorization for Small Animals Imaging. IEEE Trans. Instrum. Meas. 2021, 70, 2502114.
  37. Gong, C.; Zeng, L. Adaptive iterative reconstruction based on relative total variation for low-intensity computed tomography. Signal Process. 2019, 165, 149–162.
  38. Xu, S.; Qi, M.; Wang, X.; Dong, Y.; Hu, Z.; Zhao, H. Image Restoration under Cauchy Noise: A Group Sparse Representation and Multidirectional Total Generalized Variation Approach. Trait. Signal 2023, 40, 857–873.
  39. Niu, S.; Gao, Y.; Bian, Z.; Huang, J.; Chen, W.; Yu, G.; Liang, Z.; Ma, J. Sparse-view x-ray CT reconstruction via total generalized variation regularization. Phys. Med. Biol. 2014, 59, 2997.
  40. Han, Y.; Zhang, C.-H. Tensor Principal Component Analysis in High Dimensional CP Models. IEEE Trans. Inf. Theory 2023, 69, 1147–1167.
  41. Bredies, K.; Kunisch, K.; Pock, T. Total Generalized Variation. SIAM J. Imaging Sci. 2010, 3, 492–526.
  42. Gao, Y.M.; Jin, Z.M.; Li, X. Template-based CT reconstruction with optimal transport and total generalized variation. Inverse Probl. 2023, 39, 095007.
  43. Lazzaro, D.; Piccolomini, E.L.; Zama, F. A fast splitting method for efficient Split Bregman iterations. Appl. Math. Comput. 2019, 357, 139–146.
  44. Liu, Q.; Liang, D.; Song, Y.; Luo, J.; Zhu, Y.; Li, W. Augmented Lagrangian-Based Sparse Representation Method with Dictionary Updating for Image Deblurring. SIAM J. Imaging Sci. 2013, 6, 1689–1718.
  45. Luo, G.; Yang, Q. A Fast Symmetric Alternating Direction Method of Multipliers. Numer. Math. Methods Appl. 2020, 13, 200–219.
Figure 1. Process of grouping tensor volumes: the dashed box in the left figure represents extracting overlapping small tensor blocks from the image tensor, while the right figure shows clustering the extracted small tensor blocks into Q groups, with each group containing N_q similar small tensor blocks.
Figure 2. (a) A digital thoracic model of mice with iodine contrast and (b) decomposed image of material: bone (red), soft tissue (green), and iodine contrast (blue).
Figure 3. 50 kVp spectrum curve: Different colors represent the segmented ranges of the energy spectrum.
Figure 4. Images representing a thoracic model of mice reconstructed from 160 projections using different methods: (a1–a3) Ground Truth, (b1–b3) SART, (c1–c3) TVM, (d1–d3) L0TDL, (e1–e3) ESC-TDL, and (f1–f3) TDTGV. From top to bottom, the display windows are [0, 0.25] cm⁻¹, [0, 0.1] cm⁻¹, and [0, 0.06] cm⁻¹, respectively.
Figure 5. Images representing a thoracic model of mice reconstructed from 80 projections using different methods: (a1–a3) Ground Truth, (b1–b3) SART, (c1–c3) TVM, (d1–d3) L0TDL, (e1–e3) ESC-TDL, and (f1–f3) TDTGV. From top to bottom, the display windows are [0, 0.25] cm⁻¹, [0, 0.1] cm⁻¹, and [0, 0.06] cm⁻¹, respectively.
Figure 6. Grayscale curve along the red line of the reference image in Figure 2a.
Figure 7. Grayscale curve along the yellow line of the reference image in Figure 2a.
Figure 8. The absolute difference images under 160 projection views. From top to bottom, the rows are: (a1–a3) SART, (b1–b3) TVM, (c1–c3) L0TDL, (d1–d3) ESC-TDL, (e1–e3) TDTGV. The display window is [−0.1, 0.1] cm⁻¹.
Figure 9. Average attenuation coefficients and relative deviations of the three basis materials: bone (a,d), soft tissue (b,e), and iodine contrast agent (c,f).
Figure 10. Material decomposition results from 160 views. The colored areas represent the corresponding basis materials: bone (red), soft tissue (green), and iodine contrast agent (blue). The display windows are [0, 0.2] cm−1, [0, 1] cm−1, and [0, 0.5] cm−1, respectively.
Figure 11. Convergence analysis of the reconstruction algorithms.
Figure 12. Images of actual preclinical mice reconstructed from 120 projections using different methods: (a1–a3) SART, (b1–b3) TVM, (c1–c3) L0TDL, (d1–d3) ESC-TDL, and (e1–e3) TDTGV. The display window is [0, 0.08] cm−1.
Figure 13. Images of actual preclinical mice reconstructed from 60 projections using different methods: (a1–a3) SART, (b1–b3) TVM, (c1–c3) L0TDL, (d1–d3) ESC-TDL, and (e1–e3) TDTGV. The display window is [0, 0.08] cm−1.
Figure 14. Enlarged views of ROI A and ROI C.
Figure 15. Material decomposition results from 120 views. From left to right, the columns are SART, TVM, L0TDL, ESC-TDL, and the proposed TDTGV algorithm. From top to bottom, the display windows of the rows are [0.1, 0.5] cm−1, [0, 1] cm−1, and [0, 1.5] cm−1, respectively.
Figure 16. Quantitative analysis of the reconstructed images under different parameter settings.
Table 1. Parameter settings of the simulated projection.

Number | Parameter | Value
1 | Distance from X-ray source to PCD | 180 mm
2 | Distance from X-ray source to rotation center | 132 mm
3 | Number of detector elements | 512
4 | Detector element size | 0.1 mm
5 | Reconstructed image size | 256 × 256 × 8
6 | Size of each pixel | 0.15 mm
Table 2. Parameter settings of the simulation experiments.

Experiment | Photon Numbers | Projection Views | λ | α1 | α2 | μ1 | μ2 | Q
Simulation | 5000 | 80 | 5.1 × 10³ | 1 | 3 | 3 × 10⁴ | 20 | 128
Simulation | 5000 | 160 | 2.7 × 10³ | 0.7 | 2.5 | 2.4 × 10⁴ | 20 | 128
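The per-experiment settings in Table 2 can be held in a small lookup structure keyed by the number of projection views. A sketch is given below; the key names mirror the table's symbols (λ, α1, α2, μ1, μ2, Q), the values are taken verbatim from the table, and the dictionary layout itself is an illustrative assumption rather than code from the paper:

```python
# Hyperparameter settings from Table 2 (simulation experiments),
# keyed by the number of projection views.
SIM_PARAMS = {
    80:  dict(photons=5000, lam=5.1e3, alpha1=1.0, alpha2=3.0,
              mu1=3.0e4, mu2=20, Q=128),
    160: dict(photons=5000, lam=2.7e3, alpha1=0.7, alpha2=2.5,
              mu1=2.4e4, mu2=20, Q=128),
}

def get_params(views):
    """Look up the settings for a given number of projection views."""
    return SIM_PARAMS[views]

print(get_params(160)["lam"])  # 2700.0
```

Keeping the two view counts side by side in one structure makes it easy to confirm that the regularization weight λ is reduced when more projections are available.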
Table 3. Index comparison of reconstruction results of different algorithms.

Views | Method | RMSE (1st / 4th / 8th) | SSIM (1st / 4th / 8th) | PSNR (1st / 4th / 8th)
80 | SART | 0.2407 / 0.2019 / 0.1731 | 0.6767 / 0.6446 / 0.5498 | 15.71 / 0.6167 / 0.5266
80 | TVM | 0.1646 / 0.1324 / 0.0436 | 0.9167 / 0.8985 / 0.8752 | 19.02 / 0.8831 / 28.16
80 | L0TDL | 0.1277 / 0.0624 / 0.0265 | 0.9431 / 0.9329 / 0.9204 | 28.78 / 0.9217 / 39.25
80 | ESC-TDL | 0.1216 / 0.0584 / 0.0213 | 0.9501 / 0.9437 / 0.9255 | 30.02 / 39.17 / 41.37
80 | TDTGV | 0.1032 / 0.0419 / 0.0181 | 0.9656 / 0.9519 / 0.9375 | 31.18 / 40.67 / 43.65
160 | SART | 0.2198 / 0.1602 / 0.1511 | 0.7867 / 0.6993 / 0.6249 | 16.57 / 24.17 / 26.78
160 | TVM | 0.1486 / 0.1028 / 0.0372 | 0.9313 / 0.9043 / 0.8982 | 20.25 / 29.16 / 32.84
160 | L0TDL | 0.1125 / 0.0573 / 0.0207 | 0.9601 / 0.9551 / 0.9364 | 30.41 / 38.91 / 40.93
160 | ESC-TDL | 0.1075 / 0.0433 / 0.0187 | 0.9672 / 0.9600 / 0.9394 | 30.87 / 40.11 / 42.75
160 | TDTGV | 0.0852 / 0.0349 / 0.0104 | 0.9740 / 0.9644 / 0.9487 | 32.36 / 41.73 / 44.25
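Table 3 reports RMSE, SSIM, and PSNR per energy channel. RMSE and PSNR can be computed directly with NumPy as sketched below (SSIM requires a windowed comparison and is typically taken from an existing implementation such as scikit-image's structural_similarity); the toy arrays are illustrative only:

```python
import numpy as np

def rmse(ref, rec):
    """Root-mean-square error between reference and reconstruction."""
    ref = np.asarray(ref, dtype=float)
    rec = np.asarray(rec, dtype=float)
    return float(np.sqrt(np.mean((ref - rec) ** 2)))

def psnr(ref, rec, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image."""
    ref = np.asarray(ref, dtype=float)
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    return float(20.0 * np.log10(data_range / rmse(ref, rec)))

# Toy example: a uniform 0.1 offset between reference and reconstruction.
ref = np.zeros((4, 4))
rec = np.full((4, 4), 0.1)
print(round(rmse(ref, rec), 6))           # 0.1
print(round(psnr(ref, rec, data_range=1.0), 6))  # 20.0
```

For a multi-channel spectral volume, these metrics would be evaluated per channel (e.g., the 1st, 4th, and 8th channels of the table) by slicing the corresponding 2-D image out of the tensor.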
Table 4. RMSE values for the decomposed three basis materials.

Algorithm | Bone | Soft Tissue | Iodine Contrast Agent
SART | 0.0894 | 0.1690 | 0.1029
TVM | 0.0426 | 0.1131 | 0.0510
L0TDL | 0.0141 | 0.0429 | 0.0289
ESC-TDL | 0.0182 | 0.0420 | 0.0258
TDTGV | 0.0113 | 0.0408 | 0.0233
Table 5. Parameter settings of the real projection.

Number | Parameter | Value
1 | Distance from X-ray source to PCD | 255 mm
2 | Distance from X-ray source to rotation center | 158 mm
3 | Number of detector elements | 512
4 | Detector element size | 55 µm
5 | Reconstructed image size | 256 × 256 × 13
Table 6. Parameter settings of the real experiments.

Experiment | Photon Numbers | Projection Views | λ | α1 | α2 | μ1 | μ2 | Q
Real | — | 60 | 5.5 × 10³ | 1.4 | 3.6 | 3.8 × 10⁴ | 30 | 256
Real | — | 120 | 3 × 10³ | 1 | 3.1 | 2.9 × 10⁴ | 30 | 256
Table 7. Computation time of all methods (unit: s).

Method | TVM | L0TDL | ESC-TDL | TDTGV
Computation time | 6.1 ± 0.3 | 29.5 ± 0.9 | 35.1 ± 1.3 | 58.7 ± 1.6
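The mean ± standard-deviation times in Table 7 can be collected by repeated wall-clock timing of each reconstruction routine. A sketch follows; dummy_reconstruction is a hypothetical stand-in for an actual method, not code from the paper:

```python
import time
import statistics

def time_method(fn, repeats=5, *args, **kwargs):
    """Run fn several times; return (mean, stdev) of wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args, **kwargs)
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# Placeholder standing in for one reconstruction run.
def dummy_reconstruction():
    sum(i * i for i in range(10_000))

mean_s, std_s = time_method(dummy_reconstruction)
print(f"{mean_s:.4f} ± {std_s:.4f} s")
```

Using time.perf_counter rather than time.time avoids clock-adjustment artifacts, and reporting the spread over repeats (as the table does) distinguishes genuine runtime differences from measurement noise.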
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, X.; Wang, K.; Xue, X.; Li, F. Sparse-View Spectral CT Reconstruction Based on Tensor Decomposition and Total Generalized Variation. Electronics 2024, 13, 1868. https://doi.org/10.3390/electronics13101868
