Article

Derivative-Free Iterative One-Step Reconstruction for Multispectral CT

1
Institute of Basic Sciences in Engineering Science, University of Innsbruck, Technikerstrasse 13, 6020 Innsbruck, Austria
2
Department of Mathematics, University of Innsbruck, Technikerstrasse 13, 6020 Innsbruck, Austria
*
Authors to whom correspondence should be addressed.
J. Imaging 2024, 10(5), 98; https://doi.org/10.3390/jimaging10050098
Submission received: 16 February 2024 / Revised: 19 April 2024 / Accepted: 19 April 2024 / Published: 24 April 2024
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)

Abstract

Image reconstruction in multispectral computed tomography (MSCT) requires solving a challenging nonlinear inverse problem, commonly tackled via iterative optimization algorithms. Existing methods necessitate computing the derivative of the forward map and potentially its regularized inverse. In this work, we present a simple yet highly effective algorithm for MSCT image reconstruction, utilizing iterative update mechanisms that leverage the full forward model in the forward step and a derivative-free adjoint problem. Our approach demonstrates both fast convergence and superior performance compared to existing algorithms, making it an interesting candidate for future work. We also discuss further generalizations of our method and its combination with additional regularization and other data discrepancy terms.

1. Introduction

Classical computed tomography (CT) is based on the inversion of the linear Radon transform, where a scalar-valued attenuation map $\mu \colon X \to \mathbb{R}$ of the patient is recovered from its Radon transform $\mathcal{R}\mu \colon \mathbb{L} \to \mathbb{R}$, obtained from projection data. Here and below, $X \subseteq \mathbb{R}^d$ is the image domain in $d = 2, 3$ dimensions and $\mathbb{L}$ is the set of integration lines. While sufficient in many applications, the linear problem ignores the polychromatic nature of the X-rays and the energy-dependent absorption characteristics of real-world objects. The sample is more accurately represented by a family of attenuation maps $\mu(e) \colon X \to \mathbb{R}$ depending on the photon energy $e \in (0, \infty)$. Recovering a single $\mu$ from projection data using a single energy bin yields a mixture of density maps from different energies, resulting in severe nonuniqueness. Additionally, the nonlinearity causes beam-hardening artifacts that may be partially accounted for by iterative algorithms or analytic modeling [1,2,3,4,5,6]. To overcome these weaknesses, the idea of multispectral CT (MSCT) is to measure projection data for several energy bands, which are then used to reconstruct multiple attenuation maps. The reconstruction problem, however, becomes nonlinear and much more challenging than pure Radon inversion [7,8,9,10,11,12,13]. In this work, we develop a simple and efficient strategy for tackling the nonlinearity.

1.1. Multispectral CT

Specifically, in this work, we use the material decomposition paradigm in MSCT. In the material decomposition approach, it is assumed that the energy-dependent attenuation maps $\mu(e)$ can be written as $\mu(e, x) = \sum_{m=1}^{M} \mu_m(e)\, f_m(x)$, where $f_1, f_2, \ldots, f_M \colon X \to \mathbb{R}$ are the densities of $M$ separate materials (with $f_m(x) \in [0, 1]$) to be recovered, and $\mu_m(e)$ are the known, tabulated absorption characteristics of the $m$-th material. The aim is to recover the material densities from data collected for several energy bands. This not only improves image quality but also offers a broad range of applications, as it reconstructs multiple images encoding different characteristics of specific regions, enabling a deeper understanding of the object under examination. Recent significant advancements in the manufacturing of energy-sensitive sensors [14,15] have considerably increased the interest in and applicability of MSCT.
Assuming $B$ spectral measurements $Y_1, \ldots, Y_B$, the material decomposition problem in MSCT can be written as the problem of recovering $f_1, \ldots, f_M$ from the modeling equation $(Y_b)_{b=1}^{B} = \Phi\bigl((\mathcal{R} f_m)_{m=1}^{M}\bigr)$. Here, $\mathcal{R} f_m$ is the Radon transform applied to the $m$-th material density map $f_m$, and $\Phi$ is a nonlinear map. Classical CT would correspond to the unrealistic case where applying the pointwise logarithm to the modeling equation results in a linear inverse problem. In MSCT, one accounts for the full nonlinear problem for the unknown density maps $f_m$.

1.2. Two-Step and One-Step Algorithms

Various algorithms have been developed for solving the reconstruction problem in MSCT. They can be broadly classified into two categories: two-step methods and one-step algorithms. The idea of earlier two-step methods is to perform Radon inversion and material decomposition in two separate steps. Material decomposition can be performed either in the projection domain L (before Radon inversion) or in the image domain X (after Radon inversion). Both methods have their specific advantages and disadvantages. The image-domain decomposition approach allows incorporating prior information about the objects that is naturally contained in the image domain X . However, the nonlinear nature of the problem leads to approximate linear models that introduce severe reconstruction artifacts. The projection-domain decomposition approach, on the other hand, allows working with the correct nonlinear model. However, the prior structure in the Radon domain is not directly available. See the works [12,13,16,17,18] and references therein for proposed two-step approaches.
One-step methods reconstruct the material densities $f_1, \ldots, f_M$ through iterative minimization techniques and thus overcome the drawbacks of both two-step approaches. For some one-step algorithms, we refer to [7,8,11,19,20,21,22,23,24]. Despite their superior performance, such one-step iterative algorithms are computationally expensive: due to the poor conditioning of the problem, existing methods require many iterations, and each individual iteration is itself computationally costly. The algorithms proposed in this paper are specific one-step algorithms that address these two drawbacks of existing one-step methods. The structure of our proposed algorithm is shown in Figure 1.

1.3. Our Contributions

The following list summarizes the main contributions of this paper:
  • We present a novel derivative-free algorithm designed to combine the advantages of one-step and two-step approaches. To achieve this, we introduce a simple and computationally efficient iterative update that incorporates appropriate preconditioning.
  • Image reconstruction is performed in the image domain, which naturally allows for the inclusion of an image smoothness prior. Our method can be combined with additional regularization. However, in order to show the method in its pure form, we will not include such a modification.
  • Our method integrates benefits of two-step approaches by separating the iterative update into two parts. Moreover, the main ingredient that makes the algorithm efficient is the use of the full nonlinear forward model combined with a linearization around zero for the adjoint problem. In addition to avoiding the computation and evaluation of the derivative of the forward map, this also allows for simple but efficient channel preconditioning.

2. Mathematical Modelling of MSCT

We assume that the object to be imaged lies in some domain $X \subseteq \mathbb{R}^2$ and consists of a combination of $M$ different materials with densities $f_m \colon X \to \mathbb{R}$ for $m = 1, \ldots, M$. Each material has a separate mass attenuation coefficient $\mu_m \colon [0, \infty) \to [0, \infty)$, which is a known function of the X-ray energy $e$. The total energy-dependent (linear) X-ray attenuation coefficient is then given by:
$$\mu(x, e) = \sum_{m=1}^{M} \mu_m(e)\, f_m(x) \quad \text{for } (x, e) \in X \times \mathbb{R}. \tag{1}$$
Assuming that the material-specific attenuation functions $\mu_m(e)$ are known, the goal is to recover the densities $f_m$ from indirect X-ray measurements using different energy bins.

2.1. Continuous Model

We start with continuous modeling, where the quantities involved are functions on continuous domains; they will be discretized later. Suppose that X-ray energy with a known incident spectral density $I_0(e)$ is sent along a line $\ell \in \mathbb{L}$ from the source position to the detector position. While propagating along $\ell$, the X-rays are attenuated according to $\mu(x, e)$ defined in (1). This results in an outgoing spectral density $I_1(e) = I_0(e) \exp\bigl(-\int_\ell \mu(x, e)\, d\ell(x)\bigr)$ at the detector. The energy-sensitive detector with spectral profile $\hat{s}(e)$ records the integral $\int_0^\infty \hat{s}(e)\, I_1(e)\, de$. Denoting the product of the incident spectral density of the source and the spectral sensitivity of the detector by $s(e) := I_0(e)\, \hat{s}(e)$, referred to as the effective spectrum, the recorded data are given by:
$$Y(s, \ell) = \int_0^\infty s(e) \exp\Bigl(-\int_\ell \mu(x, e)\, d\ell(x)\Bigr) de = \int_0^\infty s(e) \exp\Bigl(-\sum_{m=1}^{M} \mu_m(e) \int_\ell f_m\, d\ell(x)\Bigr) de. \tag{2}$$
The data in Equation (2) represent a single measurement in MSCT. The goal of material decomposition in MSCT is to determine the density distributions $f_m$ from multispectral X-ray measurements obtained by varying the line $\ell$ and the effective spectrum $s$.
For simplicity of presentation, we consider the parallel beam geometry, where any line $\ell = \ell(\theta, r)$ is parametrized by its normal vector $\theta$ and its distance $r$ from the origin. In this case, $\int_\ell f_m\, d\ell(x) = \mathcal{R} f_m(\theta, r)$ is given by the classical Radon transform of $f_m$. Assuming further a total number of $B$ different effective spectra and writing $f = (f_1, \ldots, f_M)$, we obtain the continuous MSCT forward model:
$$Y = \Bigl( \int_0^\infty s_b(e) \exp\Bigl(-\sum_{m=1}^{M} \mu_m(e)\, \mathcal{R}(f_m)\Bigr) de \Bigr)_{b = 1, \ldots, B}. \tag{3}$$
Equation (3) gives the complete continuous forward model for material decomposition in multispectral CT. The unknown $f$ consists of $M$ functions $f_1, \ldots, f_M$ defined on the image domain $X$, and the data consist of $B$ functions $Y_1, \ldots, Y_B$ defined on the projection domain $\mathbb{L} = S^1 \times \mathbb{R}$. The methods that we describe, however, would also work with a three-dimensional image domain and a general projection domain $\mathbb{L}$ of lines in $\mathbb{R}^3$.

2.2. Discretization

In order to avoid technical details and to concentrate on the main ideas, we derive the algorithm for the discrete forward model throughout this paper. For that purpose, we represent the material densities by discrete column vectors $X_1, \ldots, X_M \in \mathbb{R}^{N_x}$ and the Radon transform by a matrix $\mathbf{A} \in \mathbb{R}^{N_y \times N_x}$, where $N_x$ is the number of discretization points in the image domain and $N_y$ is the number of lines used in the projection domain. Furthermore, we discretize the effective energy spectra by vectors $S_1, \ldots, S_B \in \mathbb{R}^{E}$ and the known material attenuations by vectors $M_1, \ldots, M_M \in \mathbb{R}^{E}$ (the discretizations of $\mu_1, \ldots, \mu_M$), where $E$ is the number of discretization points of the energy variable. The discretization of (3) yields the following discrete image reconstruction problem.
Problem 1 (Discrete MSCT image reconstruction problem).
Recover the unknown $X \in \mathbb{R}^{N_x \times M}$ from data $Y = F(X) + \delta \in \mathbb{R}^{N_y \times B}$, where:
$$F(X) := \bigl(\mathbf{S}^\top \cdot \exp_{E \times N_y}\bigl(-\mathbf{M} \cdot (\mathbf{A} X)^\top\bigr)\bigr)^\top. \tag{4}$$
Here and below, we use the convention that the boldface notation $\exp_{E \times N_y}$ indicates that the scalar function $\exp$ is applied pointwise to a matrix in $\mathbb{R}^{E \times N_y}$. Furthermore, in (4):
  • The columns of $X = [X_1, \ldots, X_M]$ are the discrete material images;
  • $\mathbf{A} \in \mathbb{R}^{N_y \times N_x}$ is the discretized Radon transform;
  • The columns of $\mathbf{M} = [M_1, \ldots, M_M] \in \mathbb{R}^{E \times M}$ are the discretized material attenuations;
  • The columns of $\mathbf{S} = [S_1, \ldots, S_B] \in \mathbb{R}^{E \times B}$ are the discretized effective spectra;
  • The columns of $Y = [Y_1, \ldots, Y_B]$ are the observed spectral data.
Note that we included the transpose operations in (4) so that all involved linear operations can be written as matrix multiplications from the left. Alternatively, we can also write $F(X) = \exp_{N_y \times E}(-\mathbf{A} X \mathbf{M}^\top)\, \mathbf{S}$, reflecting the fact that the discrete Radon transform $\mathbf{A}$ operates on the columns of $X$, while $\mathbf{M}$ and $\mathbf{S}$ operate from the right on the rows of the multichannel sinogram $\mathbf{A} X \in \mathbb{R}^{N_y \times M}$ (via multiplication by $\mathbf{M}^\top$ and $\mathbf{S}$, respectively). The structure of the discrete forward model is illustrated in Figure 2.
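To make the structure of (4) concrete, the following minimal NumPy sketch implements the discrete forward model in the row-operating form $F(X) = \exp_{N_y \times E}(-\mathbf{A} X \mathbf{M}^\top)\, \mathbf{S}$. All sizes and the random stand-in matrices are illustrative placeholders only (in particular, a dense random matrix replaces the actual discretized Radon transform), not the setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Ny, M, E, B = 64, 90, 3, 150, 5      # toy sizes: pixels, lines, materials, energies, bins
A = 0.01 * rng.random((Ny, Nx))          # dense stand-in for the discretized Radon transform
M_mat = rng.random((E, M))               # columns: discretized material attenuations
S = rng.random((E, B))                   # columns: discretized effective spectra

def forward(X, A, M_mat, S):
    """Discrete MSCT forward model F(X) = exp(-A X M^T) S, mapping (Nx, M) -> (Ny, B)."""
    energy_sinogram = (A @ X) @ M_mat.T  # line integrals for each energy, shape (Ny, E)
    return np.exp(-energy_sinogram) @ S  # detected data for each energy bin, shape (Ny, B)

X_true = rng.random((Nx, M))             # toy material images
Y = forward(X_true, A, M_mat, S)         # noise-free spectral data, shape (Ny, B)
```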
Remark 1 (Recalibration).
With $\langle \mathbb{1}, S_b \rangle = \sum_{e=1}^{E} S_b(e)$ we obtain:
$$F(0) = \bigl(\mathbf{S}^\top \cdot \exp_{E \times N_y}(0)\bigr)^\top = \bigl(\mathbf{S}^\top \cdot \mathbb{1}_{E \times N_y}\bigr)^\top = \begin{bmatrix} \langle \mathbb{1}, S_1 \rangle & \cdots & \langle \mathbb{1}, S_B \rangle \\ \vdots & & \vdots \\ \langle \mathbb{1}, S_1 \rangle & \cdots & \langle \mathbb{1}, S_B \rangle \end{bmatrix}. \tag{5}$$
Thus, with $\odot$ denoting pointwise multiplication (also known as the Hadamard product) and $\oslash$ pointwise division, we get:
$$F(X) \oslash F(0) = \begin{bmatrix} 1/\langle \mathbb{1}, S_1 \rangle & \cdots & 1/\langle \mathbb{1}, S_B \rangle \\ \vdots & & \vdots \\ 1/\langle \mathbb{1}, S_1 \rangle & \cdots & 1/\langle \mathbb{1}, S_B \rangle \end{bmatrix} \odot \bigl(\mathbf{S}^\top \cdot \exp_{E \times N_y}(-\mathbf{M} \cdot (\mathbf{A} X)^\top)\bigr)^\top = \Bigl(\Bigl[\tfrac{S_1}{\langle \mathbb{1}, S_1 \rangle}, \ldots, \tfrac{S_B}{\langle \mathbb{1}, S_B \rangle}\Bigr]^\top \cdot \exp_{E \times N_y}\bigl(-\mathbf{M} \cdot (\mathbf{A} X)^\top\bigr)\Bigr)^\top. \tag{6}$$
This means that the recalibrated forward model $F(X) \oslash F(0)$ is the same as the original forward model with normalized effective spectra $S_b / \langle \mathbb{1}, S_b \rangle$. The matrix of normalized spectra can be written as $\underline{\mathbf{S}} := \mathbf{S} \oslash (\mathbb{1}_{E \times E} \cdot \mathbf{S})$.
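As a small illustration of the recalibration in Remark 1, the normalized spectra can be obtained by dividing each column of $\mathbf{S}$ by its sum; the snippet below reuses the toy matrix S from the sketch above and is only a sketch of this normalization.

```python
# Normalized effective spectra: each column S_b divided by its total weight <1, S_b>.
S_norm = S / S.sum(axis=0, keepdims=True)
assert np.allclose(S_norm.sum(axis=0), 1.0)   # every normalized spectrum sums to one
```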
In this work, we use rescaled data, to which we apply the pointwise logarithm $\log_{N_y \times B}$, and the corresponding least squares (LSQ) functional.
Definition 1 (Forward model and LSQ functional).
We define the logarithmic forward model and the LSQ functional by:
$$H \colon \mathbb{R}^{N_x \times M} \to \mathbb{R}^{N_y \times B} \colon X \mapsto -\log_{N_y \times B}\bigl(F(X) \oslash F(0)\bigr), \tag{7}$$
$$D \colon \mathbb{R}^{N_x \times M} \to \mathbb{R} \colon X \mapsto \tfrac{1}{2}\, \lVert H(X) - Y_H \rVert_2^2, \tag{8}$$
where $F(X) = \exp_{N_y \times E}(-\mathbf{A} X \mathbf{M}^\top)\, \mathbf{S}$ is defined in (4), $Y \in \mathbb{R}^{N_y \times B}$ are the given data, $Y_H := -\log_{N_y \times B}(Y \oslash F(0))$ are the modified data, and $\log_{N_y \times B}(\cdot)$ denotes the pointwise logarithm.
Using the notation of Definition 1, material decomposition in MSCT amounts to the approximate solution of $H(X) = Y_H$ or the approximate minimization of $D$.
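The following sketch spells out the recalibrated logarithmic model and LSQ functional of Definition 1, reusing forward() and the toy matrices from the snippets above; the function names are ours, not taken from the authors' code.

```python
def log_forward(X, A, M_mat, S):
    """H(X) = -log( F(X) / F(0) ), with the logarithm and division applied entrywise."""
    F0 = forward(np.zeros_like(X), A, M_mat, S)   # column b is constant, equal to <1, S_b>
    return -np.log(forward(X, A, M_mat, S) / F0)

def lsq_functional(X, Y, A, M_mat, S):
    """D(X) = ||H(X) - Y_H||^2 / 2 with the modified data Y_H = -log(Y / F(0))."""
    F0 = forward(np.zeros_like(X), A, M_mat, S)
    Y_H = -np.log(Y / F0)
    return 0.5 * np.sum((log_forward(X, A, M_mat, S) - Y_H) ** 2)
```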
Remark 2 (Noise modelling).
In the statistical context, LSQ minimization derives from maximum likelihood estimation under a Gaussian noise model on $Y_H$. From a statistical perspective, maximum likelihood estimation for Poisson noise on $Y$ might be more reasonable, resulting in $L(X) = \langle F(X), \mathbb{1} \rangle - \langle \log(F(X)), Y \rangle \to \min_X$. As the focus of this paper is the derivation of an efficient reconstruction algorithm rather than statistical optimality, we work with $D$. However, we expect that our strategy can also be applied to $L$ instead of $D$.
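For completeness, the Poisson negative log-likelihood $L$ mentioned in Remark 2 can be sketched in the same toy setting, again reusing forward() from above; it is not used by the proposed algorithms.

```python
def poisson_nll(X, Y, A, M_mat, S):
    """L(X) = <F(X), 1> - <log F(X), Y>, the Poisson negative log-likelihood of Remark 2."""
    F_X = forward(X, A, M_mat, S)
    return np.sum(F_X) - np.sum(Y * np.log(F_X))
```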
Remark 3 (Regularization).
Due to the ill-conditioning of $H(X) = Y_H$, the reconstruction problem has to be regularized [25,26]. In the context of LSQ minimization, a natural approach is variational regularization, where one considers $\lVert H(X) - Y_H \rVert_2^2 / 2 + \alpha R(X)$ instead of $D$, with a regularization functional $R(X)$. Recently, in [27], the plug-and-play method has been identified as a regularization technique in which the regularization is incorporated by a denoiser. Another class is given by iterative regularization [28], where regularization comes from early stopping. All these regularization methods require the gradient of $D$, which we compute below.

3. Algorithm Development

In this section, we derive the proposed algorithms for MSCT based on channel preconditioning (CP). The first one (CP-full) integrates channel preconditioning into a gradient scheme. In the second algorithm (CP-fast), the derivative of the forward map is additionally replaced by its derivative at zero. Both methods greatly reduce the number of iterations compared to standard gradient methods and the numerical cost per iteration compared to Newton-type methods. Before presenting our algorithms, we start by computing derivatives and gradients and recall existing gradient and Newton-type methods.

3.1. Derivatives Computation

Standard algorithms for minimizing (8) require the derivative of the forward map and the gradient of the LSQ functional, which we compute next. Recall the original and logarithmic MSCT forward operators $F, H \colon \mathbb{R}^{N_x \times M} \to \mathbb{R}^{N_y \times B}$ and the LSQ functional $D$ defined by (4), (7), and (8). We equip $\mathbb{R}^{N_x \times M}$ and $\mathbb{R}^{N_y \times B}$ with the Hilbert space structure induced by the standard inner product $\langle \xi_1, \xi_2 \rangle = \sum_{i,j} \xi_1[i,j]\, \xi_2[i,j]$. Furthermore, we use $H'[X] \colon \mathbb{R}^{N_x \times M} \to \mathbb{R}^{N_y \times B}$ to denote the derivative of $H$ at location $X \in \mathbb{R}^{N_x \times M}$, and $D'[X] \colon \mathbb{R}^{N_x \times M} \to \mathbb{R}$ and $\nabla D[X] \in \mathbb{R}^{N_x \times M}$ to denote the derivative and the gradient of $D$ at $X$, respectively.
Remark 4 (Gradients, inner products, and preconditioning).
By the definition of the gradient, we have $\langle \nabla D[X], \xi \rangle = D'[X](\xi)$, and therefore $\nabla D[X] = (D'[X])^*(1)$, where $(\cdot)^*$ denotes the adjoint of a linear operator. Furthermore, by the chain rule, $\nabla D[X] = (H'[X])^*\bigl(H(X) - Y_H\bigr)$. Gradients and adjoints depend on the chosen inner product. For example, the inner product $\langle \xi_1, \mathbf{U}^{-1} \xi_2 \rangle$ on the image space, with a positive definite matrix $\mathbf{U}$, yields the modified gradient $\mathbf{U} \cdot \nabla D[X]$. Choosing $\mathbf{U}$ such that $\mathbf{U} \cdot \nabla D[X]$ is better conditioned significantly improves gradient-based methods for minimizing $D$.
Remark 5 (Some calculus rules).
For the following computations we use some elementary calculus rules, listed next. Let $G_1, G_2 \colon \mathbb{R}^{N_x} \to \mathbb{R}^{N_y}$ be vector-valued functions and $\phi \colon \mathbb{R} \to \mathbb{R}$ a scalar function with derivative $\psi \colon \mathbb{R} \to \mathbb{R}$. Then, for $X, \xi \in \mathbb{R}^{N_x}$, we have:
$$(G_1 \odot G_2)'[X](\xi) = \bigl(G_1'[X](\xi)\bigr) \odot G_2(X) + G_1(X) \odot \bigl(G_2'[X](\xi)\bigr), \tag{9}$$
$$\boldsymbol{\phi}_N'[X](\xi) = \boldsymbol{\psi}_N(X) \odot \xi. \tag{10}$$
As usual, we define the vector-valued functions $\boldsymbol{\phi}_N, \boldsymbol{\psi}_N \colon \mathbb{R}^{N_x} \to \mathbb{R}^{N_x}$ by pointwise application, $\boldsymbol{\phi}_N(X) = (\phi(X_i))_i$ and $\boldsymbol{\psi}_N(X) = (\psi(X_i))_i$.
We have the following explicit expressions for derivatives, adjoints, and gradients.
Theorem 1 (Derivatives computation).
Let $F, H \colon \mathbb{R}^{N_x \times M} \to \mathbb{R}^{N_y \times B}$ and $D$ be defined by (4), (7) and (8). The derivatives of $F$ and $H$, their adjoints, and the gradient of $D$ are given by:
$$F'[X](\xi) = -\bigl(\mathbf{S}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{M} \cdot (\mathbf{A}\xi)^\top)\bigr)\bigr)^\top, \tag{11}$$
$$H'[X](\xi) = \bigl(\mathbf{S}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{M} \cdot (\mathbf{A}\xi)^\top)\bigr)\bigr)^\top \oslash F(X), \tag{12}$$
$$F'[X]^*(\eta) = -\mathbf{A}^\top \cdot \bigl(\mathbf{M}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{S} \cdot \eta^\top)\bigr)\bigr)^\top, \tag{13}$$
$$H'[X]^*(\eta) = \mathbf{A}^\top \cdot \bigl(\mathbf{M}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{S} \cdot (\eta \oslash F(X))^\top)\bigr)\bigr)^\top, \tag{14}$$
$$\nabla D[X] = \mathbf{A}^\top \cdot \bigl(\mathbf{M}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{S} \cdot ((H(X) - Y_H) \oslash F(X))^\top)\bigr)\bigr)^\top, \tag{15}$$
where $\mathbf{Q}_X := \exp_{E \times N_y}\bigl(-\mathbf{M} \cdot (\mathbf{A} X)^\top\bigr)$ denotes the virtual spectrally resolved data.
Proof. 
The proof is given in Appendix A. □
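The gradient (15) can be evaluated without ever forming the derivative as a matrix. The sketch below writes it in the equivalent row-operating form $\nabla D[X] = \mathbf{A}^\top\bigl(\mathbf{Q}_X \odot (((H(X)-Y_H) \oslash F(X))\,\mathbf{S}^\top)\bigr)\mathbf{M}$, with the virtual spectral data stored in the transposed $N_y \times E$ layout; it reuses log_forward() and the toy matrices introduced above and is only an illustrative sketch.

```python
def gradient(X, Y_H, A, M_mat, S):
    """Gradient of the LSQ functional D at X, cf. (15), returned as an (Nx, M) array."""
    Q_X = np.exp(-(A @ X) @ M_mat.T)              # virtual spectral data, shape (Ny, E)
    F_X = Q_X @ S                                 # forward model F(X), shape (Ny, B)
    residual = log_forward(X, A, M_mat, S) - Y_H  # data misfit H(X) - Y_H, shape (Ny, B)
    back = (residual / F_X) @ S.T                 # residual mapped back to energies, (Ny, E)
    return A.T @ ((Q_X * back) @ M_mat)           # backprojection to material images, (Nx, M)
```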
For CP-fast, the derivative of H at zero plays a central role.
Remark 6 (Derivative at zero).
Let us consider the derivative at the zero image $X = 0$. In this case, we have $\mathbf{Q}_0 := \exp_{E \times N_y}\bigl(-\mathbf{M} \cdot (\mathbf{A} 0)^\top\bigr) = \mathbb{1}_{E \times N_y}$, and therefore $H'[0](\xi) = \bigl((\mathbf{A}\xi\, \mathbf{M}^\top)\, \mathbf{S}\bigr) \oslash F(0)$ and $H'[0]^*(\eta) = \mathbf{A}^\top \bigl((\eta \oslash F(0))\, \mathbf{S}^\top\bigr) \mathbf{M}$. Using that $F(0) = (\mathbf{S}^\top \cdot \mathbb{1}_{E \times N_y})^\top$, we get:
$$H'[0](\xi) = (\mathbf{A}\xi)\, \mathbf{U}^\top, \tag{16}$$
$$H'[0]^*(\eta) = \mathbf{A}^\top \eta\, \mathbf{U}, \tag{17}$$
$$\mathbf{U} := (\mathbf{S}^\top \mathbf{M}) \oslash (\mathbf{S}^\top \mathbb{1}_{E \times M}). \tag{18}$$
The derivative $H'[0](\xi) = (\mathbf{A}\xi)\, \mathbf{U}^\top$ may also be seen as the linearization of $H$ around zero. It has been used previously in MSCT and can be derived by a first-order Taylor series approximation [12]. In fact, with $F(0)_{\cdot, b} = \langle S_b, \mathbb{1}_{E \times 1} \rangle$, we find:
$$-\log \frac{\bigl\langle S_b, \exp(-\mathbf{M} (\mathbf{A}\xi)) \bigr\rangle}{\langle S_b, \mathbb{1}_{E \times 1} \rangle} \approx -\log \frac{\bigl\langle S_b, \mathbb{1} - \mathbf{M} (\mathbf{A}\xi) \bigr\rangle}{\langle S_b, \mathbb{1}_{E \times 1} \rangle} = -\log \Bigl( 1 - \frac{\bigl\langle S_b, \mathbf{M} (\mathbf{A}\xi) \bigr\rangle}{\langle S_b, \mathbb{1}_{E \times 1} \rangle} \Bigr) \approx \Bigl\langle \frac{S_b}{\langle S_b, \mathbb{1}_{E \times 1} \rangle}, \mathbf{M} (\mathbf{A}\xi) \Bigr\rangle. \tag{19}$$
The final expression in (19), written in matrix notation, is (16) with (18). For dual-energy CT (the case $M = B = 2$), the use of the inverse of $\mathbf{U}$ has been proposed in [29]. We emphasize that, while we utilize the linearization $H'[0](\xi) = (\mathbf{A}\xi)\, \mathbf{U}^\top$ as an auxiliary tool, we actually solve the full nonlinear problem. However, the linearized LSQ problem $\lVert \mathbf{A}\xi\, \mathbf{U}^\top - Y_H \rVert_2^2 / 2 \to \min_\xi$ is also of interest in its own right. Theoretically proven convergent algorithms for such problems can be found in [30,31].
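The channel matrix $\mathbf{U}$ of Remark 6 and its pseudoinverse are cheap to precompute. The following sketch uses the toy matrices from above and assumes $\mathbf{U}$ has full column rank, so that the Moore–Penrose pseudoinverse returned by NumPy equals $(\mathbf{U}^\top\mathbf{U})^{-1}\mathbf{U}^\top$.

```python
# U[b, m] is the S_b-weighted average of the m-th material attenuation, shape (B, M).
U = (S.T @ M_mat) / (S.T @ np.ones((E, M)))
U_pinv = np.linalg.pinv(U)                    # equals (U^T U)^{-1} U^T for full column rank

def linearized_forward(xi, A, U):
    """Linearization of H around zero: H'[0](xi) = (A xi) U^T, shape (Ny, B)."""
    return (A @ xi) @ U.T
```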
The derivatives in Theorem 1 have a clear composite structure that we exploit in our algorithms. This is discussed next.
Remark 7 (Composite structure of derivatives).
Consider $X \in (\mathbb{R}^M)^{N_x}$ as a signal of size $N_x$ with $M$ channels (each channel corresponding to one material) and the data $Y_H \in (\mathbb{R}^B)^{N_y}$ as a signal of size $N_y$ with $B$ channels (each channel representing an energy bin). Then, we can write $H(X) = \Phi(\mathbf{A} X)$, where the nonlinear function:
$$\Phi(Z) = -\log_{N_y \times B}\bigl(\exp_{N_y \times E}(-Z \mathbf{M}^\top) \cdot \underline{\mathbf{S}}\bigr) \tag{20}$$
operates on the multichannel sinogram $Z = \mathbf{A} X$ along the (horizontal) channel dimension, and $\underline{\mathbf{S}} = \mathbf{S} \oslash (\mathbb{1}_{E \times E} \cdot \mathbf{S})$ are the normalized effective spectra. Application of the chain rule and some computations results in:
$$H'[X](\xi) = \Phi'[\mathbf{A} X](\mathbf{A}\xi), \tag{21}$$
$$\Phi'[Z](\zeta) = \bigl(\bigl(\exp_{N_y \times E}(-Z \mathbf{M}^\top) \odot (\zeta \cdot \mathbf{M}^\top)\bigr) \cdot \mathbf{S}\bigr) \oslash \bigl(\exp_{N_y \times E}(-Z \mathbf{M}^\top) \cdot \mathbf{S}\bigr), \tag{22}$$
$$\Phi'[Z]^*(\eta) = \Bigl(\exp_{N_y \times E}(-Z \mathbf{M}^\top) \odot \bigl(\bigl(\eta \oslash (\exp_{N_y \times E}(-Z \mathbf{M}^\top) \cdot \mathbf{S})\bigr) \cdot \mathbf{S}^\top\bigr)\Bigr) \cdot \mathbf{M}, \tag{23}$$
from which we recover (12) and (14). Furthermore, for the zero material sinogram $Z = 0$, we get $\Phi'[0](\zeta) = \zeta\, \mathbf{U}^\top$ and $\Phi'[0]^*(\eta) = \eta\, \mathbf{U}$ with $\mathbf{U} = (\mathbf{S}^\top \mathbf{M}) \oslash (\mathbf{S}^\top \mathbb{1}_{E \times M})$ as in Remark 6. Equations (21)–(23) factorize the derivative $H'[X]$ and its adjoint into two separate parts: a high-dimensional, ill-posed but linear part $\mathbf{A}$ operating in the pixel dimension, and a small, well-posed but nonlinear part operating in the channel dimension. Our algorithms target this structure for fast and effective iterative updates.
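The channel operator $\Phi$ from (20) and its small $B \times M$ Jacobian at a single sinogram pixel can be sketched as follows, using the toy matrices from above. The per-pixel Jacobian formula is our explicit spelling-out of (22) restricted to one row of the sinogram, not code from the authors.

```python
def phi(Z, M_mat, S):
    """Channel operator Phi(Z) = -log( exp(-Z M^T) S_norm ), applied row-wise to Z."""
    S_norm = S / S.sum(axis=0, keepdims=True)
    return -np.log(np.exp(-Z @ M_mat.T) @ S_norm)

def phi_jacobian(z, M_mat, S):
    """Jacobian of Phi at one sinogram pixel z in R^M; returns a (B, M) matrix."""
    S_norm = S / S.sum(axis=0, keepdims=True)
    q = np.exp(-M_mat @ z)                        # spectral transmission for this pixel, (E,)
    # J[b, m] = sum_e q_e M[e, m] S_norm[e, b] / sum_e q_e S_norm[e, b]
    return ((M_mat * q[:, None]).T @ S_norm / (q @ S_norm)).T
```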

3.2. Gradient and Newton—One-Step Algorithms

It is helpful to start with gradient-based methods for minimizing the LSQ functional (8). Our first method, CP-full, can be seen as a modified version of a hybrid between the standard gradient iteration (Landweber's method) and the Gauss–Newton method. Our second method, CP-fast, involves a further simplification based on linearization around zero.
Gradient-based one-step algorithms for solving the MSCT problem $H(X) = Y_H$ using the LSQ functional $D(X)$ start with the optimality condition $\nabla D[X] = H'[X]^*\bigl(H(X) - Y_H\bigr) = 0$ and fixed-point equations derived from it. Applying a nonstationary positive definite preconditioner $\mathbf{Q}_k$ and a step size $\omega_k > 0$ results in:
$$X_{k+1} = X_k - \omega_k \cdot \mathbf{Q}_k\, H'[X_k]^*\bigl(H(X_k) - Y_H\bigr). \tag{24}$$
Explicit expressions for $H$ and $H'[X_k]^*$ are given by (7) and (14). Particular choices of the preconditioner and the step size yield various iterative solution methods, including Landweber's iteration, the steepest descent method, the Gauss–Newton iteration, Newton-CG iterations, and quasi-Newton methods. To motivate our algorithm, it is most instructive to discuss the Landweber and Gauss–Newton iterations.
Landweber method:
In the context of inverse problems, the standard gradient method with a constant step size $\omega$ is known as the (nonlinear) Landweber iteration $X_{k+1} = X_k - \omega \cdot H'[X_k]^*\bigl(H(X_k) - Y_H\bigr)$, which is (24) for the case where $\mathbf{Q}_k$ is the identity and $\omega_k = \omega$. Landweber's iteration is stable, robust, and easy to implement. It is even applicable in ill-posed cases, where, with an appropriate stopping criterion, it serves as a regularization method [32]. On the other hand, it is also known to be slow in the sense that many iterative steps are required. In our case, this is due to the ill-conditioning of the forward operator.
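As a reference point, the following minimal sketch implements the nonlinear Landweber iteration, i.e., update (24) with $\mathbf{Q}_k$ the identity and a constant step size, reusing gradient() from above. The step size and iteration count are arbitrary illustration values, not tuned settings.

```python
def landweber(Y_H, A, M_mat, S, step=1e-3, num_iter=200):
    """Plain gradient iteration X_{k+1} = X_k - step * grad D(X_k), started at zero."""
    X = np.zeros((A.shape[1], M_mat.shape[1]))
    for _ in range(num_iter):
        X -= step * gradient(X, Y_H, A, M_mat, S)
    return X
```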
Gauss–Newton method:
Several potential accelerations of Landweber's method exist, and preconditioning is one of the most natural. In the context of nonlinear least squares, the Gauss–Newton method and its variants are well established and effective. In this case, one chooses the preconditioner $\mathbf{Q}_k = \bigl(H'[X_k]^* H'[X_k]\bigr)^{-1}$ in (24), which results in:
$$X_{k+1} = X_k - \omega_k \cdot \bigl(H'[X_k]^* H'[X_k]\bigr)^{-1} H'[X_k]^*\bigl(H(X_k) - Y_H\bigr). \tag{25}$$
The Gauss–Newton method (25) has the potential to significantly reduce the required number of iterations. On the other hand, each of these iterations is numerically costly, as it requires inversion of the nonstationary normal operator $H'[X_k]^* H'[X_k]$. Moreover, due to ill-conditioning, the inversion needs to be regularized [28,33,34]. The algorithms proposed in this paper use simplifications that do not need to be regularized and avoid the costly inversion of the normal operator.

3.3. Proposed Algorithms

Now we move on to the proposed iterative algorithms for MSCT. We start with CP-full, which is a gradient-based algorithm with channel preconditioning. We then derive CP-fast, which is a derivative-free iterative algorithm using a stationary adjoint problem.
CP-full: 
The first proposed algorithm is an instance of (24). Instead of no preconditioning, as in Landweber's method, or the costly preconditioning $\mathbf{Q}_k = \bigl(H'[X_k]^* H'[X_k]\bigr)^{-1}$, as in the Gauss–Newton method, we propose preconditioning with the channel-mixing term $\Phi$ only. That is, we exploit the factorization $H(X) = \Phi(\mathbf{A} X)$ and propose the choice $\mathbf{Q}_k = \bigl(\Phi'[\mathbf{A} X_k]^* \Phi'[\mathbf{A} X_k]\bigr)^{-1}$ for the preconditioner. This results in the following CP-full iteration:
$$X_{k+1} = X_k - \omega_k \cdot \mathbf{A}^\top \Bigl(\bigl(\Phi'[\mathbf{A} X_k]^* \Phi'[\mathbf{A} X_k]\bigr)^{-1} \Phi'[\mathbf{A} X_k]^*\bigl(H(X_k) - Y_H\bigr)\Bigr). \tag{26}$$
While efficiently addressing the nonlinearity via a Gauss–Newton-type preconditioner in the channel dimension, this update is computationally much less costly than the full Gauss–Newton update. Instead of inverting $H'[X_k]^* H'[X_k]$, which in matrix form has size $(N_x M) \times (N_x M)$ in the Gauss–Newton method, it only requires the inversion of the smaller $M \times M$ matrices $\Phi'[\mathbf{A} X_k]^* \Phi'[\mathbf{A} X_k]$, which can be done separately for each pixel in the projection domain. Assuming $M, B = \mathcal{O}(1)$ and $N_y = \mathcal{O}(N_x)$, this dramatically reduces the cost of the preconditioning from $\mathcal{O}(N_x^3)$ to $\mathcal{O}(N_x)$ per iterative update.
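One CP-full update (26) can be sketched as follows, reusing log_forward() and phi_jacobian() from the earlier snippets. The explicit loop over projection pixels is for clarity only, and the sketch assumes the per-pixel Jacobian has full column rank, so that np.linalg.pinv realizes $(\Phi'^*\Phi')^{-1}\Phi'^*$.

```python
def cp_full_step(X, Y_H, A, M_mat, S, step=1.0):
    """One iteration of CP-full: channel-preconditioned residual, then backprojection."""
    Z = A @ X                                         # multichannel sinogram, shape (Ny, M)
    residual = log_forward(X, A, M_mat, S) - Y_H      # data misfit, shape (Ny, B)
    update_sino = np.empty_like(Z)                    # preconditioned residual, shape (Ny, M)
    for y in range(Z.shape[0]):                       # loop over projection pixels
        J = phi_jacobian(Z[y], M_mat, S)              # channel Jacobian, shape (B, M)
        update_sino[y] = np.linalg.pinv(J) @ residual[y]
    return X - step * (A.T @ update_sino)
```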
CP-fast: 
In the derivative-free version, we go one step further and completely avoid the derivative $H'[X_k]$. For that purpose, we replace the derivative $\Phi'[\mathbf{A} X_k]$ in (26) by the derivative at zero. According to Remark 6, we have $\Phi'[0](\zeta) = \zeta\, \mathbf{U}^\top$ with $\mathbf{U} = (\mathbf{S}^\top \mathbf{M}) \oslash (\mathbf{S}^\top \mathbb{1}_{E \times M})$. Now, with $\mathbf{U}^\dagger = (\mathbf{U}^\top \mathbf{U})^{-1} \mathbf{U}^\top$ denoting the pseudoinverse of $\mathbf{U}$, we arrive at the iterative update:
$$X_{k+1} = X_k - \omega_k \cdot \mathbf{A}^\top \bigl(H(X_k) - Y_H\bigr) (\mathbf{U}^\dagger)^\top. \tag{27}$$
We refer to (27) as the derivative-free fast channel-preconditioned (CP-fast) iteration. It only involves the derivative at zero, which can be computed once before the actual iteration. In this sense, it is indeed derivative-free and fast. It can be interpreted as using the full nonlinear model for the forward problem, the linearization at zero for the adjoint problem, and, in addition, simple but efficient channel preconditioning.
Both iterations (26) and (27) are of fixed-point type, and we therefore expect convergence for sufficiently small step sizes. Theoretically proving convergence seems possible but is beyond the scope of this paper. As (26) is of gradient type, it appears easier to derive convergence for CP-full, while for the derivative-free version CP-fast such a proof seems more challenging. Note further that, for the results presented below, we integrated a positivity constraint by alternating the iterative updates with the orthogonal projection onto the cone of nonnegative images.
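The full CP-fast iteration (27), including the positivity projection mentioned above, can be sketched as follows. It reuses log_forward() and the precomputed U_pinv from the earlier snippets; step size and iteration count are again placeholders, not the settings used for the reported experiments.

```python
def cp_fast(Y_H, A, M_mat, S, U_pinv, step=1.0, num_iter=50):
    """CP-fast: full nonlinear forward model, fixed derivative-free adjoint via U_pinv."""
    X = np.zeros((A.shape[1], M_mat.shape[1]))
    for _ in range(num_iter):
        residual = log_forward(X, A, M_mat, S) - Y_H   # H(X_k) - Y_H, shape (Ny, B)
        X = X - step * (A.T @ (residual @ U_pinv.T))   # update (27)
        X = np.maximum(X, 0.0)                         # projection onto nonnegative images
    return X
```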

4. Numerical Simulations

We compared our algorithms CP-full and CP-fast to existing iterative one-step algorithms for MSCT. Our evaluation builds on [11], which compares five such algorithms and provides open-source code (https://github.com/SimonRit/OneStepSpectralCT, accessed on 13 July 2022) that we use for our results. We compare CP-full and CP-fast with the best-performing method of [11] and, in addition, with a two-step method.

4.1. Comparison Methods

The work [11] compares the following iterative one-step algorithms for MSCT in terms of memory usage and convergence speed to reach a fixed image quality threshold:
  • Ref. [19] derives a nonlinear CG method for a weighted LSQ term;
  • Refs. [20,22] derive surrogate approaches for Poisson maximum likelihood;
  • Ref. [21] extends [22] by including Nesterov’s momentum acceleration;
  • Ref. [23] generalizes the Chambolle–Pock algorithm [35] to nonconvex functionals.
Specifically, ref. [11] found the algorithm of [21] (referred to as Mechlem2018) to be significantly faster than the other four methods, and thus, we use it for comparison.
In addition, we compare with the algorithm of [17] (referred to as Niu2014) as a prime example of an image-domain two-step method. It uses a penalized weighted least squares estimation technique applied to an empirical linear model. Note that, more recently, data-driven methods based on neural networks and deep learning have also been proposed. Such methods are beyond the scope of this manuscript, and we refer the interested reader to the review articles [10,36].

4.2. Numerical Implementation

For the presented results, we build on the Matlab code of [11], which we extend with our algorithms. In particular, we work with $M = 3$ base materials (water, iodine, and gadolinium) and $B = 5$ energy bins. The energy variable is discretized using $E = 150$ uniform nodes between 0 and 150 keV. The attenuation functions and energy spectra used are shown in Figure 3. We use $N_x = 256 \times 256$ image pixels and $N_y = 262{,}450$ line integrals for the Radon transform. In particular, the code at https://github.com/SimonRit/OneStepSpectralCT (accessed on 13 July 2022) creates the matrices:
  • $\mathbf{M} \in \mathbb{R}^{150 \times 3}$ for the base materials;
  • $\mathbf{S} \in \mathbb{R}^{5 \times 150}$ for the effective energy spectra;
  • $\mathbf{A} \in \mathbb{R}^{N_y \times N_x}$ for the Radon transform.
After row-normalizing the spectra, $\underline{\mathbf{S}} = \mathbf{S} \oslash (\mathbf{S} \cdot \mathbb{1}_{150 \times 150})$, we have $H(X) = -\log\bigl(\exp(-\mathbf{A} \cdot X \cdot \mathbf{M}^\top) \cdot \underline{\mathbf{S}}^\top\bigr)$ for the MSCT forward model (in this implementation the spectra are stored as rows of $\mathbf{S}$, hence the transposes differ from those in (4)). Furthermore, the noisy data $Y$ are created with a different, more realistic forward model, with Poisson noise added.
Besides $\mathbf{M}$, $\mathbf{S}$, $\mathbf{A}$, $H$, and $Y$, we require implementations of $\mathbf{U}^\dagger = (\mathbf{U}^\top \mathbf{U})^{-1} \mathbf{U}^\top$ for CP-fast and of $\mathbf{U}_k = \bigl(\Phi'[\mathbf{A} X_k]^* \Phi'[\mathbf{A} X_k]\bigr)^{-1} \Phi'[\mathbf{A} X_k]^*$ for CP-full. Computing $\mathbf{U}^\dagger$ is trivial and can be done in advance; the $\mathbf{U}_k$ are computed in each step of CP-full using (22) and (23).

4.3. Results

Reconstruction results using the proposed algorithms CP-fast (second row) and CP-full (third row) and the two comparison methods Mechlem2018 [21] (fourth row) and Niu2014 [17] (bottom row) are shown in Figure 4. The ground truth phantom shown in the top row is made of iodine (left), gadolinium (middle), and water (right). In all cases, we use noisy data and plot the iteration with minimal relative $\ell^2$ reconstruction error $\lVert X_k - X \rVert_2 / \lVert X \rVert_2$, where $X$ is the ground truth. All calculations were performed on the same standard laptop, where one iteration of CP-full takes around six seconds, one iteration of CP-fast around one second, and one iteration of Mechlem2018 [21] about four seconds. These times are comparable to Niu2014 [17], which takes around 5 to 8 s when using 500 and 1000 CG iterations, respectively. Figure 5 shows the evolution of the relative $\ell^2$ reconstruction error for the various one-step methods. Note that for CP-fast, the minimal error for iodine and gadolinium is reached at approximately the same number of iterations, which indicates efficient preconditioning and is important in applications. Furthermore, we do not enforce that the sum of the three density images equals one, and in the example it indeed does not hold. The proposed algorithms turned out to be more stable than Mechlem2018 [21] and produce better results. In particular, CP-full gives the best results, while CP-fast is the fastest.
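For reference, the relative $\ell^2$ reconstruction error used above can be computed as in the following small sketch, where X_ref is a placeholder name for the ground truth array.

```python
def relative_error(X_k, X_ref):
    """Relative l2 reconstruction error ||X_k - X_ref||_2 / ||X_ref||_2."""
    return np.linalg.norm(X_k - X_ref) / np.linalg.norm(X_ref)
```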
Additional results for a different phantom are shown in Figure 6 and Figure 7. The concentration of iodine, gadolinium, and water has been computed as in [11].

5. Conclusions and Outlook

Image reconstruction in MSCT requires the solution of a nonlinear ill-posed problem. Iterative one-step methods are known to be accurate for this purpose. In this work, we propose two generic algorithms named CP-full (channel-preconditioned full gradient iteration) and CP-fast (channel-preconditioned fast iteration). Both algorithms use preconditioning in the channel dimension only, which considerably accelerates the updates compared to Newton-type methods that require solving numerically costly linear inverse problems at each iteration. CP-fast replaces the derivative of the channel nonlinearity with its linearization at zero, making it even more efficient. Both algorithms turn out to be fast and robust.
There are several future directions emerging from our work. First, proving the convergence of the two algorithms and demonstrating their regularization properties is important. Second, we will combine them with more realistic noise priors, such as Poisson noise, resulting in the maximum likelihood estimation (MLE) functional. Additionally, we will integrate explicit image priors, use plug-and-play strategies, and incorporate learned components.

Author Contributions

Conceptualization, M.H. and L.N.; methodology, M.H., L.N. and T.P.; software, T.P. and L.N.; formal analysis, M.H. and L.N.; writing—original draft preparation, M.H., L.N. and T.P.; writing—review and editing, M.H., L.N. and T.P.; visualization, T.P.; supervision, M.H. and L.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Theorem 1

Proof. 
By (4) and (10), we have:
$$F'[X](\xi) = \bigl(\mathbf{S}^\top \cdot (\exp_{E \times N_y})'[-\mathbf{M}(\mathbf{A}X)^\top]\bigl(-\mathbf{M}(\mathbf{A}\xi)^\top\bigr)\bigr)^\top = -\bigl(\mathbf{S}^\top \cdot \bigl(\exp_{E \times N_y}(-\mathbf{M}(\mathbf{A}X)^\top) \odot (\mathbf{M}(\mathbf{A}\xi)^\top)\bigr)\bigr)^\top = -\bigl(\mathbf{S}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{M}(\mathbf{A}\xi)^\top)\bigr)\bigr)^\top.$$
This is (11). Now, using (7), (9), and (10), we have:
$$H'[X](\xi) = -(\log_{N_y \times B})'[F(X) \oslash F(0)]\bigl(F'[X](\xi) \oslash F(0)\bigr) = -\bigl(F'[X](\xi) \oslash F(0)\bigr) \oslash \bigl(F(X) \oslash F(0)\bigr) = -F'[X](\xi) \oslash F(X) = \bigl(\mathbf{S}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{M}(\mathbf{A}\xi)^\top)\bigr)\bigr)^\top \oslash F(X).$$
This is (12). Next, we turn to the computation of the adjoints. We will only demonstrate this for (14), as (13) is verified in a similar manner. By elementary manipulations:
$$\langle H'[X](\xi), \eta \rangle = \bigl\langle \bigl(\mathbf{S}^\top \cdot (\mathbf{Q}_X \odot (\mathbf{M} \cdot (\mathbf{A}\xi)^\top))\bigr)^\top \oslash F(X), \eta \bigr\rangle = \bigl\langle \mathbf{M} \cdot (\mathbf{A}\xi)^\top, \mathbf{Q}_X \odot \bigl(\mathbf{S} \cdot (\eta \oslash F(X))^\top\bigr) \bigr\rangle = \bigl\langle (\mathbf{A}\xi)^\top, \mathbf{M}^\top \cdot \bigl(\mathbf{Q}_X \odot (\mathbf{S} \cdot (\eta \oslash F(X))^\top)\bigr) \bigr\rangle = \bigl\langle \xi, \mathbf{A}^\top \cdot \bigl(\mathbf{M}^\top \cdot (\mathbf{Q}_X \odot (\mathbf{S} \cdot (\eta \oslash F(X))^\top))\bigr)^\top \bigr\rangle,$$
which is (14). Finally, with $\nabla D[X] = (H'[X])^*\bigl(H(X) - Y_H\bigr)$ and (14), we obtain (15). □

References

  1. McDavid, W.D.; Waggener, R.G.; Payne, W.H.; Dennis, M.J. Spectral effects on three-dimensional reconstruction from X rays. Med. Phys. 1975, 2, 321–324. [Google Scholar] [CrossRef]
  2. Kiss, M.B.; Bossema, F.G.; van Laar, P.J.; Meijer, S.; Lucka, F.; van Leeuwen, T.; Batenburg, K.J. Beam filtration for object-tailored X-ray CT of multi-material cultural heritage objects. Herit. Sci. 2023, 11, 130. [Google Scholar] [CrossRef]
  3. Pan, X.; Siewerdsen, J.; La Riviere, P.J.; Kalender, W.A. Anniversary Paper: Development of X-ray computed tomography: The role of Medical Physics and AAPM from the 1970s to present. Med. Phys. 2008, 35, 3728–3739. [Google Scholar] [CrossRef] [PubMed]
  4. Herman, G.T. Correction for beam hardening in computed tomography. Phys. Med. Biol. 1979, 24, 81. [Google Scholar] [CrossRef]
  5. Van Gompel, G.; Van Slambrouck, K.; Defrise, M.; Batenburg, K.J.; De Mey, J.; Sijbers, J.; Nuyts, J. Iterative correction of beam hardening artifacts in CT. Med. Phys. 2011, 38, S36–S49. [Google Scholar] [CrossRef] [PubMed]
  6. Rigaud, G. On analytical solutions to beam-hardening. Sens. Imaging 2017, 18, 5. [Google Scholar] [CrossRef]
  7. Kazantsev, D.; Jørgensen, J.S.; Andersen, M.S.; Lionheart, W.R.; Lee, P.D.; Withers, P.J. Joint image reconstruction method with correlative multi-channel prior for X-ray spectral computed tomography. Inverse Probl. 2018, 34, 064001. [Google Scholar] [CrossRef]
  8. Rigie, D.S.; La Riviere, P.J. Joint reconstruction of multi-channel, spectral CT data via constrained total nuclear variation minimization. Phys. Med. Biol. 2015, 60, 1741. [Google Scholar] [CrossRef] [PubMed]
  9. Hu, Y.; Nagy, J.G.; Zhang, J.; Andersen, M.S. Nonlinear optimization for mixed attenuation polyenergetic image reconstruction. Inverse Probl. 2019, 35, 064004. [Google Scholar] [CrossRef]
  10. Arridge, S.R.; Ehrhardt, M.J.; Thielemans, K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. Philos. Trans. R. Soc. A 2021, 379, 20200205. [Google Scholar] [CrossRef]
  11. Mory, C.; Sixou, B.; Si-Mohamed, S.; Boussel, L.; Rit, S. Comparison of five one-step reconstruction algorithms for spectral CT. Phys. Med. Biol. 2018, 63, 235001. [Google Scholar] [CrossRef] [PubMed]
  12. Heismann, B.; Balda, M. Quantitative image-based spectral reconstruction for computed tomography. Med. Phys. 2009, 36, 4471–4485. [Google Scholar] [CrossRef] [PubMed]
  13. Maaß, C.; Baer, M.; Kachelrieß, M. Image-based dual energy CT using optimized precorrection functions: A practical new approach of material decomposition in image domain. Med. Phys. 2009, 36, 3818–3829. [Google Scholar] [CrossRef]
  14. Willemink, M.J.; Persson, M.; Pourmorteza, A.; Pelc, N.J.; Fleischmann, D. Photon-counting CT: Technical principles and clinical prospects. Radiology 2018, 289, 293–312. [Google Scholar] [CrossRef] [PubMed]
  15. Kreisler, B. Photon counting Detectors: Concept, technical Challenges, and clinical outlook. Eur. J. Radiol. 2022, 149, 110229. [Google Scholar] [CrossRef] [PubMed]
  16. Schmidt, T.G. Optimal “image-based” weighting for energy-resolved CT. Med. Phys. 2009, 36, 3018–3027. [Google Scholar] [CrossRef]
  17. Niu, T.; Dong, X.; Petrongolo, M.; Zhu, L. Iterative image-domain decomposition for dual-energy CT. Med. Phys. 2014, 41, 041901. [Google Scholar] [CrossRef] [PubMed]
  18. Schirra, C.O.; Roessl, E.; Koehler, T.; Brendel, B.; Thran, A.; Pan, D.; Anastasio, M.A.; Proksa, R. Statistical reconstruction of material decomposed data in spectral CT. IEEE Trans. Med. Imaging 2013, 32, 1249–1257. [Google Scholar] [CrossRef]
  19. Cai, C.; Rodet, T.; Legoupil, S.; Mohammad-Djafari, A. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography. Med. Phys. 2013, 40, 111916. [Google Scholar] [CrossRef]
  20. Long, Y.; Fessler, J.A. Multi-material decomposition using statistical image reconstruction for spectral CT. IEEE Trans. Med. Imaging 2014, 33, 1614–1626. [Google Scholar] [CrossRef]
  21. Mechlem, K.; Ehn, S.; Sellerer, T.; Braig, E.; Münzel, D.; Pfeiffer, F.; Noël, P.B. Joint statistical iterative material image reconstruction for spectral computed tomography using a semi-empirical forward model. IEEE Trans. Med. Imaging 2017, 37, 68–80. [Google Scholar] [CrossRef]
  22. Weidinger, T.; Buzug, T.M.; Flohr, T.; Kappler, S.; Stierstorfer, K. Polychromatic iterative statistical material image reconstruction for photon-counting computed tomography. Int. J. Biomed. Imaging 2016, 2016, 5871604. [Google Scholar] [CrossRef]
  23. Barber, R.F.; Sidky, E.Y.; Schmidt, T.G.; Pan, X. An algorithm for constrained one-step inversion of spectral CT data. Phys. Med. Biol. 2016, 61, 3784. [Google Scholar]
  24. Chen, B.; Zhang, Z.; Xia, D.; Sidky, E.Y.; Pan, X. Non-convex primal-dual algorithm for image reconstruction in spectral CT. Comput. Med. Imaging Graph. 2021, 87, 101821. [Google Scholar] [CrossRef]
  25. Scherzer, O.; Grasmair, M.; Grossauer, H.; Haltmeier, M.; Lenzen, F. Variational Methods in Imaging; Springer: New York, NY, USA, 2009; Volume 167. [Google Scholar]
  26. Benning, M.; Burger, M. Modern regularization methods for inverse problems. Acta Numer. 2018, 27, 1–111. [Google Scholar] [CrossRef]
  27. Ebner, A.; Haltmeier, M. Plug-and-Play image reconstruction is a convergent regularization method. arXiv 2022, arXiv:2212.06881. [Google Scholar] [CrossRef]
  28. Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Walter de Gruyter: Berlin, Germany, 2008. [Google Scholar]
  29. Fessler, J.A. Method for Statistically Reconstructing Images from a Plurality of Transmission Measurements Having Energy Diversity and Image Reconstructor Apparatus Utilizing the Method. U.S. Patent 6,754,298, 22 June 2004. [Google Scholar]
  30. Du, K.; Ruan, C.C.; Sun, X.H. On the convergence of a randomized block coordinate descent algorithm for a matrix least squares problem. Appl. Math. Lett. 2022, 124, 107689. [Google Scholar] [CrossRef]
  31. Rabanser, S.; Neumann, L.; Haltmeier, M. Analysis of the block coordinate descent method for linear ill-posed problems. SIAM J. Imaging Sci. 2019, 12, 1808–1832. [Google Scholar] [CrossRef]
  32. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37. [Google Scholar] [CrossRef]
  33. Hanke, M. A regularizing Levenberg–Marquardt scheme, with applications to inverse groundwater filtration problems. Inverse Probl. 1997, 13, 79. [Google Scholar] [CrossRef]
  34. Rieder, A. On the regularization of nonlinear ill-posed problems via inexact Newton iterations. Inverse Probl. 1999, 15, 309. [Google Scholar] [CrossRef]
  35. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
  36. Bousse, A.; Kandarpa, V.S.S.; Rit, S.; Perelli, A.; Li, M.; Wang, G.; Zhou, J.; Wang, G. Systematic Review on Learning-based Spectral CT. arXiv 2023, arXiv:2304.07588. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the structure of the algorithms proposed in this work. Specifically, we introduce CP-full and CP-fast, which differ in the specific form of the update Δ X (see Section 3.3). Details on the recalibration step are given in Remark 1.
Figure 2. Illustration of the forward model in MSCT for $M = 3$ materials, $B = 5$ energy bins, and $E = 150$ values for the energy discretization. First, the Radon transform is applied separately to each of the given material densities $X_1$, $X_2$, and $X_3$, resulting in three material sinograms, which can be seen as a three-channel sinogram. Next, the matrix $\mathbf{M}$ is applied to each pixel, resulting in 150 energy sinograms. To each of these sinograms, $x \mapsto \exp(-x)$ is applied pointwise, resulting in 150 virtual energy data maps. By applying the matrix $\mathbf{S}$ pixel by pixel, one obtains the final data, consisting of $B = 5$ data maps. The continuous forward model can be visualized in a similar way by replacing the material images with continuous counterparts and the 150 energy channels with a function-valued channel.
Figure 3. Physical parameters determining the forward model. (Top left): Attenuation functions. (Top right): Incident spectrum. (Bottom left): Spectral response of the detectors. (Bottom right): Effective spectra. The figures are based on code (modified) and data from [11].
Figure 4. Ground truth phantom (top row) and reconstructions using CP-fast (second row), CP-full (third row), Mechlem2018 [21] (fourth row), and the two-step algorithm Niu2014 [17] (bottom).
Figure 5. Relative reconstruction error using the proposed CP-fast (top), proposed CP-full (middle), and Mechlem2018 [21] (bottom) as a function of the iteration index.
Figure 6. Reconstructed slices of the second phantom: Ground truth phantom (top row) and reconstructions using CP-fast (second row), CP-full (third row), Mechlem2018 [21] (fourth row), and the two-step algorithm by Niu2014 [17] (bottom row).
Figure 7. Concentration of iodine (left), gadolinium (middle), and water (right) for the phantom shown in Figure 6 during the iteration.