Article

Data-Proximal Complementary $\ell^1$-TV Reconstruction for Limited Data Computed Tomography

¹ Department of Mathematics, University of Innsbruck, 6020 Innsbruck, Austria
² Department of Computer Science and Mathematics, OTH Regensburg, 93053 Regensburg, Germany
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(10), 1606; https://doi.org/10.3390/math12101606
Submission received: 22 March 2024 / Revised: 10 May 2024 / Accepted: 15 May 2024 / Published: 20 May 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

In a number of tomographic applications, data cannot be fully acquired, resulting in severely underdetermined image reconstruction. Conventional methods in such cases lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient in compensating for missing data and reducing reconstruction artifacts. On the other hand, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single regularizer must therefore account for both the missing data and the noise. A particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over multiple scales, in which case $\ell^1$ curvelet regularization methods are well suited. To address this issue, in this paper, we present a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages. The first stage is mainly aimed at accurate reconstruction in the presence of noise, and the second stage is aimed at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments.

1. Introduction

Limited data computed tomography (CT) is present in a wide range of applications, such as digital breast tomosynthesis, dental tomography and non-destructive testing. In this case, the available data are only a subset of the full data that would be required to uniquely identify the scanned object. Due to the lack of available scans, certain image features are invisible, and important information may be obscured by artifacts generated during reconstruction [1,2]. Although the characterization of limited view artifacts has been well researched [3,4,5,6], effective artifact reduction or compensation for missing data is still a challenge. This is even more important when the tomographic data are also noisy, which is an additional hurdle to overcome.
Mathematically, limited-data CT can be written as an inverse problem of the form
$$v^\delta = \mathcal{N}_\delta ( K_\Omega u ), \qquad (1)$$
where $u \in L^2(\mathbb{R}^2)$ is the unknown image to be recovered, $K_\Omega$ denotes the Radon transform with restricted angular range $\Omega \subsetneq S^1$, and $\mathcal{N}_\delta$ is the operator that adds noise to the data, parameterized by the noise level $\delta > 0$. While the inverse problem of recovering an image from CT measurements with complete noisy data is already ill-posed [7], the reconstruction problem for incomplete data is severely under-determined. Direct methods such as filtered back projection (FBP) are sensitive to noise and do not handle missing data well, leading to typical limited data artifacts [3]. To account for noise and missing data, further information that is available about the object to be recovered must be incorporated. Specific methods are therefore required that can reliably remove noise, while also avoiding the generation of artifacts caused by limited data.
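For intuition, the forward model can be mimicked numerically with a toy rotate-and-sum parallel-beam projector. The sketch below is an illustrative stand-in, not the discretization used in this paper (a proper tomography library would normally be used); the phantom and angular range are assumed for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def limited_angle_radon(img, angles_deg):
    """Toy limited-angle Radon transform: rotate the image and sum
    along columns to obtain one parallel-beam projection per angle.
    Rows of the result index the direction, columns the offset s."""
    sino = np.zeros((len(angles_deg), img.shape[1]))
    for k, phi in enumerate(angles_deg):
        rotated = rotate(img, phi, reshape=False, order=1)
        sino[k] = rotated.sum(axis=0)
    return sino

# Centered disk phantom; 130 projections over a 130 degree range,
# matching the limited-view setup of the experiments in Section 4.
n = 64
y, x = np.mgrid[:n, :n] - n / 2
phantom = (x**2 + y**2 < (n / 4) ** 2).astype(float)
angles = np.arange(-65, 65)
sino = limited_angle_radon(phantom, angles)
```

Each projection approximately preserves the total mass of the image, which is a quick sanity check for any discrete Radon implementation.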

1.1. Variational Regularization

One of the most successful approaches to problems of the form (1) is variational regularization [8,9], in which a stable and robust solution $u_\alpha^\delta \in L^2(\mathbb{R}^2)$ is determined as a minimizer of the Tikhonov functional
$$\mathcal{T}_\alpha^\delta(u) = \frac{1}{2}\,\| K_\Omega u - v^\delta \|^2 + \alpha\,\mathcal{R}(u). \qquad (2)$$
Here, $\mathcal{R}\colon L^2(\mathbb{R}^2) \to \mathbb{R} \cup \{\infty\}$ represents a suitable regularizer that incorporates prior information about the image to be recovered, $\| K_\Omega u - v^\delta \|^2/2$ denotes the least squares data fitting functional, and $\alpha$ is the regularization parameter. The variational approach offers great flexibility, allowing for adaptation to the forward problem, the signal class, and the noise. For instance, total variation, $\mathcal{R}(u) = |u|_{\mathrm{TV}}$, has been demonstrated to be a robust prior that can compensate for the missing data [10,11,12,13]. Another prominent example of a regularization functional is the $\ell^1$-norm, $\mathcal{R}(u) = \|\Psi u\|_1$, applied to wavelet or curvelet coefficients $\Psi u$. This approach has been shown to be statistically optimal for inverting the Radon transform from complete data [14].
However, the total variation’s monoscale nature does not achieve optimal reconstruction in the presence of noise [15,16], while wavelet- or curvelet-based priors [17,18] struggle to eliminate typical limited angle artifacts. This problem stems from the observation that extending the missing sinogram with zero values results in reconstructions with a smaller $\ell^1$ norm compared to non-zero extensions. The absence of data in sinogram space often results in limited angle artifacts in the reconstruction domain. This phenomenon is evident in recent publications employing sparse shearlet reconstruction [19,20], where such artifacts persist, indicating the need for learning-based methods to address missing data extension. In contrast, in this work, we also follow a classical variational approach for the data completion step.
The individual advantages and disadvantages of specific regularizers have led to the development of so-called hybrid methods, which integrate two different regularizers within the framework of variational regularization (2). For instance, hybrid $\ell^1$-TV methods [21,22,23] use the regularizer $\mathcal{R}(u) = \alpha |u|_{\mathrm{TV}} + \beta \|\Psi u\|_1$. Given the strengths and limitations outlined above for each individual regularizer, this approach is particularly appealing for CT with noisy limited data. However, the use of a single hybrid regularizer must account for both the limited data and the noise, which is a significant challenge. Due to its fixed structure, the hybrid regularizer may not completely avoid the drawbacks of the individual terms. In particular, the TV term can still over- or under-smooth certain scales in the visible domain, while the curvelet part might try to suppress the intensity of invisible coefficients. The latter is due to the fact that there are curvelets lying in the kernel of the limited angle Radon transform (cf. [24]). Our proposed methodology aims to address these drawbacks while preserving the benefits of both TV and curvelets. We achieve this by introducing a data space coupling term that provides more flexibility, allowing each regularizer to focus on its specific role more effectively.

1.2. Main Contribution

In this paper, we present a novel complementary $\ell^1$-TV algorithm that addresses both the limited data problem and the noise reduction problem. It is based on a modified variational regularization approach that selects a regularizer for each of the two tasks and combines them in a synergistic way through data-proximity. More precisely, let $\Psi^*\colon \ell^2(\Lambda) \to L^2(\mathbb{R}^2)$ denote the synthesis operator of some frame with countable index set $\Lambda$. The proposed iterative reconstruction method generates two reconstructions $\theta \in \ell^2(\Lambda)$ and $u \in L^2(\mathbb{R}^2)$ by alternately solving the two problems
$$\min_\theta\; \| K_\Omega(\Psi^*\theta) - v^\delta \|^2/2 + \alpha \|\theta\|_1 + \mu \| K_\Omega(u - \Psi^*\theta) \|^2/2$$
$$\min_u\; \mathcal{R}(u) + \mu \| K_\Omega(u - \Psi^*\theta) \|^2/2.$$
In this setup, the auxiliary reconstruction $\Psi^*\theta$ aims for a noise-suppressed reconstruction, which is addressed by the sparsity term $\|\theta\|_1$. The primary reconstruction $u$ implicitly performs data completion by updating $\Psi^*\theta$ based on the regularizer $\mathcal{R}(u)$. A key element is the coupling of the two reconstructions, which requires that $\| K_\Omega(u - \Psi^*\theta) \|^2$ is small; we will refer to this as data proximity. Consequently, both $u$ and $\Psi^*\theta$ approximately give the data $v^\delta$. There are many possible solutions due to ill-posedness, and the specific regularizers allow $u$ and $\Psi^*\theta$ to differ significantly within the kernel of $K_\Omega$.
In a nutshell, the features and contributions of the proposed complementary algorithm in comparison to the prior art are the following:
  • We construct two separate reconstructions, $u$ and $\Psi^*\theta$.
  • The two separate reconstructions are coupled only in data space, allowing for more flexibility in the image space.
These two properties significantly distinguish our method from existing ones, which either produce a single reconstruction or reconstructions that are forced to be close to each other in image space. Allowing the reconstructions to differ in the null space of the operator, as we do, permits two very distinct reconstructions: one exclusively addresses noise removal and optimally accounts for the limited data, whereas the other focuses on artifact reduction and effectively completes the image in the null space. The coupling in the data space ensures that the two steps do not negatively impact each other.
Our method is particularly different from post-processing an original reconstruction. In the latter case, the data proximity term $\| K_\Omega(u - \Psi^*\theta) \|^2$ is replaced by a proximity term $\| u - \Psi^*\theta \|^2$ in the reconstruction space, which forces $u$ to be close to $\Psi^*\theta$, making artifacts difficult to remove. We also note that our concept is applicable to any image reconstruction problem with limited data, and that we focus on CT with limited data for the sake of clarity.

2. Background

Throughout this article, we will use the following notation. The Fourier transform of a function $u \in L^2(\mathbb{R}^2)$ is denoted by $\mathcal{F}u$, where $\mathcal{F}u(\xi) \coloneqq \int_{\mathbb{R}^2} u(x)\, e^{-\mathrm{i}\langle \xi, x\rangle}\, \mathrm{d}x$ for integrable functions, extended to $L^2(\mathbb{R}^2)$ by continuity. We write $u^*(x) \coloneqq \overline{u(-x)}$, where $\bar{z}$ denotes the complex conjugate of $z \in \mathbb{C}$. Recall that the Fourier transform converts convolution into multiplication. In particular, for $u, w \in L^2(\mathbb{R}^2)$ with $\mathcal{F}u \in L^\infty(\mathbb{R}^2)$, the convolution $u * w \in L^2(\mathbb{R}^2)$ is well-defined and given by $u * w = \mathcal{F}^{-1}((\mathcal{F}u) \cdot (\mathcal{F}w))$. Furthermore, we write $\mathcal{F}_2 u$ for the Fourier transform of $u \in L^2(S^1 \times \mathbb{R})$ with respect to the second argument.

2.1. The Radon Transform

The Radon transform with full angular range maps any function $u \in L^1(\mathbb{R}^2) \cap L^2(\mathbb{R}^2)$ to the line integrals
$$Ku(\omega, s) \coloneqq \int_{\omega^\perp} u(x + s\omega)\, \mathrm{d}x \qquad \text{for } (\omega, s) \in S^1 \times \mathbb{R}.$$
Here, $S^1 = \{\omega \in \mathbb{R}^2 \mid \|\omega\| = 1\}$, and any line of integration $\{x \in \mathbb{R}^2 \mid \langle \omega, x \rangle = s\}$ is described by a unit normal vector $\omega \in S^1$ and a signed distance $s$ from the origin. The Radon transform can be extended to an unbounded, densely defined, closed operator $K\colon D(K) \subseteq L^2(\mathbb{R}^2) \to L^2(S^1 \times \mathbb{R})$ with domain $D(K) \coloneqq \{u \in L^2(\mathbb{R}^2) \mid |\cdot|^{-1/2}\, \mathcal{F}u \in L^2(\mathbb{R}^2)\}$; see [25].
 Lemma 1 
(Fourier slice theorem). For all $u \in D(K)$, we have $\mathcal{F}_2 K u(\omega, \sigma) = \mathcal{F}u(\sigma\omega)$.
As opposed to the full data case, in limited data CT the Radon transform is only known on a certain subset of $S^1 \times \mathbb{R}$. We will model the limited view data using a binary mask (cut-off function). For any subset $A \subseteq S^1 \times \mathbb{R}$, we denote by $\chi_A$ the indicator function defined by $\chi_A(\omega, s) = 1$ if $(\omega, s) \in A$, and $\chi_A(\omega, s) = 0$ otherwise.
 Definition 1. 
For $\Omega \subseteq S^1$, we define the limited-angle Radon transform as
$$K_\Omega\colon D(K_\Omega) \subseteq L^2(\mathbb{R}^2) \to L^2(S^1 \times \mathbb{R})\colon u \mapsto \chi_{\Omega \times \mathbb{R}} \cdot (K u).$$
The Fourier slice theorem states that for any $\omega \in S^1$, the Fourier transform of the Radon transform of some function in the second component equals the Fourier transform of that function along the Fourier slice $\{\sigma\omega \mid \sigma \in \mathbb{R}\}$. In particular, limited angle CT data are in one-to-one correspondence with the Fourier transform $\mathcal{F}u$ restricted to the set $W_\Omega \coloneqq \{\sigma\omega \mid \sigma \in \mathbb{R},\ \omega \in \Omega\}$. We will call $W_\Omega$ the visible wavenumber set, as only Fourier coefficients for wave numbers in $W_\Omega$ are provided by the data. Accordingly, we call $\mathbb{R}^2 \setminus W_\Omega$ the invisible wavenumber set (cf. [3]). We see that if $\mathbb{R}^2 \setminus W_\Omega$ has non-vanishing measure, then $K_\Omega$ has a non-vanishing kernel, consisting of all functions $u \in D(K_\Omega) = D(K)$ with $\operatorname{supp} \mathcal{F}u \subseteq \mathbb{R}^2 \setminus W_\Omega$.
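The visible wavenumber set can be made concrete on a discrete frequency grid. The following sketch (a hedged illustration; grid size and half-angle are assumed parameters) marks which DFT coefficients of an $n \times n$ image are determined by limited-view data over a symmetric wedge of directions:

```python
import numpy as np

def visible_wedge_mask(n, half_angle_deg):
    """Boolean mask of the visible wavenumber set W_Omega for a wedge
    of directions omega(phi) with |phi| <= half_angle_deg.  Since the
    slice parameter sigma ranges over all of R, +/- xi belong to the
    same slice, so angles are folded modulo pi."""
    fx = np.fft.fftfreq(n)
    xi_x, xi_y = np.meshgrid(fx, fx, indexing="ij")
    ang = np.mod(np.arctan2(xi_y, xi_x), np.pi)
    half = np.deg2rad(half_angle_deg)
    return (ang <= half) | (ang >= np.pi - half)

mask = visible_wedge_mask(256, 65.0)   # 130 degree visible range
coverage = mask.mean()                  # fraction of visible DFT coefficients
```

Frequencies along the central direction of the wedge are visible, while those perpendicular to it (here the vertical axis) fall into the invisible complement, which spans the kernel directions of $K_\Omega$.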
In limited-view CT, the set W Ω forms a wedge, whereas in the sparse-view case, the set W Ω forms a fan; see the left image in Figure 1.

2.2. Frames and TI-Frames

We will use the fact that the desired image u has a sparse or compressible representation in a suitable frame. In particular, we will use curvelet frames, which give an optimal sparse representation of cartoon-like images [14]. The same is true for shearlets [26]. Curvelets and shearlets form frames of L 2 ( R 2 ) , and this section provides some necessary background.

2.2.1. Translational-Invariant (TI) Frames

Let $I$ be an at most countable index set. A family $(\psi_i)_{i \in I}$ in $L^2(\mathbb{R}^2)$ is called a translation invariant frame (TI-frame) for $L^2(\mathbb{R}^2)$ if $\mathcal{F}\psi_i \in L^\infty(\mathbb{R}^2)$ for all $i \in I$, and for some constants $A, B > 0$, we have
$$\forall u \in L^2(\mathbb{R}^2)\colon \quad A \|u\|^2 \le \sum_{i \in I} \|\psi_i * u\|^2 \le B \|u\|^2. \qquad (3)$$
A TI-frame is called tight if $A = B = 1$. From $\psi_i * u = \mathcal{F}^{-1}((\mathcal{F}\psi_i) \cdot (\mathcal{F}u))$ and Plancherel’s theorem, we obtain $\|\psi_i * u\|^2 = 2\pi \int_{\mathbb{R}^2} |\mathcal{F}\psi_i|^2\, |\mathcal{F}u|^2$. The right inequality in (3) thus implies $(\psi_i * u)_{i \in I} \in \ell^2(I, L^2(\mathbb{R}^2))$.
Along with TI-frames, we will make use of the TI-analysis and TI-synthesis operators, respectively, which are defined by
$$\Psi\colon L^2(\mathbb{R}^2) \to \ell^2(I, L^2(\mathbb{R}^2))\colon u \mapsto (\psi_i * u)_{i \in I},$$
$$\Psi^*\colon \ell^2(I, L^2(\mathbb{R}^2)) \to L^2(\mathbb{R}^2)\colon (\theta_i)_{i \in I} \mapsto \sum_{i \in I} \psi_i^* * \theta_i.$$
Note that the TI-analysis operator and the TI-synthesis operator are adjoints of each other. The composition $\Psi^* \Psi$ is known as the TI-frame operator. Using the definition of the TI-analysis operator, we can rewrite the frame condition (3) as $A \|u\|^2 \le \|\Psi u\|^2 \le B \|u\|^2$ for $u \in L^2(\mathbb{R}^2)$. The right inequality in (3) states that the TI-analysis operator $\Psi$ is a well-defined bounded linear operator. The left inequality states that $\Psi$ is bounded from below, so that the pseudo-inverse $\Psi^\ddagger \coloneqq (\Psi^* \Psi)^{-1} \Psi^*$ is continuous.
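A minimal tight TI-frame can be built directly in the Fourier domain: choose two filters whose squared Fourier magnitudes sum to one, so the frame condition holds with $A = B = 1$. The discrete sketch below (periodic convolutions via FFT; the Gaussian low-pass is an assumed, illustrative choice, not the curvelet system used in this paper) verifies the tightness numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
u = rng.standard_normal((n, n))

# Fourier-domain filters with |F psi1|^2 + |F psi2|^2 = 1 (tightness).
fx = np.fft.fftfreq(n)
xi2 = fx[:, None] ** 2 + fx[None, :] ** 2
low = np.exp(-xi2 / 0.01)            # assumed Gaussian low-pass profile
f_psi1 = np.sqrt(low)
f_psi2 = np.sqrt(1.0 - low)          # complementary high-pass

# TI-analysis: coefficients are convolutions psi_i * u.
Fu = np.fft.fft2(u)
c1 = np.fft.ifft2(f_psi1 * Fu)
c2 = np.fft.ifft2(f_psi2 * Fu)

# Discrete analogue of the tight frame condition (3) with A = B = 1:
# the coefficient energies sum exactly to the image energy.
energy = np.sum(np.abs(c1) ** 2) + np.sum(np.abs(c2) ** 2)
```

With the DFT’s Parseval identity, $\sum_i \|\psi_i * u\|^2 = \|u\|^2$ holds exactly whenever the filter magnitudes square-sum to one, mirroring the continuous condition (3).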
See [27] for general background on TI-frames, and [28,29,30] for TI-frames in the context of inverse problems.

2.2.2. Regular Frames

Regular frames use inner products instead of convolutions as in TI-frames for defining coefficients. Let $\Lambda$ be an at most countable index set. A family $(\psi_\lambda)_{\lambda \in \Lambda}$ in $L^2(\mathbb{R}^2)$ is called a frame for $L^2(\mathbb{R}^2)$ if
$$\forall u \in L^2(\mathbb{R}^2)\colon \quad A \|u\|^2 \le \sum_{\lambda \in \Lambda} |\langle \psi_\lambda, u \rangle|^2 \le B \|u\|^2, \qquad (4)$$
for some $A, B > 0$. A frame is called tight if $A = B = 1$. In some sense, a TI-frame can be seen as a frame with index set $I \times \mathbb{R}^2$. Note that a TI-frame is not a regular frame, since the translation parameter, ranging over $\mathbb{R}^2$, is uncountable. Similar to the TI case, the analysis and synthesis operators of a regular frame are defined by
$$\Psi\colon L^2(\mathbb{R}^2) \to \ell^2(\Lambda)\colon u \mapsto (\langle \psi_\lambda, u \rangle)_{\lambda \in \Lambda},$$
$$\Psi^*\colon \ell^2(\Lambda) \to L^2(\mathbb{R}^2)\colon (\theta_\lambda)_{\lambda \in \Lambda} \mapsto \sum_{\lambda \in \Lambda} \theta_\lambda \psi_\lambda,$$
and the composition $\Psi^* \Psi$ is the frame operator.
Under suitable regularity assumptions [27,31], a regular frame with index set $I \times \mathbb{Z}^2$ can be obtained from a TI-frame with index set $I$ by discretizing the convolution in (3). For multiscale systems, such as wavelets or curvelets, the associated $I$-dependent subsampling can destroy translation invariance and thus degrade performance and affect reconstruction quality. The advantages of TI-frames over regular frames have been investigated in [30] for plain denoising and in [28] for general inverse problems.

2.3. Variational Image Reconstruction

A practically successful and theoretically well-analyzed method for solving (1) is variational regularization [8,9]. Here, the available prior information is incorporated by a regularization functional $\mathcal{R}\colon L^2(\mathbb{R}^2) \to \mathbb{R} \cup \{\infty\}$, and an approximate image is recovered by minimizing the Tikhonov functional $\mathcal{T}_\alpha^\delta(u) = \| K_\Omega u - v^\delta \|^2/2 + \alpha \mathcal{R}(u)$ with respect to $u$; cf. (2).
Variational regularization is well-posed, stable, and convergent in the following sense: (i) $\mathcal{T}_\alpha^\delta(\cdot)$ has a minimizer $u_\alpha^\delta$; (ii) minimizers depend continuously on the data $v^\delta$; and (iii) if $\|v - v^\delta\| \le \delta$ with $v \in \operatorname{ran}(K_\Omega)$ and $\alpha = \alpha(\delta)$ is selected properly, then $u_\alpha^\delta$ converges (as $\delta \to 0$) to an $\mathcal{R}$-minimizing solution of $K_\Omega u = v$ defined by
$$\min_u\; \mathcal{R}(u) \quad \text{such that } K_\Omega u = v. \qquad (5)$$
These properties hold true under the assumption that $\mathcal{R}$ is convex, weakly lower semicontinuous, and coercive [9]. The characterization (5) of the limiting solutions reveals two separate tasks to be performed by the regularizer: besides yielding noise-robust reconstructions via minimization of the Tikhonov functional, it also serves as a criterion for selecting a particular solution in the limit of noise-free data. It may be challenging to perform both tasks well with a single regularizer. Note that the selection of a particular solution via (5) addresses the non-uniqueness and implicitly performs data completion to estimate the missing data $K_{S^1 \setminus \Omega}\, u$. This is equivalent to the selection of the proper component of the reconstruction in the kernel $\ker(K_\Omega)$. The data completion strongly depends on the chosen regularizer. The standard Tikhonov regularizer $\mathcal{R} = \|\cdot\|^2/2$ completes missing data with zero, while different regularizers perform non-zero data completion.
While there are many reasonable choices for the regularizer $\mathcal{R}$, in this paper, we will mainly focus on the $\ell^1$-norm with respect to a suitably chosen frame and on the total variation, each coming with its own benefits and shortcomings.

2.3.1. Sparse $\ell^1$-Regularization

Let $\Psi^*$ denote the synthesis operator of a frame and set $\Psi^\ddagger \coloneqq (\Psi^* \Psi)^{-1} \Psi^*$. In particular, any $u \in L^2(\mathbb{R}^2)$ can be written as $u = \Psi^\ddagger \Psi u$. Synthesis sparsity means that $u = \Psi^* \theta$, where $\theta$ has only a few non-zero entries, whereas analysis sparsity refers to $\Psi u$ having only few non-vanishing entries. Sparsity can be implemented via regularization using the $\ell^1$-norm. There are at least two different basic instances of sparse $\ell^1$-regularization, namely the analysis and synthesis formulations
$$f_{\alpha,\delta}^{\mathrm{ana}} = \operatorname*{arg\,min}_u\; \frac{1}{2}\| K_\Omega u - v^\delta \|^2 + \alpha \|\Psi u\|_1, \qquad (6)$$
$$f_{\alpha,\delta}^{\mathrm{syn}} = \Psi^* \operatorname*{arg\,min}_\theta\; \frac{1}{2}\| K_\Omega (\Psi^* \theta) - v^\delta \|^2 + \alpha \|\theta\|_1. \qquad (7)$$
Synthesis and analysis regularization are equivalent in the case where the frame is actually a basis. In this case, the regularized solutions can be explicitly computed via the diagonal frame decomposition [28,32]. In general, it is important to note that synthesis regularization, analysis regularization, and regularization through diagonal frame decomposition are fundamentally different, as highlighted in [33].
Frame based sparsity constraints have been widely employed for various reconstruction tasks [34,35,36]. It is worth noting that theoretical and practical insights from general variational regularization apply in particular to $\ell^1$-regularization. Additionally, $\ell^1$-regularization comes with improved recovery guarantees both in the deterministic and statistical context [14,37,38].

2.3.2. TV Regularization

Total variation regularization is a special case of variational regularization [9,39] where the regularizer in (2) is taken as the total variation (TV)
$$|u|_{\mathrm{TV}} \coloneqq \sup \Bigl\{ \int_{\mathbb{R}^2} u \operatorname{div} v \;\Bigm|\; v \in C_c^1(\mathbb{R}^2, \mathbb{R}^2),\ \|v\|_{2,\infty} \le 1 \Bigr\},$$
where $\|v\|_{2,\infty} \coloneqq \sup_x \bigl(v_1(x)^2 + v_2(x)^2\bigr)^{1/2}$. TV regularization has been proven to account well for missing data in CT image reconstruction [10,12].
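Discretely, the TV seminorm is typically computed as the $(2,1)$-norm of a finite-difference gradient (the same discretization reported in the implementation details of Section 4). A short sketch, assuming forward differences with replicated boundary values:

```python
import numpy as np

def tv(u):
    """Discrete isotropic TV: sum over pixels of the Euclidean norm of
    the forward-difference gradient, i.e. the (2,1)-norm of grad u."""
    dx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences with
    dy = np.diff(u, axis=1, append=u[:, -1:])   # replicated boundary rows/cols
    return np.sum(np.sqrt(dx**2 + dy**2))

# A cartoon image has small TV; additive noise inflates it, which is
# why a large regularization parameter is needed at high noise levels.
n = 64
u_flat = np.zeros((n, n))
u_flat[16:48, 16:48] = 1.0                       # single constant square
noisy = u_flat + 0.1 * np.random.default_rng(2).standard_normal((n, n))
```

Constant images have zero TV, while the noisy version of the same cartoon has a much larger value; penalizing TV therefore removes noise but, as discussed next, at the price of also penalizing genuine fine-scale structure.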
Using the TV semi-norm as a regularizer tends to reduce noise while preserving edges within the image. However, since this is a monoscale approach, there is a trade-off between noise reduction and the preservation of features at certain scales. Natural images have features across multiple scales, which become either over- or under-smoothed depending on the particular choice of the regularization parameter [15,16]. This already has a negative impact for fully sampled tomographic systems or simple denoising: to suppress the noise, a sufficiently large regularization parameter is required, which at the same time removes structures at small scales.

2.3.3. Hybrid Regularizers

Hybrid regularizers aim to combine the benefits of the $\ell^1$ regularizer and an additional regularizer, such as the TV-seminorm. Extending (6) and (7), one can define analysis and synthesis variants of hybrid regularizers. In this work, we focus on the synthesis variant, which minimizes
$$\mathcal{T}_{\alpha,\beta}^{\mathrm{hybrid}}(\theta) = \frac{1}{2}\| K_\Omega (\Psi^* \theta) - v^\delta \|^2 + \alpha \|\theta\|_1 + \beta \mathcal{R}(\Psi^* \theta). \qquad (8)$$
The $\ell^1$-term targets a noise-reduced reconstruction, and the $\mathcal{R}$-term targets artifact reduction. Various forms of hybrid $\ell^1$-TV regularization techniques have been proposed [22,34,40]. While these methods have been shown to outperform both pure TV and pure $\ell^1$ regularization, they still carry the limitations of both approaches.
Minimizing (8) has the drawback that the $\ell^1$-penalty and the TV penalty work against each other in the following sense: the $\ell^1$-norm enforces sparsity of the reconstructed coefficients and, for that purpose, it seeks to recover an image where missing data are completed by values close to zero. While this is beneficial for denoising sparse signals, it also has the effect of keeping missing coefficients at zero when asking for a small $\ell^1$ norm. Conversely, one strength of TV is the possibility to compensate for missing data, which, in turn, leads to the generation of non-zero values. This is most clearly observed in the context of plain inpainting, where the forward operator is given by the restriction $v_\Omega = u|_\Omega$. For example, if $u$ is a constant image, then filling the missing data with a constant value results in minimal total variation. However, this works against the sparsity constraint in a localized frame, which tries to fill missing data with small intensity values.
In particular, in the context of limited angle tomography, it has been shown that there are curvelets lying in the kernel of the Radon transform (cf. [24]), so-called invisible curvelets. Consequently, if the TV regularization were to produce image features that lie in the kernel of the Radon transform, the $\ell^1$-minimization of the curvelet coefficients would tend to eliminate those features again. In this sense, both regularizers can work against each other, resulting in images that compromise between both terms. As a result, the advantages of each regularizer are not optimally combined through a hybrid approach.

3. Complementary $\ell^1$-TV Reconstruction

We now describe our proposed framework, which alternates between a reconstruction step and an artifact reduction step, inspired by backward–backward (BB) splitting. In what follows, let $\Psi^*\colon \Theta \to L^2(\mathbb{R}^2)$ be the synthesis operator of a frame (where $\Theta = \ell^2(\Lambda)$) or a TI-frame (where $\Theta = \ell^2(\Lambda, L^2(\mathbb{R}^2))$).

3.1. BB Splitting Algorithm

The starting point of our approach is the application of BB splitting to the hybrid functional (8). For that purpose, we consider the splitting
$$\mathcal{T}_{\alpha,\beta}^\delta(\theta) = F_\alpha^\delta(\theta) + \beta \mathcal{R}(\Psi^* \theta)$$
with
$$F_\alpha^\delta(\theta) \coloneqq \frac{1}{2}\| K_\Omega (\Psi^* \theta) - v^\delta \|^2 + \alpha \|\theta\|_1. \qquad (9)$$
The BB splitting algorithm, with coupling constant $\mu > 0$ and starting value $u_0 \in L^2(\mathbb{R}^2)$, in this case is given by
$$\theta_n \in \operatorname*{arg\,min}_\theta\; F_\alpha^\delta(\theta) + \frac{\mu}{2}\| u_n - \Psi^* \theta \|^2, \qquad (10)$$
$$u_{n+1} \in \operatorname*{arg\,min}_u\; \beta \mathcal{R}(u) + \frac{\mu}{2}\| u - \Psi^* \theta_n \|^2. \qquad (11)$$
If $\Psi^*$ is unitary, the BB splitting algorithm (10), (11) is known to converge to the minimizer of $F_\alpha^\delta(\theta) + \mathcal{R}_{\beta,\mu}(\Psi^* \theta)$, where $\mathcal{R}_{\beta,\mu}(u) \coloneqq \inf_w\; \beta \mathcal{R}(w) + \mu \|u - w\|^2/2$ is the so-called Moreau envelope [41].
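The Moreau envelope admits a closed form in simple cases: for $\mathcal{R} = |\cdot|$ in one dimension with $\beta = 1$, it is the Huber function, quadratic near zero and linear beyond $1/\mu$. The sketch below brute-forces the infimum on a grid and compares it with the closed form, purely as a sanity check of the definition (it is not part of the reconstruction algorithm):

```python
import numpy as np

def moreau_env_abs(u, mu):
    """Moreau envelope of |.|:  inf_w |w| + (mu/2)(u - w)^2.
    Closed form (Huber): quadratic for |u| <= 1/mu, linear beyond,
    with minimizer w* = soft(u, 1/mu)."""
    return np.where(np.abs(u) <= 1.0 / mu,
                    0.5 * mu * u**2,
                    np.abs(u) - 0.5 / mu)

mu = 2.0
w = np.linspace(-5, 5, 20001)               # fine grid for the infimum
u_vals = np.array([-3.0, -0.2, 0.0, 0.4, 2.5])
brute = np.array([np.min(np.abs(w) + 0.5 * mu * (u - w) ** 2)
                  for u in u_vals])
```

The smoothing effect is visible in the closed form: the envelope replaces the kink of $|\cdot|$ at zero by a parabola, which is precisely why BB splitting converges to a minimizer of a smoothed version of the original hybrid functional.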
We would like to point out that, in this approach, the iterates $\theta_n$, $u_n$ are coupled via the proximity term $\| u - \Psi^* \theta \|^2/2$, resulting in two sequences that are close to each other in the reconstruction domain.

3.2. Proposed Reconstruction Strategy

Our algorithm is a modification of the BB splitting iteration (10) and (11) that replaces the proximity term $\| \Psi^* \theta - u \|^2/2$ by the data-proximity coupling term $\| K_\Omega(u - \Psi^* \theta) \|^2/2$. Substituting image domain coupling with data domain coupling provides more flexibility and effectively resolves the inherent competing nature of the two regularizers.
We construct two sequences, $(\theta_n)_{n \in \mathbb{N}}$ and $(u_n)_{n \in \mathbb{N}}$, such that $\Psi^* \theta_n$ and $u_n$ are approximate solutions of $K_\Omega u = v^\delta$ targeting different particular solutions. The reconstruction $\Psi^* \theta_n$ is a noise-reduced reconstruction, whereas $u_n$ is an updated version of $\Psi^* \theta_n$ targeting reduced limited data artifacts based on $\mathcal{R}$. To that end, we define $F_\alpha^\delta$ as in (9), and consider the regularizer
$$G_\beta(u) \coloneqq \beta |u|_{\mathrm{TV}} + \mathbb{1}_{\ge 0}(u),$$
with $\mathbb{1}_{\ge 0}$ being the indicator function of the positive cone, given by $\mathbb{1}_{\ge 0}(u) = 0$ if $u \ge 0$, and $\mathbb{1}_{\ge 0}(u) = \infty$ otherwise.
Image reconstruction is performed in an iterative fashion similar to (10) and (11); however, we use the data-proximity coupling $\| K_\Omega(u - w) \|^2/2$. For that purpose, we suggest the iterative procedure
$$\theta_n \in \operatorname*{arg\,min}_\theta\; F_\alpha^\delta(\theta) + \frac{\mu}{2}\| K_\Omega(u_n - \Psi^* \theta) \|^2, \qquad (12)$$
$$u_{n+1} \in \operatorname*{arg\,min}_u\; G_\beta(u) + \frac{\mu}{2}\| K_\Omega(u - \Psi^* \theta_n) \|^2, \qquad (13)$$
with starting value $u_0 \in L^2(\mathbb{R}^2)$. Here, $\| K_\Omega(u - \Psi^* \theta) \|^2/2$ is the data-proximity coupling term, and $\mu, \alpha, \beta > 0$ are parameters. The resulting complementary $\ell^1$-TV reconstruction procedure is summarized in Algorithm 1.
Algorithm 1 Proposed Complementary $\ell^1$-TV Minimization
  • Choose $\mu, \alpha, \beta > 0$ and $N \in \mathbb{N}$
  • Initialize $u_0 \leftarrow 0$ and $n \leftarrow 0$
  • repeat
  •      $\theta_n \in \operatorname{arg\,min}_\theta\, F_\alpha^\delta(\theta) + \mu \| K_\Omega(u_n - \Psi^* \theta) \|^2/2$
  •      $u_{n+1} \in \operatorname{arg\,min}_u\, G_\beta(u) + \mu \| K_\Omega(u - \Psi^* \theta_n) \|^2/2$
  •      $n \leftarrow n + 1$
  • until  $n \ge N$
The proposed steps (12) and (13) in Algorithm 1 come with a clear interpretation. The first step, (12), is a sparse $\ell^1$-reconstruction scheme with good noise handling capabilities. The second step minimizes the TV seminorm with the penalty $\| K_\Omega(u - \Psi^* \theta) \|^2/2$, and targets artifact reduction.
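The structure of Algorithm 1 can be illustrated on a 1D toy problem. In the sketch below (an illustrative stand-in, not the paper's implementation), $K_\Omega$ is replaced by a binary sampling mask with a missing gap, $\Psi^*$ is the identity, and the TV term is replaced by a quadratic smoothness penalty so that both subproblems have closed-form or linear-algebra solutions. The $\theta$-step then reduces to soft-thresholding on the observed entries, while the $u$-step fills the gap by smoothness while staying data-proximal to $\theta$:

```python
import numpy as np

n, mu, alpha, beta = 50, 1.0, 0.01, 0.1
rng = np.random.default_rng(3)

# Ground truth: smooth bump; data are missing on indices 20..29.
t = np.arange(n)
u_true = np.exp(-((t - 25.0) ** 2) / 60.0)
mask = np.ones(n, bool)
mask[20:30] = False                         # the "invisible" part
v = np.where(mask, u_true + 0.01 * rng.standard_normal(n), 0.0)

D = np.diff(np.eye(n), axis=0)              # forward-difference matrix
H = beta * D.T @ D + mu * np.diag(mask.astype(float))

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

u = np.zeros(n)
for _ in range(20):
    # theta-step: separable because the mask is diagonal; gap entries
    # carry only the l1 penalty and therefore stay exactly zero.
    theta = np.where(mask,
                     soft((v + mu * u) / (1 + mu), alpha / (1 + mu)),
                     0.0)
    # u-step: quadratic smoothness + data-proximity, a linear solve of
    # (beta*D'D + mu*M) u = mu*M*theta.
    u = np.linalg.solve(H, mu * mask * theta)
```

The outcome mirrors the intended division of labor: $\theta$ stays sparse and exactly zero on the invisible entries, while $u$ completes the gap, and the two agree on the observed (data) part.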

3.3. Theoretical Properties

The detailed mathematical analysis of (12) and (13) is challenging and beyond the scope of this paper. However, one notices that (12) and (13) represent an alternating minimization scheme [42] for the functional
$$D_{\alpha,\beta,\mu}^\delta(\theta, u) \coloneqq F_\alpha^\delta(\theta) + G_\beta(u) + \frac{\mu}{2}\| K_\Omega(u - \Psi^* \theta) \|^2. \qquad (14)$$
From this, we can deduce the convergence of the proposed algorithm, provided that we ensure the convergence of the alternating minimization scheme, which is challenging and interesting in its own right [43]. A comprehensive work addressing the convergence of alternating minimization is presented in [43]; it covers our functional in the special case where the spaces are finite-dimensional and the operator $K_\Omega$ has a vanishing kernel. However, our data-complementary method specifically targets incomplete data, for which convergence is a challenging issue. Both the case of infinite dimensions and that of a non-vanishing kernel are subjects of future research.
Another interesting mathematical question for future research is the investigation of regularization properties [9] of the functional (14). This is non-standard, at least because of the presence of the two regularized solutions involved. Along the same line, the characterization of limiting solutions as the noise level simultaneously tends to zero with α , β , and μ is an open issue.

4. Numerical Experiments

In this section, we present numerical results using the proposed Algorithm 1 and compare it with standard filtered back projection (FBP), $\ell^1$-synthesis regularization (7), TV regularization, and hybrid $\ell^1$-TV regularization (8). In particular, we explore both the limited view and sparse angle scenarios, utilizing the NCAT and FORBILD phantoms [44,45] as the images to be reconstructed (cf. Figure 2). The NCAT phantom simulates a thorax CT scan, featuring the spine at the bottom and ribs on the sides. In contrast, the FORBILD phantom represents a head phantom, offering additional features across various scales compared to the well-known Shepp–Logan head phantom. Besides synthetic data experiments, we incorporate reconstructions from real CT data of a lotus root [46]. We note that comparisons with other reconstruction methods, such as [47,48,49], lie beyond the scope of this article, and are the subject of future work.
All algorithms were implemented in Matlab R2023a (MathWorks, Natick, MA, USA), using Matlab’s standard functions for the forward and the adjoint Radon transforms.

4.1. Implementation Details

All minimization problems are solved with the Chambolle–Pock algorithm [50], using 200 iterations for $\ell^1$-minimization, and 500 iterations for TV and hybrid $\ell^1$-TV minimization. This was also the case for the complementary approach, where for $10^5$ and $10^4$ photon counts we chose $N = 10$, and for $10^3$ photon counts we chose $N = 4$ outer iterations. We take the $n$-th initial value for the $\theta$- and $u$-update as $\theta_{n-1}$ and $u_{n-1}$, respectively. For $\Psi$, we use a self-designed TI curvelet transform that, in the case of limited view data, is adapted to the visible wedge; see Appendix A. Total variation is implemented as the $(2,1)$-norm of the discrete gradient computed with finite differences.
The regularization parameters for Algorithm 1 are optimized for $\mu$, $\alpha$ and non-stationary $\beta = \beta_0 / 2^n$. Since the described reconstruction techniques rely on good choices for these parameters, we perform systematic parameter sweeps in all cases to obtain optimal reconstructions and a fair comparison. The parameters were optimized in terms of the relative $\ell^2$ reconstruction error $\| u_{\mathrm{rec}} - u \|_2 / \| u \|_2$, where $u$ is the true signal and $u_{\mathrm{rec}}$ the reconstruction. For each parameter and method, we performed a 1D grid search to obtain the lowest $\ell^2$ reconstruction error. In particular, for the proposed complementary $\ell^1$-TV algorithm, we first determined the optimal parameter $\alpha$, and used the optimal choice of the $\theta$-update as input for the optimization of the parameter $\beta$. All subsequent iterations were then calculated using these parameters. For our limited view experiments from synthetic data, presented in Figure 3 and Figure 4, we used angular sampling points $\omega(\phi) = (\cos\phi, \sin\phi)$ with $\phi = -65°, \ldots, 64°$, resulting in a total number of 130 directions covering an angular range of 130°. Using this setup, we generated synthetic data from the NCAT and the FORBILD phantom (see Figure 2). To mimic real life applications, we perturbed these data by Poisson noise with different noise levels corresponding to $10^a$ incident photons per pixel bin with $a = 3, 4, 5$.
To assess our method’s performance on real data, we utilized measured X-ray data of a lotus root [46], which we downsampled with respect to the angular variable to obtain limited-view data consisting of 51 equally distributed angles over an angular range of approximately 160°. It is important to note that, although the missing angular range is relatively small, the number of views within this angular range is also limited, resulting in a combination of both limited view and sparse angle setups.
For the sparse view problem, we generated Radon data with an angular range of 180° and a total number of 50 angular projections, and perturbed the data by Poisson noise with $10^4$ incident photons per pixel bin.
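The Poisson noise model used for the synthetic data can be sketched as follows: with $I_0 = 10^a$ incident photons per bin, the measured counts follow a Poisson law with mean $I_0\, e^{-K_\Omega u}$, and the noisy sinogram is recovered via the log transform. This is the standard photon-counting model; the clipping of zero counts below is an assumed safeguard, and the constant sinogram is a toy input:

```python
import numpy as np

def add_poisson_noise(sino, photons, rng):
    """Simulate photon counting: counts ~ Poisson(I0 * exp(-Ku)),
    then return the log-transformed noisy sinogram."""
    counts = rng.poisson(photons * np.exp(-sino))
    counts = np.maximum(counts, 1)      # avoid log(0); assumed safeguard
    return -np.log(counts / photons)

rng = np.random.default_rng(4)
sino = np.full((130, 128), 0.5)          # toy constant sinogram
noisy_hi = add_poisson_noise(sino, 1e5, rng)   # a = 5: low noise
noisy_lo = add_poisson_noise(sino, 1e3, rng)   # a = 3: high noise
```

The effective noise level scales like $1/\sqrt{I_0}$, which is why the experiments with $10^3$ photons are substantially harder than those with $10^5$.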

4.2. Results for Limited View Data

In our experiments, we used two different sets of synthetic data that were generated using the NCAT phantom and the FORBILD head phantom [45]. The reconstructions were computed using FBP, $\ell^1$ reconstruction, TV reconstruction, hybrid $\ell^1$-TV, and the proposed complementary $\ell^1$-TV reconstruction. The results are shown in Figure 3 and Figure 4, and the corresponding PSNR and SSIM values are presented in Table 1 and Table 2.
The reconstruction results for the NCAT phantom indicate that the proposed complementary ℓ1-TV approach effectively integrates the denoising and artifact removal capabilities of the two regularizers. A closer inspection of the reconstructions depicted in Figure 3 reveals that both the FBP reconstructions and the ℓ1 reconstructions still exhibit limited view artifacts. While these artifacts are less prominent in the TV regularized reconstructions, there is a noticeable discrepancy in the fine details of the spine within the magnified image region. This phenomenon is typical for TV regularization when the regularization parameter has to be set relatively high in order to mitigate noise, resulting in block-like artifacts. Similar findings apply to the hybrid ℓ1-TV reconstruction.
A detailed examination of the reconstructions from our proposed approach, particularly focusing on the fine details of the spine, demonstrates its efficacy in both mitigating limited view artifacts and preserving fine details more accurately. Compared to the TV and ℓ1-TV reconstructions, the fine details, as observed in the zoomed region, suffer less from blocky structures generated by the TV term. Conversely, in comparison to the ℓ1 reconstruction, our approach effectively eliminates limited view artifacts and offers a more accurate approximation of the overall shape of the phantom. These observations especially hold true for the low-noise scenario with 10^5 photon counts, but remain consistent across all noise levels, where all methods exhibit some degradation in reconstruction quality.
Even at higher noise levels, the proposed method maintains a high level of visible detail in the recovered images and achieves artifact-free reconstructions. We attribute the remaining perturbations to the soft-thresholding procedure, which is part of the θ-update step. It is worth noting that, at higher noise levels, no method reliably recovers the fine structures anymore. However, for TV and hybrid ℓ1-TV regularization, some of the ribs, which are boundaries of ellipse-like structures, now appear to be filled. Simple curvelet-ℓ1 regularization and the complementary ℓ1-TV approach still recover the fine holes inside these structures. Here as well, our method is capable of removing limited view artifacts while also providing a good approximation of the overall shape and details of the phantom.
To further validate our method, we conducted another limited-angle reconstruction using the FORBILD head phantom (refer to Figure 4). Once again, we compared the performance of the complementary ℓ1-TV method with FBP, ℓ1, TV, and hybrid ℓ1-TV. Quantitative results are presented in Table 2. In this case, the proposed complementary ℓ1-TV method emerges as the superior choice across all considered error measures: although the TV reconstruction and our proposed approach yield similar results, our method performs slightly better in both PSNR and SSIM. Additionally, upon closer examination of the reconstructions, it becomes evident that our method produces fewer limited-view artifacts than the TV reconstruction. Nevertheless, the TV method reconstructs the FORBILD phantom quite effectively, which we attribute to the piecewise constant nature of the phantom, aligning well with TV regularization.
We finally evaluated the performance of our method using real CT data of the lotus root [46]. These data consisted of 51 projections evenly distributed over an angular range of 160°, indicating that in this case we had not only to deal with a limited angular range, but also with a sparse angle setup. The results of this experiment are presented in Figure 5. They show that our method (along with all other methods) also performs effectively on real data, allowing for similar conclusions as those drawn above from synthetic data. However, it is worth noting that, in this instance, no extensive parameter search was conducted, leaving potential for further improvement.
In summary, the visual inspections of the presented limited view reconstructions show that our proposed algorithm effectively combines the advantages of both the denoising capabilities of curvelet-ℓ1 regularization and the artifact removal and data recovery properties of TV regularization. Additionally, from quantitative comparisons, where error metrics are provided in Table 1 and Table 2, we observe that the proposed complementary ℓ1-TV approach consistently yields competitive reconstructions in all cases and outperforms the others in many situations. Quantitatively, TV regularization and the complementary ℓ1-TV approach demonstrate similar performance. However, qualitatively, the advantages of the complementary ℓ1-TV method are clearly evident.

4.3. Results for Sparse View Data

Figure 6 shows the reconstruction results for the sparse view problem using FBP, ℓ1 curvelet reconstruction, TV reconstruction, hybrid ℓ1-TV, and the proposed complementary ℓ1-TV regularization. We observe that all reconstruction methods are capable of reproducing the phantom quite effectively. Upon closer examination of the magnified details, we note that the ℓ1 curvelet reconstruction accurately captures the spine. However, we also observe perturbations in the phantom, resulting from the soft-thresholding of the curvelet coefficients.
The TV regularized reconstruction, while not exhibiting severe artifacts, struggles to recover fine details adequately. Additionally, some of the inner holes of the ribs begin to fill up due to TV regularization, similar to the limited view case. On the other hand, both the hybrid ℓ1-TV and the proposed complementary ℓ1-TV reconstruction successfully integrate the advantages of curvelet-ℓ1 and TV regularization. The spine is represented well, and neither reconstruction suffers from curvelet artifacts.
A quantitative error assessment is provided in Table 3. From this standpoint, the hybrid ℓ1-TV method appears to marginally outperform the other methods. However, the visual distinction is minimal, with both the hybrid and the proposed method yielding equally impressive reconstructions that accurately represent the fine details of the phantom.

5. Conclusions

Similar to many other image reconstruction problems, limited-data CT suffers from instability with respect to noise and from non-uniqueness, leading to artifacts in image reconstruction. Common regularization approaches use a single regularizer to address both issues, which is typically well suited to one of the two tasks, but not well adapted to the other. To address this issue, in this paper, we propose a complementary ℓ1-TV algorithm that advantageously combines the denoising properties of ℓ1-curvelet regularization and the data completion properties of TV. The main ingredient of our procedure is a data-proximity coupling instead of the standard image-space coupling.
There are many potential future research directions extending our framework. The data-proximity coupling can be integrated into other splitting-type methods using proximal terms, such as the ADMM algorithm. Further, data-proximity coupling can be combined with preconditioning or other coupling terms. For example, one might replace ∥K_Ω(u − Ψ*θ)∥² by ∥P_{ker(K_Ω)}(u − Ψ*θ)∥, or use hard constraints enforcing K_Ω Ψ*θ = K_Ω u. One can also consider general discrepancy functionals F_0 in place of the least squares functional ∥K_Ω u − v^δ∥²/2. On the analysis side, studying the convergence of the iterative procedures as well as their regularization properties is an important line of future research. Furthermore, a comprehensive investigation of TI-frames for iterative regularization methods would be an interesting research focus, including a thorough analysis of theoretical properties along with numerical experiments. In particular, in combination with the limited view CT problem, the study of wedge-adapted curvelets, and similar extensions to other limited data problems, could be of high interest.
Along with the potential extensions of our framework, its theoretical analysis is considered the main line of future work. On the one hand, this includes convergence of the algorithm based on its relation to (14). On the other hand, the regularization properties of the joint functional (14) have yet to be established.

Author Contributions

Conceptualization, J.F. and M.H.; writing—original draft, S.G.; writing—review & editing, J.F. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The contribution by S.G. is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://www.fips.fi/dataset.php (accessed on 21 March 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Wedge-Adapted TI Curvelet Frames

Standard curvelets are not well adapted to limited angle data, as some curvelet elements may have only small visible components. Our aim is therefore to construct a curvelet transform that is adapted to the limited view data K_Ω, where Ω = {(cos ϕ, sin ϕ) : ϕ ∈ [−Φ, Φ[} for some Φ < π/2. The basic idea is to construct a specific partition of the frequency plane that respects the visible wedge W_Ω = ℝ·Ω; see the left image of Figure 1. We work with TI variants, as the lack of translation invariance usually results in visual artifacts [27]. For a recent work on TI-frames in the context of regularization theory, see [28]. Another wedge-adapted curvelet transform was developed in [51].

Appendix A.1. Standard TI Curvelet Frame

Consider the basic radial and angular Meyer base windows W : [1/2, 2] → [0, 1] and V : [−1, 1] → [0, 1],

W(r) :=
  cos((π/2) ν(5 − 6r))   if 2/3 ≤ r ≤ 5/6,
  1                      if 5/6 ≤ r ≤ 4/3,
  cos((π/2) ν(3r − 4))   if 4/3 ≤ r ≤ 5/3,
  0                      otherwise,

V(ϕ) :=
  1                        if |ϕ| ≤ 1/3,
  cos((π/2) ν(3|ϕ| − 1))   if 1/3 ≤ |ϕ| ≤ 2/3,
  0                        otherwise.
Here, the auxiliary function ν is chosen to satisfy ν(0) = 0, ν(1) = 1 and ν(x) + ν(1 − x) = 1. Possible choices are polynomials, for example, ν(x) = 3x² − 2x³, ν(x) = 10x³ − 15x⁴ + 6x⁵ or ν(x) = x⁴(35 − 84x + 70x² − 20x³). Depending on the choice of ν, the angular windows have smaller or larger overlap. In this paper, we use ν(x) = χ_(0,1)(x) · s(x − 1)/(s(x − 1) + s(x)) with s(x) = exp(−(1 + x)⁻² − (1 − x)⁻²).
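The defining properties of the auxiliary function can be checked numerically; a minimal sketch for the first polynomial choice above:

```python
import numpy as np

def nu(x):
    """Auxiliary function nu(x) = 3x^2 - 2x^3 (first polynomial choice)."""
    x = np.asarray(x, dtype=float)
    return 3 * x**2 - 2 * x**3

x = np.linspace(0.0, 1.0, 101)
ok = (nu(0.0) == 0.0 and nu(1.0) == 1.0
      and np.allclose(nu(x) + nu(1.0 - x), 1.0))
```

The symmetry ν(x) + ν(1 − x) = 1 is exactly what makes the overlapping window pieces below sum up to one.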
The TI-curvelets are defined in the frequency space using products of rescaled versions of the radial and angular base windows
F ψ_{j,ℓ}(ξ) := 2^{−3j/4} W(2^{−j} r) V(N_j ϕ/(2π) − ℓ),

where ξ = r (cos ϕ, sin ϕ), N_j ∈ ℕ and Λ := {(j, ℓ) : j ∈ ℕ, ℓ ∈ {−N_j/2, …, N_j/2 − 1}}. At scale j, the radial window W(2^{−j} r) defines a ring that is further partitioned into N_j angular wedges by the windows V(N_j ϕ/(2π) − ℓ).
Theorem A1. 
The family (ψ_{j,ℓ})_{(j,ℓ)∈Λ} is a tight TI-frame.
Proof. 
From the definition of the base windows, we have Σ_{ℓ=−N_j/2}^{N_j/2−1} V(N_j ϕ/(2π) − ℓ)² = 1 and Σ_{j∈ℤ} |W(2^{−j} r)|² = 1 and, therefore, Σ_{j,ℓ} |F ψ_{j,ℓ}(ξ)|² = 1. By the Plancherel identity, this is equivalent to the tight frame condition (3) with A = B = 1. □
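The radial partition of unity used in the proof can be verified numerically; a sanity check using the polynomial choice ν(x) = 3x² − 2x³ (an illustration, not the paper's code):

```python
import numpy as np

def nu(x):
    return 3 * x**2 - 2 * x**3

def W(r):
    """Radial Meyer-type window supported on [2/3, 5/3], as defined above."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    rise = (2/3 <= r) & (r <= 5/6)
    out[rise] = np.cos(np.pi / 2 * nu(5 - 6 * r[rise]))
    flat = (5/6 < r) & (r < 4/3)
    out[flat] = 1.0
    fall = (4/3 <= r) & (r <= 5/3)
    out[fall] = np.cos(np.pi / 2 * nu(3 * r[fall] - 4))
    return out

# partition of unity: sum_j W(2^{-j} r)^2 = 1 for every r > 0
r = np.linspace(0.05, 10.0, 500)
total = sum(W(2.0**(-j) * r) ** 2 for j in range(-8, 9))
```

Adjacent scales overlap on [4/3, 5/3], where ν(5 − 3r) = 1 − ν(3r − 4) turns the two cosine pieces into cos² + sin² = 1.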
Curvelet frames are defined by sampling ψ_{j,ℓ} ∗ u at points M_{j,ℓ} k with sampling matrices M_{j,ℓ} ∈ ℝ^{2×2} and sampling indices k ∈ ℤ². Defining ψ_{j,ℓ,k} := ψ_{j,ℓ}(· − M_{j,ℓ} k) results in curvelet coefficients (ψ_{j,ℓ} ∗ u)(M_{j,ℓ} k) = ⟨u, ψ_{j,ℓ,k}⟩. The family (ψ_{j,ℓ,k})_{j,ℓ,k} is a tight frame with the associated reproducing formula u = Σ_{j,ℓ,k} ⟨u, ψ_{j,ℓ,k}⟩ ψ̄_{j,ℓ,k}. Note that the scale- and wedge-dependent sampling destroys the translation invariance and the improved denoising properties of TI systems [28,30].
Figure A1. (a) Standard curvelet tiling. (b) Visible wedge W Ω indicated in blue and non-adapted standard curvelet tiling. (c) Visible wedge W Ω and wedge adapted tiling.

Appendix A.2. Wedge Adaption

Due to the limited angular range, the essential support of the Fourier transformed curvelets near the boundary of the visible wedge W_Ω is not fully contained in W_Ω; see Figure A1b. As a result, the associated curvelet transform is not well adapted to the kernel of the limited angle Radon transform [24]. In order to adapt to the visible wedge, we modify the standard angular tiling and define two systems (ψ_{j,ℓ}^vis)_{j,ℓ} and (ψ_{j,ℓ}^inv)_{j,ℓ} that we call the visible and invisible parts of the curvelet family. For that purpose, we define adjusted angular windows V^vis(ϕ) and V^inv(ϕ) and make sure that the windows at the boundary sum up to one. The wedge-adapted TI curvelets ψ_{j,ℓ}^vis, ψ_{j,ℓ}^inv are then defined as in (A1), with V replaced by V^vis and V^inv, respectively. As in Theorem A1, one shows that the family (ψ_{j,ℓ}^vis, ψ_{j,ℓ}^inv)_{j,ℓ} forms a TI-frame of L²(ℝ²). As opposed to the standard TI curvelet frame (ψ_{j,ℓ})_{j,ℓ}, it has controlled overlap at the boundary between the visible and invisible frequencies. In a similar manner, we could construct wedge-adapted curvelets that use different numbers N_j^d for each of the four basic wedges. Finally, note that each of the windows has finite bandwidth. Thus, similar to the case of standard curvelets, we can use the Shannon sampling theorem to define a wedge-adapted curvelet frame by wedge-adapted sampling. A detailed mathematical analysis of its properties is beyond the scope of this paper.
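A purely schematic sketch of how the wedges at one scale might be split into visible and invisible parts. The function name and the classification rule (comparing the folded wedge center angle with the half angular range) are our own illustration; the paper's construction additionally modifies the boundary windows so that both parts sum to one.

```python
def split_wedges(N, Phi_deg):
    """Split the N angular wedges at one scale into visible/invisible parts.

    A wedge is classified by its center angle theta = 360*l/N; since Radon
    directions are only defined modulo 180 degrees, theta is folded into
    (-90, 90] before comparing with the half angular range Phi_deg.
    """
    visible, invisible = [], []
    for ell in range(-N // 2, N // 2):
        theta = 360.0 * ell / N
        t = theta % 180.0          # fold into [0, 180)
        if t > 90.0:
            t -= 180.0             # then into (-90, 90]
        (visible if abs(t) <= Phi_deg else invisible).append(ell)
    return visible, invisible

# 16 wedges, 130 degree angular range (Phi = 65 degrees)
vis, inv = split_wedges(16, 65.0)
```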

References

  1. Quinto, E.T. Singularities of the X-ray transform and limited data tomography in R2 and R3. SIAM J. Math. Anal. 1993, 24, 1215–1225. [Google Scholar] [CrossRef]
  2. Quinto, E.T. Artifacts and visible singularities in limited data X-ray tomography. Sens. Imaging 2017, 18, 1–14. [Google Scholar] [CrossRef]
  3. Frikel, J.; Quinto, E.T. Characterization and reduction of artifacts in limited angle tomography. Inverse Probl. 2013, 29, 125007. [Google Scholar] [CrossRef]
  4. Frikel, J.; Quinto, E.T. Artifacts in Incomplete Data Tomography with Applications to Photoacoustic Tomography and Sonar. SIAM J. Appl. Math. 2015, 75, 703–725. [Google Scholar] [CrossRef]
  5. Frikel, J.; Quinto, E.T. Limited Data Problems for the Generalized Radon Transform in Rn. SIAM J. Math. Anal. 2016, 48, 2301–2318. [Google Scholar] [CrossRef]
  6. Borg, L.; Jørgensen, J.S.; Frikel, J.; Quinto, E.T. Analyzing reconstruction artifacts from arbitrary incomplete X-ray CT Data. SIAM J. Imaging Sci. 2018, 11, 2786–2814. [Google Scholar] [CrossRef]
  7. Natterer, F. The Mathematics of Computerized Tomography; SIAM: Philadelphia, PA, USA, 2001. [Google Scholar]
  8. Benning, M.; Burger, M. Modern regularization methods for inverse problems. Acta Numer. 2018, 27, 1–111. [Google Scholar] [CrossRef]
  9. Scherzer, O.; Grasmair, M.; Grossauer, H.; Haltmeier, M.; Lenzen, F. Variational Methods in Imaging; Springer: New York, NY, USA, 2009. [Google Scholar]
  10. Persson, M.; Bone, D.; Elmqvist, H. Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography. Phys. Med. Biol. 2001, 46, 853. [Google Scholar] [CrossRef] [PubMed]
  11. Velikina, J.; Leng, S.; Chen, G.H. Limited view angle tomographic image reconstruction via total variation minimization. In Proceedings of the Medical Imaging 2007: Physics of Medical Imaging, San Diego, CA, USA, 17–22 February 2007; Volume 6510, pp. 709–720. [Google Scholar]
  12. Sidky, E.Y.; Pan, X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys. Med. Biol. 2008, 53, 4777. [Google Scholar] [CrossRef]
  13. Wang, T.; Nakamoto, K.; Zhang, H.; Liu, H. Reweighted anisotropic total variation minimization for limited-angle CT reconstruction. IEEE Trans. Nucl. Sci. 2017, 64, 2742–2760. [Google Scholar] [CrossRef]
  14. Candes, E.J.; Donoho, D.L. Recovering edges in ill-posed inverse problems: Optimality of curvelet frames. Ann. Stat. 2002, 30, 784–842. [Google Scholar] [CrossRef]
  15. Haltmeier, M.; Li, H.; Munk, A. A variational view on statistical multiscale estimation. Annu. Rev. Stat. Appl. 2022, 9, 343–372. [Google Scholar] [CrossRef]
  16. Candes, E.J.; Guo, F. New multiscale transforms, minimum total variation synthesis: Applications to edge-preserving image reconstruction. Signal Process. 2002, 82, 1519–1543. [Google Scholar] [CrossRef]
  17. Sahiner, B.; Yagle, A.E. Limited angle tomography using wavelets. In Proceedings of the Nuclear Science Symposium and Medical Imaging Conference, San Francisco, CA, USA, 31 October–6 November 1993; pp. 1912–1916. [Google Scholar]
  18. Rantala, M.; Vanska, S.; Jarvenpaa, S.; Kalke, M.; Lassas, M.; Moberg, J.; Siltanen, S. Wavelet-based reconstruction for limited-angle X-ray tomography. IEEE Trans. Med. Imaging 2006, 25, 210–217. [Google Scholar] [CrossRef] [PubMed]
  19. Bubba, T.A.; Kutyniok, G.; Lassas, M.; März, M.; Samek, W.; Siltanen, S.; Srinivasan, V. Learning the invisible: A hybrid deep learning-shearlet framework for limited angle computed tomography. Inverse Probl. 2019, 35, 064002. [Google Scholar] [CrossRef]
  20. Andrade-Loarca, H.; Kutyniok, G.; Öktem, O.; Petersen, P. Deep microlocal reconstruction for limited-angle tomography. Appl. Comput. Harmon. Anal. 2022, 59, 155–197. [Google Scholar] [CrossRef]
  21. Vandeghinste, B.; Goossens, B.; Van Holen, R.; Vanhove, C.; Pizurica, A.; Vandenberghe, S.; Staelens, S. Combined shearlet and TV regularization in sparse-view CT reconstruction. In Proceedings of the 2nd International Meeting on Image Formation in X-ray Computed Tomography, Baltimore, MD, USA, 12–16 June 2022. [Google Scholar]
  22. Kai, C.; Min, J.; Qu, Z.; Yu, J.; Yi, S. Moreau-envelope-enhanced nonlocal shearlet transform and total variation for sparse-view CT reconstruction. Meas. Sci. Technol. 2020, 32, 015405. [Google Scholar] [CrossRef]
  23. Papafitsoros, K.; Schönlieb, C.B. A combined first and second order variational approach for image reconstruction. J. Math. Imaging Vision 2014, 48, 308–338. [Google Scholar] [CrossRef]
  24. Frikel, J. Sparse regularization in limited angle tomography. Appl. Comput. Harmon. Anal. 2013, 34, 117–141. [Google Scholar] [CrossRef]
  25. Smith, K.T.; Solmon, D.C.; Wagner, S.L. Practical and mathematical aspects of the problem of reconstructing objects from radiographs. Bull. Am. Math. Soc. 1977, 83, 1227–1270. [Google Scholar] [CrossRef]
  26. Kutyniok, G.; Lim, W.Q. Compactly supported shearlets are optimally sparse. J. Approx. Theory 2011, 163, 1564–1589. [Google Scholar] [CrossRef]
  27. Mallat, S. A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way, 3rd ed.; Academic Press, Inc.: Cambridge, MA, USA, 2008. [Google Scholar]
  28. Göppel, S.; Frikel, J.; Haltmeier, M. Translation invariant diagonal frame decomposition of inverse problems and their regularization. Inverse Probl. 2023, 39, 065011. [Google Scholar] [CrossRef]
  29. Parhi, R.; Unser, M. The Sparsity of Cycle Spinning for Wavelet-Based Solutions of Linear Inverse Problems. IEEE Signal Process. Lett. 2023, 30, 568–572. [Google Scholar] [CrossRef]
  30. Coifman, R.R.; Donoho, D.L. Translation-invariant de-noising. In Wavelets and Statistics; Springer: New York, NY, USA, 1995; pp. 125–150. [Google Scholar]
  31. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  32. Ebner, A.; Frikel, J.; Lorenz, D.; Schwab, J.; Haltmeier, M. Regularization of inverse problems by filtered diagonal frame decomposition. Appl. Comput. Harmon. Anal. 2023, 62, 66–83. [Google Scholar] [CrossRef]
  33. Frikel, J.; Haltmeier, M. Sparse regularization of inverse problems by operator-adapted frame thresholding. In Mathematics of Wave Phenomena; Springer: Berlin/Heidelberg, Germany, 2020; pp. 163–178. [Google Scholar]
  34. Vandeghinste, B.; Goossens, B.; Van Holen, R.; Vanhove, C.; Pižurica, A.; Vandenberghe, S.; Staelens, S. Iterative CT reconstruction using shearlet-based regularization. IEEE Trans. Nucl. Sci. 2013, 60, 3305–3317. [Google Scholar] [CrossRef]
  35. Bubba, T.A.; Labate, D.; Zanghirati, G.; Bonettini, S. Shearlet-based regularized reconstruction in region-of-interest computed tomography. Math. Model. Nat. Phenom. 2018, 13, 34. [Google Scholar] [CrossRef]
  36. Candes, E.J.; Donoho, D.L. Curvelets and reconstruction of images from noisy Radon data. In Proceedings of the Wavelet Applications in Signal and Image Processing VIII, San Diego, CA, USA, 30 July–4 August 2000; Volume 4119, pp. 108–117. [Google Scholar]
  37. Grasmair, M.; Haltmeier, M.; Scherzer, O. Sparse regularization with ℓq penalty term. Inverse Probl. 2008, 24, 055020. [Google Scholar] [CrossRef]
  38. Lorenz, D.A. Convergence rates and source conditions for Tikhonov regularization with sparsity constraints. J. Inverse Ill-Posed Probl. 2008, 16, 463–478. [Google Scholar] [CrossRef]
  39. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217. [Google Scholar] [CrossRef]
  40. Luo, X.; Yu, W.; Wang, C. An image reconstruction method based on total variation and wavelet tight frame for limited-angle CT. IEEE Access 2017, 6, 1461–1470. [Google Scholar] [CrossRef]
  41. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: New York, NY, USA, 2011; pp. 185–212. [Google Scholar]
  42. Bertsekas, D.P. Nonlinear programming. J. Oper. Res. Soc. 1997, 48, 334. [Google Scholar] [CrossRef]
  43. Xu, Y.; Yin, W. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 2013, 6, 1758–1789. [Google Scholar] [CrossRef]
  44. Segars, W.P.; Mahesh, M.; Beck, T.J.; Frey, E.C.; Tsui, B.M. Realistic CT simulation using the 4D XCAT phantom. Med. Phys. 2008, 35, 3800–3808. [Google Scholar] [CrossRef] [PubMed]
  45. Yu, Z.; Noo, F.; Dennerlein, F.; Wunderlich, A.; Lauritsch, G.; Hornegger, J. Simulation tools for two-dimensional experiments in x-ray computed tomography using the FORBILD head phantom. Phys. Med. Biol. 2012, 57, N237. [Google Scholar] [CrossRef] [PubMed]
  46. Bubba, T.A.; Hauptmann, A.; Huotari, S.; Rimpeläinen, J.; Siltanen, S. Tomographic X-ray data of a lotus root filled with attenuating objects. arXiv 2016, arXiv:1609.07299. [Google Scholar]
  47. Xu, J.; Zhao, Y.; Li, H.; Zhang, P. An image reconstruction model regularized by edge-preserving diffusion and smoothing for limited-angle computed tomography. Inverse Probl. 2019, 35, 085004. [Google Scholar] [CrossRef]
  48. Zhang, Z.; Chen, B.; Xia, D.; Sidky, E.Y.; Pan, X. Directional-TV algorithm for image reconstruction from limited-angular-range data. Med. Image Anal. 2021, 70, 102030. [Google Scholar] [CrossRef] [PubMed]
  49. Gong, C.; Liu, J. Structure-guided computed tomography reconstruction from limited-angle projections. J. X-ray Sci. Technol. 2023, 31, 95–117. [Google Scholar] [CrossRef] [PubMed]
  50. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 2011, 40, 120–145. [Google Scholar] [CrossRef]
  51. Pan, B.; Arridge, S.R.; Lucka, F.; Cox, B.T.; Huynh, N.; Beard, P.C.; Zhang, E.Z.; Betcke, M.M. Photoacoustic Reconstruction Using Sparsity in Curvelet Frame: Image Versus Data Domain. IEEE Trans. Comput. Imaging 2021, 7, 879–893. [Google Scholar] [CrossRef]
Figure 1. Left: visible wavenumbers (blue) for limited view data covering 130°. Right: visible wavenumbers (blue) for sparse angular sampling using 20 angles.
Figure 2. Phantom images that were used in our reconstruction experiments from synthetic CT data. (a) NCAT phantom [44]; (b) FORBILD phantom [45].
Figure 3. Reconstruction of the NCAT phantom (cf. Figure 2) from limited view data collected over an angular range of 130°. Each column shows the reconstruction results obtained with the different reconstruction methods, all using identical CT data and noise levels, with a photon count of 10^a, a = 3, 4, 5. The pixel value range is set to [0, 1] for all images.
Figure 4. Reconstruction of the FORBILD phantom (cf. Figure 2) from limited view data collected over an angular range of 130°. The data were perturbed with Poisson noise with 10^5 incident photons per pixel. The best reconstruction parameters were determined through a grid search; the reconstructions show the best images with respect to SSIM. (a) FBP; (b) ℓ1; (c) TV; (d) ℓ1-TV; (e) proposed.
Figure 5. Reconstruction of the real CT data of the lotus root (cf. [46]) from a limited angular range covering 130°. (a) FBP; (b) ℓ1; (c) TV; (d) ℓ1-TV; (e) proposed.
Figure 6. Reconstructions from sparse view data using 50 projections over an angular range of 180°. The pixel value range is set to [0, 1] for all images. (a) FBP; (b) ℓ1; (c) TV; (d) ℓ1-TV; (e) proposed.
Table 1. NCAT phantom metrics for the reconstructions presented in Figure 3. The best values are in bold.
# Photons | Method   | PSNR    | SSIM
10^5      | FBP      | 17.1021 | 0.2693
          | ℓ1       | 22.590  | 0.559
          | TV       | 29.725  | 0.953
          | ℓ1-TV    | 25.4124 | 0.8540
          | proposed | 31.438  | 0.949
10^4      | FBP      | 16.7306 | 0.1635
          | ℓ1       | 22.1291 | 0.5430
          | TV       | 27.1590 | 0.9210
          | ℓ1-TV    | 24.0859 | 0.7633
          | proposed | 29.0141 | 0.8815
10^3      | FBP      | 14.1189 | 0.0696
          | ℓ1       | 21.4974 | 0.4328
          | TV       | 24.9321 | 0.8621
          | ℓ1-TV    | 23.7100 | 0.7898
          | proposed | 26.1420 | 0.7906
Table 2. FORBILD phantom metrics for the reconstructions presented in Figure 4. The best values are in bold.
# Photons | Method   | PSNR    | SSIM
10^5      | FBP      | 18.0747 | 0.3567
          | ℓ1       | 22.8527 | 0.8449
          | TV       | 24.9294 | 0.9516
          | ℓ1-TV    | 24.6716 | 0.9289
          | proposed | 25.1727 | 0.9540
Table 3. Reconstruction metrics for the sparse view reconstructions of the NCAT phantom presented in Figure 6. The best values are in bold.
# Photons | Method   | PSNR    | SSIM
10^4      | FBP      | 20.8702 | 0.1767
          | ℓ1       | 29.7290 | 0.7308
          | TV       | 30.3933 | 0.9294
          | ℓ1-TV    | 32.0445 | 0.8884
          | proposed | 31.0289 | 0.9302
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
