
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus, all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight channel brain and an eight channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. An acceleration factor of 4 was used for the brain data and an acceleration factor of 6 for the phantom. The reconstruction results were quantitatively evaluated by the Normalised Mean Squared Error (NMSE) between the reconstructed images and the originals; the qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

In parallel MRI (pMRI), the object under study is scanned by multiple receiver coils. In order to expedite scanning, the K-space is only partially sampled at each of the channels. The problem is to reconstruct the image given the partial K-space samples. The problem is rendered even more challenging by the fact that each receiver coil has its own sensitivity profile depending on its field of view; these sensitivity profiles are not accurately known beforehand.

In the past, all pMRI techniques required the sensitivity profile to be estimated either explicitly (SENSE [

All the aforementioned pMRI reconstruction methods proceed in two stages—(i) In the calibration stage, the sensitivity maps or the interpolation weights are estimated; (ii) Based on these estimates, the image is reconstructed in the reconstruction stage. The reconstruction accuracy of the images is sensitive to the accuracy of the calibration stage. The calibration in turn depends on the choice of certain parameters, e.g., the window size—size of the central K-space region that has been fully sampled (for all the aforementioned methods) and the kernel size for estimating the interpolation weights (for GRAPPA and SPIRiT). These parameters are manually tuned and the best results are reported. The GRAPPA formulation has been studied in detail, and there is a study which claims to offer insights regarding the choice of GRAPPA reconstruction parameters [

In this work, we improve upon our previous work on calibration free reconstruction (see Section 2.2). Our method reconstructs each of the different multi-coil images, which are then combined by the sum-of-squares approach (used in GRAPPA and SPIRiT). We compare our method with state-of-the-art parallel MRI reconstruction methods; two of these are calibrated techniques—CS SENSE [

Mathematically, the sensitivity encoding of MR images is a modulation operation in which the signal (image) is modulated by the sensitivity function (map) of the coils. All the aforesaid studies are based on the assumption that the sensitivity map is smooth. Moreover, the design of the receiver coils ensures that their sensitivity does not vanish anywhere.

Our reconstruction method is based on the fact that the positions of the high-valued transform coefficients in the different sensitivity encoded coil images remain the same. Based on the precepts of Compressed Sensing (CS), we formulate the reconstruction as a row-sparse Multiple Measurement Vector (MMV) recovery problem. Our method produces one sensitivity encoded image corresponding to each receiver coil, in a fashion similar to GRAPPA and SPIRiT. Both of these methods reconstruct the final image as a sum-of-squares of the sensitivity encoded images; in this paper, we follow the same combination technique.

Row-sparse MMV optimization can be formulated either as a synthesis prior or as an analysis prior problem. However, it is not known a priori which of these formulations will yield a better result. Even though the synthesis prior is more popular, it has been found that the analysis prior yields better results. Both the analysis and the synthesis prior formulations can be either convex or non-convex. The Spectral Projected Gradient algorithm [

The K-space data acquisition model for a multi-coil parallel MRI scanner is given by:

y_i = F_Ω x_i + η_i,  i = 1, …, C

where y_i is the partially sampled K-space data of the i-th coil, F_Ω is the Fourier transform restricted to the sampled locations Ω, x_i is the i-th sensitivity encoded coil image, η_i is the acquisition noise and C is the number of receiver coils.

Since the receiver coils only partially sample the K-space, the number of K-space samples for each coil is less than the size of the image to be reconstructed. Thus, the reconstruction problem is under-determined. Following the works in CS based MR image reconstruction [ ], each coil image x_i could be recovered individually by exploiting its sparsity in a transform domain (e.g., wavelets or finite differences).
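The under-determined nature of the acquisition can be illustrated numerically. The sketch below is a minimal toy model (not the authors' code; array sizes and the Cartesian binary mask are illustrative assumptions) of the masked-Fourier forward operator applied to each coil:

```python
import numpy as np

def sample_kspace(coil_images, mask):
    """Forward model y_i = F_Omega x_i: 2D FFT of each coil image, then keep
    only the K-space locations selected by the binary mask (the set Omega)."""
    return np.fft.fft2(coil_images, axes=(-2, -1)) * mask

rng = np.random.default_rng(0)
coil_images = rng.standard_normal((4, 8, 8))   # 4 coils, 8x8 images (toy sizes)
mask = rng.random((8, 8)) < 0.5                # ~50% random undersampling
y = sample_kspace(coil_images, mask)

# Fewer K-space samples than unknowns per coil: the problem is under-determined.
print(mask.sum(), "samples for", coil_images[0].size, "unknowns per coil")
```

All un-sampled K-space locations are simply zeroed here; a real scanner never acquires them at all, but the algebraic structure of the inverse problem is the same.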

The analysis prior optimization directly solves for the images, while the synthesis prior formulation solves for the transform coefficients. In situations where the sparsifying transform Ψ is orthogonal (Ψ^T Ψ = ΨΨ^T = I), the two formulations are equivalent: the coefficients of the i-th coil image are α_i = Ψ x_i, and the image is recovered as x_i = Ψ^T α_i.

However, such piecemeal reconstruction of coil images does not yield optimal results. In this paper, we reconstruct all the coil images simultaneously by solving a MMV recovery problem, stacking the per-coil quantities as columns: Y = [y_1 | … | y_C], X = [x_1 | … | x_C] and N = [η_1 | … | η_C], so that Y = F_Ω X + N.

The multi-coil images (x_i's) are sensitivity encoded versions of the same underlying image; hence, the positions of their high-valued transform coefficients coincide.

This can be clarified with a toy example.

If finite difference is used as the sparsifying transform, the discontinuities along the edges are captured; since these edges occur at the same positions in every coil image, the resulting coefficient matrix is row-sparse.

Based on this toy example, we consider the MMV formulation


We propose to solve:

min_X ||ΨX||_{2,p} subject to ||Y − F_Ω X||_F ≤ ε

where ||Z||_{2,p} = (Σ_j ||z_{j→}||_2^p)^{1/p}, z_{j→} is the vector whose entries form the j-th row of Z, and ||·||_F denotes the Frobenius norm.

The values of the inner (l_2) and outer (l_p) norms play complementary roles: the inner l_2-norm couples the entries within each row, enforcing a common support across the coil images, while the outer l_p-norm over the row norms promotes row-sparsity. For p = 1 the problem is convex; for 0 < p < 1 it is non-convex.
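For concreteness, the mixed l_{2,p} norm can be computed as follows (a small illustrative helper, not part of the paper's implementation; the example matrix is arbitrary):

```python
import numpy as np

def mixed_norm_2p(X, p):
    """||X||_{2,p}: inner l2 norm over each row, outer lp norm over the rows."""
    row_norms = np.linalg.norm(X, axis=1)          # l2 norm of every row
    return (row_norms ** p).sum() ** (1.0 / p)     # lp norm of those norms

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 1.0]])
print(mixed_norm_2p(X, 1.0))   # row norms are (5, 0, 1) -> 6.0 (convex p = 1)
print(mixed_norm_2p(X, 0.5))   # non-convex case penalizes extra rows harder
```

Note that a row with mixed small entries costs the same as a row with one large entry of the same l_2 norm, which is exactly what encourages a shared support.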

The aforesaid problem is an analysis prior formulation, since the sparsity penalty is imposed on the transform of the images.

The analysis prior optimization directly solves for the images, whereas the synthesis prior formulation solves for the transform coefficients. In situations where the sparsifying transform is orthogonal or a tight-frame, the inverse problem can equivalently be posed in synthesis form: min_Z ||Z||_{2,p} subject to ||Y − F_Ω Ψ^T Z||_F ≤ ε, where Z is the matrix of transform coefficients.

The images are recovered by applying the transform transpose to the recovered coefficients: X = Ψ^T Z.

The final image (x) is then obtained by the sum-of-squares combination of the reconstructed coil images.
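The sum-of-squares combination is simple to state in code (a one-line sketch; the array shapes below are purely illustrative):

```python
import numpy as np

def sum_of_squares(coil_images):
    """Combine sensitivity encoded coil images: x = sqrt(sum_i |x_i|^2)."""
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

# Two toy 2x2 "coil images" whose magnitudes form a 3-4-5 triangle pixel-wise.
coils = np.stack([3.0 * np.ones((2, 2)), 4.0 * np.ones((2, 2))])
print(sum_of_squares(coils))   # every pixel equals 5.0
```

Taking magnitudes before squaring makes the combination applicable to complex-valued coil images as well.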

The analysis and the synthesis priors yield same results for orthogonal transforms but different results for redundant tight-frames.

In a recent work, a method similar to ours has been proposed [

This is actually the unconstrained version of our analysis prior problem.

There have been other studies that used joint-sparsity models for parallel MRI reconstruction [

Prior to this work, we proposed a naive version of the CaLM MRI technique [

In this formulation, the vector x will be group-sparse in the transform domain for the same reasons it is row-sparse in the proposed formulation. In [

In this work, we also do an in-depth analysis as to why the proposed technique is likely to be successful. None of the previous studies [

During the review, one of the reviewers pointed us to a few recent studies that do not require a calibration stage [

The Hankel matrix thus formed is low-rank owing to local correlations. In [

SAKE is pegged on the idea that each coil image is spatially correlated and that the various channel images are correlated with each other. Thus, the K-space samples are also correlated (the Fourier transform, being orthogonal, does not disturb the correlation). To overcome the computational issue of SAKE, the CLEAR technique was proposed in [

A lot of practical CS problems exploit the sparsity of natural signals in the wavelet basis in order to reconstruct them. The sparsity of the wavelet coefficients arises on account of the piecewise smooth (e.g., piecewise polynomial) structure of such signals, and the vanishing moments of wavelets. A precise way of describing this is that the action of any wavelet with a sufficient number of vanishing moments annihilates the locally polynomial parts of the signal, so that significant coefficients arise only near the singularities.

In this sub-section, we make some observations on how the sparsity of the piecewise smooth signal is affected by modulation. To keep it simple, we work with one-dimensional signals. Let f denote the piecewise smooth signal, m the modulation function, and fm their product.

Note that if both f and m are smooth in a neighbourhood of a point, then so is fm, since products of smooth functions are smooth.

Therefore, the only situation of interest is that in which f has a singularity, the simplest case being a jump discontinuity.

Proposition 1. Suppose f has a jump discontinuity at t_0, and m is smooth at t_0. Then fm has a jump discontinuity at t_0 if and only if m(t_0) is non-zero (see the toy example below).

Note that by smooth we mean that f is well behaved on either side of t_0 in the sense that its derivatives exist up to t_0, but f has different left and right limits at t_0; that is, f(t) tends to different values as t approaches t_0 from the left and from the right of t_0. As a simple example, consider the Heaviside function with a transition at t_0.

In practice, this proposition demands that the sensitivity map (modulation function) should be smooth and non-vanishing. The fact that the sensitivity map is smooth is well known and is the basis of all studies in parallel MRI. But we make the additional demand that the sensitivity map should be non-vanishing as well. Ideally, this constraint is satisfied by the design of the scanner: there is no portion of the subject which is completely blind to a particular channel; thus, the sensitivity profiles of all the channels are non-vanishing.
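The claim that a jump survives modulation exactly when the map is non-zero at that point can be checked numerically. The sketch below (illustrative grid and modulation polynomial, not taken from the paper) verifies that the jump of the modulated signal at t_0 equals m(t_0) times the jump of the original:

```python
import numpy as np

n = 2001
t = (np.arange(n) - 1000) / 1000.0     # uniform grid containing t0 = 0 exactly
i0 = 1000                              # index of the discontinuity t0
f = np.where(t >= 0.0, 1.0, 0.0)       # Heaviside: jump of size 1 at t0
m = 2.0 + t + 0.5 * t ** 2             # smooth, non-vanishing modulation map
g = f * m                              # modulated signal

jump_f = f[i0] - f[i0 - 1]             # discrete jump of f across t0
jump_g = g[i0] - g[i0 - 1]             # discrete jump of f*m across t0
print(jump_g, m[i0] * jump_f)          # both equal m(0) = 2.0
```

Setting m so that m(0) = 0 instead would make the jump of the product vanish, which is precisely the failure case the proposition excludes.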

Note that higher-order singularities can arise when two smooth functions are glued together. For example, consider the function obtained by gluing the zero function and a polynomial: g(t) = 0 for t < t_0 and g(t) = (t − t_0)^n for t ≥ t_0.

It is clear that g^{(n)}(t) has a jump discontinuity at t_0, while all derivatives of lower order are continuous there. As a result, the wavelet transform of g (for a wavelet with enough vanishing moments) is non-zero only in the vicinity of t_0.

So what is the effect of modulation on the wavelet transform of such signals? Of course, one would expect the singularity of the modulated signal to remain at t_0.

Proposition 2. Suppose f is continuous at t_0, but its n-th derivative has a jump discontinuity at t_0. Then fm has a singularity of order exactly n at t_0 if and only if m(t_0) is non-zero; more generally, the order of the singularity rises to n + r exactly when m^{(k)}(t_0) = 0 for k = 0, …, r − 1 and m^{(r)}(t_0) ≠ 0.

Combined with Proposition 1, this tells us that the singularities of f, and hence the positions of its significant wavelet coefficients, are preserved under modulation at every point t_0, unless the modulation function vanishes at t_0 (see the toy example below).

For parallel MRI reconstruction, the sensitivity map modulates the underlying signal (MR image). The sensitivity maps are assumed to be smooth and can be modeled as polynomials [

We show a toy example. We considered a function of the form f(t) = g(t)(2 heaviside(t) − 4 heaviside(t − T)) with a smooth envelope g(t), which was modulated by two polynomials of small order; the modulation functions are:

The original function and its modulated versions are shown in

We compute the wavelet transforms of the original and the modulated signals. These are shown in

It can be seen from the plots that the positions of the high-valued wavelet coefficients of the modulated signals coincide with those of the original signal.

The Majorization-Minimization (MM) approach [

For the synthesis prior, the measurement operator becomes F_Ω Ψ^T, i.e., we solve min_Z ||Z||_{2,p} subject to ||Y − F_Ω Ψ^T Z||_F ≤ ε.

Instead of solving the aforesaid constrained problems, we propose solving their unconstrained counterparts: min_X ||Y − F_Ω X||_F^2 + λ||ΨX||_{2,p} for the analysis prior, and min_Z ||Y − F_Ω Ψ^T Z||_F^2 + λ||Z||_{2,p} for the synthesis prior.

The constrained and the unconstrained formulations are equivalent for a proper choice of the Lagrangian multiplier λ.

We solve this problem by the Majorization-Minimization (MM) approach [

Let J(x) denote the objective function to be minimized. The generic MM algorithm proceeds as follows:

1. Set k = 0 and initialize x_0.
2. Choose a majorizer G_k(x) such that G_k(x) ≥ J(x) for all x, with equality at x = x_k.
3. Set x_{k+1} as the minimizer of G_k(x).
4. Set k = k + 1.

Repeat steps 2–4 until a suitable stopping criterion is met.

For this paper, the problems to be solved are of the form min ||Y − HX||_F^2 + λ·R(X), where H = F_Ω for the analysis prior and H = F_Ω Ψ^T for the synthesis prior. The quadratic data term is majorized at X_k by adding the non-negative term (X − X_k)^T(aI − H^T H)(X − X_k), where a is larger than the maximum eigenvalue of H^T H.

Minimizing the resulting majorizer decouples the data term from the regularizer: each MM iteration first forms the Landweber-type update B = X_k + (1/a) H^T (Y − H X_k) and then solves min_X a||B − X||_F^2 + λ·R(X).

These updates are repeated until convergence.

For the synthesis prior problem, we need to solve min_Z a||B − Z||_F^2 + λ||Z||_{2,p}, which decouples along the rows of Z; consider its j-th row.

Setting the derivative to zero and re-arranging, we get:

This can be solved by the following soft-thresholding:

Initialize: Z^{(0)} = 0.

Repeat until convergence: compute the update B = Z^{(k)} + (1/a)(F_Ω Ψ^T)^T (Y − F_Ω Ψ^T Z^{(k)}) and obtain Z^{(k+1)} by the row-wise soft-thresholding above.
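A minimal numerical sketch of these synthesis prior iterations for the convex p = 1 case follows: a Landweber-type gradient step on the data term, then row-wise soft-thresholding. A random matrix H stands in for the actual measurement operator, and the values of lam, a and the problem sizes are illustrative assumptions; the paper's non-convex p < 1 variant would replace the threshold with its re-weighted counterpart.

```python
import numpy as np

def mm_synthesis(H, Y, lam, a, n_iter=300):
    """min_Z ||Y - H Z||_F^2 + lam ||Z||_{2,1} via MM (p = 1 sketch).
    Each iteration: Landweber update, then row-wise soft-thresholding."""
    Z = np.zeros((H.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        B = Z + H.T @ (Y - H @ Z) / a                     # Landweber step
        norms = np.linalg.norm(B, axis=1, keepdims=True)  # l2 norm per row
        Z = B * np.maximum(0.0, 1.0 - lam / (2 * a * np.maximum(norms, 1e-12)))
    return Z

rng = np.random.default_rng(1)
H = rng.standard_normal((20, 40))                 # under-determined system
Z_true = np.zeros((40, 3))
Z_true[5], Z_true[17] = 1.0, -1.0                 # two active rows
Y = H @ Z_true
a = np.linalg.norm(H, 2) ** 2 * 1.01              # a > max eigenvalue of H^T H
Z_hat = mm_synthesis(H, Y, lam=0.5, a=a)

row_norms = np.linalg.norm(Z_hat, axis=1)
top = sorted(map(int, np.argsort(row_norms)[-2:]))
print(top)                                        # indices of the dominant rows
```

The row-wise threshold z = b·max(0, 1 − λ/(2a||b||_2)) is the exact minimizer of a||b − z||^2 + λ||z||_2 for each row, which is what the MM decoupling requires.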

Solving the analysis prior problem requires minimization of ||Y − F_Ω X||_F^2 + λ||ΨX||_{2,p}.

Setting the gradient to zero we get:

It is not possible to solve this system of equations directly, because the data-fidelity term operates in the image domain while the sparsity-promoting term operates in the transform domain.

To decouple the two, a second majorization is employed: adding the non-negative term (X − X_k)^T(cI − Ψ^T Ψ)(X − X_k), with c larger than the maximum eigenvalue of Ψ^T Ψ, yields an update whose minimizer can again be obtained by thresholding.

This leads to the following algorithm for solving the analysis prior joint-sparse optimization problem.

Initialize:

X^{(0)} = 0.

Repeat until convergence:

We have derived algorithms to solve the unconstrained problems. As mentioned before, the constrained and the unconstrained forms are equivalent for a proper choice of the Lagrangian multiplier λ.

The cooling technique solves the constrained problem in two loops. The outer loop decreases the value of λ, while the inner loop solves the unconstrained problem for the current value of λ.

Initialize: X^{(0)} = 0; choose a large initial value of λ.

Outer loop: while ||Y − HX||_F > ε, decrease λ.

Inner loop: for the current λ, run the MM iterations derived above until the objective function converges. (Inner loop ends.)

Once the data-fidelity constraint is met, the outer loop ends. The same cooling scheme is used for both formulations: H = F_Ω Ψ^T for the synthesis prior, and H = F_Ω (with the regularizer ||ΨX||_{2,p}) for the analysis prior.
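The two-loop cooling structure can be sketched generically. In this illustration the inner solver is a simple ridge (Tikhonov) stand-in rather than the MM solver above, purely to keep the example short; all numeric values are arbitrary:

```python
import numpy as np

def cooling_solve(inner_solver, H, Y, eps, lam0=1.0, decrease=0.5, max_outer=25):
    """Outer loop: decrease lam until the constraint ||Y - H X||_F <= eps
    holds. Inner loop: solve the unconstrained problem for the current lam."""
    lam = lam0
    X = inner_solver(H, Y, lam)
    for _ in range(max_outer):
        if np.linalg.norm(Y - H @ X) <= eps:
            break
        lam *= decrease                 # "cool" the Lagrangian weight
        X = inner_solver(H, Y, lam)
    return X

def ridge(H, Y, lam):                   # stand-in inner solver (not the MM one)
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ Y)

rng = np.random.default_rng(2)
H = rng.standard_normal((30, 10))
Y = H @ rng.standard_normal((10, 2))    # consistent, over-determined system
X = cooling_solve(ridge, H, Y, eps=1e-3)
print(np.linalg.norm(Y - H @ X) <= 1e-3)
```

As λ shrinks, the unconstrained solution places more weight on data fidelity, so the residual decreases monotonically toward the ε-ball of the constrained problem.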

In this work, we proposed solving the reconstruction problem via non-convex optimization algorithms. Theoretically, one may question the convergence of such algorithms, since only local minima are guaranteed. However, in practice this has never been a problem. In previous studies [

There are two sets of ground-truth data used for our experimental evaluation (

In this work, we show results for two different K-space sampling schemes (
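A variable density random mask of the kind used here can be generated as follows. This is a generic sketch; the exact density law, decay parameter and seed used in the experiments are assumptions for illustration:

```python
import numpy as np

def variable_density_mask(shape, accel, decay=2.0, seed=0):
    """Random K-space mask whose sampling probability decays with distance
    from the centre of (centred) K-space, targeting ~1/accel of the samples."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(*(np.linspace(-1.0, 1.0, n) for n in shape),
                         indexing="ij")
    r = np.minimum(np.sqrt(kx ** 2 + ky ** 2), 1.0)      # normalised radius
    pdf = (1.0 - r) ** decay                             # denser near centre
    pdf *= (np.prod(shape) / accel) / pdf.sum()          # scale to ~1/accel
    return rng.random(shape) < np.clip(pdf, 0.0, 1.0)

mask = variable_density_mask((128, 128), accel=4)
# Low frequencies (centre) are sampled much more densely than high ones.
print(mask.mean(), mask[60:68, 60:68].mean() > mask[:8, :8].mean())
```

Because the probability density is clipped at 1 near the centre, the realised sampling fraction lands slightly below the 1/accel target; in practice one would iterate the normalisation if an exact budget were required.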

We compare our proposed method with two state-of-the-art calibrated methods, L1SPIRiT [

For CS SENSE the sensitivity profiles are estimated in the fashion shown in [

Our proposed method and the DCS based method propounded in [

For our non-convex formulation, we found that the best results were obtained for

The DCS reconstruction yields the worst results; this is expected, since DCS is an ad hoc formulation. Our proposed non-convex analysis prior formulation yields the best results, with the synthesis prior formulation slightly worse. The SAKE technique does not yield results as good as our proposed technique. CS SENSE and l1SPIRiT yield better results than SAKE, but they have to be thoroughly calibrated and hence are less robust.
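The quantitative comparisons use the NMSE; one common definition, which we assume here, is the squared error energy normalised by the reference energy:

```python
import numpy as np

def nmse(x_rec, x_ref):
    """Normalised Mean Squared Error: ||x_rec - x_ref||^2 / ||x_ref||^2."""
    return np.linalg.norm(x_rec - x_ref) ** 2 / np.linalg.norm(x_ref) ** 2

x_ref = np.ones((4, 4))
print(nmse(1.1 * x_ref, x_ref))   # a uniform 10% amplitude error
```

Some papers report the unsquared ratio instead; whichever convention is used, lower is better and 0 means a perfect reconstruction.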

Although NMSE is an often used metric for evaluating the reconstruction accuracy, it does not always reflect the qualitative aspects of reconstruction. For qualitative evaluation we show the reconstructed images in

In order to elucidate the reconstruction quality further, we show the difference (between ground-truth and reconstructed) images for the brain data. The difference images are shown in

State-of-the-art parallel MRI techniques either implicitly or explicitly require a calibration stage to estimate the sensitivity maps (for SENSE, SMASH and related techniques) or interpolation weights (for GRAPPA, SPIRiT and related techniques). Thus, all these methods are sensitive to the calibration stage. In recent times there has been a concerted effort to develop calibration free reconstruction techniques. In this paper, we improve upon a previous calibration free reconstruction technique [

We compare our proposed technique with other calibrated and calibration free methods. We find that our proposed non-convex analysis prior formulation always yields the best results. However, there are two shortcomings of the proposed method. The first one is more of a constraint than a shortcoming: our technique does not work with uniform periodic undersampling. This is because our solution approach requires solving an under-determined problem via Compressed Sensing, which relies on incoherent (random or pseudo-random) sampling patterns that uniform periodic undersampling does not provide.

The second limitation concerns the assumption that the modulation function is smooth and does not change the positions of the discontinuities in the image. However, the modulation can suppress discontinuities in a given coil image if the sensitivity is zero at those positions, so that the supports of the coil images no longer coincide. Ideally, this is taken care of during the design of the scanner: the FOV is designed such that no area of the subject is completely blind to a channel. However, in regions of very low SNR, the modulation function can be effectively zero. This would violate the row-sparsity assumption and our method would fail to produce good results.

The authors are thankful to Michael Lustig for multi-channel MRI data and codes for SPIRiT and GRAPPA. This work was supported by NSERC and by Qatar National Research Fund (QNRF) No. NPRP 09-310-1-058.

The authors declare no conflict of interest.


Formation of the low-rank Hankel matrix.


Original and modulated signals.

Wavelet transform of original and modulated signals.

Ground-truth images: brain and Shepp-Logan phantom.


Reconstruction for Variable Density Random sampling. From Top to Bottom: DCS Reconstruction, l1SPIRiT, CS SENSE, SAKE, Proposed Non-Convex Synthesis Prior, Proposed Non-Convex Analysis Prior.

Difference images.

Comparison of reconstruction accuracies (NMSE) for the competing techniques.

| Type of Sampling → | VDR | Radial | VDR | Radial |
|---|---|---|---|---|
| l1SPIRiT [ ] | 0.13 | 0.07 | 0.09 | |
| CS SENSE [ ] | 0.16 | 0.28 | 0.14 | 0.04 |
| DCS reconstruction [ ] | 0.25 | 0.19 | 0.29 | 0.17 |
| SAKE [ ] | 0.14 | 0.14 | 0.10 | |
| Proposed non-convex synthesis prior | 0.08 | 0.15 | 0.01 | |
| Proposed non-convex analysis prior | | | | |