Article

A Framework for Reconstructing Super-Resolution Magnetic Resonance Images from Sparse Raw Data Using Multilevel Generative Methods

by
Krzysztof Malczewski
Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 166, 02-787 Warsaw, Poland
Appl. Sci. 2024, 14(4), 1351; https://doi.org/10.3390/app14041351
Submission received: 12 October 2023 / Revised: 24 January 2024 / Accepted: 29 January 2024 / Published: 6 February 2024

Abstract

Super-resolution magnetic resonance (MR) scans provide anatomical detail for quantitative analysis and treatment planning. Advances in convolutional neural networks (CNNs) for image processing and deep learning have led to deep-learning-based super-resolution reconstruction methods. This study offers a G-guided generative multilevel network for training 3D neural networks on sparsely sampled MR input data. The author suggests using super-resolution reconstruction (SRR) with modified sparse sampling to address these issues. Image-based Wasserstein Generative Adversarial Networks (WGANs) retain the sparsity of the k-space data while storing and representing image-space knowledge. The method takes null-valued k-space data and fills the gaps in the dataset to preserve data integrity. The proposed reconstruction pipeline processes raw data samples and performs subspace synchronization, deblurring, denoising, motion estimation, and super-resolution image production, using dedicated preprocessing stages to deblur and denoise the datasets. Preliminary trials contextualize and speed up the assessments. The results indicate that the reconstructed images contain better high-frequency features than those of sophisticated multi-frame techniques, as supported by improved PSNR, MAE, and IEM measurements. A k-space correction block improves the refinement learning of the GAN network in the suggested method: it strengthens the network's ability to suppress superfluous data, speeding reconstruction, and it restricts the generator's output to the essential lines so that only the missing lines are reconstructed. This improves convergence and accelerates rebuilding. The study shows that this strategy reduces aliasing artifacts better than contemporaneous and noniterative methods.

1. Introduction

MRI has been a prominent tool for diagnosing brain abnormalities over the past decade. High-resolution (HR) medical imaging reveals anatomical detail, so the demand for high-quality visualization is expanding significantly. Scanner limits and transmission-bandwidth constraints, however, can make it difficult to obtain brain magnetic resonance (MR) images at the resolution an application requires. Super-resolution (SR) image reconstruction has therefore become a focus of image signal processing research, and progress on the problem directly advances this area. As stated in [1], hardware and physics limitations impede high-resolution imaging, resulting in extended scan durations, restricted spatial coverage, and a diminished signal-to-noise ratio. According to [2], the ill-posed nature of SR makes the task challenging: once resolution is lost, an unlimited number of high-quality images can yield the same low-resolution image, so restoring texture and structure is hard. SR can be cast as a convex optimization problem that resolves the HR image while maintaining regularization [3]. Regularization terms require prior knowledge of the image distribution, which is usually an empirical conjecture. Popular constraints such as total variation assume piecewise stability and may struggle with images of intricate local structure. Learning-oriented methods require less prior information: deep learning algorithms can effectively model the intricate relationship between low-resolution and high-resolution images, enhancing Single-Image Super-Resolution (SISR) for complex images even in demanding circumstances. Super-Resolution CNNs (SRCNNs) and their faster variants produce high-quality SISR results for 2D natural images using structured CNNs, as several studies show [4,5]. Classic algorithms were traditionally divided into patch-based, edge-based, sparse-coding, prediction, and statistical methods. These methods are computationally cheaper than deep learning but recover less information. With the rise of deep learning, convolutional neural networks became popular and advanced SR. Early deep-learning systems, however, cannot super-resolve medical images well: medical imaging uses 3D volumes, whereas earlier CNNs worked per slice and discarded information from neighboring structures in the third dimension. Three-dimensional models also contain more parameters than two-dimensional ones, requiring more memory and computation and reducing their flexibility. A study [6] found that MSE and PSNR are unreliable measures of visual image accuracy; sharpness and integrity decrease when MSE alone is optimized. Three-dimensional Multi-Level Densely Connected Super-Resolution Networks (mDCSRNs) can help in this respect. According to [7], the densely coupled network keeps the mDCSRN lightweight: intensity-difference optimization increases the model's size and speed without sacrificing performance, and GAN training further improves the system, yielding images with more clarity and authenticity. According to [8], super-resolution (SR) technology enhances computer vision tasks such as semantic segmentation, and collecting HR data with SR has several uses. The choice of Video Super-Resolution (VSR) algorithm is critical: partitioning the VSR technique into multi-frame SR subtasks led to flicker artifacts and computationally expensive procedures [9,10].
The above methods ignore human perception, resulting in poor super-resolution reconstruction, which motivated the introduction of GANs into SR. GANs have been applied to computer vision tasks including super-resolution. SRGAN, a Generative Adversarial Network for image super-resolution (SR), recovers high-quality textures in low-resolution (LR) images using adversarial and perceptual losses; it revives textures and high-frequency components but remains limited, since GAN training introduces noise and shifting artifacts. According to [11,12], researchers have evaluated SR alongside other image-enhancement methods. Most MRI image distortions come from in-plane patient motion, so motion handling is important for super-resolution. Recent studies suggest that convolutional neural networks (CNNs) can enhance medical image quality [13,14,15], which is remarkable given the competitive image-processing landscape.
To address SRR, [16] developed SRCNN, a deep convolutional network that introduced CNNs into SR. The FSRCNN followed; its compact hourglass shape improves the computational efficiency of the neural network. Shi et al. [17] proposed replacing the deconvolution layer with a sub-pixel convolution layer, which simplifies training. These strategies rely on linear networks with basic configurations, and over-parameterization grows with network depth. Recursive networks can overcome this obstacle by iteratively sharing weights [18,19]. Greater network depth enhances performance, but deeper networks suffer more from exploding gradients. Kim and colleagues suggested training on residuals to resolve this apparent conflict [18]; the strategy was inspired by the observation that, like its HR counterpart, an LR image carries mostly low-frequency content. Sparse residuals boost convergence. Residual learning frameworks were also used in [20,21]. Several studies show that CNNs can map undersampled input onto fully sampled images to create high-quality results, a strategy favored by medical imaging researchers. Many works show that CNNs can reconstruct compressed-sensing magnetic resonance images [22]: CNNs are trained to reconstruct undersampled images [23], yielding high-quality images from undersampled data [24,25]. Some studies employ hybrid techniques operating on either k-space or image space to enhance image quality [26,27]. As mentioned, GAN training is challenging [28]. Eo and colleagues developed the KIKI-net architecture, which applies convolutional neural networks to both k-space and image-space operations; restoring tissue architecture and minimizing aliasing artifacts improves image quality by reducing loss functions in both domains. The research in [29,30] uses DenseNet, a densely connected convolutional neural network whose compact skip connections enhance feature utilization. GANs are integrated into SRGAN for SISR [31]: perceptual loss algorithms can restore photorealistic textures in LR images, with perception as the goal. Hyun et al. used convolutional neural networks with k-space rectification to replace missing data with the initially acquired data [32]. Hybrid CNNs still exhibit aliasing artifacts despite surpassing image-based CNNs, so aliasing-artifact suppression must improve.
Examination time limits magnetic resonance imaging, and many researchers have considered accelerating MRI data acquisition. The author fills the k-space with phase-encoded subsets, similar to the blades of PROPELLER, except that Hermitian symmetry halves the complex space and the missing k-space elements are retrieved by conjugation. This strategy enhances the recovery of high-frequency components. This article describes a Generative Adversarial Network (GAN) technique for Compressed Sensing Magnetic Resonance Imaging (CS-MRI) reconstruction inspired by previous research. The method combines image-based GANs with k-space adjustments and outperforms standalone and noniterative k-space rectification. The approach combines deformable image registration with a GAN that has been extended to multi-frame image integration. The Wasserstein Generative Adversarial Network (WGAN) improves training efficiency and model convergence. Targeted exploratory investigations support the findings. This publication describes a strategy to improve MR image edge delineation and reduce acquisition time.
The main contributions of this work are as follows:
1. The framework is comprehensive in its approach to magnetic resonance (MR) image reconstruction: it covers the sampling strategies employed for collecting raw data, the synchronization of k-space subspaces, deblurring and denoising, motion estimation, and, finally, super-resolution image reconstruction.
2. The paper presents a novel model that utilizes a Generative Adversarial Network (GAN)-based super-resolution technique for the reconstruction of MR images.
3. The algorithm is specifically designed to extract image features at multiple scales, an issue that other authors commonly treat in a reductionist manner.
4. The pipeline applies dedicated preprocessing stages to remove motion blur and noise layers.
5. The proposed solution uses a convolutional neural network methodology for magnetic resonance image reconstruction whose primary objective is to rebuild low-quality images obtained from highly sparse k-space.
6. The methodology employs the compressed sensing framework to prioritize the reduction in data collection times.
7. The author's GAN-based deformable motion estimation approach is integrated within the reconstruction layer of the procedure.
8. A novel approach is described for deformable motion registration using Generative Adversarial Networks. Pyramidal registration trains spatial-transformation parameters to correct image motion. The registration system employs a GAN as its core architecture, and the network is trained with multiple loss-constraint functions to enable unsupervised training; the proposed system therefore does not depend on ground-truth deformations for correct registration.
9. The proposed approach demonstrated superiority over all competing algorithms: it achieved the highest Peak Signal-to-Noise Ratio (PSNR) values, a 9 percent increase over the second-best method; the best Image Enhancement Measure (IEM) values, with the second-best method 7.8 percent lower; and the lowest Mean Absolute Error (MAE) rates, with the second-best method showing a 2.1 percent greater error.
10. The suggested scanning strategy reduced the scanning duration by roughly a factor of three (112 s compared to 359 s).
The organization of the manuscript follows the flowchart of the primary algorithm. The proposed approach employs several networks to address various image processing tasks, including deblurring, denoising, registering low-quality images, and enhancing overall image resolution.

2. The Procedure for Reconstructing Subimages of MR Blades Using Convolutional Neural Networks

Numerous sophisticated convolutional neural network topologies have previously been suggested to accomplish effective image-to-image conversion. The suggested methodology employs a fully convolutional network architecture that has been thoroughly discussed in previous scholarly work [33]. This architecture was chosen for its well-documented history of outstanding performance in medical imaging. The encoding path parallels classic convolutional neural network (CNN) architectures: a sequence of two-dimensional 3 × 3 convolutional layers is applied iteratively, and each layer is followed by a leaky rectified linear unit, batch normalization, and 2 × 2 maximum pooling for downsampling.
The objective of the technique presented in this research is to reconstruct a set of LR magnetic resonance scans from a sequence of highly sparse k-space blades. The suggested sampling methodology reduces data density and utilizes a conjugate-symmetric mask to shorten the data-capture time. Improving the quality of the low-resolution images entails applying deblurring and registration layers to counteract the effects of motion and blur. The U-net structure mentioned above was trained with the mean-squared-error loss function, formalized below, where each zero-filled image is linked to a fully sampled image denoted $\chi_{\mathrm{true}}$. The study employed the Adam optimizer [34] to minimize the loss function; the learning rate was set to 0.0001, and training ran for 100 epochs on a restricted collection of 32 images. These hyperparameters were determined empirically:
$$\beta_i = \begin{cases} \arg\min_{\beta_i} \left\| \chi_{\mathrm{true}} - f_{\beta_i}\!\left( \left| \mathcal{F}^{-1}(y_0) \right| \right) \right\|^2, & i = 0 \\[4pt] \arg\min_{\beta_i} \left\| \chi_{\mathrm{true}} - f_{\beta_i}\!\left( f_{\beta_{i-1}}\!\left(\chi_{i-1}\right) \right) \right\|^2, & \text{otherwise} \end{cases}$$
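A minimal sketch of the encoder stage and training setup described above, assuming a PyTorch implementation; the channel widths (1 → 32) and the leaky-ReLU slope are illustrative assumptions, while the 3 × 3 convolutions, batch normalization, 2 × 2 pooling, Adam optimizer, learning rate of 1e-4, and MSE loss come from the text:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One encoder stage: 3x3 convolution -> batch norm -> leaky ReLU,
    followed by 2x2 max pooling for downsampling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.body(x)            # kept for the U-net skip connection
        return self.pool(feat), feat

# Training setup as stated above: Adam, learning rate 1e-4, MSE loss
# against the fully sampled image chi_true, run for 100 epochs.
encoder = EncoderBlock(1, 32)          # stand-in for the full U-net
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
```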

3. The Application of Generative Adversarial Networks (GANs) within the Framework of Super-Resolution Image Reconstruction

Figure 1 shows the model framework. The system comprises a deformable motion estimation module and a reconstruction network; the latter contains generator and discriminator blocks. The motion-correction effectiveness of the Generative Adversarial Network (GAN) framework depends on its ability to restore images and recover missing raw data, which yields high-quality images. The generator aims to create samples that resemble real data, while the discriminator aims to classify samples as authentic or fake:
$$\min_G \max_D \; \mathbb{E}_{x}\!\left[\log D(x)\right] + \mathbb{E}_{y}\!\left[\log\!\left(1 - D\!\left(G(y)\right)\right)\right]$$
Here, $y$ and $x$ denote the motion-distorted and corrected images, respectively. Except for the core layer, each encoder block consists of five convolutional layers with $n^2$ feature maps, each comprising $n$ mappings. The encoder and decoder blocks share an architectural approach, but transposed convolutions replace the convolutional layers in the decoder. The present study employs a technique for approximating spatial-transformation parameters within image registration, as explicated in [35]. Subsequently, the displacement disparity between frames is rectified: the displacement factors can alter the spatial positioning of frames within sequences that portray identical subject matter captured at distinct times and locations.
The registration module combines several frame pairings with the $I^{LR}$ frames through a 3D convolutional layer. Its output is fed into the generator network, denoted $G$. The study employed a generator design, $G$, based on the SR-GAN architecture (see Figure 2). To minimize the number of parameters and ensure effective generalization, the $G$-network employs a solitary residual block, and to achieve the necessary level of detail the residual network employs two sub-pixel convolutional layers [36]. The discriminator, shown as $D$ in Figure 3, has eight convolutional layers, with the number of features growing as the layers deepen; convolutional kernel reduction then decreases the dimensionality of the features. Two improvements were implemented to tackle the challenges of SR-GAN reconstruction and of network training and convergence. First, the discriminator $D$ does not use the sigmoid activation function in its output layer. Second, parameter adjustments were clipped to a fixed absolute value $c$ (0.01). GAN training is widely researched because it is intricate and demanding; the concern here is its instability during training and the difficult convergence of the model, as substantiated by [20,30,37]. The difficulty can be attributed to the limited overlap between the distributions of genuine and generated samples: when the distributions barely overlap, the JS divergence used to compare them provides almost no useful gradient, which impedes network convergence. Arjovsky et al. [38] demonstrated that the Wasserstein distance is a reliable metric for quantifying the separation between distributions even when their overlap is minimal.
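A sketch of a generator in the spirit described above: a single residual block followed by two sub-pixel (PixelShuffle) upsampling stages, as in SR-GAN. The channel width of 64, the 9 × 9 head/tail kernels, the PReLU activations, and the overall ×4 upsampling factor are assumptions borrowed from the SR-GAN literature, not values stated in the paper:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return x + self.body(x)        # residual (skip) connection

class Generator(nn.Module):
    """Single residual block + two sub-pixel convolution stages."""
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 9, padding=4), nn.PReLU())
        self.res = ResidualBlock(ch)
        self.up = nn.Sequential(
            # each conv quadruples the channels, PixelShuffle(2) trades
            # them for a 2x spatial upscaling
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
            nn.Conv2d(ch, ch * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
        )
        self.tail = nn.Conv2d(ch, 1, 9, padding=4)

    def forward(self, x):
        return self.tail(self.up(self.res(self.head(x))))
```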

4. The Techniques Employed for the Reconstruction of High-Resolution Magnetic Resonance (MR) Images

Starting from sparsely sampled subspaces, the methodology reconstructs magnetic resonance (MR) images from low-resolution inputs; refer to Figure 1 for details. Blur removal, noise removal, and motion estimation layers are used in the reconstruction process, as shown in Figure 2, Figure 3 and Figure 4.
The Wasserstein Generative Adversarial Network (WGAN) improves adversarial network training, as shown in [30]. The Earth Mover's distance, or Wasserstein distance, measures the difference between two probability distributions: it assesses the least work needed to transform one distribution into the other:
$$W\!\left(P_{\mathrm{ref}}, P_{\mathrm{gen}}\right) = \frac{1}{K} \sup_{\|f\|_{L} \leq K} \; \mathbb{E}_{x \sim P_{\mathrm{ref}}}\!\left[f(x)\right] - \mathbb{E}_{x \sim P_{\mathrm{gen}}}\!\left[f(x)\right]$$
In its primal form, the Wasserstein distance is defined over the set of all possible joint probability distributions of $P_{\mathrm{ref}}$ and $P_{\mathrm{gen}}$; the expression above is its Kantorovich–Rubinstein dual, where $f$ is the discriminator function of the adversarial network and the supremum runs over all $K$-Lipschitz functions. The Lipschitz constraint limits the derivative of the discriminator with respect to its input samples; it is enforced by clipping the variables in the domain of $D$ to the range $[-c, c]$. This approach prioritizes the generator's gradient updates and addresses the vanishing gradient. The function $f$ follows the equation:
$$L = \mathbb{E}_{x \sim P_{\mathrm{ref}}}\!\left[f_{W}(x)\right] - \mathbb{E}_{x \sim P_{G}}\!\left[f_{W}(x)\right]$$
Increasing $L$ allows a more accurate estimate of the Wasserstein distance between the probability distributions $P_{\mathrm{ref}}$ and $P_{\mathrm{gen}}$; the former corresponds to real data and the latter to synthetic data. The discriminator and generator loss functions are precisely defined:
$$D_{\mathrm{loss}} = \mathbb{E}_{x \sim P_{\mathrm{gen}}}\!\left[f_{W}(x)\right] - \mathbb{E}_{x \sim P_{\mathrm{ref}}}\!\left[f_{W}(x)\right]$$
$$G_{\mathrm{loss}} = -\,\mathbb{E}_{x \sim P_{\mathrm{gen}}}\!\left[f_{W}(x)\right]$$
The training process is monitored through the discriminator loss function $D_{\mathrm{loss}}$: the Wasserstein distance between the real and generated data distributions must decrease for Generative Adversarial Network (GAN) training to be judged successful, with a smaller distance indicating better training.
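The critic and generator losses above, together with the weight clipping to $c = 0.01$ mentioned earlier, translate directly into code. A minimal PyTorch sketch ($f_W$ is the critic network; the batch-mean approximates the expectations):

```python
def critic_loss(f_w, real, fake):
    # D_loss = E_{x~P_gen}[f_W(x)] - E_{x~P_ref}[f_W(x)]
    return f_w(fake).mean() - f_w(real).mean()

def generator_loss(f_w, fake):
    # G_loss = -E_{x~P_gen}[f_W(x)]
    return -f_w(fake).mean()

def clip_weights(f_w, c=0.01):
    # Enforce the Lipschitz constraint by clipping every critic
    # parameter to [-c, c] after each optimizer step, as in WGAN.
    for p in f_w.parameters():
        p.data.clamp_(-c, c)
```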
This approach maximizes the efficiency of the training procedure for the generator $G$. The work analyzes the correlation between the input sequence $I_t^{LR}$ ($t = 1, \dots, N$) and its high-resolution counterpart $I_t^{HR}$, a goal fulfilled by feedforward convolutional neural networks. The neural network parameters $\Psi_G = \{U_{1:L}; b_{1:L}\}$ of the $L$-layer network are found by minimizing the loss function $l_G$ of the Super-Resolution generation network, as described in [9]:
$$\Psi_{G}^{*} = \arg\min_{\Psi_{G}} \frac{1}{N} \sum_{t=1}^{N} l_{G}\!\left(G_{\Psi_{G}}\!\left(I_{t}^{LR}\right), I_{t}^{HR}\right)$$
This study uses a loss function $l_G$ based on past academic research [9]:
$$l_{G} = l_{MSE} + 10^{-6}\, l_{Gen}$$
The net loss function of the SR-GAN model includes the loss functions of the generator and discriminator blocks, $l_G$ and $l_D$:
$$l_{D} = \frac{1}{N} \sum_{n=1}^{N} \left[ \log\!\left(1 - D_{\Psi_{D}}\!\left(G_{\Psi_{G}}\!\left(I^{SR}\right)\right)\right) - \log D_{\Psi_{D}}\!\left(I^{HR}\right) \right]$$
This is the discriminator–generator reconstruction equation: the generator output $G_{\Psi_G}(I^{SR})$ is used to rebuild the original image $I^{HR}$, and $D_{\Psi_D}(G_{\Psi_G}(I^{SR}))$ and $D_{\Psi_D}(I^{HR})$ denote the discriminator scores of the reconstructed and reference images. The variable $N$ represents the number of target images. The terms $l_{MSE}$ and $l_{Gen}$ are defined as follows:
$$l_{MSE} = \frac{1}{r^{2} H W} \sum_{x=1}^{rW} \sum_{y=1}^{rH} \left( I_{x,y}^{HR} - G_{\Psi_{G}}\!\left(I^{LR}\right)_{x,y} \right)^{2}$$
$$l_{Gen} = \sum_{n=1}^{N} -\log D_{\Psi_{D}}\!\left(G_{\Psi_{G}}\!\left(I^{LR}\right)\right)$$
The researchers added a registration loss component to the model's loss function to improve the recovery of high-frequency texture information. The expected difference between spatial-transformation computations and empirical observations is called the Relative Localization Tolerance (RLT). The main goal is to minimize the loss of detailed information when geometric translation is applied to consecutive frames, which aids HR scan recovery. The RLT loss function is defined as:
$$RLT = \sum_{i=\pm 1} \left\| \tilde{I}_{t+i}^{LR} - I_{t}^{LR} \right\|^{2}$$
In the equation above, $\tilde{I}_{t+i}^{LR}$ denotes the outcome of applying the registration network to the image $I_{t+i}^{LR}$. The complete generator loss $l_G$ then becomes:
$$l_{G} = l_{MSE} + 10^{-6}\, l_{Gen} + \varrho\, RLT$$
The RLT weight coefficient $\varrho$ was empirically estimated as 0.001 using test data. Under the Wasserstein Generative Adversarial Network (WGAN) formulation, the adversarial terms $l_{Gen}$ and $l_D$ are modified accordingly:
$$l_{Gen} = \frac{1}{N} \sum_{n=1}^{N} \left[ D_{\Psi_{D}}\!\left(G_{\Psi_{G}}\!\left(I^{SR}\right)\right) - D_{\Psi_{D}}\!\left(I^{HR}\right) \right]$$
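A sketch of how the full generator loss above might be assembled in PyTorch, under the stated weights ($10^{-6}$ for the adversarial term, $\varrho = 0.001$ for RLT). The function names and tensor layout are assumptions for illustration:

```python
import torch.nn.functional as F

def total_generator_loss(sr, hr, d_net, registered_neighbors, lr_target,
                         rho=0.001):
    """l_G = l_MSE + 1e-6 * l_Gen + rho * RLT."""
    l_mse = F.mse_loss(sr, hr)
    # WGAN-style adversarial term: D(G(I_SR)) - D(I_HR), batch-averaged
    l_gen = (d_net(sr) - d_net(hr)).mean()
    # RLT: squared distance between each registered neighboring frame
    # (i = t-1, t+1) and the target low-resolution frame
    rlt = sum(F.mse_loss(n, lr_target, reduction='sum')
              for n in registered_neighbors)
    return l_mse + 1e-6 * l_gen + rho * rlt
```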

4.1. The Process of Registering Magnetic Resonance (MR) Scans

Multi-scale strategies such as the registration net have proved effective in conventional techniques [39]. The method takes the target frame ($I_t^{LR}$) and the surrounding frames ($I_{t-R:t+R}^{LR}$) as input. Pyramidal registration trains spatial-transformation parameters for image motion correction; for a triple-scan input, the registration layer records two sets of scans. The net's parameters, denoted $\omega_{\delta,t+1}^{*}$, are optimized by decreasing the MSE between the transformed and target frames. This learning method enhances the neural network's motion correction on the image dataset:
$$\omega_{\delta,t+1}^{*} = \arg\min_{\omega_{\delta,t+1}} \left\| \tilde{I}_{t+1}^{LR} - I_{t}^{LR} \right\|^{2}$$
After the registration procedure, $\tilde{I}_{t}^{LR}$ represents the registration-layer outcome; the image sequence is assumed known. The registration network layer design is shown in Figure 5. Studies [17,40,41] have shown the effectiveness of classic deformable registration modeling methodologies within a multi-scale framework.
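One way to realize the registration step just described, as a PyTorch sketch: a network predicts a dense displacement field, the neighboring frame is warped with it, and the MSE against the target frame is minimized. The `reg_net` interface and the normalized-coordinate flow convention are assumptions, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a frame (N,1,H,W) by a displacement field (N,2,H,W) expressed
    in the [-1, 1] grid coordinates expected by grid_sample."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)
    return F.grid_sample(frame, grid, align_corners=True)

def registration_step(reg_net, optimizer, neighbor, target):
    """One training step: predict displacement, warp, minimize MSE."""
    flow = reg_net(torch.cat((neighbor, target), dim=1))
    warped = warp(neighbor, flow)
    loss = F.mse_loss(warped, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```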
The deformable medical image registration method reported in this study possesses significant theoretical and clinical relevance. The registration accuracy and efficiency of traditional approaches do not reach the standards required for clinical use. The present research introduces a novel paradigm for deformable adversarial registration, which effectively eliminates the need for ground-truth deformations. The residual registration network suggested in this study, based on the Nested U-Net architecture, demonstrates exceptional feature extraction and robustness. Integrating several constraints that account for the anatomical segmentation information extracted by the discriminator helps the model adapt to diverse modal registration tasks. This paper's approach to deformable image registration with deep learning operates end-to-end: the registration system utilizes a Generative Adversarial Network (GAN) as its fundamental architecture, and multiple loss-constraint functions enable unsupervised training, so the framework does not rely on ground-truth deformations for accurate registration. The framework incorporates a pair of deep neural networks. The registration block employs a hierarchical U-Net model to predict the displacement vector field from the moving image to the fixed image, with a residual module used to mitigate over-fitting. During training, the discriminator employs a conventional convolutional neural network (CNN) to assess the alignment quality of the anatomical segmentations of two images and feeds misalignment information back to aid the training of the registration module. Additionally, the deformable grid enhances the rate at which the algorithm converges. To this end, the study seeks a spanning tree with low edge costs: nodes $i$ in a set $P$ represent discrete elements such as pixels or groups of pixels, and the system links each node to a hidden motion-field label $w_i^l = \{f_i^l, g_i^l, h_i^l\}$. The optimized energy function includes two components, the data cost $S$ and the pair-wise regularization cost $R(w_i^l, w_i^m)$, the latter applied to all connected nodes $l$ and $m$:
$$E(w) = \sum_{i \in P} S\!\left(w_{i}^{l}\right) + \chi \sum_{(l,m) \in N} R\!\left(w_{i}^{l}, w_{i}^{m}\right)$$
The cost function estimates pixel similarity between the two images, while the regularization term penalizes differing displacements of neighboring nodes. The parameter $\chi$ weights the influence of the regularization term. The first component in Equation (16) is the data term; the second is the regularization term.
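A small sketch of how the energy $E(w)$ could be evaluated for a candidate labeling; the cost callbacks are placeholders (assumptions), with a squared-difference pairwise cost shown as one plausible choice:

```python
import numpy as np

def mrf_energy(labels, data_cost, neighbors, pair_cost, chi=1.0):
    """E(w) = sum_i S(w_i) + chi * sum_{(l,m) in N} R(w_l, w_m).
    `labels` maps each node to a motion-field label (f, g, h);
    `neighbors` lists the connected node pairs."""
    data = sum(data_cost(i, labels[i]) for i in range(len(labels)))
    reg = sum(pair_cost(labels[l], labels[m]) for l, m in neighbors)
    return data + chi * reg

def pair_cost(wl, wm):
    # Penalize neighboring nodes whose displacement labels differ
    return float(np.sum((np.asarray(wl) - np.asarray(wm)) ** 2))
```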

4.2. The Network for Removing Blur in Magnetic Resonance Images (MRIs)

This study seeks to reconstruct a clear and exact representation $I^S$ from a degraded image $I^B$ without prior knowledge of the blur. The deblurring process uses a convolutional neural network acting as the generator ($G_{\rho_G}$), which estimates the best $I^S$ image for each $I^B$ input. The training stage includes a critic network ($D_{\rho_D}$) and adversarial training for both networks. The content and adversarial losses form the composite loss function:
$$L = L_{GAN} + \lambda \cdot L_{X}$$
The constant $\lambda$ was 100 in all experiments. This study uses different conditioning than Isola et al. [42]; notably, input–output discrepancies are not penalized. The adversarial loss is essential: it encourages outputs that closely resemble real samples or a reference model (see Figure 6), an approach that works for images, audio, language, and other domains, and it is widely held to improve the precision and resilience of machine learning models, and thus their reliability and efficiency. The loss is determined as:
$$L_{GAN} = \sum_{n=1}^{N} -\,D_{\rho_{D}}\!\left(G_{\rho_{G}}\!\left(I^{B}\right)\right)$$
Commonly used data loss functions include MAE and MSE. Using these functions as the sole optimization target produces indeterminate image anomalies; according to [42], the observed variance stems from averaging over the possible solutions in pixel space. The perceptual loss function instead employs an $L_2$ distance to quantify the dissimilarity between the CNN feature maps of the synthesized MRI scan and those of the reference MR scan:
$$L_{X} = \frac{1}{U_{k,n} B_{k,n}} \sum_{x=1}^{U_{k,n}} \sum_{y=1}^{B_{k,n}} \left( \phi_{k,n}\!\left(I^{S}\right)_{x,y} - \phi_{k,n}\!\left(G_{\rho_{G}}\!\left(I^{B}\right)\right)_{x,y} \right)^{2}$$
The feature map $\phi_{k,n}$ is the output of the $n$-th convolution within a pre-trained network specifically developed for MRI analysis [43]; it is taken after activation and before the $k$-th max-pooling layer. The variables $U_{k,n}$ and $B_{k,n}$ are the feature map dimensions.
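A sketch of the composite deblurring loss $L = L_{GAN} + \lambda L_X$ with $\lambda = 100$, assuming a PyTorch setting. Here `feat` stands in for the pretrained feature extractor $\phi_{k,n}$ (an assumption: any frozen CNN trained on MR data plays this role), and feature maps are compared with a mean rather than the per-map normalized sum, which differs only by a constant factor:

```python
import torch

def deblur_loss(d_net, feat, sharp, restored, lam=100.0):
    """L = L_GAN + lambda * L_X, lambda = 100 as stated above."""
    # WGAN-style adversarial term: -D(G(I_B)), batch-averaged
    l_gan = -d_net(restored).mean()
    # Perceptual content term: L2 distance between feature maps of the
    # sharp reference and the restored image
    l_x = torch.mean((feat(sharp) - feat(restored)) ** 2)
    return l_gan + lam * l_x
```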

4.3. Noise Reduction Techniques for Magnetic Resonance (MR) Images

MRI magnitude images are the primary representation, which makes denoising difficult. Generating magnitude images involves combining the real and imaginary components, as described in [44]. Magnitude images are subject to Rician-distributed noise, which exhibits a higher level of complexity than additive noise, and the precision of the noise model determines the denoising results. Deep learning (DL) may solve this problem because it can bypass the underlying physical process and adapt to it through sample-based learning.
MRI noise mitigation aims to improve diagnostic images by reducing noise. The variable $x$ represents noise-corrupted magnetic resonance (MR) images, while $y$ represents noise-free ones. Consider two real-valued matrices $x$ and $y$ of identical dimensions $m \times n$, linked as follows:
$$x = \varrho(y)$$
The function $\varrho$ maps the noise-generating process. Deep learning can approximate such a mapping without an explicit physical model while remaining robust to noisy data. Improving noise reduction in magnetic resonance imaging (MRI) therefore requires an efficient search for the best approximation of the inverse function $\varrho^{-1}$. Denoising removes the noise from a signal or dataset:
$$\arg\min_{f} \left\| \hat{y} - y \right\|$$
The prediction $\hat{y}$ of $y$ is obtained from the function $f(x)$ that provides the most accurate approximation of the inverse of $\varrho$.
Statistically, the samples $x$ and $y$ come from different data distributions: $x$ follows the distribution of distorted images ($P_n$), while $y$ follows that of undistorted images. Denoising alters the distribution through a mapping algorithm: the function $f$ aligns samples from the distribution $P_n$ with the distribution $P_{\mathrm{gen}}$, which should be identical to $P_r$.
The discriminative model is designed to distinguish generative model samples from real data samples. The generative model uses the input sample to create a new sample that closely matches the data distribution:
$$L_{WGAN}^{D} = -\,\mathbb{E}_{y \sim P_{r}}\!\left[D(y)\right] + \mathbb{E}_{x \sim P_{n}}\!\left[D\!\left(G(x)\right)\right] + \psi\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\!\left[\left( \left\| \nabla_{\hat{x}} D(\hat{x}) \right\|_{2} - 1 \right)^{2}\right]$$
Here, $\psi$ is the gradient penalty coefficient. To determine the probability distribution $P_{\hat{x}}$, points are sampled uniformly along straight lines between pairs drawn from the real data distribution ($P_r$) and the generator distribution ($P_{\mathrm{gen}}$). The generator loss function of $G$ is represented mathematically:
$$L_{WGAN}^{G} = -\,\mathbb{E}_{x \sim P_{n}}\!\left[D\!\left(G(x)\right)\right]$$
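The gradient penalty term in the critic loss above is a standard WGAN-GP construction, sketched here for 4D (N, C, H, W) tensors in PyTorch; the default $\psi = 10$ is the value commonly used in the WGAN-GP literature, as the paper's value is not stated in this excerpt:

```python
import torch

def gradient_penalty(d_net, real, fake, psi=10.0):
    """psi * E[(||grad_xhat D(x_hat)||_2 - 1)^2], with x_hat sampled
    uniformly on straight lines between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = d_net(x_hat)
    grads = torch.autograd.grad(outputs=d_out, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return psi * ((grad_norm - 1) ** 2).mean()
```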
Pixel-level adjustments often use the Mean Squared Error (MSE) loss, whose main goal is to reduce pixel-level differences between the source and synthesized images. It is computed as:
$$L_{MSE} = \frac{1}{abc} \left\| G(x) - y \right\|^{2}$$
where $a$, $b$, and $c$ are the image dimensions. A recent study found that the Mean Squared Error (MSE) loss function can increase the Peak Signal-to-Noise Ratio; however, it tends to oversmooth fine details, which may affect clinical diagnosis.
According to [45,46,47], the proposed loss function addresses this issue by integrating a perceptual loss: an existing neural network extracts significant information from real and generated images, and perceptual similarity measures the difference between the reference and synthetic images. The perceptual loss function is defined as:
$$L_{perceptual} = \frac{1}{abc} \left\| \omega\!\left(G(x)\right) - \omega(y) \right\|_{F}^{2}$$
The variable $\omega$ represents the feature extractor, while $a$, $b$, and $c$ are the feature map dimensions. The VGG-19 network is used in this study to extract visual properties [48]. The nineteen-layer VGG-19 convolutional neural network has sixteen convolutional layers and three fully connected layers; only the first sixteen (convolutional) layers are used for feature extraction. The VGG-based perceptual loss is implemented as:
$$L_{VGG} = \frac{1}{abc} \left\| VGG\!\left(G(x)\right) - VGG(y) \right\|_{F}^{2}$$
The generator $G$ is trained with a composite loss function that includes the MSE, VGG, and discriminator losses. The discriminator network architecture, denoted $D$, is shown in Figure 7: the model has three convolutional layers with 32, 64, and 128 filters, respectively, each using homogeneous 3 × 3 × 3 kernels, and a final fully connected layer produces the output. The pre-trained VGG-19 network extracts the features; refer to the primary source [48] for details. Transfer learning eliminates the need to retrain neural networks specifically for magnetic resonance (MR) scans, according to Pan and Yang [49]:
$$L_{RED\text{-}WGAN} = \delta_{1} L_{MSE} + \delta_{2} L_{VGG} + \delta_{3} L_{WGAN}^{G}$$
The suggested RED-WGAN network configuration is shown in Figure 7. The system has three primary components: a generator net ($G$), a discriminator network ($D$), and a feature extractor (the VGG network). Short skip connections link the corresponding convolutional and deconvolutional layers. Except for the last layer, every layer performs three-dimensional convolution, Leaky-ReLU activation, and batch normalization; the last layer uses 3D convolution and Leaky-ReLU alone. The study employs 3 × 3 × 3 kernels with the filter series 32-64-128-256-128-64-32-1.
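A sketch of the VGG-19 feature extractor and the composite RED-WGAN loss, assuming PyTorch and torchvision. The slice index 36 is an approximation covering the sixteen convolutional layers in torchvision's VGG-19 layout, the single-channel-to-RGB replication is an assumption for MR slices, and the $\delta$ weights are placeholders since their values are not given in this excerpt:

```python
import torch.nn as nn
from torchvision.models import vgg19

class VGGFeatures(nn.Module):
    """Frozen VGG-19 feature extractor (conv layers only)."""
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights='DEFAULT').features[:36].eval()
        for p in self.features.parameters():
            p.requires_grad = False          # used purely as an extractor

    def forward(self, x):
        # VGG expects 3 channels; replicate single-channel MR slices
        return self.features(x.repeat(1, 3, 1, 1))

def red_wgan_loss(l_mse, l_vgg, l_adv, deltas=(1.0, 0.1, 1e-3)):
    """L_RED-WGAN = d1*L_MSE + d2*L_VGG + d3*L_WGAN_G (deltas assumed)."""
    d1, d2, d3 = deltas
    return d1 * l_mse + d2 * l_vgg + d3 * l_adv
```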

5. Results

The effectiveness of the proposed algorithm was evaluated through laboratory studies and an in vivo assessment. The main objective of this study was to evaluate and contrast the effectiveness of several super-resolution techniques and k-space sampling approaches. The efficacy of the novel technique in recreating super-resolution images was assessed relative to various cutting-edge alternatives. Additional tests evaluated the influence of the k-space decimation ratio on the resulting magnetic resonance (MR) images, and a comparative analysis examined different k-space sampling patterns in magnetic resonance imaging (MRI). The results are depicted in Figure 8 and Figure 9.
Applying compressed sensing techniques together with Hermitian symmetry and partial-Fourier principles reduces the time required to fill the k-space compared with other k-space sampling systems; the data in Table 1 and Table 2 support this claim.
The experimental studies compressed the raw data signals at sampling rates corresponding to 20%, 40%, 60%, 80%, and 100% of the fully sampled k-spaces. The primary objective of this study is to investigate the integration of super-resolution image reconstruction with sparse sampling of the MRI scanner's k-space. The PROPELLER blades were compressed using 30 and 15 radial paths in the frequency domain, respectively. The aim was to develop a sparsity model that could enhance reconstruction from lower-angle projections; the study specifically centered on reconstructing images from a limited subset of fifty projections condensed within a $\pi/2$ aperture. In addition, nonrigid and deformable transformations were used to distort the meshes of the ground-truth magnetic resonance images. To replicate magnetic resonance (MR) images accurately, the processed images were subjected to a Gaussian blur kernel, added noise, and downsampling. Because no publicly accessible dataset of motion-distorted images is available, intentionally generated data had to be relied upon to evaluate the proposed methodology. The images in this research were obtained from the de-identified database of the Medical University in Poznan and from the Multimodal Brain Tumor Image Segmentation Benchmark dataset [56]. The methods described in [8] were used to introduce motion artifacts into motionless images from the dataset, which was then employed to evaluate the effectiveness of motion-correction approaches, as shown in [3,4]. The procedure incorporates motion into motion-free k-space data for every PROPELLER blade; particular portions of the k-space data are then isolated, and the segmented k-space data are merged to create the anticipated sampling patterns. Simulated multishot k-space MRI data were produced using the methods described in [9].
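For illustration, a generic sketch of how a k-space undersampling mask at the stated rates might be generated, assuming NumPy. This is a random Cartesian phase-encoding mask with a densely sampled center, not the paper's exact PROPELLER blade pattern; the center fraction is an assumption:

```python
import numpy as np

def cartesian_mask(shape, rate, center_frac=0.08, seed=0):
    """Keep `rate` of the phase-encoding lines, always including a
    fully sampled low-frequency band; Hermitian symmetry then lets
    roughly half of the remaining k-space be inferred by conjugation."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    mask = np.zeros((ny, nx), dtype=bool)
    n_center = int(center_frac * ny)
    c0 = ny // 2 - n_center // 2
    mask[c0:c0 + n_center, :] = True           # densely sampled center
    n_rand = max(int(rate * ny) - n_center, 0)
    eligible = np.setdiff1d(np.arange(ny), np.arange(c0, c0 + n_center))
    rows = rng.choice(eligible, size=n_rand, replace=False)
    mask[rows, :] = True
    return mask

mask_40 = cartesian_mask((256, 256), rate=0.4)  # 40% of k-space lines
```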
The second experiment used the fastMRI dataset [57], a publicly accessible raw k-space dataset. The results obtained on this dataset demonstrate the superiority of the suggested approach over the current leading competitors.

6. Discussion

Medical imaging technologies often face obstacles related to low spatial resolution, contrast issues, visual noise scattering, and blurring; these challenges can significantly hinder the accuracy and precision of medical diagnoses, and they arise from the intricate characteristics of bodily organs. This study introduces a novel algorithm for super-resolution image reconstruction that combines the Wasserstein Generative Adversarial Network with deformable motion registration techniques. In addition, the methodology is suitable for processing compressed-sensing magnetic resonance sequences. The model makes effective use of data from consecutive magnetic resonance (MR) images and trains consistently by incorporating the Wasserstein Generative Adversarial Network (WGAN) technique. The reconstruction analysis shows improved Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and image enhancement measure (IEM) values compared with previous approaches, along with notable improvements in capturing fine texture details. The algorithm is notably robust, and its results display a consistent and trustworthy pattern. The author's approach yields improved metrics compared to existing advanced multi-frame algorithms, with varying degrees of enhancement; the improvement in high-frequency detail reconstruction is especially evident after local amplification. The examined model effectively obtains the essential inter-frame information, as evidenced by its evaluation on a publicly accessible dataset. The reconstructed images exhibit improved high-frequency features compared with previously sophisticated multi-frame approaches, corroborated by the observed rise in the PSNR, MAE, and IEM metrics documented in Tables 2–29.
Numerous medical imaging modalities face considerable obstacles in reducing scanning duration, as indicated by the data in Table 1 and Table 30. The theory of compressed sensing (CS) offers a method for reconstructing sparse signals by projecting them into a low-dimensional linear subspace, an approach that is appealing because it provides theoretical assurances. GANs are employed to encode image-specific prior knowledge. The methodology described in this study uses iterative computations in both the image and k-space domains, incorporating Wasserstein Generative Adversarial Networks (WGANs) and k-space correction algorithms. The suggested approach introduces a k-space correction block to enhance the refinement learning of the GAN network; this block improves the network's ability to avoid creating superfluous data, accelerating the reconstruction process. Additionally, a k-space correction module restricts the generator's output to the essential lines, enabling the reconstruction of solely the missing lines, which improves the convergence profile and guarantees expedited rebuilding. As a result, the presented algorithm can reduce the error related to k-space correction. The method produces reconstructed images with superior quality and fewer aliasing artifacts than other techniques, as supported by the data presented in the tables; it mitigates aliasing artifacts more effectively than both contemporary techniques and other noniterative approaches. In addition, the suggested method delivers higher PSNR than alternative methods regardless of the sampling rates employed for both Cartesian and radial sampling masks.
Moreover, the empirical data employed in this study consist of real-valued magnetic resonance (MR) images, which differ from the complex-valued data actually acquired during MRI scanning. The mapping that the Generative Adversarial Networks (GANs) learn between input and output layers is therefore only an approximation of the true acquisition, and data with complex values require preprocessing. In the future, greater focus will be placed on assessing and validating clinical effectiveness with radiologists, particularly for T1-weighted images and other magnetic resonance (MR) imaging techniques. This study introduces and illustrates improvements designed to raise image quality while minimizing acquisition time. Although misregistration distortions are present, the presented algorithm can reduce artifacts that arise from highly sparse data. The approach integrates multiple sub-techniques, including compressed sensing, raw data sparsity, and super-resolution reconstruction, to enhance the effectiveness or quality of k-space filling. The improvement in magnetic resonance (MR) image quality is accompanied by a decrease in image complexity, and enhancing the sampling rate of the high-frequency components yields a more accurate representation of edges. The evaluated method can be implemented on MR scanners without any hardware modifications.
The findings depicted in Figure 8 and Figures 10–12 demonstrate an enhancement in both resolution and quality. Advanced methods for identifying potentially malignant or pre-cancerous anomalies benefited from the improved resolution and legibility, enhancing the ability to detect such anomalies. Moreover, the achievements were validated using PSNR measures [50], which accurately evaluate the quality of medical images. The author's methods were compared against several advanced super-resolution image reconstruction algorithms: (1) reconstruction with a regular sampling scheme, without motion correction or SRR; (2) B-spline interpolation; (3) Yang's method [50]; (4) Lim's method [20]; (5) Zhang's procedure [51]; (6, 7) Zhang's second algorithm [18]; (8) the procedure of Liu et al. [53]; (9) Guerreiro's methodology [54]; (10) the approach of Pham et al. [55]; (11) Shi's method [17]; and (12) the author's own method.
The signed-rank test was used to test the null hypothesis that imaging scores at different acceleration rates have the same central tendency, and all statistical tests were run in R. Mean Peak Signal-to-Noise Ratio (PSNR) values were compared between two groups: unpaired data were examined with Student's t-test for independent groups, and paired data with a paired t-test. The statistical significance of the PSNR differences was determined using Student's t-test. The probability tests showed that the algorithm is robust; the tables above report the results, and the obtained p-values are statistically significant.
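The tests named above (signed-rank, paired and independent t-tests) are standard; a minimal sketch of the same comparisons, here in Python with SciPy rather than R, using purely illustrative PSNR values:

```python
from scipy import stats

# PSNR scores of two methods on the same test images (illustrative only)
psnr_a = [31.2, 32.5, 30.9, 33.1, 31.8]
psnr_b = [29.8, 31.0, 29.5, 31.9, 30.2]

print(stats.wilcoxon(psnr_a, psnr_b))    # Wilcoxon signed-rank (paired)
print(stats.ttest_rel(psnr_a, psnr_b))   # paired t-test
print(stats.ttest_ind(psnr_a, psnr_b))   # independent-samples t-test
```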
Furthermore, a random-noise stability test was conducted on the neural networks to evaluate the robustness of the MR image reconstruction process in the presence of realistic noise. The local robustness of the trained neural network to Gaussian noise was determined empirically by measuring the greatest ratio between variations in the output space and variations in the input space. An initial batch of 1000 brain scans was used as the magnetic resonance (MR) images, and their associated k-space data were produced.
The second set of 1000 MR images had additive white Gaussian noise randomly applied to their associated k-space data, with a signal-to-noise ratio of 20–30 dB. The phase images of the brain scans remained consistent across the pairs of scans. The trained model reconstructed both the clean and the noise-corrupted k-space data, and an output-to-input variation ratio of 2.5 was observed for Gaussian noise in the 20–30 dB range.
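A sketch of how such an output-to-input variation ratio could be measured, assuming NumPy; `model` is a placeholder for the trained reconstruction network, and the SNR-based noise scaling follows the usual power definition:

```python
import numpy as np

def awgn(kspace, snr_db, rng):
    """Add complex white Gaussian noise to k-space at the requested SNR."""
    p_signal = np.mean(np.abs(kspace) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    noise = rng.normal(scale=np.sqrt(p_noise / 2), size=kspace.shape) \
          + 1j * rng.normal(scale=np.sqrt(p_noise / 2), size=kspace.shape)
    return kspace + noise

def stability_ratio(model, kspace, snr_db=25, seed=0):
    """Local robustness measure: ||f(x + n) - f(x)|| / ||n||."""
    rng = np.random.default_rng(seed)
    noisy = awgn(kspace, snr_db, rng)
    out_clean, out_noisy = model(kspace), model(noisy)
    return (np.linalg.norm(out_noisy - out_clean)
            / np.linalg.norm(noisy - kspace))
```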
This study proposes a new magnetic resonance image reconstruction framework to improve resolution. The suggested system uses compressed raw data and an advanced GAN architecture, and it includes denoising and deformable motion estimation modules for preprocessing low-resolution magnetic resonance (MR) images. The network processes low-resolution, noisy magnetic resonance (MR) data and reconstructs high-resolution MR images with super-resolution. The technology helps address the problem of MR artifacts and noise lowering the Peak Signal-to-Noise Ratio (PSNR) and thereby reducing GAN efficacy. The technique achieves higher quality ratios than existing methods, indicating better image reconstruction, which can help doctors diagnose more accurately.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Freeman, W.T.; Jones, T.R.; Pasztor, E.C. Example-based super-resolution. IEEE Eng. Med. Biol. Mag. 2002, 22, 56–65. [Google Scholar] [CrossRef]
  2. Tai, Y.W.; Liu, S.; Brown, M.S.; Lin, S. Super resolution using edge prior and single image detail synthesis. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar]
  3. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image Super-Resolution Via Sparse Representation. IEEE Trans. Image Process. 2010, 19, 2861–2873. [Google Scholar] [CrossRef]
  4. Chang, H.; Yeung, D.Y.; Xiong, Y. Super-resolution through neighbor embedding. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA, 27 June–2 July 2004; Volume 1. [Google Scholar]
  5. Lidke, K.A.; Rieger, B.; Jovin, T.M.; Heintzmann, R. Superresolution by localization of quantum dots using blinking statistics. Opt. Express 2005, 13, 7052–7062. [Google Scholar] [CrossRef]
  6. Wahab, A.W.A.; Bagiwa, M.A.; Idris, M.Y.I.; Khan, S.; Razak, Z.; Ariffin, M.R.K. Passive video forgery detection techniques: A survey. In Proceedings of the 2014 10th International Conference on Information Assurance and Security, Okinawa, Japan, 28–30 November 2014; pp. 29–34. [Google Scholar]
  7. Bagiwa, M.A.; Wahab, A.W.A.; Idris, M.Y.I.; Khan, S.; Choo, K.-K.R. Chroma key background detection for digital video using statistical correlation of blurring artifact. Digit. Investig. 2016, 19, 29–43. [Google Scholar] [CrossRef]
  8. Wang, L.; Li, D.; Zhu, Y.; Tian, L.; Shan, Y. Dual Super-Resolution Learning for Semantic Segmentation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3773–3782. [Google Scholar]
  9. Liu, D.; Wang, Z.; Fan, Y.; Liu, X.; Wang, Z.; Chang, S.; Huang, T. Robust video super-resolution with learned temporal dynamics. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  10. Tao, X.; Gao, H.; Liao, R.; Wang, J.; Jia, J. Detail revealing deep video super-resolution. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4482–4490. [Google Scholar]
  11. Jiang, K.; Wang, Z.; Yi, P.; Wang, G.; Lu, T.; Jiang, J. Edge-Enhanced GAN for Remote Sensing Image Superresolution. IEEE Trans. Geosci. Remote. Sens. 2019, 57, 5799–5812. [Google Scholar] [CrossRef]
  12. Qian, G.; Gu, J.; Ren, J.S.; Dong, C.; Zhao, F.; Lin, J. Trinity of Pixel Enhancement: A Joint Solution for Demosaicking, Denoising and Super-Resolution. arXiv 2019, arXiv:1905.02538. [Google Scholar]
  13. Mousavi, A.; Patel, A.B.; Baraniuk, R.G. A deep learning approach to structured signal recovery. In Proceedings of the 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Champaign, IL, USA, 30 September–2 October 2015; pp. 1336–1343. [Google Scholar]
  14. Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 449–458. [Google Scholar]
  15. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  16. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2014; pp. 184–199. [Google Scholar]
  17. Shi, W.; Caballero, J.; Huszar, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  18. Kim, J.; Lee, J.K.; Lee, K.M. Deeply-Recursive Convolutional Network for Image Super-Resolution. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1637–1645. [Google Scholar]
  19. Tai, Y.; Yang, J.; Liu, X. Image Super-Resolution via Deep Recursive Residual Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  20. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1132–1140. [Google Scholar]
  21. Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; Huang, T. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718. [Google Scholar]
  22. Wang, Y.; Bai, H.; Zhao, L.; Zhao, Y. Cascaded reconstruction network for compressive image sensing. EURASIP J. Image Video Process. 2018, 2018, 77. [Google Scholar] [CrossRef]
  23. Huang, H.; Nie, G.; Zheng, Y.; Fu, Y. Image restoration from patch-based compressed sensing measurement. Neurocomputing 2019, 340, 145–157. [Google Scholar] [CrossRef]
  24. Xie, X.; Wang, C.; Du, J.; Shi, G. Full image recover for block-based compressive sensing. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 1–6. [Google Scholar]
  25. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27. [Google Scholar]
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  27. Yao, H.; Dai, F.; Zhang, S.; Zhang, Y.; Tian, Q.; Xu, C. Dr2-net: Deep residual reconstruction network for image compressive sensing. Neurocomputing 2019, 359, 483–493. [Google Scholar] [CrossRef]
  28. Raj, A.; Li, Y.; Bresler, Y. GAN-Based Projector for Faster Recovery with Convergence Guarantees in Linear Inverse Problems. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5602–5611. [Google Scholar]
  29. Tong, T.; Li, G.; Liu, X.; Gao, Q. Image Super-Resolution Using Dense Skip Connections. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4809–4817. [Google Scholar]
  30. Hu, X.; Mu, H.; Zhang, X.; Wang, Z.; Tan, T.; Sun, J. Meta-SR: A Magnification-Arbitrary Network for Super-Resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 1575–1584. [Google Scholar]
  31. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  32. Hyun, C.M.; Kim, H.P.; Lee, S.M.; Lee, S.; Seo, J.K. Deep learning for undersampled MRI reconstruction. Phys. Med. Biol. 2018, 63, 135007. [Google Scholar] [CrossRef] [PubMed]
  33. Du, J.; Xie, X.; Wang, C.; Shi, G. Perceptual compressive sensing. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Guangzhou, China, 23–26 November 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 268–279. [Google Scholar]
  34. Zhang, Z.; Gao, D.; Xie, X.; Shi, G. Dual-Channel Reconstruction Network for Image Compressive Sensing. Sensors 2019, 19, 2549. [Google Scholar] [CrossRef] [PubMed]
  35. Gu, J.; Lu, H.; Zuo, W.; Dong, C. Blind Super-Resolution with Iterative Kernel Correction. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 1604–1613. [Google Scholar]
  36. Maeda, S. Unpaired Image Super-Resolution Using Pseudo-Supervision. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 288–297. [Google Scholar]
  37. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  38. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017. [Google Scholar]
  39. Malczewski, K. Super-Resolution with compressively sensed MR/PET signals at its input. Inform. Med. Unlocked 2020, 18, 100302. [Google Scholar] [CrossRef]
  40. Dong, C.; Loy, C.C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2016; pp. 391–407. [Google Scholar]
  41. Zhang, Z.; Wang, Z.; Lin, Z.; Qi, H. Image Super-Resolution by Neural Texture Transfer. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 7974–7983. [Google Scholar]
  42. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  43. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the 15th European Conference on Computer Vision, Munich, Germany, 8–14 September 2018. [Google Scholar]
  44. Andersen, A.H. On the Rician distribution of noisy MRI data. Magn. Reson. Med. 1996, 36, 331–333. [Google Scholar] [CrossRef]
  45. Bruna, J.; Sprechmann, P.; Lecun, Y. Super-resolution with deep convolutional sufficient statistics. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2–4 May 2016. [Google Scholar]
  46. Gatys, L.A.; Ecker, A.S.; Bethge, M. Texture synthesis using convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, QC, Canada, 7–12 December 2015; pp. 262–270. [Google Scholar]
  47. Johnson, J.; Alahi, A.; Li, F.F. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711. [Google Scholar]
  48. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  49. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  50. Yang, F.; Ding, M.; Zhang, X. Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor. Sensors 2019, 19, 4675. [Google Scholar] [CrossRef]
  51. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 2472–2481. [Google Scholar]
  52. Mahapatra, D.; Bozorgtabar, B.; Garnavi, R. Image super-resolution using progressive generative adversarial networks for medical image analysis. Comput. Med. Imaging Graph. 2019, 71, 30–39. [Google Scholar] [CrossRef]
  53. Wang, J.; Levman, J.; Pinaya, W.H.L.; Tudosiu, P.; Cardoso, M.J.; Marinescu, R. InverseSR: 3D Brain MRI Super-Resolution Using a Latent Diffusion Model. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), Vancouver, BC, Canada, 8–12 October 2023. [Google Scholar]
  54. Guerreiro, J.; Tomás, P.; Garcia, N.; Aidos, H. Super-resolution of magnetic resonance images using Generative Adversarial Networks. Comput. Med. Imaging Graph. 2023, 108, 102280. [Google Scholar] [CrossRef]
  55. Pham, C.H.; Tor-Díez, C.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput. Med. Imaging Graph. 2019, 77, 101647. [Google Scholar] [CrossRef] [PubMed]
  56. Bagiwa, M.A.; Wahab, A.W.A.; Idris, M.Y.I.; Khan, S. Digital Video Inpainting Detection Using Correlation of Hessian Matrix. Malays. J. Comput. Sci. 2016, 29, 179–195. [Google Scholar] [CrossRef]
  57. Knoll, F.; Zbontar, J.; Sriram, A.; Muckley, M.J.; Bruno, M.; Defazio, A.; Parente, M.; Geras, K.J.; Katsnelson, J.; Chandarana, H.; et al. fastMRI: A publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning. Radiol. Artif. Intell. 2020, 2, e190007. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Single k-space blade-based low-resolution MR images and their corresponding registration layer.
Figure 2. A diagram illustrating the structure and components of the generation net.
Figure 3. A flowchart depicting the architecture and functionality of the discriminator network.
Figure 4. The algorithm for reconstructing super-resolution magnetic resonance images (MRI).
Figure 5. The estimation of motion fields.
Figure 6. Deblurring net.
Figure 7. Denoising net.
Figure 8. Results of the first clinical trial: reconstruction of the same image with various techniques. In the first row: (1) the regular sampling scheme with neither motion correction nor SRR applied (PSNR = 20.16 dB); (2) B-spline interpolation (PSNR = 23.21 dB); (3) Yang’s method [50] (PSNR = 26.36 dB); (4) Lim’s method [20] (PSNR = 29.11 dB); (5) Zhang’s procedure [51] (PSNR = 28.71 dB); (6) Zhang’s second algorithm [43] (PSNR = 29.75 dB). In the second row: (7) Mahapatra’s method [52] (PSNR = 30.03 dB); (8) Wang et al.’s procedure [53] (PSNR = 30.00 dB); (9) Guerreiro et al.’s approach [54] (PSNR = 30.03 dB); (10) Pham et al.’s method [55] (PSNR = 31.41 dB); (11) Shi’s method [17] (PSNR = 31.66 dB); (12) the author’s method (PSNR = 32.99 dB). The proposed sampling strategy and motion correction techniques are employed to achieve the super-resolution objectives. All procedures are run without supplementary data. The compression ratio is 50%.
Figure 9. The effectiveness of the examined algorithm assessed across different k-space compression ratios for the experiment shown in Figure 8.
Figure 10. Results of the second phase of the clinical brain imaging trial: reconstruction of the same image with various techniques. In the first row: (1) the regular sampling scheme with neither motion correction nor SRR applied (PSNR = 21.26 dB); (2) B-spline interpolation (PSNR = 23.29 dB); (3) Yang’s method [50] (PSNR = 26.41 dB); (4) Lim’s method [20] (PSNR = 29.22 dB); (5) Zhang’s procedure [51] (PSNR = 28.71 dB); (6) Zhang’s second algorithm [43] (PSNR = 29.89 dB). In the second row: (7) Mahapatra’s method [52] (PSNR = 29.14 dB); (8) Wang et al.’s procedure [53] (PSNR = 30.11 dB); (9) Guerreiro’s approach [54] (PSNR = 29.77 dB); (10) Pham et al.’s method [55] (PSNR = 23.48 dB); (11) Shi’s method [17] (PSNR = 30.01 dB); (12) the author’s method (PSNR = 32.61 dB). The proposed sampling strategy and motion correction techniques were employed to achieve super-resolution. All procedures were run without supplementary data. The compression ratio is 50%.
Figure 11. Results of the second phase of the clinical brain imaging trial on the fastMRI test dataset [57]: reconstruction of the same image with various techniques. In the first row: (1) the regular sampling scheme with neither motion correction nor SRR applied (PSNR = 20.11 dB); (2) B-spline interpolation (PSNR = 23.31 dB); (3) Yang’s method [50] (PSNR = 27.01 dB); (4) Lim’s method [20] (PSNR = 29.12 dB); (5) Zhang’s procedure [51] (PSNR = 28.66 dB); (6) Zhang’s second algorithm [43] (PSNR = 29.71 dB). In the second row: (7) Mahapatra’s method [52] (PSNR = 29.32 dB); (8) Wang et al.’s procedure [53] (PSNR = 30.62 dB); (9) Guerreiro’s approach [54] (PSNR = 28.82 dB); (10) Pham et al.’s method [55] (PSNR = 26.58 dB); (11) Shi’s method [17] (PSNR = 31.22 dB); (12) the author’s method (PSNR = 33.70 dB; see Table 17). The proposed sampling strategy and motion correction techniques were employed to achieve super-resolution. All procedures were run without supplementary data. The compression ratio is 50%.
Figure 12. Results of the second phase of the clinical brain imaging trial on the fastMRI test dataset [57]: reconstruction of the same image with various techniques. In the first row: (1) the regular sampling scheme with neither motion correction nor SRR applied (PSNR = 22.89 dB); (2) B-spline interpolation (PSNR = 23.22 dB); (3) Yang’s method [50] (PSNR = 26.30 dB); (4) Lim’s method [20] (PSNR = 29.71 dB); (5) Zhang’s procedure [51] (PSNR = 28.54 dB); (6) Zhang’s second algorithm [43] (PSNR = 29.77 dB). In the second row: (7) Mahapatra’s method [52] (PSNR = 29.23 dB); (8) Wang et al.’s procedure [53] (PSNR = 30.21 dB); (9) Guerreiro’s approach [54] (PSNR = 30.77 dB); (10) Pham et al.’s method [55] (PSNR = 23.65 dB); (11) Shi’s method [17] (PSNR = 30.80 dB); (12) the author’s method (PSNR = 34.02 dB). The proposed sampling strategy and motion correction techniques were employed to achieve super-resolution. All procedures were run without supplementary data. The compression ratio is 50%.
Table 1. Scanning parameters used to produce Figure 8.

Raw Data Sampling Scheme | TR (ms) | TE (ms) | FOV (mm) | Voxel Size (mm) | Data Collecting Time (s) | p-Value
PROPELLER 3.0 | 1200 | 180 | 290 | 0.96/0.96/1.00 | 359 | 0.149
SENSE-ASSET | 1200 | 180 | 290 | 0.96/0.96/1.00 | 355 | 0.215
GRAPPA-ARC | 1200 | 180 | 290 | 0.96/0.96/1.00 | 397 | 0.129
The proposed algorithm | 1200 | 180 | 290 | 0.96/0.96/1.00 | 112 | 0.103
Table 2. The effectiveness of the examined algorithm assessed at different k-space compression ratios for Figure 8. MAE denotes Mean Absolute Error.

The Residual Proportion of the Initial k-Space Samples [%] | PSNR [dB] | MAE | IEM
10 | 14.89 | 23.67 | 1.23
20 | 18.76 | 19.82 | 1.79
30 | 20.11 | 19.21 | 1.82
40 | 25.66 | 18.01 | 1.88
50 | 26.41 | 17.66 | 2.42
60 | 28.88 | 17.01 | 3.71
70 | 29.01 | 16.23 | 3.77
80 | 29.39 | 16.01 | 3.82
90 | 30.39 | 15.66 | 3.99
100 | 32.99 | 13.19 | 4.34
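To make the compression-ratio sweep in Table 2 concrete, the sketch below shows how such figures can be generated in principle: a fraction of k-space samples is retained, the rest are zeroed, and the zero-filled reconstruction is scored with PSNR and MAE against the fully sampled reference. This is a minimal illustration, not the paper's pipeline: a uniform random mask stands in for the modified blade-based sparse sampling scheme, and the function names are illustrative.

```python
import numpy as np

def undersample_kspace(image: np.ndarray, keep_fraction: float, seed: int = 0) -> np.ndarray:
    """Zero-filled reconstruction after randomly discarding k-space samples.

    NOTE: a uniform random mask is a stand-in; the paper uses a modified
    blade-based (PROPELLER-like) sparse sampling scheme.
    """
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = rng.random(kspace.shape) < keep_fraction  # True = sample retained
    recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(recon)

def psnr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio in dB, using the reference peak intensity."""
    mse = float(np.mean((reference - estimate) ** 2))
    return 10.0 * np.log10(reference.max() ** 2 / max(mse, 1e-20))

def mae(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Mean Absolute Error."""
    return float(np.mean(np.abs(reference - estimate)))

# Toy example: score zero-filled reconstructions at several retention levels.
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0  # a crude stand-in for anatomy
for keep in (0.1, 0.5, 1.0):
    est = undersample_kspace(phantom, keep)
    print(f"{int(keep * 100):3d}% kept: PSNR = {psnr(phantom, est):6.2f} dB, MAE = {mae(phantom, est):.4f}")
```

The absolute MAE values depend on the intensity scale of the images, so the toy numbers will not match Table 2; only the trend of quality rising with the retained fraction carries over.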
Table 3. Statistical data for the Peak Signal-to-Noise Ratio (PSNR) metrics of the image depicted in Figure 8. M denotes the mean of the calculated PSNRs.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 20.16 | 0.01 | 1.091 | 0.273
SR-no, MC-yes | 100 | 24.21 | 0.01 | −0.739 | 0.438
cubic spline, MC-yes | 100 | 23.21 | 0.01 | 0.181 | 0.271
Yang [50] | 100 | 26.36 | 0.01 | −0.311 | 0.431
Lim [20] | 100 | 29.11 | 0.01 | −0.363 | 0.411
Zhang [51] | 100 | 28.71 | 0.01 | −0.411 | 0.437
Zhang no. 2 [43] | 100 | 29.75 | 0.01 | −0.521 | 0.514
Mahapatra et al. [52] | 100 | 30.03 | 0.04 | −0.339 | 0.612
Wang et al. [53] | 100 | 30.00 | 0.03 | −0.063 | 0.531
Guerreiro et al. [54] | 100 | 29.71 | 0.02 | 0.443 | 0.625
Pham et al. [55] | 100 | 31.41 | 0.05 | −1.101 | 0.341
Shi et al. [17] | 100 | 31.66 | 0.01 | −0.931 | 0.368
The proposed algorithm | 100 | 32.99 | 0.01 | −0.568 | 0.358
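The text does not state which hypothesis test produced the t and p columns. The t(99) notation used in Tables 6–8, together with N = 100, is consistent with a paired test over 100 per-slice measurements (df = N − 1 = 99). A minimal sketch under that assumption, with synthetic per-slice PSNR samples standing in for the real measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical per-slice PSNR samples for two reconstruction methods; in the
# study, N = 100 measurements per method underlie the M, SD, t, and p columns.
rng = np.random.default_rng(1)
psnr_baseline = rng.normal(loc=29.11, scale=0.01, size=100)  # e.g., Lim [20]
psnr_proposed = rng.normal(loc=32.99, scale=0.01, size=100)

# Descriptive statistics corresponding to the M and SD columns.
print(f"M = {psnr_baseline.mean():.2f}, SD = {psnr_baseline.std(ddof=1):.2f}")

# A two-sided paired t-test on the same 100 slices has df = N - 1 = 99.
t_stat, p_value = stats.ttest_rel(psnr_baseline, psnr_proposed)
print(f"t(99) = {t_stat:.3f}, p = {p_value:.3g}")
```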
Table 4. The statistical data regarding the Mean Absolute Error (MAE) metrics for Figure 8 are presented.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 29.89 | 0.01 | 1.254 | 0.121
SR-no, MC-yes | 100 | 26.32 | 0.01 | 0.622 | 0.513
cubic spline, MC-yes | 100 | 23.45 | 0.01 | 0.143 | 0.283
Yang [50] | 100 | 16.41 | 0.01 | −0.324 | 0.432
Lim [20] | 100 | 16.44 | 0.01 | −0.354 | 0.443
Zhang [51] | 100 | 16.31 | 0.01 | −0.461 | 0.336
Zhang no. 2 [43] | 100 | 15.26 | 0.01 | −0.511 | 0.551
Mahapatra et al. [52] | 100 | 15.21 | 0.03 | −0.238 | 0.744
Wang et al. [53] | 100 | 14.31 | 0.04 | −0.013 | 0.232
Guerreiro et al. [54] | 100 | 15.81 | 0.01 | 0.542 | 0.543
Pham et al. [55] | 100 | 19.01 | 0.03 | −1.002 | 0.444
Shi et al. [17] | 100 | 15.03 | 0.02 | −0.886 | 0.128
The proposed algorithm | 100 | 14.03 | 0.01 | −1.002 | 0.553
Table 5. The statistical data regarding the IEM measures for Figure 8 are shown.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 1.51 | 0.01 | 1.132 | 0.261
SR-no, MC-yes | 100 | 1.52 | 0.01 | 0.292 | 0.079
cubic spline, MC-yes | 100 | 1.62 | 0.01 | 1.846 | 0.063
Yang [50] | 100 | 2.21 | 0.01 | 0.149 | 0.501
Lim [20] | 100 | 2.33 | 0.01 | −0.335 | 0.341
Zhang [51] | 100 | 2.12 | 0.01 | −0.383 | 0.329
Zhang no. 2 [43] | 100 | 3.41 | 0.01 | −0.416 | 0.441
Mahapatra et al. [52] | 100 | 3.41 | 0.03 | −0.244 | 0.661
Wang et al. [53] | 100 | 4.22 | 0.04 | −0.013 | 0.112
Guerreiro et al. [54] | 100 | 3.19 | 0.02 | 0.541 | 0.341
Pham et al. [55] | 100 | 3.02 | 0.01 | −1.012 | 0.237
Shi et al. [17] | 100 | 3.22 | 0.04 | −0.846 | 0.236
The proposed algorithm | 100 | 4.36 | 0.01 | 0.47 | 0.544
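The IEM column is not re-derived in the text. One common convention in the enhancement literature divides the image into non-overlapping 3 × 3 blocks, sums the absolute differences between each block's centre pixel and its eight neighbours, and reports the ratio of that sum for the enhanced image to the same sum for a common reference. A sketch assuming this convention (the paper may use a different variant):

```python
import numpy as np

def iem(original: np.ndarray, enhanced: np.ndarray) -> float:
    """Image Enhancement Metric over non-overlapping 3x3 blocks.

    ASSUMPTION: this follows one common IEM convention (ratio of local edge
    contrast, enhanced vs. original); the paper does not restate its formula.
    Values above 1 indicate sharper local structure than the reference.
    """
    def local_contrast(img: np.ndarray) -> float:
        h, w = (img.shape[0] // 3) * 3, (img.shape[1] // 3) * 3
        total = 0.0
        for i in range(1, h, 3):      # centre rows of the 3x3 blocks
            for j in range(1, w, 3):  # centre columns of the 3x3 blocks
                block = img[i - 1:i + 2, j - 1:j + 2]
                total += float(np.sum(np.abs(block - img[i, j])))
        return total

    return local_contrast(enhanced) / max(local_contrast(original), 1e-12)
```

Under this reading, the larger IEM values in Table 5 (4.36 for the proposed algorithm versus 1.51 for the uncorrected baseline) would indicate progressively sharper local structure relative to the common reference image.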
Table 6. Statistical data for the Peak Signal-to-Noise Ratio (PSNR) metrics of Figure 8. M denotes the mean of the observed PSNRs.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 29.81 | 0.01 | −1.882 | 0.061
The proposed algorithm | 100 | 32.99 | 0.00 | 1.149 | 0.253
Table 7. The statistical data regarding the Mean Absolute Error (MAE) metrics for Figure 8 are presented.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 18.12 | 0.01 | −1.881 | 0.053
The proposed algorithm | 100 | 13.19 | 0.00 | 1.149 | 0.223
Table 8. The statistical data regarding the IEM measures for Figure 8 are shown.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 2.04 | 0.01 | 0.139 | 0.869
The proposed algorithm | 100 | 4.34 | 0.00 | −1.268 | 0.262
Table 9. The efficacy of the author’s approach is evaluated across various compression ratios of the k-space in relation to Figure 10.

The Remaining Proportion of the Original k-Space Samples [%] | PSNR [dB] | MAE | IEM
10 | 14.89 | 23.61 | 1.21
20 | 18.56 | 19.84 | 1.73
30 | 20.16 | 19.28 | 1.81
40 | 25.58 | 18.08 | 1.99
50 | 26.43 | 17.61 | 2.77
60 | 28.16 | 17.04 | 3.68
70 | 29.17 | 16.21 | 3.69
80 | 29.26 | 16.04 | 3.71
90 | 30.21 | 15.62 | 3.82
100 | 32.61 | 13.11 | 4.30
Table 10. The statistical data on the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 10.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 21.26 | 0.01 | 1.091 | 0.271
SR-no, MC-yes | 100 | 24.51 | 0.01 | −0.779 | 0.432
cubic spline, MC-yes | 100 | 23.29 | 0.01 | 0.134 | 0.214
Yang [50] | 100 | 26.41 | 0.01 | −0.311 | 0.499
Lim [20] | 100 | 29.22 | 0.01 | −0.312 | 0.412
Zhang [51] | 100 | 28.71 | 0.01 | −0.411 | 0.417
Zhang no. 2 [43] | 100 | 29.89 | 0.02 | −0.513 | 0.544
Mahapatra et al. [52] | 100 | 29.14 | 0.01 | −0.234 | 0.883
Wang et al. [53] | 100 | 30.11 | 0.04 | −0.02 | 0.112
Guerreiro et al. [54] | 100 | 29.77 | 0.01 | 0.521 | 0.239
Pham et al. [55] | 100 | 23.48 | 0.04 | −1.012 | 0.131
Shi et al. [17] | 100 | 30.01 | 0.03 | 0.771 | 0.124
The proposed algorithm | 100 | 32.61 | 0.01 | −0.548 | 0.158
Table 11. The statistical data on the Mean Absolute Error (MAE) metrics for Figure 10.

SRR Procedure | N | M (MAE) | SD | t | p
SR-no, MC-no | 100 | 29.12 | 0.01 | 1.614 | 0.101
SR-no, MC-yes | 100 | 25.21 | 0.01 | 0.612 | 0.303
cubic spline, MC-yes | 100 | 22.31 | 0.01 | 0.133 | 0.221
Yang [50] | 100 | 16.23 | 0.01 | −0.354 | 0.431
Lim [20] | 100 | 16.44 | 0.01 | −0.384 | 0.411
Zhang [51] | 100 | 16.22 | 0.01 | −0.416 | 0.431
Zhang no. 2 [43] | 100 | 15.55 | 0.01 | −0.565 | 0.554
Mahapatra et al. [52] | 100 | 14.31 | 0.04 | −0.136 | 0.644
Wang et al. [53] | 100 | 15.21 | 0.02 | 0.112 | 0.199
Guerreiro et al. [54] | 100 | 16.12 | 0.03 | 0.823 | 0.422
Pham et al. [55] | 100 | 19.92 | 0.02 | −0.901 | 0.390
Shi et al. [17] | 100 | 18.41 | 0.01 | −0.774 | 0.111
The proposed algorithm | 100 | 14.01 | 0.01 | −1.002 | 0.101
Table 12. The statistical data regarding the IEM measures for Figure 10 are shown.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 1.52 | 0.01 | 1.112 | 0.202
SR-no, MC-yes | 100 | 1.61 | 0.01 | 0.292 | 0.171
cubic spline, MC-yes | 100 | 1.63 | 0.01 | 1.829 | 0.093
Yang [50] | 100 | 2.23 | 0.01 | 0.179 | 0.244
Lim [20] | 100 | 2.34 | 0.01 | −0.315 | 0.414
Zhang [51] | 100 | 2.23 | 0.01 | −0.371 | 0.439
Zhang no. 2 [43] | 100 | 2.39 | 0.01 | −0.410 | 0.401
Mahapatra et al. [52] | 100 | 4.02 | 0.04 | −0.140 | 0.553
Wang et al. [53] | 100 | 4.02 | 0.01 | −0.013 | 0.816
Guerreiro et al. [54] | 100 | 3.66 | 0.03 | 0.541 | 0.411
Pham et al. [55] | 100 | 2.66 | 0.02 | −1.012 | 0.817
Shi et al. [17] | 100 | 2.67 | 0.02 | −0.846 | 0.128
The proposed algorithm | 100 | 4.36 | 0.01 | −0.41 | 0.271
Table 13. The statistical data regarding the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 10.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 29.74 | 0.01 | −1.861 | 0.066
The proposed algorithm | 100 | 32.61 | 0.00 | 1.129 | 0.251
Table 14. The statistical data regarding the Mean Absolute Error (MAE) metrics for Figure 10.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 18.22 | 0.01 | −1.881 | 0.063
The proposed algorithm | 100 | 13.11 | 0.00 | 1.139 | 0.233
Table 15. The statistical data regarding the IEM measures for Figure 10 are shown.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 2.04 | 0.01 | 0.139 | 0.889
The proposed algorithm | 100 | 4.30 | 0.00 | −1.218 | 0.212
Table 16. The efficacy of the author’s approach is evaluated across various compression ratios of the k-space in relation to Figure 11.

The Remaining Proportion of the Original k-Space Samples [%] | PSNR [dB] | MAE | IEM
10 | 13.72 | 24.55 | 1.13
20 | 16.91 | 19.17 | 1.69
30 | 19.44 | 19.42 | 1.80
40 | 24.52 | 19.49 | 1.98
50 | 25.91 | 17.92 | 2.59
60 | 27.21 | 17.11 | 3.48
70 | 29.22 | 16.33 | 3.54
80 | 28.29 | 16.25 | 3.61
90 | 31.71 | 15.54 | 3.77
100 | 33.70 | 13.36 | 4.51
Table 17. The statistical data on the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 11.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 20.11 | 0.01 | 1.293 | 0.266
SR-no, MC-yes | 100 | 24.33 | 0.01 | −0.788 | 0.431
cubic spline, MC-yes | 100 | 23.31 | 0.01 | 0.101 | 0.102
Yang [50] | 100 | 27.01 | 0.01 | −0.299 | 0.321
Lim [20] | 100 | 29.12 | 0.01 | −0.345 | 0.498
Zhang [51] | 100 | 28.66 | 0.01 | −0.398 | 0.422
Zhang no. 2 [43] | 100 | 29.71 | 0.01 | −0.412 | 0.328
Mahapatra et al. [52] | 100 | 29.32 | 0.01 | −0.131 | 0.667
Wang et al. [53] | 100 | 30.62 | 0.02 | −0.01 | 0.229
Guerreiro et al. [54] | 100 | 28.82 | 0.01 | 0.341 | 0.232
Pham et al. [55] | 100 | 26.58 | 0.04 | −0.012 | 0.111
Shi et al. [17] | 100 | 31.22 | 0.01 | 0.661 | 0.234
The proposed algorithm | 100 | 33.70 | 0.01 | −0.310 | 0.123
Table 18. The statistical data on the Mean Absolute Error (MAE) metrics for Figure 11.

SRR Procedure | N | M (MAE) | SD | t | p
SR-no, MC-no | 100 | 23.01 | 0.01 | 1.012 | 0.223
SR-no, MC-yes | 100 | 25.44 | 0.01 | 0.554 | 0.297
cubic spline, MC-yes | 100 | 23.44 | 0.01 | 0.091 | 0.321
Yang [50] | 100 | 14.21 | 0.01 | −0.279 | 0.401
Lim [20] | 100 | 15.01 | 0.01 | −0.299 | 0.302
Zhang [51] | 100 | 16.10 | 0.01 | −0.358 | 0.236
Zhang no. 2 [43] | 100 | 15.40 | 0.01 | −0.125 | 0.223
Mahapatra et al. [52] | 100 | 15.22 | 0.02 | −0.129 | 0.501
Wang et al. [53] | 100 | 15.66 | 0.01 | 0.182 | 0.171
Guerreiro et al. [54] | 100 | 16.33 | 0.02 | 0.775 | 0.124
Pham et al. [55] | 100 | 18.83 | 0.02 | 0.702 | 0.260
Shi et al. [17] | 100 | 17.43 | 0.01 | 0.664 | 0.101
The proposed algorithm | 100 | 13.36 | 0.01 | −1.012 | 0.123
Table 19. The statistical data regarding the IEM measures for Figure 11 are shown.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 1.32 | 0.01 | 1.101 | 0.112
SR-no, MC-yes | 100 | 1.44 | 0.01 | 0.213 | 0.223
cubic spline, MC-yes | 100 | 1.59 | 0.01 | 1.866 | 0.092
Yang [50] | 100 | 2.42 | 0.01 | 0.70 | 0.123
Lim [20] | 100 | 2.55 | 0.01 | −0.232 | 0.311
Zhang [51] | 100 | 2.72 | 0.01 | −0.319 | 0.137
Zhang no. 2 [43] | 100 | 2.41 | 0.01 | −0.412 | 0.202
Mahapatra et al. [52] | 100 | 4.22 | 0.04 | −0.146 | 0.451
Wang et al. [53] | 100 | 4.11 | 0.01 | −0.012 | 0.711
Guerreiro et al. [54] | 100 | 3.78 | 0.03 | 0.342 | 0.326
Pham et al. [55] | 100 | 2.88 | 0.02 | 1.011 | 0.661
Shi et al. [17] | 100 | 2.99 | 0.02 | 0.727 | 0.127
The proposed algorithm | 100 | 4.51 | 0.01 | −0.41 | 0.271
Table 20. The statistical data regarding the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 11.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 29.50 | 0.01 | −1.770 | 0.032
The proposed algorithm | 100 | 33.70 | 0.00 | 1.099 | 0.212
Table 21. The statistical data regarding the Mean Absolute Error (MAE) metrics for Figure 11.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 17.10 | 0.01 | −1.773 | 0.021
The proposed algorithm | 100 | 13.36 | 0.00 | 1.139 | 0.233
Table 22. The statistical data regarding the IEM measures for Figure 11 are shown.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 2.34 | 0.01 | 0.222 | 0.730
The proposed algorithm | 100 | 4.51 | 0.00 | −1.199 | 0.201
Table 23. The efficacy of the author’s approach is evaluated across various compression ratios of the k-space in relation to Figure 12.

The Remaining Proportion of the Original k-Space Samples [%] | PSNR [dB] | MAE | IEM
10 | 14.22 | 23.66 | 1.20
20 | 18.01 | 19.70 | 1.61
30 | 20.66 | 19.21 | 1.66
40 | 25.39 | 18.00 | 1.78
50 | 26.22 | 17.12 | 2.85
60 | 28.21 | 17.41 | 3.55
70 | 29.04 | 16.62 | 3.60
80 | 29.51 | 16.72 | 3.61
90 | 32.00 | 15.55 | 3.77
100 | 34.02 | 13.01 | 4.27
Table 24. The statistical data on the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 12.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 22.89 | 0.01 | 1.065 | 0.172
SR-no, MC-yes | 100 | 24.01 | 0.01 | −0.665 | 0.555
cubic spline, MC-yes | 100 | 23.22 | 0.01 | 0.231 | 0.211
Yang [50] | 100 | 26.30 | 0.01 | −0.311 | 0.499
Lim [20] | 100 | 29.71 | 0.01 | −0.312 | 0.412
Zhang [51] | 100 | 28.54 | 0.01 | −0.421 | 0.402
Zhang no. 2 [43] | 100 | 29.77 | 0.02 | −0.513 | 0.544
Mahapatra et al. [52] | 100 | 29.23 | 0.01 | −0.234 | 0.883
Wang et al. [53] | 100 | 30.21 | 0.04 | −0.02 | 0.112
Guerreiro et al. [54] | 100 | 30.77 | 0.01 | 0.121 | 0.239
Pham et al. [55] | 100 | 23.65 | 0.04 | −1.012 | 0.131
Shi et al. [17] | 100 | 30.80 | 0.03 | 0.572 | 0.114
The proposed algorithm | 100 | 34.02 | 0.01 | −0.548 | 0.158
Table 25. The statistical data on the Mean Absolute Error (MAE) metrics for Figure 12.

SRR Procedure | N | M (MAE) | SD | t | p
SR-no, MC-no | 100 | 29.10 | 0.01 | 1.612 | 0.101
SR-no, MC-yes | 100 | 25.01 | 0.01 | 0.411 | 0.264
cubic spline, MC-yes | 100 | 22.29 | 0.01 | 0.122 | 0.224
Yang [50] | 100 | 16.00 | 0.01 | −0.354 | 0.433
Lim [20] | 100 | 16.02 | 0.01 | −0.384 | 0.419
Zhang [51] | 100 | 16.29 | 0.01 | −0.416 | 0.431
Zhang no. 2 [43] | 100 | 15.57 | 0.01 | −0.265 | 0.534
Mahapatra et al. [52] | 100 | 14.44 | 0.04 | 0.135 | 0.604
Wang et al. [53] | 100 | 15.26 | 0.02 | 0.111 | 0.109
Guerreiro et al. [54] | 100 | 16.41 | 0.03 | 0.721 | 0.421
Pham et al. [55] | 100 | 19.90 | 0.02 | 0.722 | 0.395
Shi et al. [17] | 100 | 18.40 | 0.01 | −0.774 | 0.131
The proposed algorithm | 100 | 13.01 | 0.01 | −1.012 | 0.101
Table 26. The statistical data regarding the IEM measures for Figure 12 are shown.

SRR Procedure | N | M | SD | t | p
SR-no, MC-no | 100 | 1.40 | 0.01 | 1.112 | 0.202
SR-no, MC-yes | 100 | 1.55 | 0.01 | 0.292 | 0.172
cubic spline, MC-yes | 100 | 1.61 | 0.01 | 1.829 | 0.071
Yang [50] | 100 | 2.23 | 0.01 | 0.179 | 0.221
Lim [20] | 100 | 2.37 | 0.01 | −0.315 | 0.435
Zhang [51] | 100 | 2.66 | 0.01 | −0.371 | 0.441
Zhang no. 2 [43] | 100 | 2.40 | 0.01 | −0.410 | 0.424
Mahapatra et al. [52] | 100 | 4.11 | 0.04 | −0.140 | 0.545
Wang et al. [53] | 100 | 4.01 | 0.01 | 0.013 | 0.876
Guerreiro et al. [54] | 100 | 3.76 | 0.03 | −0.541 | 0.211
Pham et al. [55] | 100 | 2.87 | 0.02 | −1.012 | 0.717
Shi et al. [17] | 100 | 2.99 | 0.02 | 0.846 | 0.128
The proposed algorithm | 100 | 4.27 | 0.01 | −0.41 | 0.271
Table 27. The statistical data regarding the Peak Signal-to-Noise Ratio (PSNR) metrics for Figure 12.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 29.54 | 0.01 | −1.725 | 0.106
The proposed algorithm | 100 | 34.02 | 0.00 | 1.124 | 0.110
Table 28. The statistical data regarding the Mean Absolute Error (MAE) metrics for Figure 12.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 17.02 | 0.01 | −1.701 | 0.058
The proposed algorithm | 100 | 13.01 | 0.00 | 1.130 | 0.233
Table 29. The statistical data regarding the IEM measures for Figure 12 are shown.

The Trajectory Used for Sampling in k-Space | N | M | SD | t(99) | p
The unaltered multi-blade k-space sampling technique | 100 | 2.31 | 0.01 | 0.221 | 0.775
The proposed algorithm | 100 | 4.27 | 0.00 | −1.170 | 0.177
Table 30. The scanning parameters for Figure 10 are compared.

Scanning Pattern | TR (ms) | TE (ms) | FOV (mm) | Voxel (mm) | Total Scan Duration (s) | p
PROPELLER | 1200 | 180 | 290 | 0.96/0.96/1.00 | 366 | 0.142
SENSE/ASSET | 1200 | 180 | 290 | 0.96/0.96/1.00 | 349 | 0.214
GRAPPA/ARC | 1200 | 180 | 290 | 0.96/0.96/1.00 | 401 | 0.128
The proposed algorithm | 1200 | 180 | 290 | 0.96/0.96/1.00 | 112 | 0.103