Communication

Development and Evaluation of Deep Learning-Based Reconstruction Using Preclinical 7T Magnetic Resonance Imaging

1. Department of Medical Physics and Engineering, Division of Health Sciences, Osaka University Graduate School of Medicine, Suita 560-0871, Osaka, Japan
2. Department of Advanced Medical Technologies, National Cardiovascular and Cerebral Research Center, Suita 564-8565, Osaka, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(11), 6567; https://doi.org/10.3390/app13116567
Submission received: 20 March 2023 / Revised: 22 May 2023 / Accepted: 27 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Biomedical Imaging: From Methods to Applications)

Abstract

This study investigated a deep learning-based method for improving the quality of images acquired with a low number of excitations (NEX), using T2-weighted magnetic resonance imaging (MRI) of the heads of normal Wistar rats, with the goal of achieving higher image quality in a shorter acquisition time. A 7T MRI scanner was used to acquire T2-weighted images of the whole brain at NEX = 2, 4, 8, and 12. As a preprocessing step, non-rigid registration was performed between the acquired low NEX images (NEX = 2, 4, 8) and the NEX12 images. A residual dense network (RDN) was used for training: a low NEX image served as the input, and the corresponding NEX12 image served as the ground truth. For quantitative evaluation, we measured the signal-to-noise ratio (SNR), peak SNR (PSNR), and structural similarity index measure (SSIM) of the original images and of the images reconstructed by the RDN. Taking NEX2 as an example, the SNR of the cortex was 10.4 for the input image and 32.1 for the RDN-reconstructed image (the SNR of the NEX12 teacher image was 19.6). In addition, the PSNR for NEX2 increased significantly from 35.4 ± 2.0 (input vs. teacher) to 37.6 ± 2.9 (reconstructed vs. teacher) (p = 0.05), and the SSIM for NEX2 increased from 0.78 ± 0.05 to 0.91 ± 0.05 (p = 0.0003). Furthermore, imaging at NEX2 reduced the acquisition time by 83%. Therefore, in preclinical 7T MRI, supervised learning between NEX levels using an RDN can potentially improve the image quality of low NEX images and shorten the acquisition time.

1. Introduction

Magnetic resonance imaging (MRI) is used in a wide range of settings and has become an indispensable medical imaging modality. However, long scanning times are required to obtain high-definition images, and reducing them remains a key challenge. Deep learning has attracted considerable attention as a solution to this problem. Deep learning is a machine-learning methodology based on multilayer artificial neural networks that has achieved high accuracy in image recognition. It is also being explored for medical applications, where there are high hopes of improving MRI image quality and reducing imaging time.
Deep learning is widely used for noise reduction in medical images [1,2], and noise reduction was among the first applications of deep learning to MRI. Various network structures have since been developed, such as the U-shaped fully convolutional network (U-Net), the denoising convolutional neural network (DnCNN), and the shrinkage convolutional neural network (SCNN) [3,4,5], and the accuracy of noise reduction continues to improve. Deep learning has also been applied to segmentation for fine structure rendering [6,7] and to image quality improvement tasks such as super-resolution [8]. All of these studies have reported better results than conventional methods. However, most were conducted using clinical MRI (1.5T, 3T), and there have been few studies on image quality improvement using deep learning in preclinical high-field MRI.
In recent years, the residual dense network (RDN) [9] has attracted considerable attention as a deep learning architecture. It introduces the residual dense block (RDB), which extracts rich features via densely connected convolutional layers and provides a direct connection from the state of the preceding RDB to all layers of the current RDB, yielding a contiguous memory mechanism. It also incorporates local and global feature fusion, allowing global hierarchical features to be learned effectively. RDN has achieved high restoration accuracy on natural images, and a previous study that adapted it to CT images of the coronary arteries reported good results for the SNR, CNR, and visual evaluation; we therefore expected similarly good results for preclinical 7T MRI and chose RDN for this study. To our knowledge, there are no published examples of RDN being used for image quality improvement in preclinical 7T MRI.
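As a concrete illustration of the RDB mechanism described above, the following PyTorch sketch implements dense intra-block connections, 1 × 1 local feature fusion, and the local residual connection. The layer count, channel width, kernel size, and patch size follow the parameters reported in Section 2.3, but the class itself is a simplified stand-in rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block (sketch): every conv layer receives the
    concatenation of the block input and all preceding layer outputs,
    a 1x1 conv performs local feature fusion, and the block input is
    added back as a local residual connection."""
    def __init__(self, channels=32, growth=32, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                nn.PReLU(),
            )
            for i in range(num_layers)
        )
        # 1x1 conv fuses all concatenated features back to `channels`
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))

# A full RDN stacks 20 such blocks and adds global feature fusion on top.
patch = torch.randn(1, 32, 48, 48)      # one 48x48 training patch
assert RDB()(patch).shape == patch.shape
```

Because every layer's output stays in the `features` list and is fed to all later layers, earlier states remain directly accessible throughout the block, which is what the paper calls the contiguous memory mechanism.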
This study aimed to investigate whether it is possible to improve image quality and reduce imaging time using deep learning in preclinical 7T MRI.

2. Materials and Methods

2.1. Animal Preparation

All experimental protocols were approved by the Research Ethics Committee of our university. All experimental procedures involving animals and their care were performed in accordance with the Osaka University Guidelines for Animal Experimentation and the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Animal experiments were performed on 8- to 10-week-old Wistar rats purchased from Japan SLC (Hamamatsu, Japan). All rats were housed in a controlled vivarium environment (24 °C; 12:12 h light/dark cycle) and fed a standard pellet diet and water ad libitum. T2-weighted images of 43 normal Wistar rats were obtained using preclinical 7T MRI; sequential images at 2, 4, 8, and 12 excitations (NEX) were acquired, for a total imaging time of approximately 60 min.

2.2. Magnetic Resonance Imaging

Magnetic resonance images of the animal brains were acquired using a horizontal 7T scanner (PharmaScan 70/16 US; Bruker Biospin, Ettlingen, Germany) equipped with a coil with an inner diameter of 40 mm. For imaging, the rats were positioned in a stereotaxic frame with their mouths fixed to prevent movement during acquisition. Body temperature was maintained at 36.5 °C with regulated water flow and continuously monitored using a physiological monitoring system (SA Instruments Inc., Stony Brook, NY, USA). All brain MR experiments were performed with the rats under general anesthesia induced with isoflurane (3.0% for induction and 2.0% for maintenance).
T2-weighted images were acquired with a fast spin echo (FSE) sequence with the following parameters: TR = 3200 ms; TE = 32.7 ms; RARE factor = 8; slice thickness = 0.5 mm; field of view = 32 × 32 mm²; matrix size = 300 × 300; slice number = 1; slice orientation = trans-axial; resolution = 167 μm × 167 μm; and scan time = 6 min. Serial whole-brain imaging was performed at NEX = 2, 4, 8, and 12.
The signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were measured for quantitative image quality evaluation. The SNR was calculated using ImageJ software, with four ROIs enclosing the cortex, white matter, hippocampus, and thalamus of the brain. The PSNR and SSIM were calculated in Python, comparing the input (low NEX) image with the teacher image and the deep learning-reconstructed image with the teacher image.
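As a sketch of how such metrics can be computed, the snippet below implements one common SNR convention (mean signal in a tissue ROI divided by the standard deviation of a reference ROI) and the PSNR. The paper does not state its exact formulas, so these definitions are illustrative assumptions.

```python
import numpy as np

def roi_snr(image, tissue_roi, noise_roi):
    """SNR as mean signal in a tissue ROI divided by the standard
    deviation of a noise ROI -- one common convention; the exact
    ImageJ-based formula used in the paper is not specified."""
    return image[tissue_roi].mean() / image[noise_roi].std()

def psnr(teacher, test):
    """Peak SNR (dB) of `test` against the teacher image, taking the
    teacher's maximum intensity as the peak value (an assumption)."""
    mse = np.mean((teacher.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(teacher.max() ** 2 / mse)

image = np.array([[10.0, 10.0], [1.0, 3.0]])
tissue = np.array([[True, True], [False, False]])
print(roi_snr(image, tissue, ~tissue))   # 10.0: mean 10 over std 1
```

The SSIM is more involved (a windowed comparison of local luminance, contrast, and structure); in practice a library implementation such as scikit-image's `structural_similarity` is typically used rather than hand-rolled code.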

2.3. Deep Learning Reconstruction

An RDN [9] was used as the deep learning network. Low NEX images (NEX2, 4, and 8) were used as input images, and high NEX images (NEX12) were used as teacher images, i.e., the ground truth, to improve the image quality. The NEX2 images were the noisiest, and the noise became less noticeable by NEX8. Because the noise was not added artificially, the exact amount of noise contained in each image was unknown. We chose NEX12 as the teacher image because we judged its image quality to be sufficient for practical use in our laboratory. The rat dataset consisted of 33 cases for training, 5 for validation, and 5 for testing; the 5 test cases were randomly selected from the 43 acquired rats. The parameters were as follows: RDB blocks, 20; convolution layers per block, 6; channels, 32; kernel size, 3 × 3; patch size, 48 × 48; epochs, 200; batch size, 16; learning rate, 0.0001; loss, L1 loss; activation, PReLU; and optimizer, Adam. The experimental environment was as follows: OS, Ubuntu 20.04 LTS; CPU, Intel Core i9-11900; GPU, NVIDIA GeForce RTX 3090; RAM, 128 GB; PyTorch 1.11.0; and Python 3.9.12. The network structure of the RDN is shown in Figure 1.
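A minimal training step under the hyperparameters listed above (L1 loss, Adam with a learning rate of 0.0001, 48 × 48 patches, batch size 16) might look like the sketch below; the two-layer convolutional model and the random tensors are placeholders for the actual 20-block RDN and the registered image patches.

```python
import torch
import torch.nn as nn

# Placeholder model: the study's actual network is a 20-block RDN.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.PReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
criterion = nn.L1Loss()                               # loss: L1
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

low_nex = torch.randn(16, 1, 48, 48)   # batch of low NEX input patches
teacher = torch.randn(16, 1, 48, 48)   # matching NEX12 teacher patches

for _ in range(2):                     # the paper trains for 200 epochs
    optimizer.zero_grad()
    loss = criterion(model(low_nex), teacher)
    loss.backward()
    optimizer.step()
```

At inference time, a full low NEX slice is passed through the trained network to produce the reconstructed image that is then compared against the NEX12 teacher.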

2.4. Statistical Analysis

The data are presented as the mean ± standard deviation. Paired t-tests were performed to evaluate the reconstruction results. For the SNR, the values of each low NEX image were compared to those of the corresponding reconstructed image; for the PSNR and SSIM, the input-vs.-teacher values were compared to the reconstructed-vs.-teacher values. All analyses were performed using Prism 9 software (GraphPad Software, San Diego, CA, USA). p < 0.05 was considered statistically significant.
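The paired comparison can be sketched as follows. This computes only the t statistic and its degrees of freedom (the p-value lookup is left to Prism or a statistics library), and the sample values are hypothetical, not the study's data.

```python
import math

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for matched samples,
    e.g. per-animal SNR before vs. after RDN reconstruction."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    return mean / (sd / math.sqrt(n)), n - 1

# Hypothetical SNR values for five test animals (illustration only)
low = [10.4, 9.8, 11.0, 10.1, 10.6]
rec = [32.1, 28.5, 35.0, 30.2, 33.4]
t, df = paired_t(low, rec)   # large negative t: reconstruction >> input
```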

3. Results

The input images, teacher image, and images reconstructed using the RDN are shown in Figure 2. The input images were NEX2, 4, and 8; the teacher image was NEX12; and the reconstructed images were NEX2 to 12, NEX4 to 12, and NEX8 to 12.
The SNR of the cortex was 10.4 ± 0.82 for NEX2 and 32.1 ± 6.1 for NEX2 to 12, showing a significant increase (p < 0.0001) with reconstruction (Figure 3A). The SNRs were also 15.0 ± 1.6 for NEX4, 45.0 ± 14.9 for NEX4 to 12 (p < 0.0001), 19.7 ± 4.7 for NEX8, and 46.7 ± 20.0 for NEX8 to 12 (p = 0.0005) (Figure 3A). The SNR for NEX12, the teacher image, was 19.6 ± 7.9 (Figure 3A).
Similarly, SNRs for white matter were 9.1 ± 2.8 for NEX2, 24.9 ± 11.8 for NEX2 to 12 (p = 0.003), 10.7 ± 3.2 for NEX4, 23.9 ± 13.9 for NEX4 to 12 (p = 0.008), 12.3 ± 3.8 for NEX8, and 16.8 ± 7.6 for NEX8 to 12 (p = 0.009). Therefore, the SNR was significantly increased by reconstruction at each NEX (Figure 3B). The SNR for NEX12 was 15.5 ± 5.6 (Figure 3B).
The hippocampal SNR was 11.4 ± 2.5 for NEX2, 41.5 ± 19.2 for NEX2 to 12 (p = 0.0005), 13.1 ± 2.5 for NEX4, 59.1 ± 51.2 for NEX4 to 12 (p = 0.02), 19.8 ± 5.9 for NEX8, and 53.7 ± 25.9 for NEX8 to 12 (p = 0.0007), with a significant increase in the SNR at each NEX (Figure 3C). The SNR of NEX12 was 22.7 ± 7.1 (Figure 3C).
The SNR for the thalamus was 9.0 ± 0.64 for NEX2, 18.0 ± 2.4 for NEX2 to 12 (p < 0.0001), 11.8 ± 0.96 for NEX4, 22.5 ± 7.7 for NEX4 to 12 (p = 0.001), 15.3 ± 1.3 for NEX8, and 24.6 ± 4.9 for NEX8 to 12 (p < 0.0001), with a significant increase in the SNR at each NEX (Figure 3D). The SNR for NEX12 was 17.0 ± 1.5 (Figure 3D). Reconstruction with the RDNs resulted in a significant improvement in the SNR at all measured sites.
The PSNR for NEX2 increased significantly from 35.4 ± 2.0 (input vs. teacher) to 37.6 ± 2.9 (reconstructed vs. teacher) (p = 0.05) (Figure 4A). In contrast, for NEX4 the values were 37.9 ± 1.9 and 38.7 ± 2.9 (p = 0.30), and for NEX8 they were 41.2 ± 2.6 and 41.7 ± 1.2 (p = 0.48); no significant differences were observed (Figure 4A).
The SSIM for NEX2 increased from 0.78 ± 0.05 (input vs. teacher) to 0.91 ± 0.05 (reconstructed vs. teacher) (p = 0.0003). For NEX4, it increased from 0.88 ± 0.04 to 0.93 ± 0.03 (p = 0.005); thus, NEX2 and NEX4 showed significant increases in the SSIM (Figure 4B). In contrast, NEX8 showed 0.95 ± 0.03 versus 0.96 ± 0.01 (p = 0.14), which was not significantly different (Figure 4B).
The MRI acquisition times in this study were 3 min 56 s for NEX2, 7 min 53 s for NEX4, 15 min 47 s for NEX8, and 23 min 40 s for NEX12. The reconstruction time with the RDN was 2.11 s per case; therefore, the acquisition time was successfully reduced by 83% for NEX2, 67% for NEX4, and 33% for NEX8.
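The quoted savings follow directly from the scan times; a quick check (times converted to seconds, with NEX12 as the baseline):

```python
def saving_percent(scan_s, baseline_s):
    """Percent reduction in scan time relative to the NEX12 baseline."""
    return round(100 * (1 - scan_s / baseline_s))

baseline = 23 * 60 + 40                          # NEX12: 23 min 40 s
times = {"NEX2": 3 * 60 + 56,                    # 3 min 56 s
         "NEX4": 7 * 60 + 53,                    # 7 min 53 s
         "NEX8": 15 * 60 + 47}                   # 15 min 47 s
savings = {k: saving_percent(v, baseline) for k, v in times.items()}
# -> {'NEX2': 83, 'NEX4': 67, 'NEX8': 33}
```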
The SNR results above are summarized in Table 1, and the PSNR and SSIM results in Table 2. The values in the tables are the mean ± SD over the five test animals for each NEX.
The presence of artifacts in the input image was reflected in the reconstructed image (Figure 5).

4. Discussion

In this study, the RDN improved the SNR in all measured areas of preclinical 7T MRI images. The PSNR and SSIM also improved after reconstruction compared to the input images, and the RDN allowed the acquisition time to be shortened.
Deep learning-based MRI image quality improvement studies often use networks such as DnCNN; however, more accurate algorithms are under active investigation. We therefore applied RDN, which is highly accurate in the field of image restoration, to 7T MRI. Many previous studies have reported that deep learning improves the quality of MRI images, and some have evaluated this with the PSNR and SSIM [10]. The present study shows the same trend as those studies. We therefore believe that RDN is effective for improving image quality and reducing imaging time in preclinical 7T MRI.
In a study by Koonjoo et al. [11], an end-to-end deep neural network approach (AUTOMAP) was used to improve image quality in low-field MRI. For human brain images obtained at 6.5 mT, the SNR of the brain was higher with AUTOMAP than with the inverse fast Fourier transform. In the present study, we investigated image quality improvement using an RDN at 7T, and the SNRs of the cortex, white matter, hippocampus, and thalamus were all higher than those of the input images. However, the performance of AUTOMAP and RDN could not be compared directly, because the previous study was performed on low-field MRI and the SNRs obtained in the two studies differed greatly.
In a study by Kidoh et al. [3], deep learning-based reconstruction was applied to brain MRI images (3T) of five people with different levels of added noise, and the PSNR and SSIM were measured for quantitative evaluation. DnCNN and SCNN were used as comparator networks, and both the PSNR and SSIM were higher for the deep learning-based reconstruction than for DnCNN and SCNN. In the present study, the use of RDN significantly increased the PSNR at NEX2 and the SSIM at NEX2 and NEX4, consistent with these previous studies. We consider RDN to be one possible method for improving image quality in 7T MRI. Meanwhile, artifacts present in the input image were carried over into the reconstructed image, presumably because the learning model reduced noise while retaining the artifacts. Therefore, with the network used in this study, the quality of the input image must be guaranteed to some extent.
This study has several limitations. First, only a normal model was used; because MRI is used to image a variety of pathological conditions, reconstruction must also be examined in pathological models in the future. We intend to evaluate this by conducting a similar study on a brain tumor model. Second, no visual evaluation was conducted. Because only a quantitative image quality evaluation was performed, we do not know whether the image quality is sufficient for clinical practice; additional clinical evaluations, such as visual assessment by medical professionals, are needed. A quantitative evaluation of contrast, such as the contrast-to-noise ratio (CNR), may also be useful and should be incorporated in the future. Indeed, when the quality of the input image was not guaranteed to some extent, the reconstructed image differed substantially from the teacher image; we therefore believe that first reducing the noise with DnCNN and then reconstructing the image would improve the accuracy of low NEX image reconstruction. Third, the experiment was conducted with a single network. Because various algorithms are currently being developed, algorithms that can also remove artifacts, which were an issue in this study, should be considered; the denoising approach using deep learning-based reconstruction (dDLR) [3], which has shown good results in previous studies, could be adapted to this setting. Fourth, the number of datasets was small; because all data were acquired within our laboratory, the dataset size was limited by the duration of the study, and we intend to increase it for further validation. Finally, in future research, we would like to incorporate clinical evaluations and perspectives, such as pathological models and visual evaluation, and to adapt imaging methods other than T2-weighted imaging for clinical application.

5. Conclusions and Future Work

In preclinical 7T MRI, supervised learning between the NEXs using RDNs enables higher image quality of low-NEX images and a shorter acquisition time.

Author Contributions

Conceptualization, N.T., T.K., J.U. and S.S.; methodology, N.T., T.K. and S.S.; software, N.T., T.K. and S.S.; investigation, N.T., T.K. and S.S.; data curation, N.T., T.K. and S.S.; writing—original draft preparation, N.T. and S.S.; writing—review and editing, N.T. and S.S.; supervision, S.S.; project administration, S.S.; funding acquisition, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was the result of using the research equipment shared in the MEXT Project to promote the public utilization of advanced research infrastructure (Program for Advanced Research Equipment Platforms MRI Platform), grant numbers JPMXS0450400022, JPMXS0450400023.

Institutional Review Board Statement

All experimental protocols were approved by the Research Ethics Committee of Osaka University. All experimental procedures involving animals and their care were performed in accordance with the University Guidelines for Animal Experimentation and the National Institutes of Health Guide for the Care and Use of Laboratory Animals. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Research Ethics Committee of Osaka University (R02-05-0, 20 November 2019).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kobayashi, T.; Nishii, T.; Umehara, K.; Ota, J.; Ohta, Y.; Fukuda, T.; Ishida, T. Deep learning-based noise reduction for coronary CT angiography: Using four-dimensional noise-reduction images as the ground truth. Acta Radiol. 2022, 64, 1831–1840. [Google Scholar] [CrossRef] [PubMed]
  2. Nishii, T.; Kobayashi, T.; Tanaka, H.; Kotoku, A.; Ohta, Y.; Morita, Y.; Umehara, K.; Ota, J.; Horinouchi, H.; Ishida, T.; et al. Deep Learning-based Post Hoc CT Denoising for Myocardial Delayed Enhancement. Radiology 2022, 305, 82–91. [Google Scholar] [CrossRef] [PubMed]
  3. Kidoh, M.; Shinoda, K.; Kitajima, M.; Isogawa, K.; Nambu, M.; Uetani, H.; Morita, K.; Nakaura, T.; Tateishi, M.; Yamashita, Y.; et al. Deep Learning Based Noise Reduction for Brain MR Imaging: Tests on Phantoms and Healthy Volunteers. Magn. Reson. Med. Sci. 2020, 19, 195–206. [Google Scholar] [CrossRef] [PubMed]
  4. Kaye, E.A.; Aherne, E.A.; Duzgol, C.; Haggstrom, I.; Kobler, E.; Mazaheri, Y.; Fung, M.M.; Zhang, Z.; Otazo, R.; Vargas, H.A.; et al. Accelerating Prostate Diffusion-weighted MRI Using a Guided Denoising Convolutional Neural Network: Retrospective Feasibility Study. Radiol. Artif. Intell. 2020, 2, e200007. [Google Scholar] [CrossRef] [PubMed]
  5. Muro, I.; Shimizu, S.; Tsukamoto, H. Improvement of Motion Artifacts in Brain MRI Using Deep Learning by Simulation Training Data. Nihon Hoshasen Gijutsu Gakkai Zasshi 2022, 78, 13–22. [Google Scholar] [CrossRef] [PubMed]
  6. Kim, J.; Patriat, R.; Kaplan, J.; Solomon, O.; Harel, N. Deep Cerebellar Nuclei Segmentation via Semi-Supervised Deep Context-Aware Learning from 7T Diffusion MRI. IEEE Access 2020, 8, 101550–101568. [Google Scholar] [CrossRef] [PubMed]
  7. Yamanakkanavar, N.; Choi, J.Y.; Lee, B. MRI Segmentation and Classification of Human Brain Using Deep Learning for Diagnosis of Alzheimer’s Disease: A Survey. Sensors 2020, 20, 3243. [Google Scholar] [CrossRef] [PubMed]
  8. Zhang, H.; Shinomiya, Y.; Yoshida, S. 3D MRI Reconstruction Based on 2D Generative Adversarial Network Super-Resolution. Sensors 2021, 21, 2978. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2480–2495. [Google Scholar] [CrossRef] [PubMed]
  10. Liu, H.; Liu, J.; Li, J.; Pan, J.S.; Yu, X. DL-MRI: A Unified Framework of Deep Learning-Based MRI Super Resolution. J. Healthc. Eng. 2021, 2021, 5594649. [Google Scholar] [CrossRef] [PubMed]
  11. Koonjoo, N.; Zhu, B.; Bagnall, G.C.; Bhutto, D.; Rosen, M.S. Boosting the signal-to-noise of low-field MRI with deep learning image reconstruction. Sci. Rep. 2021, 11, 8248. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Diagram of the network structure of the residual dense network (RDN), a combination of convolution layers and residual dense blocks (RDBs). Input images (NEX2, 4, 8) were given to the model trained with the teacher images (NEX12), and reconstructed images for each NEX were obtained. Adapted from Zhang et al. [9].
Figure 2. Input images with low NEX ((A): NEX2, (B): NEX4, (C): NEX8), teacher image ((D): NEX12), and reconstructed images (E–G), where (E) is the image obtained by reconstructing NEX2, (F) is NEX4, and (G) is NEX8 with the RDN. NEX, number of excitations.
Figure 3. SNRs for the cortex (A), white matter (B), hippocampus (C), and thalamus (D). The vertical axis is the SNR, and N on the horizontal axis represents the NEX. (N2 is NEX2, and N2to12 represents NEX2 reconstructed to NEX12). The SNR was significantly increased at each measurement site (*: p < 0.05, **: p < 0.01, ***: p < 0.001). SNR, signal-to-noise ratio; NEX, number of excitations.
Figure 4. Graphs of PSNR (A) and SSIM (B). The right side of each NEX was the value obtained by comparing the reconstructed image to the teacher image, and the left side was the value obtained by comparing the input image to the teacher image. The PSNR was significantly increased at NEX2, and SSIM showed a significant increase at NEX2 and 4 (*: p < 0.05, **: p < 0.01, ***: p < 0.001). PSNR, peak signal-to-noise ratio; SSIM, structural similarity index; NEX, number of excitations; n.s, not significant.
Figure 5. NEX2 input image (A) and its reconstructed image (B). The artifacts in the area surrounded by white lines are also reflected in the reconstructed image. NEX, number of excitations.
Table 1. The table summarizes SNR and SD for the cortex, white matter, hippocampus, and thalamus.
| SNR      | Cortex        | White Matter  | Hippocampus   | Thalamus     |
|----------|---------------|---------------|---------------|--------------|
| NEX2     | 10.38 ± 0.82  | 9.06 ± 2.81   | 11.37 ± 2.47  | 8.98 ± 0.64  |
| NEX4     | 14.95 ± 1.63  | 10.70 ± 3.24  | 13.12 ± 2.51  | 11.79 ± 0.96 |
| NEX8     | 19.69 ± 4.68  | 12.29 ± 3.81  | 19.82 ± 5.91  | 15.29 ± 1.25 |
| NEX12    | 19.59 ± 7.86  | 15.54 ± 5.59  | 22.74 ± 7.11  | 16.98 ± 1.49 |
| NEX2to12 | 32.08 ± 6.09  | 24.86 ± 11.79 | 41.49 ± 19.17 | 18.01 ± 2.36 |
| NEX4to12 | 45.03 ± 14.94 | 23.85 ± 13.92 | 59.08 ± 51.19 | 22.54 ± 7.68 |
| NEX8to12 | 46.71 ± 20.00 | 16.78 ± 7.60  | 53.71 ± 25.86 | 24.55 ± 4.93 |
Table 2. The table summarizes the mean and SD of the PSNR and SSIM, where L_NEX is the value calculated for the low NEX images (NEX2, 4, 8) and the teacher image (NEX12) and RDN is the value calculated for the reconstructed images (NEX2to12, 4to12, 8to12) and the teacher image (NEX12).
|      | PSNR_L_NEX   | PSNR_RDN     | SSIM_L_NEX  | SSIM_RDN    |
|------|--------------|--------------|-------------|-------------|
| NEX2 | 35.41 ± 1.95 | 37.64 ± 2.87 | 0.78 ± 0.05 | 0.91 ± 0.05 |
| NEX4 | 37.89 ± 1.94 | 38.71 ± 2.88 | 0.88 ± 0.04 | 0.93 ± 0.03 |
| NEX8 | 41.21 ± 2.59 | 41.70 ± 1.24 | 0.95 ± 0.03 | 0.96 ± 0.01 |

Tsuji, N.; Kobayashi, T.; Ueda, J.; Saito, S. Development and Evaluation of Deep Learning-Based Reconstruction Using Preclinical 7T Magnetic Resonance Imaging. Appl. Sci. 2023, 13, 6567. https://doi.org/10.3390/app13116567
