Article

Usefulness of a Metal Artifact Reduction Algorithm in Digital Tomosynthesis Using a Combination of Hybrid Generative Adversarial Networks

School of Allied Health Sciences, Kitasato University, Sagamihara 252-0373, Kanagawa, Japan
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(9), 1629; https://doi.org/10.3390/diagnostics11091629
Submission received: 22 July 2021 / Revised: 30 August 2021 / Accepted: 30 August 2021 / Published: 6 September 2021
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

In this study, a novel combination of hybrid generative adversarial networks (GANs), comprising cycle-consistent GAN, pix2pix, and a mask pyramid network (MPN) (CGpM-metal artifact reduction [MAR]), was developed using projection data to reduce metal artifacts and the radiation dose during digital tomosynthesis. The CGpM-MAR algorithm was compared with the conventional filtered back projection (FBP) without MAR, FBP with MAR, and convolutional neural network MAR. The MAR rates were compared using the artifact index (AI) and Gumbel distribution of the largest variation analysis using a prosthesis phantom at various radiation doses. The novel CGpM-MAR yielded an adequately effective overall performance in terms of AI. The resulting images yielded good results independently of the type of metal used in the prosthesis phantom (p < 0.05) and good artifact removal at 55% radiation-dose reduction. Furthermore, the CGpM-MAR represented the minimum in the model with the largest variation at 55% radiation-dose reduction. Regarding the AI and Gumbel distribution analysis, the novel CGpM-MAR yielded superior MAR when compared with the conventional reconstruction algorithms with and without MAR at 55% radiation-dose reduction and presented features most similar to the reference FBP. CGpM-MAR presents a promising method for metal artifact and radiation-dose reduction in clinical practice.

1. Introduction

The causes of metal artifacts are quite complicated. Depending on the shape and density of the metal objects, the appearance of metal artifacts can vary significantly. In medical applications, metal objects can include metallic orthopedic hardware (e.g., surgical pins and clips) or equipment attached to the patient’s body (e.g., biopsy needles) [1]. A metal object can produce beam hardening, partial volume, aliasing, small-angle scatter, under-ranging of the data acquisition electronics, or overflow of the dynamic range in the reconstruction process. Previous studies on digital tomosynthesis (DT) have proposed several artifact compensation approaches to minimize metal artifacts [2,3,4].
Dual-energy DT (DE-DT) has recently become available. One of the inherent capabilities of DE-DT is the generation of synthesized monochromatic images obtained at different energy levels (keV) from a single data acquisition [3,4]. By generating monochromatic images at higher energy levels (e.g., 140 keV), beam-hardening artifacts can be suppressed. However, DT devices that enable dual-energy acquisition in clinical practice are rare; generally, DT acquisition is performed through single-energy acquisition. Therefore, from the viewpoint of versatility, an improved method must be developed to achieve metal artifact reduction (MAR) in polychromatic radiography.
When X-rays traverse highly attenuating metal objects, nearly all photons are attenuated and few reach the detector, which results in under-ranging of the data acquisition electronics. Combined with the electronic noise in the data acquisition system, near-zero or negative readings are often recorded in the measured signal after offset correction. Imperfect treatment of these signals prior to the logarithmic operation biases the projection estimation. When this occurs, image artifacts similar to those of photon starvation appear. Combined with the bias and beam-hardening effects caused by metals, both shading and streaking artifacts appear in the reconstructed image. The effect of dose differences on metal artifacts related to such corrupted readings has been reported [5,6].
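The corrupted-reading problem above can be illustrated with a minimal sketch: before the logarithmic operation, near-zero or negative detector readings are clamped to a small positive floor so that the line-integral estimate stays finite. The unattenuated intensity `I0` and the `floor` value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def to_line_integrals(readings, I0=10000.0, floor=1.0):
    """Convert detector readings to projection line integrals, clamping
    corrupted (near-zero or negative) samples to a positive floor before
    the logarithm (Beer-Lambert: p = ln(I0 / I))."""
    readings = np.asarray(readings, dtype=np.float64)
    clamped = np.maximum(readings, floor)   # avoid log of values <= 0
    return np.log(I0 / clamped)

# A reading of 5000 is ordinary; 10 is behind dense metal; -3 and 0 are
# corrupted samples that would otherwise break the logarithm.
p = to_line_integrals([5000.0, 10.0, -3.0, 0.0])
```

Without the clamp, the last two samples would produce infinities or NaNs that propagate into the reconstruction as shading and streaking.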
Many studies have been conducted to overcome metal-induced image artifacts [1,7,8,9,10,11]. For projection samples that pass through highly attenuating metal objects, the measured values become unreliable and must be replaced or significantly modified. Projection in-painting is one of the approaches to replacing erroneous data [8,9,10,11]. Generally, this approach first identifies the projection channels corrupted by the metallic objects. The next step is to replace these channel readings with estimated projection signals generated by the nonmetallic portion of the scanned object. The new projection samples allow the reconstruction of a metal-free image volume of the scanned object.
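The in-painting idea described above, i.e., replacing channel readings corrupted by metal with estimates from the nonmetallic portion, can be sketched in one dimension with linear interpolation from the nearest reliable neighbours. The per-row application and mask are illustrative.

```python
import numpy as np

def inpaint_metal_trace(profile, metal_mask):
    """Replace samples flagged as metal-corrupted with values linearly
    interpolated from the nearest reliable neighbours (1-D sketch of
    projection in-painting)."""
    profile = np.asarray(profile, dtype=np.float64).copy()
    mask = np.asarray(metal_mask, dtype=bool)
    x = np.arange(profile.size)
    # np.interp fills masked positions from the surrounding good samples
    profile[mask] = np.interp(x[mask], x[~mask], profile[~mask])
    return profile

row = np.array([1.0, 2.0, 9.0, 9.0, 3.0])       # 9.0 = corrupted readings
mask = np.array([False, False, True, True, False])
fixed = inpaint_metal_trace(row, mask)
```

The two corrupted samples are replaced by values on the straight line between their good neighbours, which is exactly the continuity that later sections try to improve on with learned completion.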
This approach is associated with several challenges [12]: inconsistency between the synthetic and actual projections, loss of low-contrast objects in the inpainted projection due to the influence of the synthetic projection, and degraded spatial resolution in the final reconstructed images. To meet these challenges, advanced image-processing techniques are often used. However, all inpainting-type algorithms have limitations: when a large amount of information is missing, the signal estimation process cannot fully restore it, which leads to residual artifacts.
The development of deep learning in the reconstruction of medical images has led to recent advances in MAR featuring neural networks. Park et al. [13] used U-Net [14] in the sinogram region to process artifacts associated with beam hardening in polychromatic radiographic computed tomography (CT). Zhang et al. [15] suggested that a convolutional neural network (CNN) [16] generates a prior image with fewer artifacts to correct the metal-corrupted regions of the sinogram. While these methods have shown reasonable results in MAR, they have a limited ability to process new artifacts remaining in the reconstructed CT image. Motivated by the success of deep learning in solving ill-posed inverse problems in image processing [17,18,19], researchers have most recently formulated MAR as an image restoration problem, whose solution improves the quality of reconstructed CT images using image-to-image translation networks [20,21,22,23,24] as well as conditional generative adversarial networks (cGAN or pix2pix) [25], thereby further reducing metal artifacts [26]. GAN improves the recognition of lesions or tissues in medical images. For example, Han et al. have reported that the combination of noise-to-image and image-to-image GANs improves the detection accuracy of brain tumors [27]. Sandfort et al. have also reported that cycle-GAN [28] improves tissue segmentation accuracy on CT images [29].
In general, the metal mask or metal trace regions are usually small and occupy a small portion of the whole image. The network input would weaken the metal trace information, owing to the down-sampling operations of the network. Therefore, we used the mask pyramid U-Net (mask pyramid network [MPN]) [30] to retain the metal trace information at each layer and explicitly enable the network to extract more discriminative features to restore the missing information in the metal trace region. As GAN has been reported to be useful for noise reduction [31], including MAR, it can be useful for image quality improvement (artifact reduction) in DT.
In this paper, we present a novel projection-based cross-domain learning framework for generalizable MAR. Distinct from previous image restoration-based solutions, MAR was formulated as a deep learning-based projection completion task, and a deep neural network, namely a combination of multiple GANs (cycle-GAN, pix2pix, and MPN), was trained to restore the unreliable projections within the metal region. The prior metal-free image would provide a good estimation of the missing projections [23]. To ease the pix2pix learning and improve the completion quality, we trained another neural network, cycle-GAN, to generate a good prior image with fewer metal artifacts and guide the pix2pix learning by standardizing the low-dose projection of the prior image. Moreover, we designed a novel mask pyramid projection learning strategy that fully utilizes the prior projection guidance to improve the continuity of projection completion, thereby alleviating the new artifacts in the reconstructed DT images. The final DT image was then reconstructed from the completed projection using the conventional filtered back projection (FBP) algorithm [32]. Compared with the previous MAR approaches for DT [2,3,4], the whole framework was trained efficiently for the complementary learning of prior processed images according to the method of each network, so that the prior image generation and deep projection completion procedures can be learned in a collaborative manner and benefit from each other. Our recommended MAR algorithm (combination of hybrid GANs: cycle-GAN_pix2pix_MPN [CGpM-MAR]) is described in the Methods section.
In addition, we investigated the causal relationship between dose reduction and quality of MAR. Our findings suggest the possibility of reducing exposure dose and improving image quality using the CGpM-MAR algorithm. The developmental process and basic evaluation of the method are presented in this study.

2. Materials

2.1. Phantom Specifications

To evaluate image quality (with the implant and artificial bone introducing the artifacts and contrast, respectively), we immersed a prosthesis phantom containing an implant in the center of a polymethyl methacrylate case (custom-made product, Kyoto Kagaku Co., Tokyo, Japan) filled with water (case dimensions: φ 200 × 300 mm). Given that water is often used as a substitute for soft tissue in phantom experiments, the water-filled area of the phantom simulated soft tissue. The bone component was an artificial bone (orthopedic humeral model: normal anatomy; foam cortical shell; canal diameter: 9 mm; overall length: 300 mm, Pacific Research Laboratories Inc., Vashon, WA, USA). The TRIGEN Humeral Nails Proximal Straight System (Model: 38153000; titanium alloy; diameter: 8 mm; overall length: 160 mm, Smith & Nephew Orthopaedics KK Inc., Tokyo, Japan) was used. In the prosthesis phantom, internal fracture fixation (intramedullary fracture fixation) was used to simulate a proximal humeral fracture.

2.2. DT System

The DT system (SonialVision Safire II, Shimadzu Co., Kyoto, Japan) contained an X-ray tube (anode, made of tungsten with rhenium and molybdenum; real filter; aluminum [1.1 mm]; additional aluminum [0.9 mm]) with a 0.4 mm focal spot and an amorphous selenium (362.88 × 362.88 mm) digital flat-panel detector (detector element, 0.15 × 0.15 mm). The distances between source (focal spot)-to-isocenter and source-to-detector were 924 and 1100 mm, respectively (anti-scatter grid, focused type; grid ratio, 12:1). Tomography was performed linearly with a total acquisition time of 6.4 s (reference radiation dose: 80 kVp, 250 mA, and 20 ms/view; effective dose, in accordance with the International Commission on Radiological Protection [ICRP]: 0.69 mSv [ICRP 103]; 37% reduced radiation dose: 80 kVp, 250 mA, 12.8 ms/view, and 0.43 mSv; 55% radiation-dose reduction: 80 kVp, 250 mA, 9.6 ms/view, and 0.31 mSv; and 71% reduced radiation dose: 80 kVp, 250 mA, 6.4 ms/view, and 0.20 mSv). The acquisition angle was 40°, and 74 projection images (1024 × 1024 matrix) were acquired. The effective dose was calculated using Monte Carlo-based software (PCXMC version 2.0, STUK-Radiation and Nuclear Safety Authority, Helsinki, Finland) [33]. The reference radiation dose was the dose generally used in clinical practice (the clinical task was to assess the prosthesis). To produce reconstructed tomograms of the required height, we used a 512 × 512 matrix with 32 bits (single-precision floating number) per image.

3. Methods

3.1. Overview of CGpM-MAR

The novel CGpM-MAR algorithm was implemented on projection-space data to reduce metal artifacts while reducing the radiation dose during DT. This method is based on a combination of multi-training networks (cycle-GAN and pix2pix) with MPN and a mask pyramid learning strategy to fully utilize prior projection guidance for completion and thus alleviate new artifacts in the projection space of the subsequently reconstructed hybrid MAR images.
Cycle-GAN allows translations between domains (reference and low doses) that are not fully associated and can solve problems of image quality deterioration due to low-dose acquisition. By applying another style to the image during the pix2pix translation process to obtain prior projection images, final MPN learning could be introduced to recover more structural and anatomically plausible information from the metallic component.
The overview of the recommended CGpM-MAR algorithm is as follows:
Step 1
[Cycle-GAN]: Translates low-dose images to reference-dose images.
Step 2
[Prior images for linear interpolation (LI) [8] processing and pix2pix GAN]: Generates a metal-free image by extracting the metal region and by LI interpolation. A prior image with a deep neural network (pix2pix) is generated to facilitate the projection completion.
Step 3
[MPN]: The mask information is obtained across the network’s encoding layers, and a mask fusion loss reduces the early saturation of adversarial training.
The flowchart shows the interrelations of cycle-GAN, pix2pix, and MPN, which form the core of CGpM-MAR (Figure 1).
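The three steps above can be sketched as a data-flow skeleton. The network calls below are identity-style stubs rather than the paper's trained models, and the metal threshold is an illustrative assumption; only the order of operations (cycle-GAN → metal extraction/LI → pix2pix prior → MPN fusion) reflects the described pipeline.

```python
import numpy as np

def cycle_gan(low_dose):                 # Step 1: low-dose -> reference-dose style
    return low_dose                      # stub for the trained translator

def extract_metal_and_li(proj):          # Step 2a: mask metal, linearly interpolate
    mask = proj > 0.9                    # illustrative threshold for the metal trace
    out = proj.copy()
    x = np.arange(proj.size)
    if mask.any() and (~mask).any():
        out[mask] = np.interp(x[mask], x[~mask], proj[~mask])
    return out, mask

def pix2pix(proj, li_prior):             # Step 2b: refine the LI-corrected prior
    return li_prior                      # stub for the trained generator

def mpn(prior, proj, mask):              # Step 3: mask-guided projection completion
    return mask * prior + (1.0 - mask) * proj

def cgpm_mar(low_dose_projection):
    ab = cycle_gan(low_dose_projection)
    li_prior, mask = extract_metal_and_li(ab)
    prior = pix2pix(ab, li_prior)
    return mpn(prior, ab, mask.astype(float))

completed = cgpm_mar(np.array([0.1, 0.2, 1.0, 0.3]))
```

The completed projection would then be passed to the conventional FBP reconstruction; only the sample flagged as metal is replaced, while reliable samples pass through unchanged.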

3.2. CGpM-MAR

The variable definitions required for the outline of the algorithm are shown below:
B: reference-dose projection domain
A: low-dose projection domain
a: training samples $\{a_i\}_{i=1}^{Q}$, where $a \in A$
b: training samples $\{b_j\}_{j=1}^{P}$, where $b \in B$
$a \sim p(a)$, $b \sim p(b)$: data distributions
AB: mapping $A \rightarrow B$
BA: mapping $B \rightarrow A$
$D_A$: discriminator distinguishing between images $\{a\}$ and translated images $BA(b)$
$D_B$: discriminator distinguishing between $\{b\}$ and $AB(a)$
D: discriminator
G: generator

3.2.1. Cycle-GAN

The purpose of cycle-GAN is to translate low-dose projections into reference-dose projections. In the learning process, a low-dose projection can be effectively translated into a reference-dose projection using two discriminators and a generator. Cycle-GAN learns mapping functions between two domains A and B given training samples. The network objective contains three types of terms: adversarial loss, cycle consistency loss, and identity loss. The training dataset included 148 projection images; the corresponding images forming the input image pairs (A (74), B (74)) were randomly selected from the generated acquisition data as the training set.
Input_1 → B (reference)
Input_2 → A (objective)
The adversarial loss is given as
$$\mathcal{L}_{GAN}(AB, D_B, A, B) = \mathbb{E}_{b \sim p(b)}[\log D_B(b)] + \mathbb{E}_{a \sim p(a)}[\log(1 - D_B(AB(a)))]$$
The cyclic consistency loss is given as
$$a \rightarrow AB(a) \rightarrow BA(AB(a)) \approx a$$
$$b \rightarrow BA(b) \rightarrow AB(BA(b)) \approx b$$
$$\mathcal{L}_{cyc}(AB, BA) = \mathbb{E}_{b \sim p(b)}[\| AB(BA(b)) - b \|_1] + \mathbb{E}_{a \sim p(a)}[\| BA(AB(a)) - a \|_1]$$
The identity loss is given as
$$\mathcal{L}_{identity} = \mathbb{E}_{a \sim p(a)}[\| BA(a) - a \|_1] + \mathbb{E}_{b \sim p(b)}[\| AB(b) - b \|_1]$$
The total loss is given as
$$\mathcal{L}(AB, BA, D_A, D_B) = \mathcal{L}_{GAN}(AB, D_B, A, B) + \mathcal{L}_{GAN}(BA, D_A, B, A) + \lambda \mathcal{L}_{cyc}(AB, BA) + \mathcal{L}_{identity}$$
where λ controls the relative importance of the two objectives. In this study, λ was set to 10 at an initial learning rate of 0.0002 [28].
Cycle-GAN was trained to solve the following problem:
$$AB, BA = \arg\min_{AB, BA} \max_{D_A, D_B} \mathcal{L}(AB, BA, D_A, D_B)$$
In cycle-GAN, the Adam optimization algorithm [34] was used with a batch size of 1. The architecture of the building components is shown in Appendix A (Table A1).
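As a numerical sanity check, the cycle-GAN objective above can be evaluated on toy arrays. The mappings and discriminators below are illustrative stand-ins (an identity mapping and a constant-0.5 discriminator), and the small `eps` guard inside the logarithms is an implementation convenience, not part of the paper's formulation.

```python
import numpy as np

def l1(x, y):
    """Mean absolute (L1) difference between two images."""
    return np.mean(np.abs(x - y))

def cycle_gan_total_loss(a, b, AB, BA, DB, DA, lam=10.0):
    """Sum of the adversarial, lambda-weighted cycle consistency, and
    identity terms, mirroring the total loss above."""
    eps = 1e-12  # numerical guard for log
    L_gan_ab = np.mean(np.log(DB(b) + eps)) + np.mean(np.log(1 - DB(AB(a)) + eps))
    L_gan_ba = np.mean(np.log(DA(a) + eps)) + np.mean(np.log(1 - DA(BA(b)) + eps))
    L_cyc = l1(AB(BA(b)), b) + l1(BA(AB(a)), a)      # cycle consistency
    L_id = l1(BA(a), a) + l1(AB(b), b)               # identity preservation
    return L_gan_ab + L_gan_ba + lam * L_cyc + L_id

a = np.zeros((4, 4))                  # toy "low-dose" image
b = np.ones((4, 4))                   # toy "reference-dose" image
identity = lambda x: x                # stub generators
half = lambda x: np.full_like(x, 0.5) # stub discriminators (maximal uncertainty)
total = cycle_gan_total_loss(a, b, AB=identity, BA=identity, DB=half, DA=half)
```

With identity generators, the cycle and identity terms vanish and only the adversarial terms remain, which is a quick way to verify that each term is wired correctly before training.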

3.2.2. Linear Interpolation

We generated a prior image with a deep neural network to facilitate the projection completion procedure [23], as the metal-free prior image would provide a good estimation for the original projection. We first applied the LI [8] to produce an initial estimation of the metal trace region and acquire the LI-corrected projection Ptemp_LI for the following procedures:
Step_1:
extract the metal from $AB$: $M_{sk} \in \{0, 1\}^{i \times j}$ (i: height; j: width)
Step_2:
$AB \otimes M_{sk} \rightarrow P_{temp}$
Step_3:
LI processing for $P_{temp}$ → $P_{temp\_LI}$
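The three steps above can be sketched row-wise on a 2-D projection: threshold the cycle-GAN output AB to obtain the binary metal mask Msk, zero out the trace to form Ptemp, and linearly interpolate each detector row across the trace to obtain Ptemp_LI. The threshold value is an illustrative assumption; the paper does not specify one here.

```python
import numpy as np

def li_correct(AB, threshold=0.8):
    """Steps 1-3: metal mask, trace removal, row-wise linear interpolation."""
    Msk = (AB > threshold).astype(float)     # Step 1: binary metal mask
    Ptemp = AB * (1.0 - Msk)                 # Step 2: remove the metal trace
    Ptemp_LI = Ptemp.copy()
    x = np.arange(AB.shape[1])
    for r in range(AB.shape[0]):             # Step 3: per-row LI across the trace
        bad = Msk[r].astype(bool)
        if bad.any() and (~bad).any():
            Ptemp_LI[r, bad] = np.interp(x[bad], x[~bad], AB[r, ~bad])
    return Msk, Ptemp, Ptemp_LI

AB = np.array([[0.1, 0.2, 1.0, 0.4],         # 1.0 = metal trace sample
               [0.2, 0.3, 0.4, 0.5]])        # metal-free row, untouched
Msk, Ptemp, Ptemp_LI = li_correct(AB)
```

Ptemp_LI is the initial estimate that the pix2pix network then refines in the next subsection.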

3.2.3. Pix2pix

In this procedure, the original projection with metal artifacts was used as the input to train a neural network to generate the prior image with fewer metal artifacts. In particular, with relatively large metal objects, the metal artifacts in the original projection would be strong, and the neural network would have difficulty reducing them. Therefore, besides the original projection, we also fed the LI-corrected image into the prior image generation procedure [23] and used the pix2pix neural network to refine the LI-corrected image by backpropagation learning.
G and D trained adversarially with GAN are expressed as follows:
Input_1 → A B (objective)
Input_2 →Ptemp_LI (reference).
The training dataset included 148 projection images; the corresponding images forming the input image pairs ($AB$ (74), $P_{temp\_LI}$ (74)) were randomly selected as the training set from the cycle-GAN-processed data:
$$\min_{D} \mathcal{L}_{GAN\_c} = \mathbb{E}_{AB, P_{temp\_LI}}[\log D(AB, P_{temp\_LI})] + \mathbb{E}_{AB, z}[\log(1 - D(AB, G(AB, z)))]$$
$$\min_{G} \mathcal{L}_{GAN\_L1} = \mathbb{E}_{AB, P_{temp\_LI}, z}[\| P_{temp\_LI} - G(AB, z) \|_1]$$
where z is the random noise vector (Gaussian noise).
Pix2pix was trained to solve the following problem:
$$AB = \arg\min_{G} \max_{D} \mathcal{L}_{GAN\_c}(G, D) + \gamma \min_{G} \mathcal{L}_{GAN\_L1}(G)$$
where $\gamma$ controls the relative importance of the two objectives $\mathcal{L}_{GAN\_c}$ and $\mathcal{L}_{GAN\_L1}$. In this study, γ was set to 100, the initial learning rate was set to 0.0002, and the momentum parameters were set to β1 = 0.5 and β2 = 0.999 [25].
In pix2pix, we used the Adam optimization algorithm [34] with a batch size of 1. The architecture of the building components is shown in Appendix A (Table A2).
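The generator side of the pix2pix objective above, an adversarial term plus a gamma-weighted L1 term pulling G(AB, z) toward Ptemp_LI, can be checked numerically on toy arrays. G and D below are illustrative constant stubs, not the trained networks, and the `eps` guard is an implementation convenience.

```python
import numpy as np

def pix2pix_generator_loss(AB, Ptemp_LI, G, D, z, gamma=100.0):
    """Generator objective: adversarial part plus gamma * L1 completion term."""
    eps = 1e-12
    fake = G(AB, z)
    adv = np.mean(np.log(1 - D(AB, fake) + eps))   # generator's adversarial part
    l1 = np.mean(np.abs(Ptemp_LI - fake))          # L1 term toward the LI prior
    return adv + gamma * l1

AB = np.zeros((4, 4))                              # toy conditioning projection
target = np.full((4, 4), 0.2)                      # toy Ptemp_LI
G = lambda x, z: np.full_like(x, 0.1)              # stub generator output
D = lambda x, y: np.full_like(x, 0.5)              # stub discriminator score
loss = pix2pix_generator_loss(AB, target, G, D, z=None)
```

With gamma = 100, the L1 term dominates even a modest residual, which reflects the paper's choice of a large gamma to keep the completion close to the LI-corrected prior.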

3.2.4. MPN

We used a novel mask pyramid projection learning strategy to fully utilize the prior projection guidance to improve the continuity of projection completion and thus alleviate the new artifacts in the reconstructed DT images. We used the MPN to retain the metal trace information in each layer explicitly so that the network can extract more discriminative features to restore missing information in the metal trace region.
Input_1 →Ptemp (objective)
Input_2 → A B (reference)
Input_3 → M s k (objective)
The training dataset included 222 projection images; the corresponding images forming the input set ($P_{temp}$ (74), $AB$ (74), $M_{sk}$ (74)) were randomly selected as the training set from the data processed by metal extraction, LI, and pix2pix.
In computing the loss function, the network only considered the content of the metal mask. The content loss is given as follows:
$$\min_{G} \mathcal{L}_c = \mathbb{E}_{P_{temp}, AB}[\| \widehat{AB} - AB \|_1]$$
where $\widehat{AB} = M_{sk} \odot G(P_{temp}) + (1 - M_{sk}) \odot P_{temp}$
The output score matrix from the discriminator was modulated by the metal mask M s k so that the discriminator can selectively ignore the unmasked regions. The adversarial part of the mask fusion loss is given as:
$$\min_{D} \mathcal{L}_{GAN\_M} = \mathbb{E}_{AB}[S(M_{sk}) \odot (1 - D(AB))^2] + \mathbb{E}_{P_{temp}}[S(M_{sk}) \odot D(\widehat{AB})^2]$$
$$\min_{G} \mathcal{L}_{GAN\_M} = \mathbb{E}_{P_{temp}}[S(M_{sk}) \odot (1 - D(\widehat{AB}))^2]$$
where S takes metal mask M s k as the input, and each block of S is coupled with an encoding block in D.
The total mask fusion loss is given as
$$\mathcal{L}_{AB\_final} = \mathcal{L}_{GAN\_M} + \eta \mathcal{L}_c$$
where η balances $\mathcal{L}_{GAN\_M}$ and $\mathcal{L}_c$. In this study, η was set to 100, the initial learning rate was set to 0.0002, and the momentum parameters were set to β1 = 0.5 and β2 = 0.999 [30].
In MPN, we used the Adam optimization algorithm [34] with a batch size of 1. The architecture of the building components is shown in Appendix A (Table A3).
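The fusion rule and content loss above can be sketched directly: the network prediction is kept only inside the metal mask, the input passes through elsewhere, and the L1 content loss is taken against the reference projection AB. The stub network output and toy arrays are illustrative.

```python
import numpy as np

def mpn_fuse(Ptemp, G_out, Msk):
    """AB_hat = Msk * G(Ptemp) + (1 - Msk) * Ptemp (elementwise)."""
    return Msk * G_out + (1.0 - Msk) * Ptemp

def content_loss(AB_hat, AB):
    """L1 content loss between the fused result and the reference."""
    return np.mean(np.abs(AB_hat - AB))

Ptemp = np.array([0.1, 0.0, 0.3])      # projection with the trace zeroed out
Msk = np.array([0.0, 1.0, 0.0])        # metal mask: middle sample only
G_out = np.array([0.5, 0.2, 0.5])      # stub network output
AB = np.array([0.1, 0.2, 0.3])         # reference projection

AB_hat = mpn_fuse(Ptemp, G_out, Msk)
loss = content_loss(AB_hat, AB)
```

Because the network output is used only under the mask, a prediction that is wrong outside the trace (0.5 here) has no effect on the fused projection, which is the point of the mask-guided design.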

3.3. Evaluations

3.3.1. Optimization Parameters of the Epochs

The optimal numbers of epochs for each network (cycle-GAN, pix2pix, and MPN) were evaluated based on the mean square error (MSE) [35] and structural similarity (SSIM) [36] of the projection image (as recorded directly on the detector). The MSE of the identified projection image can be obtained as follows:
$$MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [ref(i, j) - low(i, j)]^2$$
where r e f ( i , j ) is the (i, j)th entry of the reference-dose projection image, and l o w ( i , j ) is the (i, j)th entry of the low-dose projection image in each epoch.
The SSIM index between pixel values i and j was calculated as follows:
$$SSIM(i, j) = [l(i, j)]^{\omega} [c(i, j)]^{\xi} [s(i, j)]^{\psi}$$
where l is the luminance, c is the contrast, and s is the structure ($\omega = \xi = \psi = 1.0$).
The mean SSIM (MSSIM) was then used to evaluate the overall image quality as follows:
$$MSSIM[ref(i, j), low(i, j)] = \frac{1}{K} \sum_{q=1}^{K} SSIM(i_q, j_q)$$
where $i_q$ and $j_q$ are the image contents at the qth pixel, and K is the number of pixels in the image.
Optimization was evaluated based on the MSE and MSSIM; the epochs yielding the lowest MSE and highest MSSIM were selected as the optimum parameters.
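The two criteria above can be sketched as follows. The MSE matches the formula directly; for SSIM, a single-window (global-statistics) variant with the exponents ω = ξ = ψ = 1 is shown rather than the windowed MSSIM, and the stabilizing constants c1 and c2 follow the common SSIM convention for images in [0, 1], which is an assumption here.

```python
import numpy as np

def mse(ref, low):
    """Mean square error between reference and low-dose projections."""
    ref = np.asarray(ref, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    return np.mean((ref - low) ** 2)

def ssim_global(ref, low, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM (luminance * contrast-structure) over the whole image."""
    ref = np.asarray(ref, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    mu_r, mu_l = ref.mean(), low.mean()
    var_r, var_l = ref.var(), low.var()
    cov = np.mean((ref - mu_r) * (low - mu_l))
    return ((2 * mu_r * mu_l + c1) * (2 * cov + c2)) / \
           ((mu_r ** 2 + mu_l ** 2 + c1) * (var_r + var_l + c2))

ref = np.array([[0.2, 0.4], [0.6, 0.8]])
identical = ssim_global(ref, ref)   # identical images score 1.0
err = mse(ref, ref)                 # identical images give zero error
```

In practice, a windowed implementation (e.g., 11 × 11 Gaussian windows averaged into MSSIM) would replace the global statistics, but the selection rule is the same: lowest MSE and highest (M)SSIM across epochs.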

3.3.2. Evaluation of Image Quality

The DT system-derived real projection data were used for image reconstruction. MATLAB (version 9.7.0.1216025, MathWorks, Natick, MA, USA) was used to reconstruct and process images. The artifact index (AI) values [37] for metal artifacts containing low-frequency components in the CGpM-MAR and the conventional algorithms (FBP [reference-dose FBP image: reference FBP; low-dose FBP image: original FBP], DT-MAR [2], and CNNMAR [15] with reconstruction from the original projections) were compared to assess the decrease in metal artifacts on each in-focus plane image. A weighting factor of 0.6 was used in DT-MAR: [(FBP × 0.4) + (BP × 0.6)] (FBP: FBP-reconstructed images; BP: back-projection images) [2]. We further ascertained the metal artifacts containing high-frequency components. Gumbel distributions [3,38] are statistical models for determining the influence of high-frequency artifacts. The characteristics of the CGpM-MAR and conventional algorithms were evaluated based on the MAR. CGpM-MAR was evaluated using the optimized parameters generated based on the application image.

3.3.3. AI

The AI of the identified metal artifacts was calculated as follows:
$$AI_n = | ROI_{artifact\_n} - ROI_{BG} |$$
where n = 1, 2, …, 6 (ROI_location_1) or n = 1, 2, …, 8 (ROI_location_2) indexes $ROI_{artifact\_n}$ (Figure 2a), the relative standard deviation (SD) of each real feature (metal artifact) in the in-focus plane, and $ROI_{BG}$ is the relative SD of the background in the in-focus plane. To evaluate each feature (metal artifact) and the background, the ROI was set at 3 × 9 pixels (ROI_location_1; streak artifact area) and 3 × 7 pixels (ROI_location_2; dark artifact area).
In this study, we compared the AI values of the reconstruction algorithms at different radiation doses between two groups (ROI_location_1 and ROI_location_2). The numbers of samples in the groups were 18 (ROI_location_1) and 24 (ROI_location_2). Statistical analyses were performed using IBM SPSS Statistics for Windows (version 24.0, SPSS Inc., Chicago, IL, USA). Probability (p) values < 0.05 were considered statistically significant.
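The AI computation above can be sketched as follows. Interpreting "relative SD" as the coefficient of variation (SD divided by the mean within the ROI) is an assumption, and the ROI contents below are synthetic; only the |relative SD of artifact ROI − relative SD of background ROI| structure follows the formula.

```python
import numpy as np

def relative_sd(roi):
    """Relative standard deviation (coefficient of variation) within an ROI."""
    roi = np.asarray(roi, dtype=np.float64)
    return roi.std(ddof=0) / roi.mean()

def artifact_index(roi_artifact, roi_background):
    """AI_n = |relative SD of the artifact ROI - relative SD of the background ROI|."""
    return abs(relative_sd(roi_artifact) - relative_sd(roi_background))

rng = np.random.default_rng(0)
streaky = 100 + 20 * rng.standard_normal((3, 9))   # noisy 3 x 9 px artifact ROI
flat = 100 + 2 * rng.standard_normal((3, 9))       # quiet background ROI
ai = artifact_index(streaky, flat)
```

A larger AI indicates stronger residual artifacts relative to the background, so a MAR algorithm is better the closer its AI sits to that of the reference FBP.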

3.3.4. Statistical Model with Gumbel Distributions

The analysis method is outlined as follows: First, a rectangular window with a width of 8 pixels and a length (X-ray sweep direction) of 24 pixels was placed on each in-focus plane image to reduce metal artifacts, as shown in Figure 2b. Second, the parallel-line profiles of the pixel values at 1-pixel intervals resulted in 23 parallel-line pixel-value profiles (each sampling size: 23). Third, the pixel-value profiles were graphed, and the maximal variations between the adjacent pixel values were determined and analyzed based on the Gumbel distribution. Finally, the cumulative probability function was measured using the symmetry rank method with order statistics:
$$\text{cumulative probability } \Gamma(x_{\delta}) = \frac{\delta - 0.5}{q}, \quad (\delta = 1, \dots, q)$$
where q is the sampling size with order statistics.
To evaluate linearity, the Pearson correlation coefficient was determined and analyzed (p values < 0.01) using IBM SPSS Statistics for Windows (version 24.0, SPSS Inc., Chicago, IL, USA).
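The analysis steps above can be sketched as follows: take the largest absolute difference between adjacent pixels in each pixel-value profile, sort the maxima, and assign cumulative probabilities by the symmetric rank method Γ = (δ − 0.5)/q. Converting Γ to the reduced variate −ln(−ln Γ) for plotting is standard Gumbel-plot practice rather than something stated in the text, and the profiles below are synthetic.

```python
import numpy as np

def largest_variations(profiles):
    """Largest absolute difference between adjacent pixels in each profile."""
    return np.max(np.abs(np.diff(profiles, axis=1)), axis=1)

def gumbel_coordinates(maxima):
    """Order the maxima and pair them with symmetric-rank probabilities,
    expressed as the Gumbel reduced variate for linear plotting."""
    ordered = np.sort(maxima)
    q = ordered.size
    gamma = (np.arange(1, q + 1) - 0.5) / q    # symmetric rank method
    reduced = -np.log(-np.log(gamma))          # Gumbel reduced variate
    return ordered, reduced

rng = np.random.default_rng(1)
profiles = rng.normal(100, 5, size=(23, 24))   # 23 profiles of 24 px each
ordered, reduced = gumbel_coordinates(largest_variations(profiles))
```

If the largest variations follow a Gumbel distribution, the points (reduced, ordered) fall on a straight line, which is what the Pearson correlation coefficient then quantifies.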

4. Results

4.1. Optimization Parameters

After measuring the MSE and SSIM of each training network at different radiation doses, the optimal epoch was selected at the lowest MSE and highest SSIM (0.43 mSv: cycle-GAN, 1600 epochs; pix2pix, 300 epochs; MPN, 330 epochs; 0.31 mSv: cycle-GAN, 1600 epochs; pix2pix, 280 epochs; MPN, 250 epochs; and 0.2 mSv: cycle-GAN, 1900 epochs; pix2pix, 250 epochs; and MPN, 400 epochs). Using the optimization verification results, each training network image was generated by setting epochs for CGpM-MAR and then evaluated and compared with those of the images obtained using conventional algorithms (Figure 3). The training was performed on a GPU (GeForce RTX 2080 Ti; 11 GB of memory, NVIDIA Co., Santa Clara, CA, USA). The total calculation time required to process the CGpM-MAR algorithm was 116.84 h (cycle-GAN [1900 epochs]: 79.63 h; pix2pix [300 epochs]: 10.05 h; MPN [400 epochs]: 27.16 h).

4.2. Image Quality

Figure 4 shows the reconstructed images of the prosthesis phantom acquired with the CGpM-MAR algorithm and each of the established reconstruction algorithms with and without MAR processing at a 55% radiation-dose reduction (down to 0.31 mSv). Remarkably, the DT images produced using the CGpM-MAR algorithm showed decreased metal artifacts in the radiographic sweep direction (i.e., vertical direction), specifically in the peripheral regions of the prosthesis phantom. On the other hand, images produced with the help of DT-MAR demonstrated noise and metal artifacts. Among the conventional algorithms, DT-MAR showed the smallest difference from CGpM-MAR. CNNMAR showed a certain reduction in artifacts, but the streak artifacts generated around the metal were remarkable.
Figure 5 presents the placement of the ROI in the prosthesis phantom and a plot of the AI results. CGpM-MAR yielded the lowest metal artifact characteristic values and features most similar to the reference FBP regardless of the MAR processing status at 55% radiation-dose reduction (down to 0.31 mSv) (ROI_location_1; 0.43 mSv: average AI ± standard error: 0.1020 ± 0.0179, 0.31 mSv: 0.1416 ± 0.0173, 0.20 mSv: 0.1651 ± 0.0142, ROI_location_2; 0.43 mSv: 0.1295 ± 0.0036, 0.31 mSv: 0.1226 ± 0.0051, and 0.20 mSv: 0.4308 ± 0.0313). For conventional imaging algorithms with and without MAR processing, metal artifact generation depended on the type of reconstruction algorithm ([reference FBP] ROI_location_1; 0.1103 ± 0.0267, ROI_location_2; 0.3021 ± 0.0356, [original FBP_0.43 mSv] ROI_location_1; 0.2155 ± 0.0082, ROI_location_2; 0.3893 ± 0.0309, [original FBP_0.31 mSv] ROI_location_1; 0.1998 ± 0.0182, ROI_location_2; 0.3850 ± 0.0239, [original FBP_0.20 mSv] ROI_location_1; 0.1924 ± 0.0165, ROI_location_2; 0.4264 ± 0.0283, [DT-MAR_0.43 mSv] ROI_location_1; 0.1496 ± 0.0246, ROI_location_2; 0.1469 ± 0.0067, [DT-MAR_0.31 mSv] ROI_location_1; 0.1579 ± 0.0305, ROI_location_2; 0.1439 ± 0.0058, [DT-MAR_0.20 mSv] ROI_location_1; 0.1578 ± 0.0297, ROI_location_2; 0.1411 ± 0.0062, [CNNMAR_0.43 mSv] ROI_location_1; 0.1493 ± 0.0191, ROI_location_2; 0.3473 ± 0.0388, [CNNMAR_0.31 mSv] ROI_location_1; 0.1768 ± 0.0273, ROI_location_2; 0.3559 ± 0.0402, [CNNMAR_0.20 mSv] ROI_location_1; 0.2142 ± 0.0165, ROI_location_2; 0.3512 ± 0.0407).
In ROI_location_1, the differences in the metal artifacts, except for CGpM-MAR compared with the conventional low-dose FBP without MAR processing, were statistically significant (p < 0.05; Table 1). In ROI_location_2, the differences in metal artifacts between the CGpM-MAR and low-dose with and without MAR processing reconstruction algorithms were statistically significant (p < 0.05; Table 2). These AI results showed that the CGpM-MAR algorithm most effectively reduced metal artifacts in areas with a large number of artifacts (ROI_locations_2). In addition, CGpM-MAR had the closest AI value to the reference FBP.
Figure 6 shows a Gumbel plot of the relationships between the largest variations and estimated cumulative probabilities. Here, the largest variations are distributed linearly (reference FBP r = 0.925 [p < 0.01], 0.43 mSv: original FBP r = 0.909 [p < 0.01]; DT-MAR r = 0.935 [p < 0.01]; CNNMAR r = 0.919 [p < 0.01]; CGpM-MAR r = 0.919 [p < 0.01], 0.31 mSv: original FBP r = 0.961 [p < 0.01]; DT-MAR r = 0.982 [p < 0.01]; CNNMAR r = 0.930 [p < 0.01]; CGpM-MAR r = 0.976 [p < 0.01], 0.20 mSv: original FBP r = 0.986 [p < 0.01]; DT-MAR r = 0.981 [p < 0.01]; CNNMAR r = 0.918 [p < 0.01]; CGpM-MAR r = 0.986 [p < 0.01]). These findings further verified the Gumbel distribution as a reasonable statistical model for describing the largest variations in each largest difference between adjacent pixel-value profiles. Furthermore, the largest variations in the Gumbel plot showed that CGpM-MAR had the minimum number of high-frequency artifacts and features most similar to the reference FBP at a 55% radiation-dose reduction (down to 0.31 mSv). Furthermore, the original FBP and CNNMAR algorithms exhibited similar distributions at a 55% radiation-dose reduction (down to 0.31 mSv). In contrast, the original FBP algorithm's distribution differed at a 71% radiation-dose reduction (down to 0.20 mSv). The AI results and Gumbel distribution indicate that CGpM-MAR had characteristics similar to the reference FBP at doses down to 0.31 mSv (55% reduction).

5. Discussion

This study revealed that the CGpM-MAR algorithm yielded an adequate overall performance, reducing the radiation dose by 55%. The combination of multi-training network images produced using this algorithm yielded good results independently of the type of metal present in the prosthesis phantom. In addition, this algorithm successfully removed low- and high-frequency artifacts from the images. CGpM-MAR was particularly useful in reducing a large number of artifacts. Therefore, this algorithm is a promising new option for prosthetic imaging, as it generated artifact-reduced images at reduced radiation doses that were far superior to images processed using conventional algorithms. The flexibility of CGpM-MAR in the choice of imaging parameters, which is based on the desired final images and prosthetic imaging conditions, promises increased usability.
The projection-space combination of multi-training approaches described here can be used to generate images to formulate the MAR as a deep learning algorithm for projection completion problems to improve the generalization and robustness of the framework. Since directly regressing accurate missing projection data is difficult to undertake [23], we propose to incorporate the prior projection image generation procedure and adopt a combination of multiple networks and a projection completion strategy. This method can improve the continuity of the projection values at the boundary of metal traces and alleviate the new artifacts, which are common drawbacks of projection completion-based MAR methods. Therefore, we believe that our novel CGpM-MAR could effectively reduce metal artifacts in actual practice.
The ability of CGpM-MAR to obtain MAR images while reducing the radiation dose by approximately 55% (Figure 5 and Figure 6) may be due to the benefits of the first process, cycle-GAN. Training an image-to-image translation framework normally requires fully paired images, which are often difficult to obtain. Cycle-GAN allows translation between domains that are not fully paired and can therefore solve this problem. Cycle-GAN uses three types of losses: first, the cycle consistency loss calculates the difference between the original image and its reconstruction after translation into the other domain and back; second, the adversarial loss encourages the translated image to look real; third, the identity loss preserves the pixel-space characteristics of the image. The two generators use a U-Net [14] structure, and the two discriminators have a Patch-GAN-based structure [39]. By applying the target style during the translation process, a low-dose projection image can be mapped to a reference-dose projection image.
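The three cycle-GAN loss terms above can be written down compactly. This is a framework-free numpy sketch of the standard formulation (least-squares adversarial loss with L1 cycle and identity losses); the weights `lambda_cyc` and `lambda_id` are illustrative defaults, not values from the paper.

```python
import numpy as np

def adversarial_loss(d_fake):
    """Least-squares GAN loss for the generator: the discriminator's
    output on translated images should look 'real' (close to 1)."""
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_consistency_loss(x, x_reconstructed):
    """L1 distance between an image and its round-trip translation
    (e.g., low-dose -> reference-dose -> low-dose)."""
    return float(np.mean(np.abs(x - x_reconstructed)))

def identity_loss(y, g_y):
    """L1 penalty when an image already in the target domain is passed
    through the generator; preserves pixel-space characteristics."""
    return float(np.mean(np.abs(y - g_y)))

def total_generator_loss(d_fake, x, x_rec, y, g_y,
                         lambda_cyc=10.0, lambda_id=5.0):
    # Weighted sum of the three loss terms described in the text.
    return (adversarial_loss(d_fake)
            + lambda_cyc * cycle_consistency_loss(x, x_rec)
            + lambda_id * identity_loss(y, g_y))
```

When the round trip is perfect, the identity map is preserved, and the discriminator is fully fooled, the total loss is zero.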
The MPN used as the final learning process in CGpM-MAR also contributed to the MAR. Adversarial learning can be introduced into the projection domain to recover structurally and anatomically plausible information within the metallic region. In addition, the MPN introduces a mask pyramid that extracts geometric information at different scales, together with a mask fusion loss that penalizes premature saturation, making learning more robust to metal implants of different shapes.
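A mask pyramid of the kind described above can be sketched as repeated 2 × 2 average pooling of the binary metal mask, giving each encoder scale geometric information about the implant at its own resolution. The pooling scheme and level count here are illustrative assumptions, not the exact MPN configuration.

```python
import numpy as np

def mask_pyramid(metal_mask, levels=4):
    """Build a pyramid of progressively downsampled metal masks by 2x2
    average pooling; level i matches the spatial resolution of the i-th
    encoder stage, so implant geometry is available at every scale."""
    pyramid = [metal_mask.astype(float)]
    for _ in range(levels - 1):
        m = pyramid[-1]
        h, w = m.shape[0] // 2 * 2, m.shape[1] // 2 * 2  # crop to even size
        m = m[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(m)
    return pyramid
```

Pooled values between 0 and 1 at coarse scales mark partial metal coverage, which is what lets a loss weight boundary pixels differently from fully metallic ones.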
In DT-MAR processing without deep learning [2], voxels containing artifacts tended to have higher values than neighboring artifact-free voxels, which affected the prosthetic appearance: the former voxels stood out against the background of the latter. Accordingly, these residual artifacts are conspicuous when images subjected to the FBP method are compared with non-artifact-reduced images. Therefore, DT-MAR processing based on the polychromatic radiographic imaging method has a limited ability to reduce metal artifacts.
The usefulness of deep learning for reducing noise and metal artifacts in DT has recently been reported [4,31]. Although noise and radiation-dose reduction using deep learning in DT of the breast is possible, no studies have reported radiation-dose reduction in the context of MAR. The radiation-dose reduction achievable with MAR but without deep learning is approximately 20% [6]; thus, applying deep learning can further improve both MAR and radiation-dose reduction.
Our CGpM-MAR algorithm has some limitations. First, this study reduced DT metal artifacts using single-energy (polychromatic) acquisition because of the versatility of this processing. When DE-DT acquisition becomes widespread in the future, we would like to apply CGpM-MAR to monochromatic radiographic imaging. Second, because CGpM-MAR combines multiple deep learning processes, generating the final image takes time; improvements in hardware processing are therefore desired to speed up the process. Third, the model was trained on phantom data. By acquiring data under various conditions reflecting clinical use, such as prosthesis size and the positional relationship between the prosthesis and normal structures, image quality can possibly be improved further. Moreover, further knowledge of various other structural patterns is needed for processing complicated structures.
Although the results of this study are limited to a prosthesis phantom, the evaluations were performed in a state close to actual biological composition in vivo. Furthermore, this approach should accelerate clinical application in terms of radiation-dose reduction and image quality improvement, which are key issues in X-ray imaging. We believe that the CGpM-MAR algorithm will help optimize acquisition protocols in future X-ray imaging and radiation-dose reduction technology and improve the accuracy of medical images.
Recently, some studies have examined GAN applications with and without MAR [40,41]. These studies applied GAN technology to reconstructed images to improve the accuracy of the tomographic image in the in-plane and longitudinal directions. Although the images we processed were projection images, we predict that further improvements in the image quality (MAR) of the reconstructed image can be made by applying GANs in the three-dimensional direction or in generating multiple slices from the input slice images. In addition, investigating GAN-based simultaneous reduction of radiation dose and metal artifacts in three-dimensional data would be an interesting future research direction.

6. Conclusions

This prosthesis phantom study revealed that a 55% reduction in radiation dose is feasible with our novel CGpM-MAR algorithm. Our algorithm was particularly useful in reducing dark artifacts and yielded relatively better statistical results (p < 0.05) in terms of metal artifact reduction than conventional reconstruction algorithms. Therefore, the CGpM-MAR algorithm can be integrated into the clinical workflow to accelerate image acquisition and reconstruction and to reduce metal artifacts while maintaining excellent image quality in radiologic imaging with prostheses.

Author Contributions

Conceptualization, T.G.; data curation, T.G.; formal analysis, T.G.; funding acquisition, T.G.; investigation, T.G.; methodology, T.G.; project administration, T.G.; resources, T.G.; supervision, T.G.; writing—original draft, T.G.; writing—review and editing, T.G.; software, H.H. and R.S.; validation, Y.W. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Kitasato University School of Allied Health Sciences (Grant-in-Aid for Research Project, No. 2021-1008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All relevant data are within the manuscript and its non-published material files.

Acknowledgments

We wish to thank Kazuaki Suwa and Yuuki Watanabe of the Department of Radiology, Dokkyo Medical University Koshigaya Hospital, for their support with the experiments.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Architecture of the building components. Channel, Kernel, Stride, Padding, and Activation denote the configurations of the convolution layers in the blocks.
BN: Batch Normalization
Prelu: Parametric Rectified Linear Unit
LRelu: Leaky Rectified Linear Unit
Tanh: Hyperbolic tangent function
*: U-net connection
#: Average pool layer (metal mask)
$: U-net connection
&: Discriminator final output layer (count 5)
☉: Discriminator average pool layer (final output)
Table A1. Cycle-GAN.

| Network | Block | Count | Channel | Kernel | Stride | Padding | Activation |
|---|---|---|---|---|---|---|---|
| Generator | encoder | 1 | 32 | 4 × 4 | 2 | 1 | BN + Prelu |
| | | 2 | 64 | | | | |
| | | 3 | 128 | | | | |
| | | 4 | 256 | | | | |
| | residual | 1 | 256 | 3 × 3 | 1 | 1 | BN + Prelu |
| | | 2 | | | | | |
| | | 3 | | | | | |
| | | 4 | | | | | |
| | | 5 | | | | | |
| | | 6 | | | | | |
| | decoder | 1 | 256 | 4 × 4 | 2 | same | BN + Prelu |
| | | 2 | | | | | |
| | | 3 | | | | | |
| | | 4 | 1 | | | | - |
| Discriminator | | 1 | 32 | 4 × 4 | 2 | 1 | LRelu |
| | | 2 | 64 | | | | BN + LRelu |
| | | 3 | 128 | | | | |
| | | 4 | 256 | | | | |
| | | 5 | 1 | 1 × 1 | 1 | 0 | - |
Table A2. Pix2pix.

| Network | Block | Count | Channel | Kernel | Stride | Padding | Activation |
|---|---|---|---|---|---|---|---|
| Generator | encoder | 1 | 128 | 4 × 4 | 2 | same | LRelu |
| | | 2 | 256 | | | 1 | LRelu + BN |
| | | 3 | 512 | | | | |
| | | 4 | 1024 | | | | |
| | | 5 | 1024 | | | | |
| | | 6 | 1024 | | | | |
| | | 7 | 1024 | | | | |
| | decoder | 1 | 1024 | 4 × 4 | 2 | same | LRelu + BN |
| | | 2 | 1024 + 1024 * | | | | |
| | | 3 | 1024 + 1024 * | | | | |
| | | 4 | 1024 + 1024 * | | | | |
| | | 5 | 512 + 512 * | | | | |
| | | 6 | 256 + 256 * | | | | |
| | | 7 | 128 + 128 * | | | | |
| | | 1 | - | - | - | | Tanh |
| Discriminator | | 1 | 128 | 4 × 4 | 2 | 1 | LRelu |
| | | 2 | 256 | | | | LRelu + BN |
| | | 3 | 512 | | | | |
| | | 4 | 1024 | | | | |
| | | 5 | 1 | | 1 | 0 | Sigmoid |
Table A3. MPN.

| Network | Block | Count | Channel | Kernel | Stride | Padding | Activation |
|---|---|---|---|---|---|---|---|
| Generator | encoder | 1 | 128 + 1 # | 4 × 4 | 2 | 1 | LRelu |
| | | 2 | 256 + 2 # | | | | BN + LRelu |
| | | 3 | 512 + 4 # | | | | |
| | | 4 | 1024 + 6 # | | | | |
| | | 5 | 1024 + 8 # | | | | |
| | | 6 | 1024 + 10 # | | | | |
| | | 7 | 1024 + 12 # | | | | |
| | # | 1–7 | 1 | 4 × 4 | 2 | 1 | - |
| | decoder | 1 | 1024 + 13 $ | 4 × 4 | 2 | same | BN |
| | | 2 | 1024 + (1024 + 24) $ | | | | BN + LRelu |
| | | 3 | 1024 + (1024 + 21) $ | | | | |
| | | 4 | 1024 + (1024 + 17) $ | | | | |
| | | 5 | 512 + (512 + 9) $ | | | | |
| | | 6 | 256 + (256 + 5) $ | | | | |
| | | 7 | 128 + (128 + 1) $ | | | | |
| | | 1 | - | - | - | | Tanh |
| Discriminator | | 1 | 128 | 4 × 4 | 2 | 1 | BN |
| | | 2 | 256 | | | | BN + LRelu |
| | | 3 | 512 | | | | |
| | | 4 | 1024 | | | | |
| | | 5 | 1 | | 1 | 0 | |
| | # | 1–4 | 1 | 4 × 4 | 2 | 1 | - |
| | | 5 | 1 | | 1 | 0 | - |
| | & | 1 | - | - | - | | Sigmoid |

References

  1. Glover, G.H.; Pelc, N.J. An algorithm for the reduction of metal clip artifacts in CT reconstructions. Med. Phys. 1981, 8, 799–807.
  2. Gomi, T.; Hirano, H.; Umeda, T. Evaluation of the X-ray digital linear tomosynthesis reconstruction processing method for metal artifact reduction. Comput. Med. Imaging Graph. 2009, 33, 267–274.
  3. Gomi, T.; Sakai, R.; Goto, M.; Hara, H.; Watanabe, Y. Development of a novel algorithm for metal artifact reduction in digital tomosynthesis using projection-based dual-energy material decomposition for arthroplasty: A phantom study. Phys. Med. 2018, 53, 4–16.
  4. Gomi, T.; Sakai, R.; Hara, H.; Watanabe, Y.; Mizukami, S. Development of a denoising convolutional neural network-based algorithm for metal artifact reduction in digital tomosynthesis for arthroplasty: A phantom study. PLoS ONE 2019, 14, e0222406.
  5. Gomi, T.; Sakai, R.; Goto, M.; Hara, H.; Watanabe, Y.; Umeda, T. Evaluation of digital tomosynthesis reconstruction algorithms used to reduce metal artifacts for arthroplasty: A phantom study. Phys. Med. 2017, 42, 28–38.
  6. Gomi, T.; Sakai, R.; Goto, M.; Watanabe, Y.; Takeda, T.; Umeda, T. Comparison of Reconstruction Algorithms for Decreasing the Exposure Dose During Digital Tomosynthesis for Arthroplasty: A Phantom Study. J. Digit. Imaging 2016, 29, 488–495.
  7. Robertson, D.D.; Weiss, P.J.; Fishman, E.K.; Magid, D.; Walker, P.S. Evaluation of CT techniques for reducing artifacts in the presence of metallic orthopedic implants. J. Comput. Assist. Tomogr. 1988, 12, 236–241.
  8. Kalender, W.A.; Hebel, R.; Ebersberger, J. Reduction of CT artifacts caused by metallic implants. Radiology 1987, 164, 576–577.
  9. Bal, M.; Spies, L. Metal artifact reduction in CT using tissue-class modeling and adaptive prefiltering. Med. Phys. 2006, 33, 2852–2859.
  10. Prell, D.; Kyriakou, Y.; Beister, M.; Kalender, W.A. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography. Phys. Med. Biol. 2009, 54, 6575–6591.
  11. Meyer, E.; Raupach, R.; Lell, M.; Schmidt, B.; Kachelriess, M. Normalized metal artifact reduction (NMAR) in computed tomography. Med. Phys. 2010, 37, 5482–5493.
  12. Katsura, M.; Sato, J.; Akahane, M.; Kunimatsu, A.; Abe, O. Current and Novel Techniques for Metal Artifact Reduction at CT: Practical Guide for Radiologists. Radiographics 2018, 38, 450–461.
  13. Park, H.S.; Lee, S.M.; Kim, H.P.; Seo, J.K.; Chung, Y.E. CT sinogram-consistency learning for metal-induced beam hardening correction. Med. Phys. 2018, 45, 5376–5384.
  14. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241.
  15. Zhang, Y.; Yu, H. Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography. IEEE Trans. Med. Imaging 2018, 37, 1370–1381.
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
  17. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 105–114.
  18. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep Image Prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 9446–9454.
  19. Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning Image Restoration without Clean Data. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 2965–2974.
  20. Gjesteby, L.; Yang, Q.; Xi, Y.; Claus, B.; Jin, Y.; De Man, B.; Wang, G. Reducing Metal Streak Artifacts in CT Images via Deep Learning: Pilot Results. In Proceedings of the 14th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Xi’an, China, 18–23 June 2017; pp. 611–614.
  21. Liang, K.; Zhang, L.; Yang, H.; Yang, Y.; Chen, Z.; Xing, Y. Metal artifact reduction for practical dental computed tomography by improving interpolation-based reconstruction with deep learning. Med. Phys. 2019, 46, e823–e834.
  22. Liao, H.; Lin, W.A.; Zhou, S.K.; Luo, J. ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction. IEEE Trans. Med. Imaging 2019.
  23. Yu, L.; Zhang, Z.; Li, X.; Xing, L. Deep Sinogram Completion With Image Prior for Metal Artifact Reduction in CT Images. IEEE Trans. Med. Imaging 2021, 40, 228–238.
  24. Lin, W.-A.; Liao, H.; Peng, C.; Sun, X.; Zhang, J.; Luo, J.; Chellappa, R.; Zhou, S.K. DuDoNet: Dual Domain Network for CT Metal Artifact Reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 10512–10521.
  25. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-To-Image Translation With Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
  26. Wang, J.; Zhao, Y.; Noble, J.H.; Dawant, B.M. Conditional Generative Adversarial Networks for Metal Artifact Reduction in CT Images of the Ear. Med. Image Comput. Comput. Assist. Interv. 2018, 11070, 3–11.
  27. Han, C.; Rundo, L.; Araki, R.; Nagano, Y.; Furukawa, Y.; Mauri, G.; Nakayama, H.; Hayashi, H. Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection. IEEE Access 2019, 7, 156966–156977.
  28. Zhu, J.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251.
  29. Sandfort, V.; Yan, K.; Pickhardt, P.J.; Summers, R.M. Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks. Sci. Rep. 2019, 9, 16884.
  30. Liao, H.; Lin, W.-A.; Huo, Z.; Vogelsang, L.; Sehnert, W.J.; Zhou, S.K.; Luo, J. Generative Mask Pyramid Network for CT/CBCT Metal Artifact Reduction with Joint Projection-Sinogram Correction. In Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), Shenzhen, China, 13–17 October 2019; Springer: Cham, Switzerland, 2019; pp. 77–85.
  31. Gao, M.; Fessler, J.A.; Chan, H.P. Deep Convolutional Neural Network with Adversarial Training for Denoising Digital Breast Tomosynthesis Images. IEEE Trans. Med. Imaging 2021.
  32. Dobbins, J.T., 3rd; Godfrey, D.J. Digital x-ray tomosynthesis: Current state of the art and clinical potential. Phys. Med. Biol. 2003, 48, R65–R106.
  33. Tapiovaara, M.; Siiskonen, T. A Monte Carlo Program for Calculating Patient Doses in Medical X-ray Examinations, 2nd ed.; STUK-Radiation and Nuclear Safety Authority: Helsinki, Finland, 2008; Report No. STUK-A231.
  34. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
  35. Zhou, W.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
  36. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  37. Wang, Y.; Qian, B.; Li, B.; Qin, G.; Zhou, Z.; Qiu, Y.; Sun, X.; Zhu, B. Metal artifacts reduction using monochromatic images from spectral CT: Evaluation of pedicle screws in patients with scoliosis. Eur. J. Radiol. 2013, 82, e360–e366.
  38. Imai, K.; Ikeda, M.; Enchi, Y.; Niimi, T. Statistical characteristics of streak artifacts on CT images: Relationship between streak artifacts and mAs values. Med. Phys. 2009, 36, 492–499.
  39. Larsen, A.B.L.; Sønderby, S.K.; Winther, O. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; Volume 48, pp. 1558–1566.
  40. Han, C.; Rundo, L.; Murao, K.; Noguchi, T.; Shimahara, Y.; Milacski, Z.A.; Koshino, S.; Sala, E.; Nakayama, H.; Satoh, S. MADGAN: Unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction. BMC Bioinform. 2021, 22, 31.
  41. Wang, J.; Noble, J.H.; Dawant, B.M. Metal artifact reduction for the segmentation of the intra cochlear anatomy in CT images of the ear with 3D-conditional GANs. Med. Image Anal. 2019, 58, 101553.
Figure 1. Flowchart of the combined hybrid GAN, cycle-GAN_pix2pix_MPN algorithm (CGpM-MAR). The interrelations among the algorithms are also shown.
Figure 2. Assessment of improvements in image quality using the artifact index (AI) and a statistical model with a Gumbel distribution of the selected features. The in-focus plane images show (a) the artifact and background areas used for the AI and (b) the measurement and high-frequency artifact areas with the prosthesis used for the Gumbel analysis.
Figure 3. Optimization results for parameter (epochs) determination for each network algorithm at different radiation-dose levels: (a) cycle-GAN 0.43 mSv; (b) cycle-GAN 0.31 mSv; (c) cycle-GAN 0.20 mSv; (d) pix2pix 0.43 mSv; (e) pix2pix 0.31 mSv; (f) pix2pix 0.20 mSv; (g) MPN 0.43 mSv; (h) MPN 0.31 mSv; and (i) MPN 0.20 mSv.
Figure 4. Comparisons between the combined hybrid GAN, cycle-GAN_pix2pix_MPN algorithm (CGpM-MAR), and the conventional reconstruction algorithms with and without metal artifact reduction (MAR) in the in-focus plane (display windows: reference FBP [0.15–0.44]; 0.43 mSv: CGpM-MAR [0.27–0.34], original FBP [0.15–0.44], DT-MAR [0–0.72], CNNMAR [0–0.20]; 0.31 mSv: CGpM-MAR [0.24–0.32], original FBP [0.15–0.44], DT-MAR [0–0.72], CNNMAR [0–0.20]; 0.20 mSv: CGpM-MAR [0.05–0.22], original FBP [0.15–0.44], DT-MAR [0–0.72], CNNMAR [0–0.20]). Metallic implants were replaced with constant values (white) after processing with CGpM-MAR and CNNMAR. The display range of the prosthesis phantom was changed for visual comparison of the contrast and background gray levels. The X-ray source was moved vertically along the image. The artifact indexes were determined in the displayed areas.
Figure 5. Comparisons of the artifact indexes (AIs) determined for in-focus plane images obtained using the cycle-GAN_pix2pix_MPN (CGpM-MAR) and other conventional reconstruction algorithms with and without metal artifact reduction (MAR) processing. Metal artifacts originating from the AIs of six (ROI_location_1) and eight (ROI_location_2) selected metal artifact areas (features) and one background area are presented in the in-focus plane. Reference FBP, 0.69 mSv; (a) ROI_location_1, 0.43 mSv; (b) ROI_location_1, 0.31 mSv; (c) ROI_location_1, 0.20 mSv; (d) ROI_location_2, 0.43 mSv; (e) ROI_location_2, 0.31 mSv; and (f) ROI_location_2, 0.20 mSv.
Figure 6. The largest variations extracted from 48 pixel-value profiles are plotted. The relatively large variations in pixel values can be attributed to the high-frequency artifacts. [(a) 0.20 mSv; (b) 0.31 mSv; and (c) 0.43 mSv].
Table 1. Metal artifact reduction performances of tomosynthesis reconstruction algorithms (streak artifact area).

ROI_Location_1

| Variable | Difference | Standard Error | p | 95% CI * Lower Limit | 95% CI * Upper Limit |
|---|---|---|---|---|---|
| CGpM-MAR vs. Original-FBP | −0.0664 | 0.01717 | 0.002 | −0.1118 | −0.0211 |
| CGpM-MAR vs. DT-MAR | −0.0190 | 0.01717 | 0.686 | −0.0644 | 0.0263 |
| CGpM-MAR vs. CNNMAR | −0.0441 | 0.01717 | 0.060 | −0.0895 | 0.0013 |
| Original-FBP vs. DT-MAR | 0.0474 | 0.01717 | 0.037 | 0.0021 | 0.0928 |
| Original-FBP vs. CNNMAR | 0.0224 | 0.01717 | 0.565 | −0.0230 | 0.0677 |
| Original-FBP vs. CGpM-MAR | 0.0664 | 0.01717 | 0.002 | 0.0211 | 0.1118 |
| DT-MAR vs. Original-FBP | −0.0474 | 0.01717 | 0.037 | −0.0928 | −0.0021 |
| DT-MAR vs. CNNMAR | −0.0251 | 0.01717 | 0.468 | −0.0704 | 0.0203 |
| DT-MAR vs. CGpM-MAR | 0.0190 | 0.01717 | 0.686 | −0.0263 | 0.0644 |
| CNNMAR vs. Original-FBP | −0.0224 | 0.01717 | 0.565 | −0.0677 | 0.0230 |
| CNNMAR vs. DT-MAR | 0.0251 | 0.01717 | 0.468 | −0.0203 | 0.0704 |
| CNNMAR vs. CGpM-MAR | 0.0441 | 0.01717 | 0.060 | −0.0013 | 0.0895 |

| Source of variation | df * | Sums of squares | Mean square | F | p |
|---|---|---|---|---|---|
| Algorithm | 3 | 0.045 | 0.015 | 5.709 | 0.002 |
| Dose | 2 | 0.010 | 0.005 | 1.827 | 0.170 |
| Algorithm × Dose | 6 | 0.017 | 0.003 | 1.086 | 0.381 |
| Error | 60 | 0.159 | 0.003 | - | - |

* CI: confidence interval; df: degrees of freedom. Dependent variable: artifact index value. Tukey–Kramer test; p < 0.05 indicates a significant difference (without metal artifact reduction processing).
Table 2. Metal artifact reduction performances of tomosynthesis reconstruction algorithms (dark artifact area).

ROI_Location_2

| Variable | Difference | Standard Error | p | 95% CI * Lower Limit | 95% CI * Upper Limit |
|---|---|---|---|---|---|
| CGpM-MAR vs. Original-FBP | −0.1726 | 0.02142 | 0.000 | −0.2288 | −0.1165 |
| CGpM-MAR vs. DT-MAR | 0.0836 | 0.02142 | 0.001 | 0.0274 | 0.1397 |
| CGpM-MAR vs. CNNMAR | −0.1238 | 0.02142 | 0.000 | −0.1800 | −0.0677 |
| Original-FBP vs. DT-MAR | 0.2562 | 0.02142 | 0.000 | 0.2001 | 0.3124 |
| Original-FBP vs. CNNMAR | 0.0488 | 0.02142 | 0.111 | −0.0074 | 0.1049 |
| Original-FBP vs. CGpM-MAR | 0.1726 | 0.02142 | 0.000 | 0.1165 | 0.2288 |
| DT-MAR vs. Original-FBP | −0.2562 | 0.02142 | 0.000 | −0.3124 | −0.2001 |
| DT-MAR vs. CNNMAR | −0.2074 | 0.02142 | 0.000 | −0.2636 | −0.1513 |
| DT-MAR vs. CGpM-MAR | −0.0836 | 0.02142 | 0.001 | −0.1397 | −0.0274 |
| CNNMAR vs. Original-FBP | −0.0488 | 0.02142 | 0.111 | −0.1049 | 0.0074 |
| CNNMAR vs. DT-MAR | 0.2074 | 0.02142 | 0.000 | 0.1513 | 0.2636 |
| CNNMAR vs. CGpM-MAR | 0.1238 | 0.02142 | 0.000 | 0.0677 | 0.1800 |

| Source of variation | df * | Sums of squares | Mean square | F | p |
|---|---|---|---|---|---|
| Algorithm | 3 | 0.979 | 0.326 | 59.273 | 0.000 |
| Dose | 2 | 0.154 | 0.077 | 13.960 | 0.000 |
| Algorithm × Dose | 6 | 0.351 | 0.058 | 10.618 | 0.000 |
| Error | 84 | 0.462 | 0.006 | - | - |

* CI: confidence interval; df: degrees of freedom. Dependent variable: artifact index value. Tukey–Kramer test; p < 0.05 indicates a significant difference (without metal artifact reduction processing).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Gomi, T.; Sakai, R.; Hara, H.; Watanabe, Y.; Mizukami, S. Usefulness of a Metal Artifact Reduction Algorithm in Digital Tomosynthesis Using a Combination of Hybrid Generative Adversarial Networks. Diagnostics 2021, 11, 1629. https://doi.org/10.3390/diagnostics11091629
