Article

Multimodal Image Fusion for X-ray Grating Interferometry

1 School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou 325000, China
2 State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China
3 The Engineering & Technical College of Chengdu University of Technology, Leshan 614000, China
4 Sigray, Inc., Concord, CA 94520, USA
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(6), 3115; https://doi.org/10.3390/s23063115
Submission received: 2 February 2023 / Revised: 11 March 2023 / Accepted: 13 March 2023 / Published: 14 March 2023

Abstract: X-ray grating interferometry (XGI) can provide multiple image modalities by utilizing three different contrast mechanisms—attenuation, refraction (differential phase shift), and scattering (dark field)—in a single dataset. Combining all three imaging modalities could create new opportunities for the characterization of material structure features that conventional attenuation-based methods are unable to probe. In this study, we proposed an image fusion scheme based on the non-subsampled contourlet transform and spiking cortical model (NSCT-SCM) to combine the tri-contrast images retrieved from XGI. It incorporated three main steps: (i) image denoising based on Wiener filtering, (ii) the NSCT-SCM tri-contrast fusion algorithm, and (iii) image enhancement using contrast-limited adaptive histogram equalization, adaptive sharpening, and gamma correction. Tri-contrast images of frog toes were used to validate the proposed approach. Moreover, the proposed method was compared with three other image fusion methods by several figures of merit. The experimental evaluation results highlighted the efficiency and robustness of the proposed scheme, with less noise, higher contrast, more information, and better details.

1. Introduction

X-ray imaging techniques, such as mammography [1] and computed tomography (CT) [2], have become indispensable diagnostic tools for investigating the inner structure of materials. They can provide valuable information in many fields, from medical diagnosis to industrial inspection and security screening. Traditionally, the image contrast of these techniques depends on differences in X-ray attenuation. The attenuation contrast ($\mu$) positively correlates with the material mass density ($\rho$) and atomic number ($Z$) ($\mu \propto \rho Z^4$), and negatively correlates with the X-ray energy ($E$) ($\mu \propto 1/E^3$) [3]. In principle, conventional X-ray attenuation-based imaging is ideal for materials with high absorption. However, when low-Z materials are investigated with high-energy X-rays, the attenuation contrast becomes extremely poor unless the dose deposition is significantly increased.
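To make this scaling law concrete, the following toy sketch (in Python; illustrative values only, not from the paper) shows how strongly attenuation falls with energy under $\mu \propto \rho Z^4 / E^3$; the density and effective atomic number used for soft tissue are rough textbook figures chosen purely for illustration.

```python
def mu_relative(rho, Z, E):
    """Relative attenuation under the scaling law mu ~ rho * Z^4 / E^3 [3]."""
    return rho * Z ** 4 / E ** 3

# Rough illustrative values: soft tissue, rho ~ 1.0 g/cm^3, Z_eff ~ 7.4.
# Tripling the energy (20 keV -> 60 keV) reduces attenuation ~27-fold,
# which is why low-Z materials yield poor contrast at high energies.
for E_keV in (20.0, 60.0):
    print(f"E = {E_keV:4.0f} keV: mu(tissue) ~ {mu_relative(1.0, 7.4, E_keV):.4f} (arb. units)")
```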
Recently, X-ray grating interferometry (XGI) has been introduced to mitigate the inherent limitations of imaging low-Z materials with conventional X-ray techniques. Because it is compatible with conventional low-coherence X-ray sources and detectors, XGI has become the most promising scheme for translating grating-based phase-sensitive imaging into practice [4]. Moreover, XGI is a multi-contrast imaging technique, able to provide three physically different signals with complementary image contrast: attenuation contrast (AC), differential phase contrast (DPC), and small-angle scattering, also known as dark-field contrast (DFC) [5]. The phase signal can reveal differences between materials with similar absorption properties because it is highly sensitive to electron density variations in the object. The scattering signal can access unresolved structural variations of the sample on the micrometer scale, beyond the system resolution. Many studies have demonstrated that both the differential phase and scattering modalities offer valuable information in addition to conventional attenuation contrast, in clinical applications such as mammography [6,7] and lung imaging [8,9], as well as in non-destructive testing [10] and material science in industrial settings [11]. The scattering signal, in particular, has piqued the attention of researchers because of its effectiveness in offering quantitative or otherwise inaccessible structural information in radiographic applications [12,13,14].
Adding two informationally complementary contrasts to the conventional attenuation contrast enriches the information access channels. However, the three output images represent morphological features of an object with different physical properties, which can significantly increase the complexity of interpretation and burden the physician. Image fusion could combine the tri-contrast modalities into a single integrated image, making analysis and diagnosis less cumbersome. The simultaneous acquisition of the tri-contrast images circumvents the preregistration process for image fusion because the retrieved AC, DPC, and DFC images are temporally and spatially registered. This is particularly advantageous for reducing artifacts in the fusion procedure and conserving the reliability of the acquired information.
Tri-contrast image fusion methods have been developed over the past decade. Ewald Roessl et al. presented, in 2012, an image fusion algorithm to combine AC and DPC based on an assumed simple scaling law [15]. However, the DFC signal was not considered in the procedure. Z. Wang et al. proposed a tri-contrast fusion method based on multiple resolutions in 2013 [16]. It successfully transferred details from the original images to the fusion results. However, the study lacked objective measurements to evaluate the method’s performance. Felix Scholkmann et al. proposed an image denoising, fusion, and enhancement scheme in 2014 [17]. It achieved pleasing results in both dental and breast imaging applications because it introduced pre-denoising and post-enhancement. However, the fusion rule of their scheme was unable to process three input images simultaneously, making it unsuitable for trimodal application. Eduardo Coello et al. introduced a Fourier domain framework for XGI fusion in 2017 [18]. The fusion results contained abundant diagnostic features and details, attributed to the full utilization of complementary information from the three XGI channels by the Fourier transform. However, they did not compare it with other image fusion algorithms.
In this work, an XGI fusion scheme based on the non-subsampled contourlet transform (NSCT) and the spiking cortical model (SCM) was proposed to address several drawbacks of the tri-contrast image fusion methods mentioned above. This scheme was able to process tri-contrast images from the three channels of XGI simultaneously. It incorporated the pre-denoising of the XGI outputs, the fusion process (based on NSCT-SCM), and the post-enhancement of the fusion results. The proposed fusion algorithm was able to extract fine details and essential information from the tri-contrast images of XGI, presenting them in a final fused image with high contrast and low noise. The similarity between the fusion result and the AC, DPC, and DFC channels of XGI was modulated by several tunable parameters, facilitating the easy realization of prior knowledge and preferences for particular channels.
Moreover, the proposed fusion scheme was compared with the three XGI fusion methods mentioned above, i.e., the work of Felix Scholkmann et al. [17], the conventional NSCT image fusion algorithm, and the conventional NSCT-pulse-coupled neural network (PCNN) image fusion algorithm. The comparison was carried out with both subjective and objective evaluations. The objective measures incorporated edge strength ($ES$), spatial frequency ($SF$), standard deviation ($SD$), entropy ($H$), feature mutual information ($FMI$), feature similarity index measure ($FSIM$), fusion factor ($FF$), structural similarity index measure ($SSIM$), and power spectral density (PSD). Experimental results demonstrated the robustness and effectiveness of the proposed multimodal image fusion scheme.
The rest of this study was organized as follows: the basic principles of XGI fusion, NSCT, and SCM were presented in Section 2; the proposed NSCT-SCM XGI fusion scheme was illustrated in Section 3; the introduction of objective evaluation criteria was presented in Section 4; the experimental analysis of the proposed method was presented, together with the comparison with the other three algorithms for XGI fusion, in Section 5; and conclusions were drawn in Section 6.
Contributions of this study:
(1) drawbacks of existing image fusion methods for XGI were analyzed;
(2) an image fusion scheme based on NSCT-SCM for XGI was proposed;
(3) a tunable sub-band coefficient selection strategy was proposed to serve the special requirements of XGI fusion;
(4) the proposed NSCT-SCM image fusion scheme was applied to XGI data of frog toes and compared with current fusion methods in the XGI fusion field, exhibiting state-of-the-art performance.

2. Materials and Methods

2.1. Image Fusion for X-ray Grating Interferometry

X-ray grating interferometry simultaneously retrieves three complementary signals: AC, DPC, and DFC channels. Among these signals, AC represents the attenuation of the X-ray intensity; therefore, it provides the same information as conventional X-ray imaging, presenting it in the form of an X-ray absorption coefficient. DPC, on the other hand, is presented in the form of a refraction index, which relates to the X-ray’s local deflection. Finally, DFC is defined by the small-angle X-ray scattering at sub-pixel structures, presenting detailed information that would not be easily visible in the previous channels.
In XGI image fusion, the high-frequency components of the DPC and DFC images are selected to provide more features and details. At the same time, the low-frequency components of the AC image are preferred because of an intrinsic principle of conventional X-ray methods: making images easy for doctors or radiologists to read [18]. In addition, because the three images from XGI are retrieved simultaneously from the same direction by the same sensor, there is no need for additional image registration.

2.2. Non-Subsampled Contourlet Transform

Minh N. Do and Martin Vetterli proposed the contourlet transform (CT) in 2005 [19]. The following analogy demonstrates the advantages of CT: imagine two painters, one using a wavelet style and the other a contourlet style, both planning to paint a natural scene. Each painter increases the resolution of their painting from coarse to fine, step by step. When painting a smooth contour, as shown in Figure 1, the wavelet-style painter can only use square-shaped brush strokes along the contour [20], with different-sized strokes corresponding to the multiresolution structure of wavelets [21,22]. As the resolution grows finer, it becomes apparent that this painter needs a significant number of fine dots to describe the contour. The contourlet-style painter, in the same scenario, effectively and efficiently maintains the smoothness of the contour by using brushstrokes with elongated shapes that follow the directions of the contour. This analogy gives a clear view of the advantage of the CT over the wavelet: the CT decomposes an image along its contours, yielding a sparser and more efficient representation.
Derived from CT, NSCT is a multi-directional, multi-scale transform that can analyze detailed information in an image [23,24]. It uses the non-subsampled pyramid filter bank (NSPFB) and the non-subsampled directional filter bank (NSDFB), and thus achieves the shift-invariance property. First, the input image is decomposed into two parts by the NSPFB: a high-pass and a low-pass sub-band. Then, the high-pass sub-band is further decomposed into several directional sub-bands by the NSDFB, while the low-pass sub-band undergoes the same decomposition as a new input. As shown in Figure 2, when the decomposition is complete, one low-pass sub-band and several high-pass directional sub-bands are obtained from the original input image. Note that the size of each sub-band is the same as that of the original image because there is no sampling operation. Moreover, NSCT has a redundancy given by $R = \sum_{j=0}^{J} 2^{l_j}$, where $2^{l_j}$ is the number of directions at scale $j$.
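NSCT itself involves carefully designed pyramid and directional filter banks, so a full implementation is beyond a short example. The shift-invariant pyramid idea behind the NSPFB, however, can be sketched with the classic à trous algorithm, in which every level stays at full image size. The following is our simplified stand-in, not the authors' implementation, and it omits the directional NSDFB stage:

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_pyramid(img, levels=3):
    """Non-subsampled (shift-invariant) pyramid: every sub-band keeps the
    full image size, as in the NSPFB stage of NSCT."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0       # B3-spline low-pass
    base = np.outer(h, h)
    low, highs = img.astype(float), []
    for j in range(levels):
        # "A trous": upsample the kernel by inserting 2^j - 1 zeros between taps.
        k = np.zeros((4 * 2 ** j + 1, 4 * 2 ** j + 1))
        k[:: 2 ** j, :: 2 ** j] = base
        smooth = convolve(low, k, mode='nearest')
        highs.append(low - smooth)    # full-size detail (high-pass) sub-band
        low = smooth
    return low, highs                 # exact reconstruction: low + sum(highs)
```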

2.3. Spiking Cortical Model

The spiking cortical model [25] is a modified model, based on Eckhorn’s neural network, that uses physiology as inspiration [26]. It has fewer parameters and better accuracy than the original model. Its time matrix can be recognized as a subjective, human sense of stimulus intensity. As a result of these physiology-inspired neural networks’ outstanding ability to extract dynamic information inside multi-dimensional signals, they have been widely used in numerous fields. Instances include feature extraction [27], pulse shape discrimination [28,29,30], image encryption [31], and image segmentation and fusion [32,33].
Consider a biological neuron in a resting state: its membrane potential is directly charged by an external stimulus and, at the same time, modulated by the postsynaptic action potentials of its neighboring neurons. The SCM mimics this biological neural activity: the membrane potential of a neuron in the SCM is calculated by combining the external stimulus and the neighboring modulation. A neuron in the SCM fires and produces a spike when its membrane potential rises above its threshold. The threshold is dynamic, constantly changing under the influence of the membrane potential states. Based on these characteristics, the mathematical formulae of the SCM [25] can be written as follows:
$$U_{ij}(n) = f\,U_{ij}(n-1) + S_{ij}\left(1 + \beta \sum_{kl} W_{ijkl}\,Y_{kl}(n-1)\right),$$
$$Y_{ij}(n) = \begin{cases} 1, & \text{if}\ \dfrac{1}{1+\exp\left(-U_{ij}(n) + \Theta_{ij}(n-1)\right)} > 0.5, \\ 0, & \text{otherwise}, \end{cases}$$
$$\Theta_{ij}(n) = g\,\Theta_{ij}(n-1) + h\,Y_{ij}(n),$$
where each neuron is denoted by a coordinate $(i,j)$; $(k,l)$ indexes the neighboring neurons of the central neuron at $(i,j)$; $U_{ij}(n)$ is the membrane potential of the neuron at $(i,j)$ at iteration $n$; $S_{ij}$ is the external stimulus; $\Theta_{ij}$ is the dynamic threshold; $Y_{ij}(n)$ is the output action potential (spike); the convolution of $W$ and $Y$ stands for the modulation of the center neuron at $(i,j)$ by its neighborhood; $W$ is the synaptic weight matrix; $\beta$ is the linking strength coefficient; $f$ denotes the attenuation constant of the membrane potential, which defines its gathering speed; and $g$ represents the threshold's attenuation constant, controlling the relative refractory period (i.e., the difficulty of reactivating peripheral neurons). Finally, $h$ sets the absolute refractory period, which prevents a neuron that has just fired from immediately being reactivated.
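A minimal NumPy sketch of this iteration follows (the paper's experiments were run in MATLAB; this is our illustrative translation). The synaptic weight matrix uses the values reported in Section 5.1, while beta, the iteration count, and the accumulation of spikes into a firing-record matrix T are our assumptions for illustration; accumulating Y over iterations is one common way to form the ignition matrix used later for fusion.

```python
import numpy as np
from scipy.ndimage import convolve

def scm_ignition(S, n_iter=40, f=0.8, g=0.7, h=20.0, beta=0.3):
    """Iterate the SCM equations above on a stimulus image S (values in [0, 1])
    and return an ignition matrix T recording each pixel's firing activity."""
    W = np.array([[0.1091, 0.1409, 0.1091],
                  [0.1409, 0.0,    0.1409],
                  [0.1091, 0.1409, 0.1091]])
    U = np.zeros_like(S, dtype=float)       # membrane potential
    Theta = np.ones_like(S, dtype=float)    # dynamic threshold
    Y = np.zeros_like(S, dtype=float)       # spikes
    T = np.zeros_like(S, dtype=float)       # accumulated firing record
    for _ in range(n_iter):
        U = f * U + S * (1.0 + beta * convolve(Y, W, mode='constant'))
        Y = (1.0 / (1.0 + np.exp(-(U - Theta))) > 0.5).astype(float)
        Theta = g * Theta + h * Y
        T += Y
    return T
```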

3. NSCT-SCM Fusion Scheme

The proposed image fusion scheme incorporated three steps: (i) denoising all three input images (AC, DPC, and DFC) using adaptive Wiener filtering, (ii) applying the NSCT-SCM-based image fusion algorithm to the denoised images, and (iii) enhancing the fused output image using contrast-limited adaptive histogram equalization (CLAHE), adaptive sharpening (AS), and gamma correction (GC). The principle of the NSCT-SCM XGI fusion scheme is illustrated in Figure 3.

3.1. Step 1. Image Denoising Based on Wiener Filtering

To obtain better-quality raw images, an adaptive Wiener filter was applied to reduce the noise in each image while preserving the high-frequency information and edge features. The size of each input image is denoted by $M \times N$; the AC, DPC, and DFC images are represented by $I_{AC} = I_{AC}(i,j)$, $I_{DPC} = I_{DPC}(i,j)$, and $I_{DFC} = I_{DFC}(i,j)$, respectively, where $i = 1, 2, \ldots, M$ and $j = 1, 2, \ldots, N$. The image $I_D$ obtained after Wiener filtering is expressed as follows [34]:
$$I_D(i,j) = m + \frac{\sigma^2 - v^2}{\sigma^2}\big(I(i,j) - m\big),$$
$$m = \frac{1}{XY}\sum_{i=1}^{X}\sum_{j=1}^{Y} I(i,j),$$
$$\sigma^2 = \frac{1}{XY}\sum_{i=1}^{X}\sum_{j=1}^{Y} I^2(i,j) - m^2,$$
where $m$ stands for the local mean, $\sigma^2$ denotes the local variance, and $v^2$ denotes the noise variance, estimated in practice as the average of all the local variances; $X$ and $Y$ are manual parameters that define the processing window size in the to-be-processed image $I$. After applying adaptive Wiener filtering to the AC, DPC, and DFC images, the output images are denoted $I_{AC}^{D}$, $I_{DPC}^{D}$, and $I_{DFC}^{D}$.
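SciPy ships an adaptive Wiener filter of exactly this local-statistics form, so step 1 can be sketched in a few lines; the image below is a random placeholder, and the 5 × 5 window matches the size reported in Section 5.1.

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(0)
I_ac = rng.random((256, 256))            # placeholder for a retrieved AC image

# Adaptive Wiener filtering with a 5x5 local window; the noise variance is
# estimated internally as the average of the local variances.
I_ac_d = wiener(I_ac, mysize=(5, 5))
```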

3.2. Step 2. NSCT-SCM XGI Fusion Algorithm

In this step, the three images ($I_{AC}^{D}$, $I_{DPC}^{D}$, and $I_{DFC}^{D}$) were fused into one image, $I_F^D$.
  • First, the NSCT was applied to $I_{AC}^{D}$, $I_{DPC}^{D}$, and $I_{DFC}^{D}$ to obtain the images' high-frequency coefficients ($H_{AC}^{D,n}$, $H_{DPC}^{D,n}$, and $H_{DFC}^{D,n}$) and low-frequency coefficients ($L_{AC}^{D}$, $L_{DPC}^{D}$, and $L_{DFC}^{D}$), where $n$ indexes the high-frequency coefficients, because multiple high-frequency coefficients are decomposed from a single image. Note that each coefficient obtained from the NSCT has the same size as the input images, $M \times N$ in this case. Additionally, although only one low-frequency coefficient is obtained from the NSCT of a single image, multiple high-frequency coefficients can be obtained, depending on the decomposition levels of the NSDFB and NSPFB.
  • Second, the high-frequency and low-frequency coefficients were fed into the SCM, generating the firing state of each coefficient ($T_{AC}^{D,n}$, $T_{DPC}^{D,n}$, or $T_{DFC}^{D,n}$ for the high-frequency coefficients and $T_{AC}^{D,L}$, $T_{DPC}^{D,L}$, or $T_{DFC}^{D,L}$ for the low-frequency coefficients), i.e., the ignition matrix. Each ignition matrix has the same size as its input coefficient, $M \times N$ in this case.
  • Two separate fusion rules were provided for high-frequency and low-frequency coefficients because of the need to preserve details and features in the high-frequency sub-band and keep the low-frequency part of the fused final image closer to the AC image. It is easier for doctors or radiologists to analyze a fused tri-contrast image when its low-frequency sub-band is close to that of the AC channel. Under this condition, the final fusion results will generally resemble the effects of traditional absorption-based tomography while containing complementary information of DPC and DFC channels.
    For the low-frequency coefficients:
    $$L_{F}^{D}(i,j) = \begin{cases} L_{AC}^{D}(i,j), & a \cdot T_{AC}^{D,L}(i,j) > (1-a)\cdot T_{DPC}^{D,L}(i,j) \ \text{and} \ (1-a)\cdot T_{DFC}^{D,L}(i,j), \\ L_{DPC}^{D}(i,j), & (1-a)\cdot T_{DPC}^{D,L}(i,j) > a\cdot T_{AC}^{D,L}(i,j) \ \text{and} \ (1-a)\cdot T_{DFC}^{D,L}(i,j), \\ L_{DFC}^{D}(i,j), & (1-a)\cdot T_{DFC}^{D,L}(i,j) > a\cdot T_{AC}^{D,L}(i,j) \ \text{and} \ (1-a)\cdot T_{DPC}^{D,L}(i,j), \end{cases}$$
    where $L_F^D$ is the fused low-frequency coefficient and $a$ is a tunable parameter that determines the similarity between the fused image and the AC image; the larger the value of $a$, the closer the fused image is to the AC image (a NumPy sketch of this rule is given after this list).
    For the high-frequency coefficients:
    There were a total of 7 possible values for $H_F^{D,n}(i,j)$: (1) $H_F^{D,n}(i,j) = b\cdot H_{AC}^{D,n}(i,j) + c\cdot H_{DPC}^{D,n}(i,j) + d\cdot H_{DFC}^{D,n}(i,j)$; (2) $H_F^{D,n}(i,j) = H_{AC}^{D,n}(i,j)$; (3) $H_F^{D,n}(i,j) = H_{DPC}^{D,n}(i,j)$; (4) $H_F^{D,n}(i,j) = H_{DFC}^{D,n}(i,j)$; (5) $H_F^{D,n}(i,j) = \big(H_{AC}^{D,n}(i,j) + H_{DPC}^{D,n}(i,j)\big)/2$; (6) $H_F^{D,n}(i,j) = \big(H_{AC}^{D,n}(i,j) + H_{DFC}^{D,n}(i,j)\big)/2$; and (7) $H_F^{D,n}(i,j) = \big(H_{DPC}^{D,n}(i,j) + H_{DFC}^{D,n}(i,j)\big)/2$. The idea behind the high-frequency fusion rule was to set a threshold $T_{th}$ for the comparison of the ignition results $T_{AC}^{D,n}$, $T_{DPC}^{D,n}$, and $T_{DFC}^{D,n}$. This comparison measured whether the information of a pixel coming from a single channel was significant enough to replace the others or whether a weighted average of two or three channels was required. Specifically, when one channel was significantly larger than the others, we chose the coefficient from that channel as the value of $H_F^{D,n}(i,j)$ directly. When two channels were significantly larger than the rest, we took their average. When no channel was significantly larger than the others, we took a weighted average of all three channels using the weight factors $b$, $c$, and $d$. A detailed fusion scheme for the high-frequency coefficients is presented in the Supplemental Information, Section S1.
  • Finally, the inverse NSCT was applied to the fused low-frequency coefficient $L_F^D$ and high-frequency coefficients $H_F^{D,n}$, obtaining the fused image $I_F^D$.
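As referenced above, a minimal sketch of the low-frequency selection rule follows; array names are illustrative, and ties (not specified by the rule above) fall through to the DFC branch here as a simplifying assumption.

```python
import numpy as np

def fuse_low_freq(L_ac, L_dpc, L_dfc, T_ac, T_dpc, T_dfc, a=0.55):
    """Pixel-wise low-frequency rule: keep the channel whose weighted
    ignition value dominates; larger a biases the result toward AC."""
    s_ac, s_dpc, s_dfc = a * T_ac, (1 - a) * T_dpc, (1 - a) * T_dfc
    return np.where((s_ac > s_dpc) & (s_ac > s_dfc), L_ac,
                    np.where((s_dpc > s_ac) & (s_dpc > s_dfc), L_dpc, L_dfc))
```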

3.3. Step 3. Image Enhancement Using CLAHE, AS, and GC

Contrast-limited adaptive histogram equalization (CLAHE), adaptive sharpening (AS), and gamma correction (GC) were introduced by Felix Scholkmann et al. [17] to improve the image quality. This scheme is convenient to implement and facilitates the output of better-quality images. Although it can enhance the image contrast and sharpness, it cannot add further information to the fused image beyond the original AC, DPC, and DFC channels. Its application incorporated the following steps (a sketch of the full enhancement chain is given after this list):
  • The image $I_F^D$ was first processed by CLAHE [35], which divided it into small tiles and modified the histogram of each tile to enhance its contrast. A clipping limit was applied to this processing to prevent excessive noise amplification, and bilinear interpolation was implemented across the tiles to avoid image discontinuities. After this step, the processed image $I_F^{En1}$ was obtained.
  • Second, $I_F^{En1}$ was sharpened by the AS method, mathematically given by:
    $$I_F^{En2}(i,j) = I_F^{En1}(i,j) - C\,\nabla^2 I_F^{En1}(i,j),$$
    $$\nabla^2 I_F^{En1}(i,j) = \frac{\partial^2 I_F^{En1}(i,j)}{\partial i^2} + \frac{\partial^2 I_F^{En1}(i,j)}{\partial j^2},$$
    where
    $$\frac{\partial^2 I_F^{En1}(i,j)}{\partial i^2} = I_F^{En1}(i+1,j) + I_F^{En1}(i-1,j) - 2 I_F^{En1}(i,j),$$
    $$\frac{\partial^2 I_F^{En1}(i,j)}{\partial j^2} = I_F^{En1}(i,j+1) + I_F^{En1}(i,j-1) - 2 I_F^{En1}(i,j),$$
    where $C$ is a weighting factor determined adaptively by calculating the image entropy for many values of $C$ and finding the value $C_{max}$ at which the entropy is maximized. The final $C$ was calculated as $C = C_{max}/\alpha$, where $\alpha$ is a constant that prevents the image from becoming over-sharpened, with a fixed value of 3, empirically given by Felix Scholkmann et al. in their work [17]. After this process, the image $I_F^{En2}$ was obtained.
  • Finally, in the GC step, the image $I_F^{En2}$ was enhanced by a sigmoid function, denoted as:
    $$I_F^{En3} = \frac{1}{1 + \exp\big(\lambda_1 - \lambda_2 I_F^{En2}\big)},$$
    where $\lambda_1$ and $\lambda_2$ are two manually tunable parameters.
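As referenced above, the whole step-3 chain can be sketched with standard library building blocks; this is our illustrative translation, not the authors' code. C is shown fixed for brevity (the paper selects it by entropy maximization), and depending on the intensity range a final rescaling of the sigmoid output may be needed.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.exposure import equalize_adapthist

def enhance(img, C=0.3, lam1=4.8, lam2=0.49):
    """CLAHE -> Laplacian (adaptive) sharpening -> sigmoid gamma correction.
    img is assumed to be a float image scaled to [0, 1]."""
    x = equalize_adapthist(img, kernel_size=img.shape[0] // 5,
                           clip_limit=0.00125, nbins=500)   # CLAHE, ~5 tiles/dim
    x = x - C * laplace(x)                                  # AS: I - C * lap(I)
    return 1.0 / (1.0 + np.exp(lam1 - lam2 * x))            # GC sigmoid
```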

4. Measures of the Fusion Performance

With regard to fusion performance evaluation, there are two kinds of evaluation strategies: subjective and objective. Subjective evaluation is difficult to reproduce and highly dependent on the evaluators' experience, making the results unstable and difficult to quantify. In this study, we chose objective evaluation as the primary method by which to compare the results of the proposed fusion scheme with the other fusion algorithms. Several performance measures were implemented for the fusion results in our experiment, as follows (a NumPy sketch of the simpler measures appears after this list):
  • Edge strength ($ES$) [36] stands for the relative amount of edge information transferred from the input images ($I_{AC}$, $I_{DPC}$, and $I_{DFC}$) into the fused result $I_F$, denoted as:
    $$ES = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\big[ES^{AC,F}(i,j)\,w^{AC}(i,j) + ES^{DPC,F}(i,j)\,w^{DPC}(i,j) + ES^{DFC,F}(i,j)\,w^{DFC}(i,j)\big]}{\sum_{i=1}^{M}\sum_{j=1}^{N}\big[w^{AC}(i,j) + w^{DPC}(i,j) + w^{DFC}(i,j)\big]},$$
    where $w^{AC}(i,j)$, $w^{DPC}(i,j)$, and $w^{DFC}(i,j)$ are the weights assigned to the edge preservation values $ES^{AC,F}(i,j)$, $ES^{DPC,F}(i,j)$, and $ES^{DFC,F}(i,j)$ for $I_{AC}$, $I_{DPC}$, and $I_{DFC}$, respectively. The edge preservation values were calculated with a Sobel edge operator; detailed information can be found in [36]. The larger the value of $ES$, the better the image fusion performance.
  • Spatial frequency ($SF$) measures the number of details presented per degree of visual angle, and is given as follows:
    $$SF = \sqrt{RF^2 + CF^2},$$
    $$RF = \sqrt{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=1}^{N-1}\big[Z(i,j) - Z(i,j-1)\big]^2},$$
    $$CF = \sqrt{\frac{1}{MN}\sum_{i=1}^{M-1}\sum_{j=0}^{N-1}\big[Z(i,j) - Z(i-1,j)\big]^2},$$
    where $RF$ and $CF$ represent the row frequency and column frequency, respectively, and $Z(i,j)$ denotes the gray-value intensity of the pixel at $(i,j)$. A higher $SF$ value means an image contains more details and, hence, indicates a better fusion result.
  • Standard deviation ($SD$) is the square root of the variance, which refers to the image contrast. The higher the contrast, the greater the value of $SD$, calculated as follows:
    $$SD = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\big[Z(i,j) - \dot{\mu}\big]^2},$$
    where $\dot{\mu}$ stands for the mean intensity of the image.
  • Entropy ($H$) [37] measures how much information is contained in an image, calculated as follows:
    $$H = -\sum_{l=0}^{L-1} \bar{p}_l \log_2 \bar{p}_l,$$
    where $L$ represents the number of gray levels in the image and $\bar{p}_l$ stands for the probability of the $l$th gray level. A larger $H$ value signifies a better image fusion performance.
  • Feature mutual information ($FMI$) [38,39] refers to how much feature information is successfully transferred from the original images ($I_{AC}$, $I_{DPC}$, and $I_{DFC}$) to the fused image $I_F$, mathematically defined as follows:
    $$FMI = FI(I_{AC}, I_F) + FI(I_{DPC}, I_F) + FI(I_{DFC}, I_F),$$
    where $FI(I_A, I_B)$ stands for the amount of feature information transferred from image $I_A$ to image $I_B$, calculated as follows:
    $$FI(I_{AC}, I_F) = \sum_{I_{AC}, I_F} p_{I_{AC}, I_F}(i,j,k,l)\,\log_2\frac{p_{I_{AC}, I_F}(i,j,k,l)}{p_{I_{AC}}(i,j)\,p_{I_F}(k,l)},$$
    $$FI(I_{DPC}, I_F) = \sum_{I_{DPC}, I_F} p_{I_{DPC}, I_F}(i,j,k,l)\,\log_2\frac{p_{I_{DPC}, I_F}(i,j,k,l)}{p_{I_{DPC}}(i,j)\,p_{I_F}(k,l)},$$
    $$FI(I_{DFC}, I_F) = \sum_{I_{DFC}, I_F} p_{I_{DFC}, I_F}(i,j,k,l)\,\log_2\frac{p_{I_{DFC}, I_F}(i,j,k,l)}{p_{I_{DFC}}(i,j)\,p_{I_F}(k,l)},$$
    where $p_{A,B}$ is the joint distribution function of images $A$ and $B$, and $(i,j)$ and $(k,l)$ denote pixel coordinates in image $A$ and image $B$, respectively. The larger the value of $FMI$, the more feature information from each source image was preserved by the fusion scheme.
  • The feature similarity index measure ($FSIM$) [40,41] measures the similarity between two images based on low-level features—specifically, the phase congruency ($PC$) and the image gradient magnitude ($GM$). The $FSIM$ of two images $I_A(i,j)$ and $I_B(i,j)$ was calculated by:
    $$FSIM(A,B) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} S_{AB}(i,j)\,\max\big(PC_A(i,j), PC_B(i,j)\big)}{\sum_{i=1}^{M}\sum_{j=1}^{N} \max\big(PC_A(i,j), PC_B(i,j)\big)},$$
    where $PC_A$ and $PC_B$ are the $PC$ values of $I_A$ and $I_B$, respectively, and $S_{AB}(i,j)$ refers to the local similarity, denoted as follows:
    $$S_{AB}(i,j) = \big[S_{PC;AB}(i,j)\big]^{\alpha}\,\big[S_{GM;AB}(i,j)\big]^{\beta},$$
    $$S_{PC;AB}(i,j) = \frac{2\,PC_A(i,j)\,PC_B(i,j) + T_1}{PC_A^2(i,j) + PC_B^2(i,j) + T_1},$$
    $$S_{GM;AB}(i,j) = \frac{2\,GM_A(i,j)\,GM_B(i,j) + T_2}{GM_A^2(i,j) + GM_B^2(i,j) + T_2},$$
    where $S_{PC;AB}(i,j)$ and $S_{GM;AB}(i,j)$ are similarity measurements for $I_A(i,j)$ and $I_B(i,j)$ based on $PC$ and $GM$, respectively; $\alpha$ and $\beta$ are two parameters; and $T_1$ and $T_2$ are two constants, all of which are defined in [40]. To measure the performance of the XGI fusion, the overall $FSIM$ was calculated by averaging $FSIM(I_{AC}, I_F)$, $FSIM(I_{DPC}, I_F)$, and $FSIM(I_{DFC}, I_F)$, where $I_F$ denotes the fusion result. The higher the $FSIM$ value, the better the fusion performance.
  • The fusion factor ($FF$) is based on mutual information ($MI$), a concept from information theory that measures the statistical dependence between two random variables. It is capable of measuring how much information was transferred from the input images to the fused image, and was defined as follows:
    $$FF = MI(I_{AC}, I_F) + MI(I_{DPC}, I_F) + MI(I_{DFC}, I_F),$$
    where
    $$MI(I_{AC}, I_F) = \sum_{I_{AC}, I_F} p_{I_{AC}, I_F}\,\log\frac{p_{I_{AC}, I_F}}{p_{I_{AC}}\,p_{I_F}},$$
    $$MI(I_{DPC}, I_F) = \sum_{I_{DPC}, I_F} p_{I_{DPC}, I_F}\,\log\frac{p_{I_{DPC}, I_F}}{p_{I_{DPC}}\,p_{I_F}},$$
    $$MI(I_{DFC}, I_F) = \sum_{I_{DFC}, I_F} p_{I_{DFC}, I_F}\,\log\frac{p_{I_{DFC}, I_F}}{p_{I_{DFC}}\,p_{I_F}},$$
    where $MI(I_{AC}, I_F)$, $MI(I_{DPC}, I_F)$, and $MI(I_{DFC}, I_F)$ refer to the mutual information between $I_{AC}$ and $I_F$, $I_{DPC}$ and $I_F$, and $I_{DFC}$ and $I_F$, respectively; $p_{I_A, I_B}$ is the joint probability density function of two images; and $p_{I_A}$ is the probability density function of an image. A larger $FF$ value means a better image fusion performance.
  • The structural similarity index measure ($SSIM$) [42] measures how much structural information was transferred from one image into another, based on the human eye's sensitivity to structural information, given as follows:
    $$SSIM(I_A, I_B) = \frac{1}{W}\sum_{j=1}^{W} SSIM(I_{A_j}, I_{B_j}),$$
    where $SSIM(I_A, I_B)$ represents the $SSIM$ value of images $I_A$ and $I_B$; $W$ is the number of windows into which an image is divided; and $SSIM(I_{A_j}, I_{B_j})$ denotes the structural similarity between images $I_A$ and $I_B$ in the $j$th window, calculated by:
    $$SSIM(I_{A_j}, I_{B_j}) = \frac{\big(2\mu_{I_{A_j}}\mu_{I_{B_j}} + k_1^2 L^2\big)\big(2\sigma_{I_{A_j} I_{B_j}} + k_2^2 L^2\big)}{\big(\mu_{I_{A_j}}^2 + \mu_{I_{B_j}}^2 + k_1^2 L^2\big)\big(\sigma_{I_{A_j}}^2 + \sigma_{I_{B_j}}^2 + k_2^2 L^2\big)},$$
    where $\mu_{I_{A_j}}$, $\mu_{I_{B_j}}$, $\sigma_{I_{A_j}}^2$, and $\sigma_{I_{B_j}}^2$ are the local means and local variances of the $j$th windows in images $I_A$ and $I_B$, respectively, and $\sigma_{I_{A_j} I_{B_j}}$ is the cross-covariance of the $j$th windows of $I_A$ and $I_B$. An overall $SSIM$ value for the XGI fusion was defined as follows:
    $$SSIM = \frac{SSIM(I_{AC}, I_F) + SSIM(I_{DPC}, I_F) + SSIM(I_{DFC}, I_F)}{3},$$
    where $I_{AC}$, $I_{DPC}$, $I_{DFC}$, and $I_F$ denote the three input images and the fused image, respectively. Larger $SSIM$ values correspond to better fusion performance.
  • Power spectral density (PSD) [43,44] measures the power at each signal frequency. The estimate of the PSD $P_j$ at frequency $j$ was denoted as follows:
    $$P_j = \left|\frac{C_j}{n}\right|^2,$$
    where the $C_j$ are the Fourier coefficients and $n$ is the number of samples. The total area enclosed by the PSD curve and the coordinate axis reflects the information contained in an image: if the PSD curve of one image is higher than that of another within a frequency band, the former contains more information in that band. A generally higher PSD curve therefore indicates a better image fusion performance [42].
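As referenced before the list, the simpler statistics ($SF$, $H$, and the $MI$ underlying $FF$) can be computed directly; the sketch below is our own illustration, and its border normalizations differ negligibly from the formulas above. For $SSIM$, an established implementation such as skimage.metrics.structural_similarity is commonly used instead of hand-rolled code.

```python
import numpy as np

def spatial_frequency(Z):
    """SF from row/column first differences (RF and CF above)."""
    rf = np.sqrt(np.mean(np.diff(Z, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(Z, axis=0) ** 2))   # column frequency
    return np.hypot(rf, cf)

def entropy(Z, levels=256):
    """Shannon entropy of the gray-level histogram."""
    hist, _ = np.histogram(Z, bins=levels)
    p = hist / hist.sum()
    p = p[p > 0]                                      # drop empty bins
    return -np.sum(p * np.log2(p))

def mutual_information(A, B, bins=64):
    """MI between two images from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# Fusion factor: FF = MI(AC, F) + MI(DPC, F) + MI(DFC, F)
# ff = sum(mutual_information(src, fused) for src in (ac, dpc, dfc))
```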

5. Experiment

5.1. Image Fusion Parameters and Results

The fusion parameters used in this work are given in the order of the fusion steps. For step 1, the neighborhood size for adaptive Wiener filtering was set to $(5, 5)$. For step 2, the decomposition levels of the NSCT were $(4, 4, 4, 4)$. With regard to the parameters of the SCM defined in Section 2.3, we empirically set $f = 0.8$, $g = 0.7$, $h = 20$, $W = [0.1091, 0.1409, 0.1091;\ 0.1409, 0, 0.1409;\ 0.1091, 0.1409, 0.1091]$, and the total iteration count $k = 200$. The weight factor for the low-frequency band was $a = 0.55$; the weight factors for the high-frequency bands were $b = 0.41$, $c = 0.29$, and $d = 0.30$, with $T_{th} = 1$. For step 3, the number of CLAHE tiles, by row and column, was $(5, 5)$; the CLAHE contrast enhancement limit was $0.00125$ (on the normalized range $[0, 1]$); and the number of bins of the CLAHE histogram was 500. Finally, $\lambda_1$ and $\lambda_2$ for contrast optimization were 4.8 and 0.49, respectively.
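For reference, the reported settings can be collected into a single configuration object; the key names below are our own illustrative choices, not from the paper.

```python
# Parameter set as reported in Section 5.1 (key names are illustrative).
FUSION_PARAMS = {
    "wiener_window": (5, 5),
    "nsct_levels": (4, 4, 4, 4),
    "scm": {
        "f": 0.8, "g": 0.7, "h": 20, "iterations": 200,
        "W": [[0.1091, 0.1409, 0.1091],
              [0.1409, 0.0,    0.1409],
              [0.1091, 0.1409, 0.1091]],
    },
    "low_freq_a": 0.55,
    "high_freq": {"b": 0.41, "c": 0.29, "d": 0.30, "T_th": 1},
    "clahe": {"tiles": (5, 5), "clip_limit": 0.00125, "nbins": 500},
    "gamma": {"lambda1": 4.8, "lambda2": 0.49},
}
```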
The data used for the fusion process came from grating-based X-ray phase contrast imaging of frog toes [45]. These images (a total of four sets) were fused by our algorithm using the parameters above. The experiments were carried out in MATLAB, and half of the results (two sets of images) are shown in Figure 4. The results for the remaining two sets are given in the Supplemental Information, Section S2.
As shown in Figure 4, many features that appeared only in the DPC or DFC channels were successfully transferred to the final fusion results. The soft tissue around the bone and the meshwork structure of the bone trabeculae (observable only in the DPC channel), as well as the high signal of the bone cortex (visible only in the DFC channel), were successfully transferred into the fusion results. These well-preserved features demonstrate the efficiency of the proposed fusion scheme.

5.2. Objective Evaluation and Discussion

In this section, we implemented three other image fusion schemes on the same datasets as those in Section 5.1: the algorithm based on the shift-invariant discrete wavelet transform (SIDWT) [17], the traditional NSCT image fusion algorithm, and the conventional NSCT-PCNN image fusion algorithm [46]. The performance of all four methods was then evaluated by the measures described in Section 4. Half of the results (two sets of images) are displayed in Figure 5, Table 1, and Table 2, while the results for the remaining two datasets are given in the Supplemental Information, Section S2.
With regard to the parameter settings of the SIDWT, the size of the neighborhood samples used for adaptive Wiener filtering was $(5, 5)$; the decomposition levels of the first and second fusion steps were 4 and 5, respectively; the number of tiles by row and column used for CLAHE was $(5, 5)$; the CLAHE contrast enhancement limit was $0.0017$ (on the normalized range $[0, 1]$); the number of bins of the CLAHE histogram was 500; and $\lambda_1$ and $\lambda_2$ for the contrast optimization were 3.9 and 0.59, respectively. The parameter settings of the NSCT used in the NSCT-PCNN and NSCT methods were the same as those in Section 5.1. In addition, the parameters of the PCNN were empirically set as follows: $\alpha_L = 0.06931$, $\alpha_\theta = 0.2$, $V_L = 1$, $V_\theta = 20$, $\theta = 0.2$, $N = 200$, and linking weight $W = [0.707, 1, 0.707;\ 1, 0, 1;\ 0.707, 1, 0.707]$ [46].
As shown in Figure 5, we marked areas with red squares, called regions of interest (ROI), to reduce the impact of noise on the evaluation and to focus on the parts of the image of greatest interest. We observed that the soft tissue around the bone was better presented by the NSCT-SCM method than by the others. Our proposed method also preserved the texture inside the bones and the details at the bone joint junctions. In contrast, the other methods produced blurrier, less sharp results with unsatisfactory details and texture, indicating their tendency to compromise on information preservation. The objective evaluation criteria were then applied to these fusion results, and the evaluation results for the ROI are given in Table 1 and Table 2. The best result for each measure is marked with an asterisk.
As shown in Table 1 and Table 2, the $FMI$, $FF$, $SSIM$, and $FSIM$ results of all methods were at the same level, with slight fluctuations. This indicates that all methods were able to output fusion results sufficiently similar to the source images. However, regarding $ES$, $H$, $SD$, and $SF$, the proposed method generally outperformed the others, showing that NSCT-SCM transferred more information and details from the source images to the fusion result. Specifically, NSCT-SCM had the highest $H$, $SD$, and $SF$ values in Table 1 and the highest $H$ and $SD$ values in Table 2. The NSCT method led to the best $ES$ results in both tables, and the SIDWT showed the best $SF$ value in Table 2.
In addition, we calculated the PSD of each fusion result and drew the PSD curves of the fusion images, given in Figure 6.
As shown in Figure 6, the PSD curve of our proposed scheme was generally higher than the others, meaning that the fusion results of NSCT-SCM contained more information and were of better quality. In addition, although the power spectral density of the SIDWT remained at the same level as that of the proposed method in high spatial frequencies, it was significantly outperformed by the NSCT-SCM in low spatial frequencies. This result was consistent with the evaluation results of the above eight measures and the subjective evaluation results, i.e., that the fusion image of NSCT-SCM had higher contrast and finer details.

6. Conclusions

In the present work, an NSCT-SCM-based image fusion scheme was proposed for X-ray grating interferometry. It incorporated three major steps: denoising, the NSCT-SCM fusion algorithm, and enhancement. A new coefficient selection strategy was proposed for the fusion step, treating high-frequency and low-frequency coefficients differently. This strategy met a unique requirement of XGI fusion: the low-frequency coefficient should derive primarily from the AC channel, so that the final fusion result resembles traditional attenuation-based CT, while the high-frequency coefficients should be selected in a way that preserves the details and features of the DPC and DFC channels.
Furthermore, the proposed method and three other image fusion methods were implemented on X-ray grating interferometry data of frog toes to demonstrate the feasibility and robustness of the NSCT-SCM image fusion scheme. The fusion results were evaluated using both subjective and objective measures. As observed and demonstrated, the proposed method was competitive with the other image fusion methods, both visually and quantitatively. The proposed image fusion scheme output images with high contrast and explicit details, and demonstrated the potential for real-time application. In our future research, a feature-based fusion scheme will be studied to process images more similarly to human eyes and achieve better computational efficiency.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23063115/s1.

Author Contributions

Study conception and design were performed by H.L., X.J., J.L., Y.S. and X.C. Material preparation, data collection and analysis were performed by H.L., M.L. and G.Z. The first draft of the manuscript was written by H.L., and all authors commented on previous versions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number U19A2086.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors thank Yuxin Cheng from Shanghai Institute of Applied Physics, Chinese Academy of Sciences, for valuable discussions and technical support.

Conflicts of Interest

The authors declare no competing interests relevant to the content of this article.

References

  1. Cozzi, A.; Magni, V.; Zanardo, M.; Schiaffino, S.; Sardanelli, F. Contrast-enhanced Mammography: A Systematic Review and Meta-Analysis of Diagnostic Performance. Radiology 2022, 302, 568–581.
  2. Nguyen, T.N.; Abdalkader, M.; Nagel, S.; Qureshi, M.M.; Ribo, M.; Caparros, F.; Haussen, D.C.; Mohammaden, M.H.; Sheth, S.A.; Ortega-Gutierrez, S.; et al. Noncontrast Computed Tomography vs Computed Tomography Perfusion or Magnetic Resonance Imaging Selection in Late Presentation of Stroke With Large-Vessel Occlusion. JAMA Neurol. 2022, 79, 22–31.
  3. Martz, H.E.; Logan, C.M.; Schneberk, D.J.; Shull, P.J. X-ray Imaging: Fundamentals, Industrial Techniques and Applications; CRC Press: Boca Raton, FL, USA, 2016.
  4. Pfeiffer, F.; Weitkamp, T.; Bunk, O.; David, C. Phase retrieval and differential phase-contrast imaging with low-brilliance X-ray sources. Nat. Phys. 2006, 2, 258–261.
  5. Zan, G.; Vine, D.J.; Yun, W.; Lewis, S.J.Y.; Wang, Q.; Wang, G. Quantitative analysis of a micro array anode structured target for hard x-ray grating interferometry. Phys. Med. Biol. 2020, 65, 035008.
  6. Wang, Z.; Hauser, N.; Singer, G.; Trippel, M.; Kubik-Huch, R.A.; Schneider, C.W.; Stampanoni, M. Non-invasive classification of microcalcifications with phase-contrast X-ray mammography. Nat. Commun. 2014, 5, 3797.
  7. Arboleda, C.; Wang, Z.; Jefimovs, K.; Koehler, T.; Van Stevendaal, U.; Kuhn, N.; David, B.; Prevrhal, S.; Lång, K.; Forte, S.; et al. Towards clinical grating-interferometry mammography. Eur. Radiol. 2020, 30, 1419–1425.
  8. Meinel, F.G.; Schwab, F.; Yaroshenko, A.; Velroyen, A.; Bech, M.; Hellbach, K.; Fuchs, J.; Stiewe, T.; Yildirim, A.Ö.; Bamberg, F.; et al. Lung tumors on multimodal radiographs derived from grating-based X-ray imaging—A feasibility study. Phys. Med. 2014, 30, 352–357.
  9. Gradl, R.; Morgan, K.S.; Dierolf, M.; Jud, C.; Hehn, L.; Günther, B.; Möller, W.; Kutschke, D.; Yang, L.; Stoeger, T.; et al. Dynamic In Vivo Chest X-ray Dark-Field Imaging in Mice. IEEE Trans. Med. Imaging 2019, 38, 649–656.
  10. Glinz, J.; Thor, M.; Schulz, J.; Zabler, S.; Kastner, J.; Senck, S. Non-destructive characterisation of out-of-plane fibre waviness in carbon fibre reinforced polymers by X-ray dark-field radiography. Nondestruct. Test. Eval. 2022, 37, 497–507.
  11. Sarapata, A.; Ruiz-Yaniz, M.; Zanette, I.; Rack, A.; Pfeiffer, F.; Herzen, J. Multi-contrast 3D X-ray imaging of porous and composite materials. Appl. Phys. Lett. 2015, 106, 154102.
  12. Yashiro, W.; Terui, Y.; Kawabata, K.; Momose, A. On the origin of visibility contrast in x-ray Talbot interferometry. Opt. Express 2010, 18, 16890–16901.
  13. Bech, M.; Bunk, O.; Donath, T.; Feidenhans’l, R.; David, C.; Pfeiffer, F. Quantitative x-ray dark-field computed tomography. Phys. Med. Biol. 2010, 55, 5529.
  14. Michel, T.; Rieger, J.; Anton, G.; Bayer, F.; Beckmann, M.W.; Durst, J.; Fasching, P.A.; Haas, W.; Hartmann, A.; Pelzer, G.; et al. On a dark-field signal generated by micrometer-sized calcifications in phase-contrast mammography. Phys. Med. Biol. 2013, 58, 2713.
  15. Ewald, R.; Thomas, K.; van Udo, S.; Gerhard, M.; Nik, H.; Zhentian, W.; Marco, S. Image fusion algorithm for differential phase contrast imaging. In Proceedings of the SPIE Medical Imaging 2012, San Diego, CA, USA, 4–9 February 2012.
  16. Wang, Z.; Clavijo, C.A.; Roessl, E.; van Stevendaal, U.; Koehler, T.; Hauser, N.; Stampanoni, M. Image fusion scheme for differential phase contrast mammography. J. Instrum. 2013, 8, C07011.
  17. Scholkmann, F.; Revol, V.; Kaufmann, R.; Baronowski, H.; Kottler, C. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry. Phys. Med. Biol. 2014, 59, 1425–1440.
  18. Coello, E.; Sperl, J.I.; Bequé, D.; Benz, T.; Scherer, K.; Herzen, J.; Sztrókay-Gaul, A.; Hellerhoff, K.; Pfeiffer, F.; Cozzini, C.; et al. Fourier domain image fusion for differential X-ray phase-contrast breast imaging. Eur. J. Radiol. 2017, 89, 27–32.
  19. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
  20. Skodras, A.; Christopoulos, C.; Ebrahimi, T. The JPEG 2000 still image compression standard. IEEE Signal Process. Mag. 2001, 18, 36–58.
  21. Stéphane, M. Chapter 6—Wavelet Zoom. In A Wavelet Tour of Signal Processing, 3rd ed.; Stéphane, M., Ed.; Academic Press: Cambridge, MA, USA, 2009; pp. 205–261.
  22. Donoho, D.L.; Vetterli, M.; DeVore, R.A.; Daubechies, I. Data compression and harmonic analysis. IEEE Trans. Inf. Theory 1998, 44, 2435–2476.
  23. Yan, C.-M.; Guo, B.-L.; Yi, M. Fast Algorithm for Nonsubsampled Contourlet Transform. Acta Autom. Sin. 2014, 40, 757–762.
  24. Cunha, A.L.D.; Zhou, J.; Do, M.N. The Nonsubsampled Contourlet Transform: Theory, Design, and Applications. IEEE Trans. Image Process. 2006, 15, 3089–3101.
  25. Zhan, K.; Zhang, H.; Ma, Y. New Spiking Cortical Model for Invariant Texture Retrieval and Image Processing. IEEE Trans. Neural Netw. 2009, 20, 1980–1986.
  26. Liu, H.; Liu, M.; Li, D.; Zheng, W.; Yin, L.; Wang, R. Recent Advances in Pulse-Coupled Neural Networks with Applications in Image Processing. Electronics 2022, 11, 3264.
  27. Zhou, G.; Tian, X.; Zhou, A. Image copy-move forgery passive detection based on improved PCNN and self-selected sub-images. Front. Comput. Sci. 2021, 16, 164705.
  28. Liu, H.; Cheng, Y.; Zuo, Z.; Sun, T.; Wang, K. Discrimination of neutrons and gamma rays in plastic scintillator based on pulse-coupled neural network. Nucl. Sci. Tech. 2021, 32, 82.
  29. Liu, H.; Zuo, Z.; Li, P.; Liu, B.; Chang, L.; Yan, Y. Anti-noise performance of the pulse coupled neural network applied in discrimination of neutron and gamma-ray. Nucl. Sci. Tech. 2022, 33, 75.
  30. Liu, H.; Liu, M.; Xiao, Y.; Li, P.; Zuo, Z.; Zhan, Y. Discrimination of neutron and gamma ray using the ladder gradient method and analysis of filter adaptability. Nucl. Sci. Tech. 2022, 33, 159.
  31. Liu, M.; Zhao, F.; Jiang, X.; Zhang, H.; Zhou, H. Parallel binary image cryptosystem via spiking neural networks variants. Int. J. Neural Syst. 2021, 32, 2150014.
  32. Lian, J.; Yang, Z.; Liu, J.; Sun, W.; Zheng, L.; Du, X.; Yi, Z.; Shi, B.; Ma, Y. An Overview of Image Segmentation Based on Pulse-Coupled Neural Network. Arch. Comput. Methods Eng. 2021, 28, 387–403.
  33. Tan, W.; Thitøn, W.; Xiang, P.; Zhou, H. Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomed. Signal Process. Control 2021, 64, 102280.
  34. Lim, J.S. Two-Dimensional Signal and Image Processing; Prentice Hall: Englewood Cliffs, NJ, USA, 1990.
  35. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368.
  36. Xydeas, C.S.; Petrović, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
  37. Hamza, A.B.; Krim, H. Jensen-Renyi divergence measure: Theoretical and computational perspectives. In Proceedings of the IEEE International Symposium on Information Theory, Yokohama, Japan, 29 June–4 July 2003; p. 257.
  38. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A non-reference image fusion metric based on mutual information of image features. Comput. Electr. Eng. 2011, 37, 744–756.
  39. Haghighat, M.; Razian, M.A. Fast-FMI: Non-reference image fusion metric. In Proceedings of the 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), Astana, Kazakhstan, 15–17 October 2014; pp. 1–3.
  40. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
  41. Liu, Z.; Laganière, R. Phase congruence measurement for image similarity assessment. Pattern Recognit. Lett. 2007, 28, 166–172.
  42. Zhou, W.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  43. Sheehy, C.D.; McCrady, N.; Graham, J.R. Constraining the Adaptive Optics Point-Spread Function in Crowded Fields: Measuring Photometric Aperture Corrections. Astrophys. J. 2006, 647, 1517–1530.
  44. Gircys, M.; Ross, B.J. Image Evolution Using 2D Power Spectra. Complexity 2019, 2019, 7293193.
  45. Zan, G.; Gul, S.; Zhang, J.; Zhao, W.; Lewis, S.; Vine, D.J.; Liu, Y.; Pianetta, P.; Yun, W. High-resolution multicontrast tomography with an X-ray microarray anode–structured target source. Proc. Natl. Acad. Sci. USA 2021, 118, e2103126118.
  46. Xiang, T.; Yan, L.; Gao, R. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. Infrared Phys. Technol. 2015, 69, 53–61.
Figure 1. Describing a smooth contour by two different schemes.
Figure 2. Image decomposition process of NSCT.
Figure 3. Principle of the NSCT-SCM XGI fusion scheme. Step I: Images are denoised using Wiener filtering. Step II: Images are decomposed into coefficient matrices using NSCT. Then, the coefficient matrices are processed by the SCM, outputting ignition matrices. Finally, band mixing is implemented (three coefficient matrices are fused into one coefficient matrix by a coefficient selection algorithm designed on the basis of the ignition matrices), and the fused image is obtained by reconstructing the fused coefficient matrix. Step III: The fused image is enhanced to generate the final output image.
Figure 4. Source images and fusion results. (a,e) Source images from the AC channel; (b,f) source images from the DPC channel; (c,g) source images from the DFC channel; (d,h) fusion results by NSCT-SCM. The orange arrows point out distinct differences between tri-contrast modalities.
Figure 5. Fusion results of (a,e) NSCT, (b,f) NSCT-PCNN, (c,g) SIDWT, and (d,h) the proposed method (NSCT-SCM). The red boxes denote the regions of interest used to calculate the objective evaluation criteria. The orange arrows point out distinct differences between the results of the image fusion methods.
Figure 6. PSD curves of (a) Figure 5a–d and (b) Figure 5e–h.
Table 1. The evaluation results of the ROI in Figure 5a–d. The best result for each measure is marked with an asterisk.

Measures   NSCT       NSCT-PCNN   SIDWT      Proposed Method (NSCT-SCM)
ES         2.6297 *   2.2885      0.6527     1.8847
H          5.8758     5.6990      6.5755     7.0350 *
SD         0.0962     0.0830      0.1229     0.1615 *
SF         12.1136    14.0702     40.3987    40.6443 *
FMI        0.9524 *   0.9524 *    0.9181     0.9321
FF         13.1018    13.0406     12.9649    13.4200 *
SSIM       0.9973     0.9970      0.9974 *   0.9961
FSIM       0.9390 *   0.9381      0.9304     0.9234
Table 2. The evaluation results of the ROI in Figure 5e–h. The best result for each measure is marked with an asterisk.

Measures   NSCT       NSCT-PCNN   SIDWT      Proposed Method (NSCT-SCM)
ES         1.2587 *   1.1371      0.3937     1.1191
H          6.0928     6.2928      6.9253     7.2230 *
SD         0.1077     0.1077      0.1471     0.1821 *
SF         8.3268     8.3268      30.0311 *  24.2106
FMI        0.9336     0.9936 *    0.8545     0.8943
FF         13.7133    13.7133     13.5084    14.2617 *
SSIM       0.9974 *   0.9974 *    0.9964     0.9968
FSIM       0.9368 *   0.9368 *    0.9214     0.9318
