Article

Multisource Remote Sensing Imagery Fusion Scheme Based on Bidimensional Empirical Mode Decomposition (BEMD) and Its Application to the Extraction of Bamboo Forest

1 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
2 Department of Surveying, School of Civil Engineering, Beijing Jiaotong University, Beijing 100044, China
3 Institute of Geography and Environment, Baoji University of Arts and Sciences, Baoji 721013, China
4 Guangxi Key Laboratory of Plant Conservation and Restoration Ecology in Karst Terrain, Guangxi Institute of Botany, Guangxi Zhuang Autonomous Region and Chinese Academy of Sciences, Guilin 541006, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(1), 19; https://doi.org/10.3390/rs9010019
Submission received: 23 September 2016 / Revised: 27 November 2016 / Accepted: 21 December 2016 / Published: 29 December 2016
(This article belongs to the Special Issue Remote Sensing of Forest Health)

Abstract

Most bamboo forests grow in humid climates in low-latitude tropical or subtropical monsoon areas, and they are generally located in hilly terrain. Bamboo trunks are very straight and smooth, which means that bamboo forests have low structural diversity. These features are beneficial to synthetic aperture radar (SAR) microwave penetration and provide distinctive information in SAR imagery. However, some factors (e.g., foreshortening) can compromise the interpretation of SAR imagery. The fusion of SAR and optical imagery is considered an effective way to obtain information on ground objects; however, most relevant research has been based on only two types of remote sensing image. This paper proposes a new fusion scheme that combines three types of image simultaneously, based on two fusion methods: bidimensional empirical mode decomposition (BEMD) and the Gram-Schmidt transform. The fusion of panchromatic and multispectral images based on the Gram-Schmidt transform enhances spatial resolution while retaining the multispectral information. BEMD is an adaptive decomposition method that has been applied widely to the analysis of nonlinear signals and to the nonstationary signal of SAR. The fusion of SAR imagery with the fused panchromatic-multispectral imagery using BEMD is based on the frequency information of the images. The proposed fusion scheme proved to be an effective remote sensing image interpretation method: the entropy and spatial frequency of the fused images were improved in comparison with those of other techniques, such as the discrete wavelet, à-trous, and non-subsampled contourlet transform methods. Compared with the original images, the information entropy of the BEMD-based fusion image improves by about 0.13–0.38; compared with the other three methods, it improves by about 0.06–0.12. The average gradient of BEMD is 4%–6% greater than that of the other methods, and its spatial frequency is 3.2–4.0 higher. The experimental results showed that the proposed fusion scheme improves the accuracy of bamboo forest classification: accuracy increased by 12.1% and inaccuracy was reduced by 11.0%.

1. Introduction

A synthetic aperture radar (SAR) sensor is an active sensor that makes Earth observation possible regardless of weather conditions. It is sensitive to the geometric features of targets, and it can provide a radar image containing information on surface roughness, object shape, orientation, and moisture content [1]. However, SAR imagery is usually difficult to interpret because of certain factors (e.g., foreshortening, range and azimuth shift, layover, radiometric variation, and speckle noise) [2]. Panchromatic and multispectral imagery have high spatial resolution and can, respectively, reflect the edge information [3] and spectral information of different objects. Maximizing visual performance by combining the characteristics of SAR and optical sensors benefits image processing tasks such as object identification and regional change detection [4,5,6].
However, because of the significant differences between the imaging mechanisms of SAR and optical sensors, the properties of SAR, multispectral, and panchromatic images differ even for the same surface features of the same target. Traditional fusion methods, such as the intensity–hue–saturation (IHS) transform and principal component analysis, are not sufficiently effective, because they do not account for the differences in imaging mechanism and spectral characteristics among SAR, multispectral, and panchromatic images. Recently, the multiscale decomposition method (MSD) has been recognized as useful for image fusion, and various MSD-based fusion methods have been proposed for the fusion of SAR and panchromatic images [7,8,9]. The discrete wavelet transform (DWT) fusion method generally produces good results with computational efficiency; however, it is not shift-invariant because of the subsampling performed during processing. Fusion with contourlets, which provide an asymptotically optimal representation of contours, has been used in the field of image fusion with successful results; however, contourlet methods also lack shift-invariance, which can result in artifacts along edges [10]. Moreover, wavelet decomposition, like pyramid methods, depends on a predefined filter or wavelet function, which greatly affects the quality of the fused image.
The empirical mode decomposition (EMD) technique, formulated by Huang et al. [11], is an adaptive decomposition method that can decompose any complicated signal into multiple scales of spatial frequency, called intrinsic mode functions (IMFs). As a time-domain analysis technique, EMD differs from Fourier methods and the DWT, and it can be adapted to the analysis of nonlinear and nonstationary datasets [12,13,14]. The bidimensional empirical mode decomposition (BEMD) method, in use since 2003 [15], is an extension of the EMD technique that has been applied to image fusion [16]. In this study, we applied the BEMD method to SAR, panchromatic, and multispectral images; achieving the fusion of these three image types simultaneously was the objective of this study.

2. Study Area and Dataset

2.1. Study Area

The study area is located within the region 24°38′–25°54′N, 110°09′–110°46′E, covering an estimated area of 5.96 × 10³ km² (Figure 1). The Li River flows from Cat Mountain, located on the border between Xingan and Lingchuan counties, through Xingan and Lingchuan counties, the city of Guilin, and Yangshuo County. The length of the imaged area is 188 km. The area of specific interest in this study is located in the middle reaches of the Li River, around Guilin, Lingchuan County and the surrounding areas.
In the past few years, problems such as decreasing water volume and declining forest quality have affected the Li River basin, partly because local residents have planted bamboo, which is not conducive to water and soil conservation. Mao bamboo is the most important species in the bamboo forest. It often dominates the plantation, forming a horizontal canopy with a regular single-layer structure at a height of 10–20 m. Below this canopy, shrub and herb species are few and their coverage is small. By classifying the vegetation of the Li River basin, we can analyze the impact of human activities on the natural landscape and provide a basis for establishing appropriate protection measures.

2.2. Dataset

The SAR image used in this study was acquired by the TerraSAR-X satellite at 22:38 Coordinated Universal Time (UTC) on 3 November 2013. TerraSAR-X, developed at the German Aerospace Center (DLR), operates together with TanDEM-X as a constellation. It carries an X-band SAR sensor that can acquire images in different modes (Spotlight, Staring Spotlight, Stripmap, etc.) and supports a variety of polarization modes, including single, dual, and full polarization. It can obtain high-resolution images irrespective of weather conditions, which makes it suitable for monitoring areas prone to complex weather. The revisit time of the TerraSAR-X satellite is 11 days; however, benefiting from its electronic beam-steering mechanism, 90% of locations on Earth can be covered within 2.5 days. TerraSAR-X provides imagery with extremely high radiometric accuracy [17]. The parameters of the SAR image used in this study are: spatial resolution of 3 m × 3 m, descending orbit, HH polarization, and incidence angle of about 37°.
The Gaofen-1 (GF-1) satellite is a high-resolution Earth observation satellite developed by the China Academy of Space Technology of the China Aerospace Science and Technology Corporation. It was successfully launched on 26 April 2013, from the Jiuquan Satellite Launch Center. It is equipped with two panchromatic/multispectral camera sets, as well as four wide-field multispectral cameras with 16 m resolution. GF-1 represents the cutting edge in optical remote sensing technology with its high spatial, spectral, and temporal resolutions, its multiload image mosaicking technique, and its high-precision attitude control technology. The GF-1 remote sensing data used in this study include a multispectral image acquired at 13:34:40 (UTC) on 16 October 2013, and a high-resolution panchromatic image acquired at the same time. The spatial resolution of the panchromatic band is 2 m × 2 m, and the spatial resolution of the multispectral image is 8 m × 8 m.

3. Fusion Algorithm of Three Images with Different Feature Information Based on BEMD

The wavelet transform used for multiscale SAR and optical image fusion is essentially a linear transformation with limited capability for processing nonstationary signals [18,19]. The BEMD transform can be applied to the analysis of nonlinear and nonstationary signals [11,20], and the algorithm is also shift-invariant. These characteristics are advantageous for SAR imagery, which is nonstationary in nature. Compared with the wavelet transform, BEMD extracts the oscillating part of the signal at each local time and decomposes the original complex signal into different spatial frequency components (that is, IMFs). BEMD is adaptive: the IMFs are selected directly from the local characteristic scales of the signal and reflect its oscillation modes, and the instantaneous frequency and local energy obtained from the IMFs reflect the time–frequency energy distribution of the data well. The use of BEMD to achieve fusion of SAR, multispectral, and panchromatic imagery was the objective of this study. The SAR and panchromatic images are single-band images, whereas the multispectral image is a three-band image. This section describes how our proposed scheme first fuses the SAR and panchromatic images using BEMD, after which its effectiveness is compared against that of other methods. The fusion of all three types of image is then performed using BEMD for classification.

3.1. Bidimensional Empirical Mode Decomposition (BEMD)

Compared with traditional integral transform techniques, EMD is a highly efficient and adaptive method that provides higher frequency resolution and more accurate timing of nonlinear and nonstationary signal events [12,13]. The EMD technique, which is based on the direct extraction of the energy associated with various intrinsic time scales, adaptively decomposes a given signal into IMFs, where the components obtained rely on a sifting process algorithm. In principle, this method assures complete decomposition, which depends only on the precision of the numerical sifting process. Considering that the EMD method selects the highest frequency oscillation remaining in the signal, each IMF contains lower frequency components than the one extracted just before. The EMD theory was originally proposed for one-dimensional data; however, its use has been extended to two-dimensional data and it has since been used in many fields such as image fusion and noise reduction [14,16].
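To make the sifting process concrete, the following is a minimal one-dimensional sketch in Python, assuming NumPy and SciPy are available. The function names, the iteration cap, and the simplified Cauchy-type stopping rule are illustrative assumptions rather than the exact criteria of Huang et al. [11].

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(signal, max_iter=50, tol=0.05):
    """Extract one IMF from `signal` by repeatedly removing the envelope mean."""
    h = signal.astype(float).copy()
    x = np.arange(len(h))
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break                              # too few extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(x)
        lower = CubicSpline(minima, h[minima])(x)
        mean_env = (upper + lower) / 2.0
        h_new = h - mean_env
        # Simplified Cauchy-type stopping rule on successive sifting results.
        if np.sum((h - h_new) ** 2) / (np.sum(h ** 2) + 1e-12) < tol:
            return h_new
        h = h_new
    return h

def emd(signal, max_imfs=6):
    """Decompose `signal` into IMFs plus a final residual."""
    residual = signal.astype(float).copy()
    imfs = []
    for _ in range(max_imfs):
        imf = sift_imf(residual)
        imfs.append(imf)
        residual = residual - imf              # peel off the extracted mode
        if len(argrelextrema(residual, np.greater)[0]) < 2:
            break                              # residual is (nearly) monotone: stop
    return imfs, residual
```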

3.2. Area-Based Adaptive Weighting Scheme

From the perspective of fusing remote sensing images, the fusion process can be considered one of constructing weighting coefficients for the textural and spatial responses of the SAR and panchromatic images. The aim of fusing SAR and optical images is to preserve the edge, outline, texture, color, and other structural information while simultaneously adding specific descriptive characteristic features. Therefore, an adaptive weighting scheme is established to protect such information. As the local area characteristics of an image should be represented by pixels with strong correlation, feature-level fusion based on regional characteristics is visually more effective than pixel-level fusion and can effectively suppress fusion artifacts [21]. Here, we propose an area-based adaptive weighting scheme to fuse the images at multiple scales, as described in detail below.

3.3. Fusion Processing Flow and Detail Description

Generally, different image types play different roles. For example, SAR and panchromatic images can provide texture and edge-contour information, respectively, for the classification of a study area, while multispectral images can provide color information for its visual interpretation and classification.
In this paper, we propose a new fusion classification scheme. To verify the validity of this scheme, the results are compared against two other existing schemes. A flow chart of the proposed scheme is illustrated in Figure 2.
The principles of the algorithm of the proposed scheme are as follows:
(1) Data preprocessing
The nonlinear and nonstationary characteristics of high-resolution images are more prominent than those of lower-resolution imagery, which makes system-immanent speckle noise more pronounced [19]. Speckle in SAR images is generally modeled as multiplicative noise, which can be reduced by multilook processing or spatial filtering. Here, the enhanced Lee filter was selected, such that the filtered image preserves the inherent characteristics of the SAR image while minimizing speckle noise.
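For illustration, a sketch of a basic (non-enhanced) Lee filter follows, assuming NumPy and SciPy; the enhanced variant used in this study additionally classifies pixels by the local coefficient of variation, which is omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, looks=1.0):
    """Basic Lee filter for multiplicative speckle on an intensity image."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_var = np.maximum(uniform_filter(img ** 2, size) - local_mean ** 2, 0)
    # For fully developed speckle, the noise variance scales with the squared
    # local mean and the equivalent number of looks.
    noise_var = local_mean ** 2 / looks
    gain = local_var / (local_var + noise_var + 1e-12)  # adaptive gain in [0, 1]
    return local_mean + gain * (img - local_mean)
```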
Image registration is a crucial step in the fusion process. We used the standard registration method, which involves manually identifying tie points between the images and then applying a geometric transformation.
The panchromatic and multispectral images are then fused using the Gram-Schmidt transformation in four steps. First, a low-resolution panchromatic band is simulated from the multispectral bands. Second, the Gram-Schmidt transform is applied to the simulated panchromatic band and the multispectral bands, with the simulated panchromatic band as the first band. Third, the first band after the Gram-Schmidt transformation is replaced with the high-spatial-resolution panchromatic band. Finally, the fused image is obtained using the inverse Gram-Schmidt transform.
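The sketch below illustrates these four steps in the common detail-injection formulation of Gram-Schmidt pansharpening, assuming the multispectral bands have already been resampled to the panchromatic grid. The uniform band weights used to simulate the low-resolution panchromatic band are an assumption; operational implementations estimate them from the sensor spectral response.

```python
import numpy as np

def gram_schmidt_pansharpen(ms, pan):
    """Sharpen `ms` (bands, H, W) with `pan` (H, W), both co-registered."""
    ms, pan = ms.astype(float), pan.astype(float)
    # Step 1: simulate a low-resolution panchromatic band from the MS bands
    # (uniform weights are an illustrative assumption).
    pan_sim = ms.mean(axis=0)
    # Match the real pan to the simulated one (mean/std histogram matching).
    pan_matched = (pan - pan.mean()) / pan.std() * pan_sim.std() + pan_sim.mean()
    detail = pan_matched - pan_sim              # high-frequency detail to inject
    sharpened = np.empty_like(ms)
    for k in range(ms.shape[0]):
        # Gram-Schmidt injection gain: covariance of band k with the simulated
        # pan, normalized by the simulated pan's variance.
        c = np.cov(ms[k].ravel(), pan_sim.ravel())
        sharpened[k] = ms[k] + (c[0, 1] / c[1, 1]) * detail
    return sharpened
```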
(2) Perform a multiresolution decomposition on the speckle-free SAR and multispectral images using BEMD.
BEMD can decompose the original two-dimensional signal or image into multiple IMFs and a residual component using a sifting algorithm. The following sifting algorithm was adopted to obtain the IMFs. For a given initial fused image, the steps are:
  • Search for all local maxima and minima of the image
  • Fit all maximum and minimum values. Then, obtain the upper and lower envelopes and calculate the average envelope surface
  • Calculate the local trend
  • Determine whether the local trend satisfies the conditions of an IMF and, if so, select it as an IMF component
  • Calculate the residual component and determine whether the screening process is complete
It is known that the IMF components of the decomposition represent local small-scale characteristics and that they are strongly correlated with the range of the corresponding scale. However, considering that the components of the RGB (red, green, blue) color coordinate system are nonlinear, the fused image is first converted into the IHS color space. Initializing with $I^{MSS} = F^{MSS}$, all the local maxima and minima of the current mode signal $I^{MSS}$ are detected and extracted. Then, according to the local maxima and minima, the bounding envelopes of $I^{MSS}$ are generated using an appropriate surface interpolation technique; Delaunay triangulation-based interpolation was chosen here because it is effective for images that contain many extrema [20]. Consequently, an upper envelope (UE) and a lower envelope (LE) of $I^{MSS}$ are formed, and the mean envelope (ME) is computed as their average. With the ME, the first estimate of an IMF is calculated by subtracting the mean surface from the data:
$I_{estimate}^{MSS} = I^{MSS} - ME$
As part of the sifting process, if $I_{estimate}^{MSS}$ meets the following conditions of an IMF, it is taken as the required IMF component $I_k^{MSS}$; otherwise, $I^{MSS}$ is replaced with $I_{estimate}^{MSS}$ and the iterative process continues until an IMF is obtained. The IMF components must satisfy two criteria [10]: for the entire dataset, the number of extrema and the number of zero crossings must be equal or differ by at most one; and at any point, the mean value of the envelopes defined by the local maxima and minima should be zero.
After obtaining the IMF component $I_k^{MSS}$, the residual $R_k^{MSS}$ can be estimated as:
$R_k^{MSS} = I^{MSS} - I_k^{MSS}$
If the residual $R_k^{MSS}$ is monotone and contains no further IMFs, the procedure is terminated. Otherwise, $I^{MSS}$ is replaced with $R_k^{MSS}$ and the procedure is repeated from the calculation of the ME.
Thus, an image can be expressed as the sum of the IMFs and the last residual:
$F^{MSS} = \sum_{k=1}^{M} I_k^{MSS} + R^{MSS}$
The same sifting algorithm is used to achieve the BEMD of the speckle-free SAR image $F^{SF}$:
$F^{SF} = \sum_{k=1}^{N} I_k^{SF} + R^{SF}$
After the multispectral and speckle-free SAR images are decomposed by BEMD, we obtain the multiscale coefficients $\{I_k^{MSS}, R^{MSS}\}$ and $\{I_k^{SF}, R^{SF}\}$ $(1 \le k \le n)$, where $I_k$ is an IMF and $R$ is the residual. The numbers of IMFs decomposed from different images might differ; the number is denoted by $M$ for the multispectral image and by $N$ for the speckle-free SAR image. Then, let $n = \min\{M, N\}$ be the common number of IMFs; the remaining IMFs of the more deeply decomposed image are incorporated into its residual [22].
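A compact sketch of this decomposition is given below, assuming SciPy. Extrema detection with 3 × 3 morphological filters and a fixed sifting count are simplifications of the IMF criteria described above; the envelope surfaces use Delaunay-based linear interpolation (scipy.interpolate.griddata), in the spirit of the triangulation-based interpolation chosen in this paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def _envelope_mean(img):
    """Mean of the upper/lower envelopes interpolated through local extrema."""
    grid = np.indices(img.shape).reshape(2, -1).T
    maxima = np.argwhere(img == maximum_filter(img, size=3))
    minima = np.argwhere(img == minimum_filter(img, size=3))
    upper = griddata(maxima, img[tuple(maxima.T)], grid, method="linear",
                     fill_value=img.mean()).reshape(img.shape)
    lower = griddata(minima, img[tuple(minima.T)], grid, method="linear",
                     fill_value=img.mean()).reshape(img.shape)
    return (upper + lower) / 2.0

def bemd(img, n_imfs=4, n_sift=8):
    """Decompose `img` into `n_imfs` IMFs plus a residual surface."""
    residual = img.astype(float).copy()
    imfs = []
    for _ in range(n_imfs):
        h = residual.copy()
        for _ in range(n_sift):        # fixed-count sifting, a common shortcut
            h = h - _envelope_mean(h)
        imfs.append(h)                 # I_k
        residual = residual - h        # R_k = I - I_k
    return imfs, residual
```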
(3) Fuse the IMFs $\{I_k^{MSS}, I_k^{SF}\}$ and the residuals $\{R^{MSS}, R^{SF}\}$ as in the following equations:
$I_k^{fuse} = \alpha_k I_k^{SF} + (1 - \alpha_k) I_k^{MSS}$
$R^{fuse} = \beta R^{SF} + (1 - \beta) R^{MSS}$
Here, $I_k^{fuse}$ is the fusion result of the k-th IMF, $\alpha_k$ is the weight of the k-th IMF of the SAR image, $R^{fuse}$ is the fusion result of the residual, and $\beta$ is the weight of the residual of the SAR image. Weighting coefficients $\alpha_k$ and $\beta$ are obtained adaptively through an area-based weighting method. To calculate $\alpha_k$ and $\beta$, the pixel-based weights of the k-th IMF and of the residual of the SAR image, denoted $\alpha'_k$ and $\beta'$, are obtained first. According to the principles of BEMD, low-order IMFs mainly represent the detailed (high-frequency) information of the images, whereas the residual mainly contains the low-frequency information. Therefore, a Laplace high-pass filter, whose response denotes the amount of detail at the corresponding point, is applied to evaluate weight $\alpha'_k$. The response of the Laplace high-pass filter is defined as:
$L(x, y) = \left| 9 f(x, y) - \sum_{m,n=-1}^{1} f(x+m, y+n) \right|$
Thus, weight $\alpha'_k$ is calculated by
$\alpha'_k = \frac{L(I_k^{SF})}{L(I_k^{SF}) + L(I_k^{MSS})}$
Correspondingly, to evaluate the weight of the residual, we use a low-pass filter:
$A(x, y) = \frac{1}{9} \sum_{m,n=-1}^{1} f(x+m, y+n)$
Therefore, weight $\beta'$ is evaluated by
$\beta' = \frac{A(R^{SF})}{A(R^{SF}) + A(R^{MSS})}$
After obtaining coefficients $\alpha'_k$ and $\beta'$ from the filter responses, the area-based coefficients $\alpha_k$ and $\beta$ are calculated from the edge information of the panchromatic image within windows of a certain size. A Laplace edge-detection operator is used to extract the edge information of the panchromatic image. If a window contains pixels belonging to edge points of the panchromatic image, the original pixel weights are retained; otherwise, the area is assumed smooth, and the average weight of the window is assigned to all pixels within it. That is,
$\alpha_k = \begin{cases} \overline{\alpha'_k}, & \text{if the window is smooth} \\ \alpha'_k, & \text{otherwise} \end{cases}$
where $\alpha_k$ and $\alpha'_k$ are the weights of the k-th IMF of the SAR image based on the area-based and pixel-based fusion schemes, respectively, and $\overline{\alpha'_k}$ is the average of all $\alpha'_k$ within the same window. The weight $\beta$ is obtained from $\beta'$ in the same way.
(4) Obtain $I^{fuse}$ using the inverse BEMD, as in the following equation:
$I^{fuse} = \sum_{k=1}^{n} I_k^{fuse} + R^{fuse}$
Then, the final fused image is obtained by the inverse IHS transform.
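The following sketch ties steps (3) and (4) together, reusing the IMF/residual outputs of the bemd sketch above. The window size and the Laplacian edge threshold are illustrative assumptions, and the final inverse IHS step is omitted.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter, uniform_filter

LAPLACE = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)

def area_weights(w_pix, pan, win=5, edge_thresh=30.0):
    """Average pixel weights over smooth windows; keep them near edges."""
    edges = np.abs(convolve(pan.astype(float), LAPLACE)) > edge_thresh
    window_has_edge = maximum_filter(edges.astype(float), size=win) > 0
    return np.where(window_has_edge, w_pix, uniform_filter(w_pix, size=win))

def fuse_bemd(imfs_sf, res_sf, imfs_mss, res_mss, pan):
    n = min(len(imfs_sf), len(imfs_mss))        # common number of IMFs
    fused = np.zeros_like(res_sf)
    for k in range(n):
        # Pixel weights alpha'_k from the Laplace high-pass responses.
        l_sf = np.abs(convolve(imfs_sf[k], LAPLACE))
        l_mss = np.abs(convolve(imfs_mss[k], LAPLACE))
        alpha = area_weights(l_sf / (l_sf + l_mss + 1e-12), pan)
        fused += alpha * imfs_sf[k] + (1 - alpha) * imfs_mss[k]
    # Residual weights beta' from the 3x3 low-pass responses A(x, y).
    a_sf, a_mss = uniform_filter(res_sf, 3), uniform_filter(res_mss, 3)
    beta = area_weights(a_sf / (a_sf + a_mss + 1e-12), pan)
    return fused + beta * res_sf + (1 - beta) * res_mss
```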

3.4. Criteria for Comparison of Fusion Results

The criteria for comparing the fusion results can be divided into two categories. First, from a visual standpoint, the quality of a fusion result can be evaluated intuitively. Second, quantitative features of the fusion results can be compared. The formulas for these quantitative features are provided below.
Entropy plays an important role in evaluating the amount of information contained in an image. The higher the entropy, the more information the image contains. The entropy of an image is given by
$E = -\sum_{i=0}^{L-1} p_i \log_2 p_i$
where $p_i$ denotes the probability of the particular gray value $i$ and $L$ represents the number of gray levels of the image.
The spatial frequency measures the overall activity level of an image in the spatial domain. The higher the spatial frequency, the more detailed the spatial information. It is defined as:
$SF = \sqrt{RF^2 + CF^2}$
where the row frequency (RF) and column frequency (CF) are calculated by:
$RF = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=2}^{N} \left( f(x_i, y_j) - f(x_i, y_{j-1}) \right)^2}$
$CF = \sqrt{\frac{1}{MN} \sum_{i=2}^{M} \sum_{j=1}^{N} \left( f(x_i, y_j) - f(x_{i-1}, y_j) \right)^2}$
The average gradient represents information on the edge details of an image. It can sensitively reflect minute details of the characteristics of contrast and texture within an image:
$\bar{G} = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \sqrt{\frac{\Delta I_x^2 + \Delta I_y^2}{2}}$
where $\Delta I_x$ and $\Delta I_y$ refer to the differences in the x and y directions, respectively.
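These three indices translate directly into code. The sketch below assumes an 8-bit gray-level image for the entropy histogram; the differences are averaged over the pixels where they are defined, which departs from the 1/MN normalization above only at the image borders.

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, Eq. for E."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # skip empty bins: 0 * log(0) := 0
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row/column first differences."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """Mean of sqrt((dI_x^2 + dI_y^2) / 2) over the image."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]    # crop so both differences align
    dy = np.diff(f, axis=0)[:, :-1]
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```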

4. Results and Discussion

4.1. Comparison of Fusion Results

Our proposed fusion process is a step-by-step integration of three types of image; therefore, to verify the effectiveness of BEMD in image fusion, we first selected a dataset comprising SAR and panchromatic images as sample data and applied three other fusion algorithms in comparative experiments.
The dataset used for the experiments is shown in Figure 3a,b. The spatial resolution of the panchromatic image is clearly much higher than that of the SAR image: small spatial structural details, such as the edges of buildings and roads, that cannot be identified in the SAR image are discernible in the panchromatic image, whereas the SAR image contributes information on surface roughness. To demonstrate the superiority of the proposed fusion method, the results are compared against the DWT, à-trous [23], and non-subsampled contourlet transform (NSCT) [10] methods. For the DWT method, the average values of the low- and high-frequency sub-band coefficients are selected as the low and high frequencies, respectively. In the à-trous fusion method, certain high-frequency sub-band coefficients are added to the SAR image, while the low-frequency content of the panchromatic image is ignored. The NSCT method adopts a feature-level fusion scheme. Wavelet decomposition depends on a predefined filter or wavelet function, wavelet basis, and number of decomposition levels, which directly affect the fusion results. Many experimental studies have evaluated different wavelet bases and decomposition levels [24,25]; the results indicate that the Daubechies wavelet basis with three decomposition levels yields the best fusion images, and these settings were therefore adopted for the wavelet-based algorithms. The fused images obtained using the different algorithms are shown in Figure 3.
Visual inspection shows that the fused images contain more information than the original single images. Building edges, lines, and corners (Figure 3c,e,f) are much sharper than in the original SAR image. This implies that fusion by the DWT, NSCT, à-trous and BEMD methods improves the spatial quality of SAR images. The fused images (Figure 3c–f) preserve the information of the SAR image well, e.g., the highlighted object information. Thus, the fused images (Figure 3c–f) fully combine the complementary and supplementary information of the original images. Compared with Figure 3c–e, the image obtained by the proposed method contains more information.
Although visual evaluation is easy and direct, it is highly subjective and it cannot accurately evaluate the practical effects of the fusing methods. Therefore, the performance of each method is quantitatively analyzed further based on entropy, spatial frequency, and average gradient.
The calculated values of entropy, average gradient, and spatial frequency are presented in Table 1. The entropy of the fusion image based on BEMD is higher than that of the original images, improving by about 0.13 over the SAR image. The evaluation indices of the proposed method also exceed those of the DWT, à-trous, and NSCT fusion methods: the entropy is higher by about 0.03–0.06 relative to the DWT and NSCT methods, and by about 0.52 relative to the à-trous method. With respect to the other two evaluation indices, the average gradient of BEMD is about 4% higher than for the other methods, and its spatial frequency is maintained about 3.2 higher. This means that the fusion image obtained by the proposed method imports the target and surface information of the SAR image, which is difficult to identify in the panchromatic image, while preserving the high spatial resolution of the panchromatic image. Thus, the analytical results show that the fusion effect of the proposed BEMD method is the best, an inference consistent with the visual observation.
It is worth mentioning that although the à-trous wavelet method has many advantages over the DWT method, the DWT fusion method performs better here, because the à-trous scheme used is more restrictive: it ignores the low-frequency sub-band coefficients of the panchromatic image. It follows that the fusion strategy within the proposed BEMD method is potentially more effective.
The texture features of buildings are particularly prominent in SAR images [26]. However, our purpose was to verify the applicability of BEMD to the extraction of plant texture features, so another study area was selected, in which the features are mainly vegetation and farmland. The fused images of this area, obtained by the different algorithms, are shown in Figure 4, and Table 2 presents the evaluation indices of the different fusion algorithms. In general, the larger the entropy, the more information the image contains. It can be seen that the BEMD method is again the best, an inference consistent with that drawn for the building area. Compared with the evaluation indices in Table 1, those of the BEMD fusion image in Table 2 show a larger relative improvement: the entropy of the BEMD-based fusion image improves by 0.38 over the original images and by 0.10–0.12 over the other methods, the average gradient of BEMD is about 6% higher than for the other methods, and its spatial frequency is maintained about 4.0 higher. This means that the advantages of the BEMD fusion method are greater here than for the building area: the nonstationarity of the radar signal is greater in vegetation and farmland areas, and because BEMD is advantageous for analyzing nonstationary signals, the BEMD fusion method is more effective in such areas.
From Figure 4, it can be seen that when a mountain exhibits layover toward the sensor in the SAR image and there is no corresponding distortion in the panchromatic image, the fused product can reduce the influence of the layover to a certain extent. Furthermore, when the panchromatic image has solar shadows and the SAR image has microwave shadows in a different direction, the fused product shows more useful information than either original image alone.

4.2. Application in Classification

The results described in Section 4.1 verified that the proposed BEMD fusion method is better than the other methods at retaining edge, outline, texture, color, and other structural information. Therefore, using the fusion scheme proposed in this paper, three types of image were fused together, and then a classification and accuracy evaluation were performed.
This section describes three classification results: the first two are comparison tests and the third uses the proposed scheme.

4.2.1. Contrastive Analysis 1

This experiment uses the conventional fusion classification method: the fusion method selected was the Gram-Schmidt transform, and the chosen classification method was rule-based feature extraction. The data used were the panchromatic, multispectral, and SAR images. Comparing the result of fusing the SAR, panchromatic, and multispectral images (Figure 5b) with the result of fusing only the panchromatic and multispectral images (Figure 5a), it is evident that the information of the SAR image is almost completely lost. The results of the classification and extraction of bamboo forest, based on Figure 5b, are shown in Figure 5c.

4.2.2. Contrastive Analysis 2

In this experiment, the images were fused by BEMD and texture was analyzed using the grey-level co-occurrence matrix (GLCM); the texture feature components were then classified using rule-based feature extraction. The data used were the multispectral and SAR images. The fusion result is shown in Figure 6a, and the results of the classification and extraction of bamboo are shown in Figure 6b.

4.2.3. Contrastive Analysis 3

In this experiment, the images were fused by BEMD and texture was analyzed using the GLCM; the texture feature components were then classified using rule-based feature extraction. The data used were the panchromatic, multispectral, and SAR images. The fusion result is shown in Figure 7a, and the results of the classification and extraction of bamboo are shown in Figure 7b. The GLCM texture-feature step is sketched below.
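As a sketch of the GLCM texture-feature step used in Experiments 2 and 3, the following assumes scikit-image; the quantization level, distances, angles, and chosen texture properties are illustrative assumptions rather than the settings used in this study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    """Compute a few GLCM texture measures for one image window."""
    # Quantize to a small number of gray levels to keep the matrix compact.
    q = (window.astype(float) / window.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    # Average each property over the two angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```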
Comparing the areas of bamboo extracted in the three experiments, it can be determined that S1 > S2 > S3, where S1, S2, and S3 are the areas of bamboo forest extracted in Experiments 1, 2, and 3, respectively. It should be noted that a larger extracted area does not imply a more accurate result. Field verification points were therefore selected on the spot to assess the extraction accuracy; the results are shown in Table 3.
Figure 8 presents photographs of the scene at points A, B, and C for reference. The verification shows that the BEMD fusion algorithm can improve the precision of bamboo forest extraction.

4.3. Discussion

The BEMD method proposed in this paper has advantages over the other methods in retaining SAR image information. Quantitatively, the information entropy based on BEMD is 7.87 and 7.78 (Tables 1 and 2), the highest among all methods, and the average gradient and spatial frequency of the fused images are 40.01 and 71.41 (Table 1) and 27.16 and 48.57 (Table 2), which shows the advantage of BEMD in dealing with nonlinear, nonstationary signals.
The objective of this study was to achieve the fusion of SAR, multispectral, and panchromatic imagery simultaneously. We compared several fusion methods, including BEMD, DWT, à-trous, NSCT, and GS (Gram-Schmidt), and carried out experiments to demonstrate the advantage of merging the three kinds of data together; however, the mechanisms of these fusion methods are not discussed here. Meanwhile, most currently used methods, such as PCA (principal component analysis), DWT, and BEMD, fuse only two types of image. Kumar et al. chose PCA and IHS fusion to combine high-resolution TerraSAR-X imagery with multispectral IRS LISS-III (Indian Remote Sensing Satellites, Linear Imaging Self-Scanning System III) data when studying Himalayan glaciers and extracting glacier texture features [27]. Jiang et al. fused high-resolution DEMs (digital elevation models) derived from COSMO-SkyMed and TerraSAR-X InSAR datasets using a maximum-likelihood method [28]. Zhang et al. studied the joint registration and fusion of panchromatic and multispectral images [29]. The three comparative experiments in Section 4.2 show that, no matter which two kinds of data are fused, the extraction accuracy does not match that of fusing all three kinds of data: in our experiments, accuracy increased by more than 11.9% and inaccuracy was reduced by more than 11.0%. These results indicate the advantage of merging the three kinds of data together.

5. Conclusions

This paper proposed a novel method for fusing multispectral, panchromatic, and SAR images using the BEMD method. Based on the results of the study, the following conclusions are drawn. First, the BEMD method, which has been proven to have good adaptability, can be used to decompose optical and SAR images, after which a multiscale area-based fusion scheme can be implemented. Compared with the original images, the information entropy of the BEMD-based fusion image improves by about 0.13–0.38; compared with the other three methods, the improvement is about 0.06–0.12. With respect to the average gradient, BEMD is 4%–6% higher than the other methods, and its spatial frequency is maintained 3.2–4.0 higher. Furthermore, the BEMD algorithm can be used to extract different texture features from the same surface features, and the extraction accuracy is improved: accuracy increased by 12.1% and inaccuracy was reduced by 11.0%. However, the numbers of IMFs obtained by BEMD for different images are not necessarily equal. In this paper, components beyond the common IMF layers are simply merged into the residual, which cannot guarantee that the spatial frequencies of the components being fused are consistent.

Acknowledgments

This research was supported by the National Key Technology Research and Development Program of China (2012BAC16B01) and the National Natural Science Foundation of China (41590852, 41001264). The TerraSAR-X data acquisition was supported by the DLR AO project. The authors thank the anonymous reviewers for their helpful and constructive comments.

Author Contributions

Guang Liu, Lei Li and Hui Gong conceived and designed the experiments; Qingwen Jin, Xinwu Li and Yun Chen performed the experiments; Lei Li, Yu Chen and Rui Song analyzed the data; Chengxin He, Yuqing Huang and Yuefeng Yao contributed reagents/materials/analysis tools.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

SAR: Synthetic Aperture Radar
BEMD: Bidimensional Empirical Mode Decomposition
IHS: Intensity–Hue–Saturation
DWT: Discrete Wavelet Transform
MSD: Multiscale Decomposition Method
IMF: Intrinsic Mode Function
NSCT: Non-Subsampled Contourlet Transform
GLCM: Grey-Level Co-occurrence Matrix
DLR: German Aerospace Center
GS: Gram-Schmidt
PCA: Principal Component Analysis
IRS LISS-III: Indian Remote Sensing Satellites, Linear Imaging Self-Scanning System III
DEM: Digital Elevation Model

References

  1. Gungor, O.; Shan, J. An optimal fusion approach for optical and SAR images. In Proceedings of the ISPRS Commission VII Symposium: Remote Sensing: From Pixels to Process, Enschede, The Netherlands, 8–11 May 2006.
  2. Shimada, M. Ortho-rectification and slope correction of SAR data using DEM and its accuracy evaluation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 657–671. [Google Scholar] [CrossRef]
  3. Pohl, C.; van Genderen, J.L. Review article Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19. [Google Scholar] [CrossRef]
  4. Pacifici, F.; del Frate, F.; Emery, W.J.; Gamba, P.; Chanussot, J. Urban mapping using coarse SAR and optical data: Outcome of the 2007 GRSS data fusion contest. IEEE Geosci. Remote Sens. Lett. 2008, 5, 331–335. [Google Scholar] [CrossRef]
  5. Vidal, A.; Moreno, M.R. Change detection of isolated housing using a new hybrid approach based on object classification with optical and TerraSAR-X data. Int. J. Remote Sens. 2011, 32, 9621–9635. [Google Scholar] [CrossRef]
  6. Longbotham, N.; Chaapel, C.; Bleiler, L.; Padwick, C.; Emery, W.J.; Pacifici, F. Very high resolution multiangle urban classification analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1155–1170. [Google Scholar] [CrossRef]
  7. Chandrakanth, R.; Saibaba, J.; Varadan, G.; Raj, P.A. Fusion of high resolution satellite SAR and optical images. In Proceedings of the 2011 International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM 2011), Xiamen, China, 10–12 January 2011.
  8. Miao, Q.; Shi, C.; Xu, P.; Yang, M.; Shi, Y. A novel algorithm of image fusion using shearlets. Opt. Commun. 2011, 284, 1540–1547. [Google Scholar] [CrossRef]
  9. Byun, Y.; Choi, J.; Han, Y. An area-based image fusion scheme for the integration of SAR and optical satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2212–2220. [Google Scholar] [CrossRef]
  10. Chai, Y.; Li, H.; Zhang, X. Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain. Optik 2012, 123, 569–581. [Google Scholar] [CrossRef]
  11. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.-C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  12. Expansions, W. 1st Reading. Int. J. 2004, 21, 20. [Google Scholar]
  13. Yang, Z. Signal Period Analysis Based on Hilbert-Huang Transform and Its Application to Texture Analysis. In Proceedings of the Third International Conference on Image and Graphics (ICIG’04), Hong Kong, China, 18–20 December 2004; pp. 430–433.
  14. Song, R.; Guo, H.; Liu, G.; Perski, Z.; Yue, H.; Han, C.; Fan, J. Improved goldstein SAR interferogram filter based on adaptive-neighborhood technique. IEEE Geosci. Remote Sens. Lett. 2015, 12, 140–144. [Google Scholar] [CrossRef]
  15. Nunes, J.C.; Bouaoune, Y.; Delechelle, E.; Niang, O.; Bunel, P. Image analysis by bidimensional empirical mode decomposition. Image Vis. Comput. 2003, 21, 1019–1026. [Google Scholar] [CrossRef]
  16. Xu, Y.; Hu, K.; Han, J. Hyperspectral image fusion using empirical mode decomposition. SPIE 2007. [Google Scholar] [CrossRef]
  17. Airbus Defence and Space. TerraSAR-X Services Image Product Guide. 2014. Available online: http://www.intelligence-airbusds.com/files/pmedia/public/r459_9_201408_tsxx-itd-ma-0009_tsx-productguide_i2.00.pdf (accessed on 29 December 2016).
  18. Hong, G.; Zhang, Y. Comparison and improvement of wavelet—Based image fusion. Int. J. Remote Sens. 2008, 29, 673–691. [Google Scholar] [CrossRef]
  19. Aslantas, V.; Kurban, R. A comparison of criterion functions for fusion of multi-focus noisy images. Opt. Commun. 2009, 282, 3231–3242. [Google Scholar] [CrossRef]
  20. Liu, G.; Song, R.; Guo, H.; Perski, Z.; Yue, H.; Han, C.; Fan, J. Filtering SAR interferometric phase noise using a split-window model. Remote Sens. Lett. 2016, 7, 800–809. [Google Scholar] [CrossRef]
  21. Wu, J.; Yan, W.; Bian, H.; Ni, W. A despeckling algorithm combining curvelet and wavelet transforms of high resolution SAR images. In Proceedings of the 2010 International Conference on Computer Design and Applications (ICCDA 2010), Qinhuangdao, China, 25–27 June 2010; Volume 1.
  22. Damerval, C.; Meignen, S.; Perrier, V. A fast algorithm for bidimensional EMD. IEEE Signal Process. Lett. 2005, 12, 701–704. [Google Scholar] [CrossRef]
  23. Chibani, Y. Selective synthetic aperture radar and panchromatic image fusion by using the à trous wavelet decomposition. EURASIP J. Appl. Signal Process. 2005, 14, 2207–2214. [Google Scholar] [CrossRef]
  24. Chen, J.; Wang, J.; Li, P. Research on wavelet basis selection in the image fusion in SAR and visual. Infrared Technol. 2008, 30, 158–162. [Google Scholar]
  25. Gong, J.-Z.; Liu, Y.-S.; Xia, B.-C.; Chen, J.-F. Effect of wavelet basis and decomposition levels on performance of fusion images from remotely sensed data. Geogr. Geo-Inf. Sci. 2010, 26, 6–10. [Google Scholar]
  26. Auer, S.; Gisinger, C.; Tao, J. Characterization of facade regularities in high-resolution SAR images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2727–2737. [Google Scholar] [CrossRef]
  27. Kumar, V.; Venkataraman, G. High resolution TerraSAR-X image speckle suppression and its fusion with multispectral IRS LISS-III data for Himalayan glacier feature extraction. J. Indian Soc. Remote Sens. 2012, 40, 325–334. [Google Scholar] [CrossRef]
  28. Jiang, H.; Zhang, L.; Wang, Y.; Liao, M. Fusion of high-resolution DEMs derived from COSMO-SkyMed and TerraSAR-X InSAR datasets. J. Geod. 2014, 88, 587–599. [Google Scholar] [CrossRef]
  29. Zhang, Q.; Cao, Z.; Hu, Z.; Jia, Y.; Wu, X. Joint image registration and fusion for panchromatic and multispectral images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 467–471. [Google Scholar] [CrossRef]
Figure 1. Location of the study area, within the region 24°38′–25°54′N, 110°09′–110°46′E. SAR: synthetic aperture radar. N: northern latitude. E: east longitude.
Figure 2. Flowchart of image fusion based on bidimensional empirical mode decomposition (BEMD). IMF: intrinsic mode functions.
Figure 3. Original SAR and panchromatic images and fused images based on different fusion methods: (a) panchromatic image; (b) SAR image; (c) fused image based on the discrete wavelet transform (DWT) method; (d) fused image based on the à-trous method; (e) fused image based on the non-subsampled contourlet transform (NSCT) method; and (f) fused image based on the proposed BEMD fusion method. The area is northeast of Lingchuan town, and the size of the images is 512 × 512 pixels.
Figure 4. Original SAR and panchromatic images and fused images based on different fusion methods: (a) panchromatic image; (b) SAR image; (c) fused image based on the DWT method; (d) fused image based on the à-trous method; (e) fused image based on the NSCT method; and (f) fused image based on the proposed BEMD fusion method. The area is a mountain northeast of Lingchuan town, and the size of the images is 512 × 512 pixels.
Figure 5. Fusion results based on the Gram-Schmidt transform. (a) Fusion result based on the Gram-Schmidt transform using two types of image (panchromatic and multispectral). The region is located northeast of Guilin city and covers 800 × 800 pixels with a pixel resolution of 4 m × 4 m. (b) Fusion result based on the Gram-Schmidt transform using three types of image (panchromatic, multispectral, and SAR); same region, coverage, and pixel resolution as (a). (c) Locations of the verification points; pink areas represent the extracted bamboo area, and blue dots represent the verification points.
Figure 6. Fusion result based on BEMD. (a) Fusion result based on BEMD using two types of image (SAR and multispectral). The region is located northeast of Guilin city and covers 800 × 800 pixels with a pixel resolution of 4 m × 4 m. (b) Locations of the verification points; pink areas represent the extracted bamboo area, and blue dots represent the verification points.
Figure 7. Fusion result based on BEMD with the panchromatic image added. (a) Fusion result based on BEMD and the Gram-Schmidt transform using three types of image (panchromatic, multispectral, and SAR). The region is located northeast of Guilin city and covers 800 × 800 pixels with a pixel resolution of 4 m × 4 m. (b) Locations of the verification points; pink areas represent the extracted bamboo area, and blue dots represent the verification points.
Figure 8. Photographs of the bamboo forest at specified locations. Panels (A), (B), and (C) are photographs of locations A, B, and C.
Table 1. Evaluation of the different fusion methods in Figure 3.

Fusion Method | Entropy | Average Gradient | Spatial Frequency
Pan | 7.1221 | 12.5688 | 22.3862
SAR | 7.7475 | 41.2584 | 74.3752
DWT | 7.8478 | 37.7990 | 67.4227
à-trous | 7.3581 | 14.5430 | 24.2045
NSCT | 7.8100 | 38.9840 | 68.7256
BEMD | 7.8742 | 40.0143 | 71.4081
Table 2. Evaluation of the different fusion methods in Figure 4.

Fusion Method | Entropy | Average Gradient | Spatial Frequency
Pan | 6.6389 | 6.9400 | 13.4693
SAR | 7.4053 | 29.6948 | 53.9924
DWT | 7.6846 | 27.0175 | 48.2980
à-trous | 6.9057 | 9.5764 | 16.6302
NSCT | 7.6684 | 26.6561 | 48.4350
BEMD | 7.7824 | 27.1625 | 48.5677
Table 3. Bamboo forest extraction accuracy.

Image | Test Amount | Erroneous Judgement | Leak Detection | Accuracy | Inaccuracy
Experiment 1 | 12 | 8 | 4 | 50.0% | 40.0%
Experiment 2 | 14 | 6 | 4 | 58.3% | 30.0%
Experiment 3 | 17 | 4 | 3 | 70.1% | 19.0%
