Article

Thermal Physical Property-Based Fusion of Geostationary Meteorological Satellite Visible and Infrared Channel Images

1 School of Information Science and Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
2 State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing 100081, China
3 College of Engineering, Ocean University of China, 238 Songling Road, Qingdao 266100, China
* Authors to whom correspondence should be addressed.
Sensors 2014, 14(6), 10187-10202; https://doi.org/10.3390/s140610187
Submission received: 22 March 2014 / Revised: 21 May 2014 / Accepted: 30 May 2014 / Published: 10 June 2014
(This article belongs to the Section Remote Sensors)

Abstract

Geostationary meteorological satellite infrared (IR) channel data contain important spectral information for meteorological research and applications, but their spatial resolution is relatively low. The objective of this study is to obtain higher-resolution IR images. One common way to increase resolution is to fuse the IR data with high-resolution visible (VIS) channel data. However, most existing image fusion methods focus only on visual performance and often fail to take into account the thermal physical properties of the IR images; as a result, spectral distortion occurs frequently. To tackle this problem, we propose a thermal physical property-based correction method for fusing geostationary meteorological satellite IR and VIS images. In our two-step process, the high-resolution structural features of the VIS image are first extracted and incorporated into the IR image using a regular multi-resolution fusion approach, such as multiwavelet analysis. This step significantly increases the visual detail in the IR image, but spurious thermal information may be introduced. Next, the Stefan-Boltzmann Law is applied to correct the distortion, retaining or recovering the thermal infrared nature of the fused image. The results of both the qualitative and quantitative evaluations demonstrate that the proposed physical correction method both improves the spatial resolution and preserves the infrared thermal properties.

1. Introduction

The infrared (IR) channels of geostationary meteorological satellites contain important thermal information for meteorological research and applications. Raw IR data are often converted to brightness temperature, and are readily used in weather analysis, numerical weather prediction (NWP), and climate modeling. One limitation of IR data is their low spatial resolution. In contrast, the visible (VIS) channel has considerably higher resolution. The motivation for fusing the IR and VIS data is to produce IR data with improved spatial resolution.

Image fusion aims to integrate complementary information from multisensor data, such that the synthesized image is more suitable for human visual perception or further processing. Image fusion methods are commonly categorized into three levels: pixel, feature, and decision [1,2]. Here, only pixel-level fusion is considered. The simplest fusion methods are based on pixel-wise comparisons, such as taking the maximum or average of the pixels of interest. Multi-resolution analysis [3,4] is a popular fusion approach. The Laplacian pyramid technique represents the first attempt to decompose and merge images hierarchically [5]. Another effective multi-resolution approach is the wavelet-based method [6–8]. Following wavelet analysis, a variety of multi-resolution transform bases have been developed and applied to image fusion, including the curvelet [9,10], contourlet [11], and nonsubsampled contourlet [12]. All of these methods aim to achieve better visual performance [13], but fail to take into account the underlying physical properties. Therefore, the fused image may suffer from spectral distortion despite the enhanced visual quality.

In addition to these general multi-resolution fusion methods, many techniques have been developed specifically for the fusion of multispectral satellite images. Widely applied methods include the Hue-Saturation-Intensity (HSI) transform, the Brovey method, and Principal Component Analysis (PCA) [1]. Guo and Moore introduced the pixel block intensity modulation (PBIM) method [14], which modulates the high-resolution panchromatic band of the Landsat Thematic Mapper (TM) into its low-resolution thermal band to increase the detail in the thermal band. The enhanced smoothing filter-based intensity modulation (SFIM) method proposed by Liu integrates lower-resolution multispectral images with higher-resolution panchromatic images [15]. Choi et al. applied a curvelet-based approach to fuse multispectral and panchromatic images [16]. Aanaes et al. addressed the ill-posed fusion problem with a framework for pixel neighborhood regularization based on prior assumptions about the image data [17]. Despite their multispectral capability, these methods cannot be adapted directly to the fusion of VIS and IR images. First, the PBIM method improves only the spatial topographical resolution, rather than the actual spectral resolution; it is therefore not applicable where the image contrast arises from thermal emission itself, as with clouds, rather than from topography. Second, both the PBIM and SFIM methods are modeled on the terrain's reflective properties, whereas in our problem the infrared radiation is dominant. Moreover, all of these methods except the PBIM require that the source images have overlapping spectral responses, such as those between the RGB visible channels and the panchromatic channel. This requirement is not satisfied by the IR and VIS images.

Here we propose a novel thermal physical property-based post-correction solution to the fusion of geostationary satellite IR and VIS channel images. Unlike the fusion of multispectral and panchromatic images, for which multiple sources (R, G, B, and Pan) are required, our method requires only one VIS channel and one IR channel. By taking advantage of the infrared thermal properties, our objective is to generate a higher-resolution IR image from the VIS-IR composite image produced by regular fusion methods. The organization of this paper is as follows: in Section 2, we discuss the methodological details in two steps, namely multiwavelet fusion and the physical correction. Section 3 presents the experimental results and quantitative evaluations of the proposed approach. This is followed by a discussion and conclusions in Section 4.

2. Method

Given the VIS and IR satellite images, we produce a higher-resolution infrared image in two steps. First, the low-resolution IR image is resampled and interpolated so that both images are on the same pixel scale, and the resampled IR image is fused with the VIS image using a regular multi-resolution fusion algorithm, so that texture details extracted from the VIS image are transferred into the IR image. Many of the aforementioned multi-resolution analysis techniques would serve here; we choose a relatively new and mature one, multiwavelet analysis. Second, based on the Stefan-Boltzmann Law, we convert the original IR image to a thermal radiation map, and use this map as a reference to perform a physical correction on the fused image. The purpose of this correction is to remove the spectral distortion induced by the fusion. The details of the algorithm are as follows.

2.1. Multiwavelet Image Fusion

We adopt a multiwavelet image fusion algorithm to extract the fine textural features from the VIS image and impose them on the IR image. This multi-resolution approach enables the decomposition of the source images into multiple resolution scales such that the structural details of the VIS image are readily available. At this stage, the infrared thermal properties are not considered, and we merge the two images only at the pixel level. This motivates the thermal physical property-based correction discussed in the next section. More information on the multiwavelet algorithm and its application to image fusion can be found in [18–20]. It should be noted that other multi-resolution image fusion methods can also be used in this step.

First, we apply the multiwavelet decomposition to the two source images. The resulting coefficient arrays contain a low-frequency approximation component, or rough contrast, and multiple high-frequency detail components representing sharp edges and fine structure at various scales. Fusion is achieved by recombining the coefficients from the source images. In our case, we take the approximation coefficients of the IR image as the low-frequency component of the fused image, so that the substantial infrared properties are preserved. The high-frequency components are selected from either the IR or the VIS detail coefficients, depending on which displays more regional structural detail. This is implemented by comparing the regional variance of the two coefficient arrays: we take the coefficient with the higher local variance, which indicates more structural information. The purpose is not only to bring in the richest structural detail, but also to take the surrounding regional landscape into account, making the selection more reasonable. Mathematically, the local variance within an N × N window around a position (x, y) is defined as:

$$\sigma^2(x, y) = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ D(i, j) - \mu(x, y) \right]^2$$

$$\mu(x, y) = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} D(i, j)$$
where D(i, j) is the detail coefficient at (i, j), and μ(x, y) is the mean value in the window. The selection rule for the detail coefficients can thus be described as:
$$D_F(x, y) = \begin{cases} D_{VIS}(x, y), & \sigma_{VIS}^2(x, y) \ge \sigma_{IR}^2(x, y) \\ D_{IR}(x, y), & \sigma_{VIS}^2(x, y) < \sigma_{IR}^2(x, y) \end{cases}$$

An inverse multiwavelet transform of the selected approximation and detail coefficients recovers the fused image from the VIS and IR data. This process can be summarized in the following steps, illustrated in Figure 1.

First, we pre-process the resampled IR image, and decompose both the IR and VIS images using the CL multiwavelet system [21]. The number of decomposition levels was chosen empirically: experiments show that a six-level decomposition, i.e., decomposing the source images into six detail scales, produces the best fusion results.

Second, we take the IR approximation coefficients as the low-frequency components of the fused image, and use the local variance comparison method with a 3 × 3 window to select the high-frequency components. Third, we apply the inverse multiwavelet transform to the selected coefficient arrays to produce the synthesized image from the IR and VIS data.
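
As an illustrative sketch of this step, the code below implements the variance-based selection rule in Python. The CL multiwavelet system is not available in common libraries, so a scalar wavelet from PyWavelets ('db4') is substituted here purely for demonstration; the IR-approximation and maximum-local-variance selection logic is unchanged.

```python
# A minimal sketch of the variance-based fusion step, assuming both inputs are
# float arrays of equal shape (the IR image already resampled to VIS scale).
# 'db4' stands in for the CL multiwavelet, which common libraries lack.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_variance(coeff, window=3):
    """Variance of coefficients within a window x window neighborhood."""
    mean = uniform_filter(coeff, size=window)
    mean_sq = uniform_filter(coeff ** 2, size=window)
    return mean_sq - mean ** 2

def multiresolution_fuse(vis, ir, wavelet="db4", levels=6, window=3):
    """Keep the IR approximation; pick each detail coefficient from the image
    with the higher local variance (more regional structure)."""
    c_vis = pywt.wavedec2(vis, wavelet, level=levels)
    c_ir = pywt.wavedec2(ir, wavelet, level=levels)
    fused = [c_ir[0]]  # low-frequency component comes from the IR image
    for d_vis, d_ir in zip(c_vis[1:], c_ir[1:]):
        bands = tuple(
            np.where(local_variance(dv, window) >= local_variance(di, window), dv, di)
            for dv, di in zip(d_vis, d_ir)  # horizontal, vertical, diagonal details
        )
        fused.append(bands)
    return pywt.waverec2(fused, wavelet)
```

The resulting fused image would then be passed to the physical correction described in the next section.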

2.2. Physical Property-Based Correction

The aforementioned multiwavelet-based method greatly enhances the visual resolution, but fails to preserve the infrared physical properties. Although the low-frequency components from the IR image remain untouched, the high-frequency information from the VIS image introduces serious spectral distortion. The distortion is particularly severe in areas where clouds are present in the VIS image but absent in the IR image. After fusion, these areas may appear especially bright; if the fused image is interpreted as infrared data, this implies a low brightness temperature that contradicts the original IR image. Therefore, a physical property-based post hoc correction is necessary to maintain or restore the infrared nature of the fused image.

The basic principle is to trace back to the fundamental property of the IR image, i.e., the thermal radiation. Satellite infrared data are derived from the Earth's thermal radiation, and this measured radiation can be converted to brightness temperature. Inversely, given a brightness temperature, the original thermal radiation can be estimated. This temperature-to-radiation conversion can be approximated using the Stefan-Boltzmann Law, which is the integral of Planck's Law over wavelength and solid angle:

$$j = \varepsilon \sigma T^4$$

Here j represents the total energy radiated per unit surface area per unit time, T is the temperature in Kelvin, ε is the emissivity, and σ is the Stefan-Boltzmann constant. For the fused image to retain its infrared character (i.e., for the fused pixel values to represent brightness temperature), the underlying thermal radiation must be comparable to that of the original IR image. The objective of the post hoc correction is therefore to constrain the regional radiation energy to the original IR level.
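
A minimal sketch of this conversion, assuming a constant emissivity (the exact value cancels in the ratio-based correction below):

```python
# Brightness temperature <-> radiation via the Stefan-Boltzmann law.
# EPSILON = 1.0 is an assumption for illustration; any constant value cancels
# out in the ratio-based correction that follows.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
EPSILON = 1.0           # assumed emissivity

def to_radiation(T):
    """Brightness temperature (K) -> radiated energy per unit area and time."""
    return EPSILON * SIGMA * T ** 4

def to_temperature(j):
    """Radiated energy -> brightness temperature (K)."""
    return (j / (EPSILON * SIGMA)) ** 0.25
```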

Specifically, we first convert both the IR image and the synthesized fused image F into radiation maps, such that the value at each pixel, jIR(u, v) for the IR image and jF(x, y) for the fused image, approximates the radiation energy observed at that pixel. Second, we traverse the two images and proportionally adjust the radiation energy in the fused image so that, within each window of interest, the regional radiation equals that of the IR image. The window size η reflects the resolution disparity between the IR image and the fused image (which matches the original VIS image), such that one single pixel in the IR image corresponds to an η × η window in the fused image. This is essentially an interpolation process using the nearest-neighbor method (Figure 2). For instance, our satellite data have a VIS resolution of 1 km and an IR resolution of 4 km, so η = 4 in our case. Mathematically, letting jF*(x, y) be the corrected radiation energy at pixel (x, y), we expect the following relation to hold within each η × η window:

$$\sum_{x=1}^{\eta} \sum_{y=1}^{\eta} j_F^*(x, y) = \eta^2 \, j_{IR}(u, v)$$

Accordingly, the old radiation value at pixel (x, y) in the fused image should be updated to:

$$j_F^*(x, y) = j_F(x, y) \, \frac{\eta^2 \, j_{IR}(u, v)}{\sum_{x=1}^{\eta} \sum_{y=1}^{\eta} j_F(x, y)}, \quad 1 \le x, y \le \eta$$

After the correction, the radiation maps are converted back to the brightness temperature.

However, blocking artifacts may occur because of these tight constraints. Since the process relies on only a single point in the IR image to correct each region, it ignores the surrounding radiation landscape. If the values in one window are scaled upward while the adjacent window is scaled downward, for example, a visible border will appear. Consequently, the constraint must be loosened by including more of the surrounding region. Although the correction is still applied to each η × η window, we now consider a surrounding M × M area in the fused image that maps to an N × N area in the IR image (Figure 3), instead of a single point, to decide the degree to which the radiation energy is scaled. For computational ease, the M-to-N ratio is kept equal to the ratio of the VIS and IR image resolutions. In this case, the equality of the regional radiation energy is expressed as:

$$\sum_{x=1}^{M} \sum_{y=1}^{M} j_F^*(x, y) = \eta^2 \sum_{u=1}^{N} \sum_{v=1}^{N} j_{IR}(u, v)$$

Accordingly, the corrected value in the fused image should be:

$$j_F^*(x, y) = j_F(x, y) \, \frac{\eta^2 \sum_{u=1}^{N} \sum_{v=1}^{N} j_{IR}(u, v)}{\sum_{x=1}^{M} \sum_{y=1}^{M} j_F(x, y)}, \quad 1 \le x, y \le \eta$$
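
A sketch of this neighborhood-based correction is given below, assuming brightness-temperature inputs in Kelvin, η = 4, and an N = 3 neighborhood (so M = 12). The border handling (clipping the neighborhood at the image edges) is our assumption, as the paper does not specify it.

```python
# A sketch of the neighborhood-based physical correction (not the authors'
# reference implementation). fused_T has shape (rows*eta, cols*eta); ir_T has
# shape (rows, cols); both hold brightness temperature in Kelvin.
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def physical_correction(fused_T, ir_T, eta=4, n=3):
    j_f = SIGMA * fused_T ** 4   # temperature -> radiation (emissivity cancels)
    j_ir = SIGMA * ir_T ** 4
    corrected = np.empty_like(j_f)
    half = n // 2
    rows, cols = ir_T.shape
    for u in range(rows):
        for v in range(cols):
            # N x N IR neighborhood, clipped at the image borders
            u0, u1 = max(u - half, 0), min(u + half + 1, rows)
            v0, v1 = max(v - half, 0), min(v + half + 1, cols)
            ir_sum = j_ir[u0:u1, v0:v1].sum()
            # corresponding M x M area in the fused image (M = eta * N)
            fused_area = j_f[u0 * eta:u1 * eta, v0 * eta:v1 * eta]
            scale = eta ** 2 * ir_sum / fused_area.sum()
            # rescale only the central eta x eta window
            win = np.s_[u * eta:(u + 1) * eta, v * eta:(v + 1) * eta]
            corrected[win] = j_f[win] * scale
    return (corrected / SIGMA) ** 0.25  # radiation -> temperature
```

Only the inner η × η window is rescaled at each step, while the surrounding area determines the scale factor; this is what suppresses the blocking artifacts described above.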

3. Experimental Results and Analysis

We tested the proposed algorithm using data collected by the Multifunctional Transport Satellite (MTSAT), a geostationary satellite operated by the Japan Meteorological Agency (JMA). It has five spectral bands: one visible channel (VIS) with 1-km resolution, and four IR channels (IR1–IR4) with 4-km resolution. The VIS (0.55–0.90 μm) and IR1 (10.3–11.3 μm) channels were used to test the fusion algorithm. In this section, we first present three case studies to evaluate the algorithm by visual inspection. Then, we introduce several objective measurements to quantitatively assess its performance.

3.1. Case Studies

The first case (Figure 4) shows a cyclone near the South China coast and Taiwan on 1300 LST 24 July 2006. In Figure 4a, the VIS channel fails to capture some peripheral clouds of the cyclone that are present in the IR1 channel in Figure 4b. The multiwavelet fusion retains this feature of the IR1 image and increases the overall image resolution, as shown in Figure 4c. However, as indicated in the red box, without a well-defined physical spectral characterization, direct fusion incurs spectral distortion, exhibited as a prominent low-temperature region. After the physical correction, the underlying pseudo-radiation level of the fused image is confined to the IR level, and its infrared physical identity is thereby established. As shown in Figure 4d, the corrected image eliminates the abnormal temperatures caused by spectral distortion, while maintaining a higher resolution than the original IR1 image.

The second case (Figure 5) was taken from the same region on 1300 LST 25 July 2006. The boxed regions illustrate how spectral distortion is incurred and corrected with our method. In the VIS image (Figure 5a), the red boxes enclose two regions with thick and bright cloud tops.

If these pixels are directly merged into the IR image (Figure 5b), unusually low temperatures will be induced, as illustrated in Figure 5c. The physical correction properly controls the level of distortion, and the abnormally low temperatures are recovered to the normal range (Figure 5d).

The third case (Figure 6) was taken from the border of China's Hebei, Shanxi and Henan Provinces on 1300 LST 25 July 2006. The VIS image (Figure 6a) indicates that most of the region is covered with scattered clouds, but the IR image (Figure 6b) suggests a relatively high brightness temperature across the entire region, with no obvious clouds displayed. A direct fusion of the VIS and IR images yields higher spatial resolution and incorporates fine cloud details from the VIS image. However, the overall temperature in the region is lowered significantly (Figure 6c). After the physical correction, the temperatures in this region are restored to the infrared level, as shown by the reduced brightness in Figure 6d, while the general cloud pattern from the VIS image is retained.

3.2. Quantitative Analyses

In addition to the visual inspection, we used the following five categories of parameters to objectively examine the performance of the fusion algorithm:

(1) Information Entropy (IE) and Mutual Information (MI). IE quantifies the amount of information contained within an image, and MI measures how much information is shared between images. These parameters characterize the flow of information during the fusion process and the similarity of the synthesized and source images.

(2) Average Gradient (AG). The gradient at a pixel measures how sharply the pixel values change in the surrounding region. Fine details, sharp edges and complex textures produce highly varying regional characteristics and are reflected by high gradients. The AG is the average of all regional gradients and reflects the overall sharpness of the image. (A code sketch of IE, MI, and AG follows the AVGD/RMSD definitions below.)

(3) Objective Fusion Performance Measure (Qabf). Proposed by Xydeas and Petrović [22], Qabf measures how much detailed edge information is transferred from the source images to the fused image. This parameter ranges between 0 and 1, where 0 represents the complete loss of edge information and 1 represents the perfect preservation of edges.

(4) Universal Image Quality Index (QI) and Edge-dependent Quality Index (QE). QI was initially proposed as a universal measure of image quality by modeling the structural distortion [23], but here we use it as an index of image similarity. This index does not depend on individual observers or testing conditions, and is consistent with subjective evaluations. The dynamic range of QI is [−1, 1] [23]; the closer QI is to 1, the more similar the two images are. QE is adapted from QI such that edge information is taken into account [24]: in addition to the QI of the original images, QE incorporates the QI of the corresponding edge images obtained from the source images. QE likewise ranges between −1 and 1, with the best value of 1 achieved by perfect image similarity.

(5) Thermal Energy Deviation (AVGD and RMSD). Because the fusion method in this paper is based on thermal radiation, we introduce two new parameters: the average thermal energy deviation (AVGD) and the root-mean-square thermal energy deviation (RMSD). These allow us to evaluate the deviation of thermal energy between the synthesized image and the original IR image.

Assuming that the VIS-to-IR resolution ratio is η, a single pixel at (u, v) in the IR image corresponds to an η × η area in the fused image, which can be conceptualized as a simple interpolation (Figure 7). In each η × η window, the local thermal energy deviation from the original IR image is:

$$\Delta j(u, v) = \sum_{x=1}^{\eta} \sum_{y=1}^{\eta} j_F(x, y) - \eta^2 \, j_{IR}(u, v)$$

Traversing the calculation over the entire image (assuming an N × N IR image), we have the AVGD, defined as:

$$j_{AVGD} = \frac{1}{N^2} \sum_{u=1}^{N} \sum_{v=1}^{N} \left| \Delta j(u, v) \right|$$

Similarly, the RMSD is defined as:

$$j_{RMSD} = \sqrt{\frac{1}{N^2} \sum_{u=1}^{N} \sum_{v=1}^{N} \Delta j(u, v)^2}$$
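
As a reference sketch, the following code gives common formulations of IE, MI, and AG for 8-bit images, together with the AVGD and RMSD defined above. The IE, MI, and AG formulations are standard ones assumed here, not taken verbatim from the paper; thermal_deviation expects radiation maps (see Section 2.2) with the fused map exactly η times finer than the IR map.

```python
import numpy as np

def entropy(img):
    """Shannon information entropy (IE) of an 8-bit image, in bits."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b):
    """MI between two equally sized 8-bit images: H(a) + H(b) - H(a, b)."""
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=256)[0] / a.size
    joint = joint[joint > 0]
    return entropy(a) + entropy(b) + float((joint * np.log2(joint)).sum())

def average_gradient(img):
    """AG: mean magnitude of the local intensity gradient."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())

def thermal_deviation(j_fused, j_ir, eta=4):
    """AVGD and RMSD between radiation maps; j_fused is eta x finer than j_ir."""
    rows, cols = j_ir.shape
    # sum the fused radiation within each eta x eta window
    window_sums = j_fused.reshape(rows, eta, cols, eta).sum(axis=(1, 3))
    delta = window_sums - eta ** 2 * j_ir
    return float(np.abs(delta).mean()), float(np.sqrt((delta ** 2).mean()))
```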

The purpose of the quantitative analysis is to evaluate the changes at the physical level. However, to the best of our knowledge, few metrics in the literature measure the similarity of images' underlying thermal properties. Most of the metrics chosen here, e.g., AG and Qabf, are quality measures at the image level, so they may not perfectly satisfy our need. Fortunately, the fundamental physical-level similarity often manifests at the image level, as demonstrated in the qualitative case studies in the previous section, so these image-level metrics still serve as useful indirect indicators. We have also customized two metrics, AVGD and RMSD, to compare the images' thermal properties directly. Moreover, it should be noted that we evaluate the relative change before and after the physical correction, rather than the simple absolute similarity with the original IR image. This reflects the compromise and trade-off between visual resolution and physical reality.

We arbitrarily selected the MTSAT data collected on 1300 LST 26 July 2006 to apply the multiwavelet fusion and physical correction, and calculated these performance parameters. The results are shown in Tables 1 and 2.

As shown in Table 1, the IE values of the fused and corrected images are considerably higher than those of the original VIS and IR images, suggesting that the fusion process has indeed integrated information from the two sources. Despite a slight loss of IE after the physical correction (about −0.9% from FUS to COR), a significant amount is preserved. The MI values in Table 2 further reflect the trade-off and movement of information during the physical correction. The MI between COR and IR is about 33.8% higher than that between FUS and IR, yet a decrease of 6.1% is observed from FUS to COR with respect to the VIS image. This shift indicates that the physical correction recovers more meaningful information from the IR image while abandoning less relevant information from the VIS image. Notably, the 33.8% increase is greater than the 6.1% decrease, which means that much of the information from the VIS image is retained. Furthermore, with regard to the AG, both the FUS and COR images have higher gradients, and thus more spatial edge and detail information, than the original IR image. This is consistent with subjective perception. The increase in AG from FUS to COR may be attributed to the disturbance caused by the introduction of noise.

In Table 2, both the FUS and COR images have a higher QI with the VIS image than with the IR image. The same trend is observed for QE. The fusion result is closer to the VIS image than to the IR image, indirectly indicating increased spatial resolution after both the multiwavelet fusion and the physical correction. Moreover, comparing the results before and after the physical correction (FUS vs. COR), the QI and QE with respect to the IR image increase from FUS to COR, while those with respect to the VIS image decrease. This again reflects the information trade-off in the physical correction: a gain in meaningful thermal physical properties from the IR image, and a loss of inaccurate visual information from the VIS image.

From the perspective of thermal radiation (Table 1), the COR image exhibits much lower AVGD and RMSD than the FUS image (43.8% and 39.7% lower, respectively), indicating that after the physical correction the image deviates far less from the original IR image in terms of the underlying thermal radiation. This is not surprising, because the physical correction uses the IR thermal radiation level to constrain that of the fused image, so that the outcome retains the infrared physical properties. The cost, however, is a slight loss of spatial resolution, especially fine edge information, as reflected by the 13.5% decrease in Qabf in Table 1.

Fortunately, the case studies have shown that such a loss is not catastrophic, and the final corrected image still has high resolution compared with the original IR image. This is consistent with our fusion objective, in which the re-establishment of physical truth, rather than visual sharpness, is the primary goal. Tables 3–6 present similar results using data obtained on different days.

4. Conclusions

We propose a thermal physical property-based correction method for fusing geostationary meteorological satellite IR and VIS images. Most existing image fusion techniques are capable of synthesizing multiple source images and producing satisfactory visual effects, but often fail to take into account the thermal physical properties of the IR images, leading to spectral distortion. Our method mitigates the spectral distortion and reinstates the infrared physical properties of the fused image. We first apply a well-developed multiwavelet-based fusion method to merge the VIS and IR images, so that the rich textural details of the VIS image increase the spatial resolution. Then, we use the thermal radiation inferred from the IR image as a reference to correct the fused image, maintaining a similar level of thermal radiation. The experimental results demonstrate that the proposed method both improves the spatial resolution of the IR image and preserves its spectral consistency.

The proposed fusion method has one limitation. This method is sensitive to the altitude of the sun at the time of data acquisition, or more precisely, the solar elevation angle. The algorithm performs best on data collected between 1100 and 1300 LST. At other times, the oblique incidence of sunlight may lead to a degradation of the data quality in the visible channel, compromising the fusion result. It will therefore be necessary to improve the method in the future by incorporating the influence of the angle between the sun and the satellite.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (41005024), the High Technology Research and Development Program of China (2012AA091004), the Promotive Research Fund for Young Scientists of Shandong Province (BS2010DX034) and the Open Research Project of the State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences (Grant No. 2011LASW-B06).

Author Contributions

Lei Han and Dalei Song proposed the basic idea of this study and finished the second part of the algorithm. Lu Shi implemented the wavelet part of the algorithm. Yiling Yang performed the evaluation.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, J. Multi-source image sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24.
2. Ehlers, M.; Klonus, S.; Johan Åstrand, P.; Rosso, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45.
3. Piella, G. A general framework for multiresolution image fusion: From pixels to regions. Inf. Fusion 2003, 4, 259–280.
4. Li, S.; Yang, B.; Hu, J. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 2011, 12, 74–84.
5. Adelson, E.H.; Anderson, C.H.; Bergen, J.R.; Burt, P.J.; Ogden, J.M. Pyramid methods in image processing. RCA Eng. 1984, 29, 33–41.
6. Pajares, G.; de la Cruz, J.M. A wavelet-based image fusion tutorial. Pattern Recognit. 2004, 37, 1855–1872.
7. Piella, G. Adaptive wavelets and their applications to image fusion and compression. Ph.D. Thesis, University of Amsterdam, Amsterdam, The Netherlands, 2003.
8. Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 22, 1005–1017.
9. Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
10. Mahyari, A.G.; Yazdi, M. A novel image fusion method using curvelet transform based on linear dependency test. In Proceedings of the 2009 International Conference on Digital Image Processing, Bangkok, Thailand, 7–9 March 2009; pp. 351–354.
11. Yang, S.; Wang, M.; Jiao, L.; Wu, R.; Wang, Z. Image fusion based on a new contourlet packet. Inf. Fusion 2010, 11, 78–84.
12. Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346.
13. Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A new image fusion performance metric based on visual information fidelity. Inf. Fusion 2013, 14, 127–135.
14. Guo, L.J.; Moore, J.M. Pixel block intensity modulation: Adding spatial detail to TM band 6 thermal imagery. Int. J. Remote Sens. 1998, 19, 2477–2491.
15. Liu, J. Smoothing filter-based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472.
16. Choi, M.; Kim, R.Y.; Nam, M.R.; Kim, H.O. Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett. 2005, 2, 136–140.
17. Aanaes, H.; Sveinsson, J.R.; Nielsen, A.A.; Bovith, T.; Benediktsson, J.A. Model-based satellite image fusion. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1336–1346.
18. Martin, M.B.; Bell, A.E. New image compression techniques using multiwavelets and multiwavelet packets. IEEE Trans. Image Process. 2001, 10, 500–510.
19. Strela, V.; Heller, P.N.; Strang, G.; Topiwala, P.; Heil, C. The application of multiwavelet filterbanks to image processing. IEEE Trans. Image Process. 1999, 8, 548–563.
20. Wang, H.H. A new multiwavelet-based approach to image fusion. J. Math. Imaging Vis. 2004, 21, 177–192.
21. Chui, C.K.; Lian, J.A. A study of orthonormal multi-wavelets. Appl. Numer. Math. 1996, 20, 273–298.
22. Xydeas, C.; Petrović, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
23. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
24. Piella, G.; Heijmans, H. A new quality metric for image fusion. In Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003; Volume 3, pp. 173–176.
Figure 1. The multiwavelet-based algorithm for IR and VIS image fusion. Only three levels of decomposition are displayed.
Figure 2. Illustration of the physical correction. (a) An η × η window in the fused image; here, η = 4. (b) One pixel in the low-resolution IR image, interpolated to the same scale as (a). Both windows in (a) and (b) should have the same amount of radiation energy.
Figure 3. Improved physical correction. (a) An M × M area in the fused image is used to correct the highlighted η × η window in the center. (b) An N × N area in the low-resolution IR image. The objective is to let both areas in (a) and (b) have the same radiation energy.
Figure 4. MTSAT images of South China and Taiwan on 1300 LST 24 July 2006. (a) VIS image. It fails to capture the peripheral clouds of the cyclone. (b) IR image. It presents the peripheral clouds that are absent in (a). (c) Fused image. It has higher resolution, and retains the cyclone peripheral clouds as in (b). The red box indicates an area with abnormal temperature. (d) Corrected image.
Figure 5. MTSAT images of South China and Taiwan on 1300 LST 25 July 2006. (a) VIS image; (b) IR image; (c) Fused image. Red boxes indicate areas with serious spectral distortion; (d) Corrected image.
Figure 6. MTSAT image of the borders of Hebei, Shanxi and Henan Provinces of China (1300 LST 25 July 2006). (a) VIS image; (b) IR image; (c) Fused image; (d) Corrected image.
Figure 7. Illustration of the thermal energy deviation. (a) An η × η window in the fused image; in this case, η = 4. (b) One pixel in the low-resolution IR image, interpolated to the same scale as (a). The thermal energy deviation characterizes the difference in radiation energy between these η × η areas in the fused and IR images.
Table 1. Quantitative analysis of IR and VIS image fusion and physical correction. The satellite data were collected on 1300 LST 26 July 2006. IR: infrared image. VIS: visible image. FUS: fused image using multiwavelet. COR: corrected image.

      IE       AG       Qabf     AVGD       RMSD
IR    6.9091   0.6502   —        —          —
VIS   4.9791   2.4797   —        —          —
FUS   7.0560   2.7412   0.6510   667.0806   924.3553
COR   6.9890   2.8865   0.5633   375.2017   557.3362
Table 2. Quantitative analysis of IR and VIS image fusion and physical correction. The satellite data were collected on 1300 LST 26 July 2006.

            MI       QI       QE
IR + FUS    1.2882   0.1644   0.0867
VIS + FUS   0.8890   0.7895   0.7932
IR + COR    1.7235   0.1850   0.0893
VIS + COR   0.8350   0.7285   0.7169
Table 3. Quantitative analysis of IR and VIS image fusion and correction (1300 LST 24 July 2006).

      IE       AG       Qabf     AVGD       RMSD
IR    6.9091   0.6502   —        —          —
VIS   4.9791   2.4797   —        —          —
FUS   7.0552   2.6771   0.6740   649.6730   906.2315
COR   6.9934   2.9388   0.5407   366.8883   551.5277
Table 4. Quantitative analysis of IR and VIS image fusion and correction (1300 LST 24 July 2006).

            MI       QI       QE
IR + FUS    1.3277   0.1829   0.0957
VIS + FUS   0.8830   0.8006   0.8114
IR + COR    1.7530   0.1936   0.1045
VIS + COR   0.8021   0.6969   0.6835
Table 5. Quantitative analysis of IR and VIS image fusion and correction (1300 LST 25 July 2006).

      IE       AG       Qabf     AVGD       RMSD
IR    6.9567   0.6722   —        —          —
VIS   5.0491   2.1981   —        —          —
FUS   7.0574   2.4357   0.6451   578.3445   838.8759
COR   7.0288   2.6651   0.5244   336.5270   516.8839
Table 6. Quantitative analysis of IR and VIS image fusion and correction (1300 LST 25 July 2006).

            MI       QI       QE
IR + FUS    1.6075   0.2314   0.1405
VIS + FUS   0.9005   0.7447   0.7590
IR + COR    2.0161   0.2382   0.1278
VIS + COR   0.8724   0.6562   0.6478

