Article

An Improved Pulse-Coupled Neural Network Model for Pansharpening

1 Faculty of Geomatics, Lanzhou Jiaotong University, Lanzhou 730070, China
2 National-Local Joint Engineering Research Center of Technologies and Applications for National Geographic State Monitoring, Lanzhou 730070, China
3 State Key Laboratory of Integrated Service Network, Xidian University, Xi’an 710071, China
4 State Key Laboratory of Resources and Environmental Information System, Institute of Geographical Sciences and Natural Resources Research, CAS, Beijing 100101, China
5 Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen 518172, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(10), 2764; https://doi.org/10.3390/s20102764
Submission received: 8 April 2020 / Revised: 10 May 2020 / Accepted: 11 May 2020 / Published: 12 May 2020
(This article belongs to the Section Remote Sensors)

Abstract

Pulse-coupled neural network (PCNN) and its modified models are suitable for multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to apply directly to multispectral image fusion, especially when spectral fidelity is considered. A key problem is that most fusion methods using PCNNs focus on a selection mechanism, either in the spatial domain or in the transform domain, rather than on a details injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to preserve spectral fidelity in a way that is consistent with human visual perception. Experimental results on several different datasets show the suitability of the proposed model for pansharpening.

1. Introduction

Multispectral (MS) images are of great importance in typical remote sensing applications, such as land utilization [1], urban interpretation [2], urban monitoring [3], and scene understanding [4]. However, owing to the physical constraints of optical satellite sensors and their communication bandwidth, high spectral diversity and high spatial resolution cannot be obtained at the same time; finer spectral resolution comes at the cost of coarser spatial resolution. Thus, pansharpening, which combines the narrow-band multispectral image with a broadband high-spatial-resolution panchromatic (PAN) image to acquire a high-resolution multispectral image, is desirable. Pansharpening mainly contributes to remote sensing tasks such as change detection [5], feature extraction [6], land classification [7] and scene interpretation [8]. Detailed reviews of pansharpening methods can be found in [9,10,11,12].
Most popular pansharpening methods fall into two main groups: component substitution (CS) methods and multiresolution analysis (MRA) methods [9]. In CS methods, one component extracted from the multispectral image is substituted with the PAN image; examples include the intensity-hue-saturation (IHS) method [13,14], the principal component analysis (PCA) method [15,16], and the Gram–Schmidt (GS) method [17]. The MRA methods provide effective multiresolution decomposition tools to extract spatial details from the high-resolution image, which are then injected into the resampled multispectral images. These algorithms include the decimated wavelet transform (DWT) [18], the undecimated wavelet transform (UDWT) [19], the “à trous” wavelet transform (ATWT) [20], the Laplacian pyramid (LP) [21], and the morphological pyramid [22]. More recently, several deep learning approaches have been proposed to address the fusion problem; they achieve better fusion results by learning from training datasets, and include the convolutional neural network method [23] and the pansharpening convolutional neural network method [24]. Other pansharpening methods have also been proposed for preprocessing steps or misaligned data, such as the enhanced back-projection model [25] and the improved fusion model [26].
As a basic remote sensing problem, pansharpening is receiving more attention along with the rapid development of high-resolution satellite imagery. The Google Earth and Microsoft Bing platforms are the most representative applications of pansharpening [27]. The main purpose of pansharpening is to inject the details from the higher-resolution image into the lower-resolution one so as to obtain a high-resolution multispectral image. Classical methods estimate the injection coefficients either globally or locally over rectangular windows. Global injection algorithms inevitably produce more spectral distortion, because all the extracted details are treated without distinction. Local estimation of the injection coefficients performs better than global estimation by considering the correlations of neighboring pixels. However, the local estimation areas for the injection procedure are usually restricted to a fixed square window [28], which yields poor results when foreground and background pixels appear in the same window. This mixed situation is especially common in high-resolution remote sensing images, since tiny objects are more likely to be resolved at high resolution. Thus, the injection coefficients should be estimated in an irregular region rather than a square one. The pulse-coupled neural network (PCNN), which derives from the synchronous oscillation phenomenon of visual cortex neurons, has the ability to generate such irregular clustering regions. Synchronous oscillation means that neurons (corresponding to the pixels of an image) in a similar state release their pulses at the same time.
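The difference between global and local injection can be made concrete with a short sketch. The following Python snippet is not the paper's method; the Gaussian and uniform filters, the window size and the variable names are illustrative assumptions. It contrasts a single global gain with a covariance-based gain re-estimated in a fixed square sliding window, the approach whose limitations motivate the irregular regions discussed above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def inject_details(ms_band, pan, sigma=2.0, window=None):
    """Classical detail injection: ms + g * (pan - pan_low).

    window=None -> one global gain; window=N -> gain re-estimated in an
    N x N square sliding window (both suffer from the problems discussed above).
    """
    pan_low = gaussian_filter(pan, sigma)          # low-pass PAN, roughly at the MS scale
    detail = pan - pan_low                         # spatial details to inject

    if window is None:                             # global covariance-based gain
        cm = np.cov(ms_band.ravel(), pan_low.ravel())
        return ms_band + (cm[0, 1] / cm[1, 1]) * detail

    mean = lambda a: uniform_filter(a, size=window)
    cov = mean(ms_band * pan_low) - mean(ms_band) * mean(pan_low)
    var = mean(pan_low ** 2) - mean(pan_low) ** 2
    g = cov / np.maximum(var, 1e-12)               # local gain per square window
    return ms_band + g * detail
```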
The PCNN has shown great ability in image processing and pattern recognition applications [29]. More specifically, one of the most widespread uses of PCNN and its improved versions is image fusion, including m-PCNN [30], memristive PCNN [31], memristor-based PCNN [32], and shuffled frog leaping PCNN [33]. Although these models perform well in multi-focus and medical image fusion, they are difficult to apply directly to pansharpening. All of these PCNN fusion models assume that the input channels are similar and parallel, and they lack mechanisms for detail injection and spectral preservation. The degree of spectral distortion, however, is of the utmost importance in the pansharpening assessment of remote sensing images, so the problem cannot be addressed directly by traditional PCNNs. Cheng et al. proposed a new adaptive dual-channel PCNN for fusing infrared and visible images [34], which exhibits good fusion performance. Ma et al. also addressed the fusion of infrared and visible images using neural networks, such as the DDcGAN method [35], the FusionGAN method [36], and the detail-preserving adversarial learning method [37]. These models attain very satisfactory results; nonetheless, they do not take the spectral channels into account. Shi et al. presented a novel PCNN-based algorithm for remote sensing image fusion [38], which achieves good fusion results for multi-source remote sensing images. However, that algorithm mainly aims at image enhancement, and spectral fidelity is not considered.
To address this issue, a pansharpening PCNN (PPCNN) model for multispectral image fusion is proposed in this paper. It adaptively injects spatial details into the multispectral images at each iteration. Benefiting from the synchronous pulse-emitting behavior of the PPCNN, the detail injection coefficients are estimated among pixels that have not only similar values but also similar neighborhoods. In addition, the proposed model is shown to be consistent with human visual perception, which leads to better visual quality of the fusion results. The performance of the PPCNN is tested on five different high-resolution datasets, and the results are assessed by both spectral and spatial quality evaluation criteria. The experiments indicate that the proposed multispectral fusion method is effective for pansharpening.
The rest of the article is organized as follows: Section 2 briefly reviews the standard PCNN model and then proposes the novel PPCNN model. Section 3 presents a pansharpening method based on the proposed PPCNN model. Section 4 describes the experimental setup. Experimental results and discussions are presented in Section 5, and Section 6 concludes the work.

2. PCNN and PPCNN Models

To motivate the new PPCNN model, the standard PCNN model is first briefly introduced, and the proposed PPCNN is then presented. The improvements over the standard PCNN model are driven by the practical demands of pansharpening applications. The analysis and the implementation of the new model are also given in this section.

2.1. Standard PCNN Model

The brain, as is well-known, perceives the outside world through a complicated neural network of visual cortex neurons. The information of the real world is transmitted by thousands of neurons in the network. Each neuron consists of the cell body, the synapse, the axon and the dendrites. As shown in Figure 1, the cell body receives the electrical impulses from the synapses of other neurons via its dendrites. The membrane potential of the cell body rises as the neuron continues to be stimulated by other neurons. The electrical impulse is generated when the membrane potential is larger than its threshold. The threshold of current neuron changes in a nonlinear way after stimulation. On the other hand, the neuron also transmits electrical impulses to other neurons via the axon part.
As inspired by the structure of visual cortex neuron, Johnson et al. proposed the PCNN model [39]. PCNN, which imitates the mechanism of mammalian visual cortex, is a laterally connected neural network of two-dimensional neurons with no training. As shown in Figure 2, each neuron of the standard PCNN model is divided into three sequential parts: the receptive field, the modulation field, and the pulse generator. It can be mathematically described as follows [39,40]:
F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + I_{ij}  (1)
L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1]  (2)
U_{ij}[n] = F_{ij}[n] \left( 1 + \beta L_{ij}[n] \right)  (3)
Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}  (4)
E_{ij}[n+1] = e^{-\alpha_E} E_{ij}[n] + V_E Y_{ij}[n]  (5)
In the PCNN model, the position of a neuron is denoted by the two-dimensional index ij (row i, column j), so that the network can be applied to an image directly. The neuron kl is a neighboring neuron of the current neuron ij. Neuron ij receives electrical impulses from the synapses of its neighbors kl, which simulates the dendrites receiving local stimuli via the linking synapses M (or W). After the electrical impulses are gathered by the cell, the inputs are split into two channels: the feeding input F and the linking input L. The difference between them is that the feeding input F is also influenced by the external stimulus I. In the modulation part, the internal activity U imitates the membrane potential of the cell body; it is obtained by coupling F with the weighted L, where the weighting factor is denoted as β. The pulse generator produces the output pulse Y if the dynamic threshold E is less than the internal activity U. Furthermore, VF, VL and VE are normalizing parameters, and αF, αL and αE are exponential decay coefficients that imitate the exponential attenuation of the corresponding quantities over time.
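For readers who prefer code to equations, a minimal numpy sketch of one iteration of Equations (1)-(5) is given below. The parameter values and the use of scipy's convolve for the linking sums are assumptions for illustration, not settings from the paper; M and W are the 3x3 linking weight kernels described above.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(I, F, L, E, Y, M, W, beta=0.2,
              aF=0.1, aL=0.3, aE=1.0, VF=0.5, VL=0.2, VE=20.0):
    """One iteration of the standard PCNN; all 2-D arrays share the image shape."""
    F = np.exp(-aF) * F + VF * convolve(Y, M, mode="constant") + I   # Eq. (1): feeding input
    L = np.exp(-aL) * L + VL * convolve(Y, W, mode="constant")       # Eq. (2): linking input
    U = F * (1.0 + beta * L)                                         # Eq. (3): internal activity
    Y = (U > E).astype(F.dtype)                                      # Eq. (4): pulse output
    E = np.exp(-aE) * E + VE * Y                                     # Eq. (5): dynamic threshold
    return F, L, E, Y, U
```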

2.2. Proposed PPCNN Model

In order to make the PCNN model appropriate for pansharpening, an improved PCNN model is proposed, which is called PPCNN hereafter. For the convenience of the fusion task, the new model is designed to have two external inputs while keeping the synchronous pulse-emitting characteristic unchanged. The neuron structure of the improved model is shown in Figure 3. Compared with the standard PCNN, the PPCNN model has at least four advantages: (1) two external stimuli I and P are considered simultaneously for the convenience of fusion operations, (2) each neuron ij of the PPCNN has its own weighting factor βij rather than the uniform one of the standard PCNN model, (3) the internal activity U, composed of the sum of the feeding part F and the weighted linking part L, is taken as the final fusion result, and (4) the new model has fewer parameters. The PPCNN model can be mathematically described using the following expressions:
F_{ij}[n] = V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + I_{ij}  (6)
L_{ij}[n] = V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1] + P_{ij}  (7)
U_{ij}[n] = F_{ij}[n-1] + \beta_{ij}[n-1] L_{ij}[n-1]  (8)
Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases}  (9)
E_{ij}[n+1] = e^{-\alpha_E} E_{ij}[n] + V_E Y_{ij}[n]  (10)

2.3. Implementation of PPCNN Model

Since the PPCNN model is mainly intended for fusion tasks, the output Y of the standard PCNN becomes an intermediate variable in the PPCNN model, and the internal activity U represents the fusion result. Before the implementation of the PPCNN is described, a few symbols need to be defined. The symbol × indicates multiplication between a constant and a matrix, the symbol • denotes element-wise multiplication of two matrices, and the symbol ⊗ denotes convolution between two matrices. Specifically, each neuron has its own βij in the PPCNN model, designed as follows:
C_{ij}[n] = \begin{cases} \mathrm{Cov}(I, Pan) / \mathrm{Cov}(Pan, Pan), & \text{if } Y_{ij}[n] = 1 \\ 0, & \text{otherwise} \end{cases}  (11)
\beta_{ij}[n] = \begin{cases} \mathrm{Std}(I) / \mathrm{Std}(Pan), & \text{if } C_{ij}[n] > 0 \text{ and } Y_{ij}[n] = 1 \\ 0, & \text{otherwise} \end{cases}  (12)
where the symbol Cov(X, Y) stands for the covariance operation between X and Y. Std(X) indicates the standard deviation of X. I refers to the input multispectral image. Pan denotes the panchromatic image. An example of β image is shown in Figure 4.
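A hedged sketch of Equations (11) and (12) follows. It assumes, as suggested by the discussion in Section 2.4, that the covariance and standard-deviation statistics are computed over the set of neurons firing at the current iteration; the function name and the guard conditions are illustrative assumptions.

```python
import numpy as np

def update_beta(I, pan, Y):
    """Eqs. (11)-(12): beta for neurons firing at this iteration, 0 elsewhere."""
    beta = np.zeros_like(I)
    fired = Y > 0
    if fired.sum() < 2:                       # covariance needs at least two samples
        return beta
    i, p = I[fired], pan[fired]
    cm = np.cov(i, p)                         # 2 x 2 covariance matrix over the firing region
    if cm[1, 1] <= 0:
        return beta
    c = cm[0, 1] / cm[1, 1]                   # Eq. (11): Cov(I, Pan) / Cov(Pan, Pan)
    if c > 0:
        beta[fired] = np.std(i) / np.std(p)   # Eq. (12): Std(I) / Std(Pan)
    return beta
```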
The implementation procedure of the PPCNN model includes the following steps (a code sketch follows the list):
  1. Initialize matrices and parameters: Y[0] = F[0] = L[0] = β[0] = U[0] = 0 and E[0] = VE. The initial value of the iteration number n is 1. P denotes the spatial detail image, and I and P are both normalized between 0 and 1. The other parameters (VF, VL, VE, M, W, and αE) are determined by experience for different applications.
  2. Compute F[n] = VF × (Y[n − 1] ⊗ M) + I, L[n] = VL × (Y[n − 1] ⊗ W) + P, and U[n] = F[n − 1] + β[n − 1] • L[n − 1].
  3. If Uij[n] > Eij[n], then Yij[n] = 1; otherwise Yij[n] = 0.
  4. Update E[n + 1] = e^{−αE} × E[n] + VE × Y[n].
  5. If all neurons have been stimulated, stop the iteration; the internal activity U is the final fusion result. Otherwise, let n = n + 1 and return to Step 2.
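The steps above can be sketched as follows. This is a minimal reading of the procedure, not the authors' reference implementation: it reuses the hypothetical update_beta helper from the previous sketch, the parameter values are assumed, and it takes the reading that βij is retained once a neuron has fired so that the final internal activity carries the injected details.

```python
import numpy as np
from scipy.ndimage import convolve

def ppcnn_fuse(I, P, pan, VF=0.5, VL=0.5, VE=1e3, aE=1.1, max_iter=200):
    """Fuse one normalized MS band I with its detail image P; pan is the matched PAN band."""
    M = np.array([[0.707, 1.0, 0.707],
                  [1.0,   0.0, 1.0],
                  [0.707, 1.0, 0.707]])
    W = M.copy()
    F = np.zeros_like(I)
    L = np.zeros_like(I)
    Y = np.zeros_like(I)
    beta = np.zeros_like(I)
    E = np.full_like(I, VE)                    # step 1: E[0] = V_E, everything else zero
    fired = np.zeros(I.shape, dtype=bool)

    for _ in range(max_iter):
        F_new = VF * convolve(Y, M, mode="constant") + I     # Eq. (6)
        L_new = VL * convolve(Y, W, mode="constant") + P     # Eq. (7)
        U = F + beta * L                                     # Eq. (8), previous-step values
        Y = (U > E).astype(I.dtype)                          # Eq. (9)
        E = np.exp(-aE) * E + VE * Y                         # Eq. (10)
        new_beta = update_beta(I, pan, Y)                    # Eqs. (11)-(12)
        # assumption: beta is kept once assigned, so the final U carries the details
        beta = np.where(new_beta > 0, new_beta, beta)
        F, L = F_new, L_new
        fired |= Y.astype(bool)
        if fired.all():                                      # step 5: every neuron has fired
            break
    return F + beta * L                                      # internal activity = fused band
```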

2.4. Analysis of PPCNN Model

For the PPCNN model, Equation (8) can be rewritten as follows:
U_{ij}[n] = V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + \beta_{ij}[n-1] V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1] + I_{ij} + \beta_{ij}[n-1] P_{ij}  (13)
The term Iij + βijPij in Equation (13) is similar to the traditional pansharpening formulation, except that βij changes at each iteration. This means the proposed model has the injection mechanism that is beneficial for pansharpening applications. In addition, some good characteristics are inherited from the standard PCNN, such as synchronous pulse emitting and the exponential attenuation of the threshold. As shown in Equation (13), the current neuron is influenced not only by the external stimuli (I and P) but also by its neighborhood, which guarantees the synchronous pulse-emitting mechanism: neurons with similar neighborhoods and stimuli emit pulses at the same time. When the stimulus is a high-resolution image, the synchronous pulse-emitting mechanism makes the interpretation of object boundaries and textures more accurate.
Another observation is that the threshold of the neuron decays exponentially according to Equation (10). Since the initial values are Y[0] = 0 and E[0] = VE, Eij at the first iteration is
E_{ij}[1] = e^{-\alpha_E} V_E = C V_E  (14)
where C = e^{-\alpha_E}.
Suppose that the neuron ij first fires at the n_0-th iteration and fires again at the n_l-th iteration, where l indexes the firing events. When the iteration number n is less than n_0, we have
E_{ij}[n] = C^{n} V_E  (15)
Otherwise, when n is greater than n_0, we get
E_{ij}[n] = C^{n} V_E \prod_{l=0}^{L} \left( 1 + C^{-n_l} \right) = e^{n \ln C} \gamma_L  (16)
where L is the total number of firings of neuron ij and \gamma_L = V_E \prod_{l=0}^{L} (1 + C^{-n_l}).
According to Equation (9), the output pulse of the current neuron ij is generated as soon as the threshold Eij drops just below the internal activity Uij. Thus, the firing time satisfies
n \approx \ln(U_{ij}[n]) / \ln C - \ln(\gamma_L) / \ln C = a \ln(U_{ij}[n]) + b  (17)
Since Uij accumulates the external stimuli (the multispectral and panchromatic images), Equation (17) obeys the Weber–Fechner law, i.e., there is a logarithmic relationship between the firing time and the external stimuli. Therefore, the proposed model is consistent with human visual perception. The exponential attenuation curve of the proposed model is shown in Figure 5. It shows that the PPCNN processes lower stimuli more precisely and higher ones more coarsely, which is in accordance with the human visual system: people are very sensitive to contrast variation in dark areas.
Compared with the detail injection mechanism of most pansharpening methods and the pulse-emitting mechanism of the standard PCNN model, the PPCNN model combines the advantages of both by introducing the detail injection mechanism into the standard PCNN. In addition, as indicated in Equation (12), βij[n] changes automatically according to the current firing environment at each iteration, which guarantees that the statistical calculations are computed among neurons with a similar state. Furthermore, the PPCNN is shown to be consistent with the human visual system. These improvements are believed to be beneficial for multispectral remote sensing image fusion.

3. PPCNN Based Pansharpening Approach

For pansharpening applications, there is a one-to-one correspondence between the neurons and the pixels of the input image; thus, each neuron of the PPCNN model maps to a specific pixel of the remote sensing image. Before the model is applied, the input multispectral and PAN images should be normalized between 0 and 1 so as to simplify the parameter settings of the network. Let PI and MIk be the input PAN image and the interpolated multispectral images, respectively. The normalized versions are defined as:
\varphi = \max\left( \max(MI_1), \ldots, \max(MI_K), \max(PI) \right)  (18)
I_k = MI_k / \varphi, \quad k = 1, \ldots, K  (19)
P_N = PI / \varphi  (20)
where k indicates the kth band of the multispectral images, and the total band number is K. Symbol max indicates the maximum value of all pixels. Consequently, Ik and PN are the corresponding normalized versions of MIk and PI. The architecture of the PPCNN pansharpening approach is shown in Figure 6, which consists of the following steps:
  1. Interpolate the multispectral image to the PAN size by employing the even cubic kernel, obtaining the input MIk [41].
  2. Obtain Ik and PN according to Equations (19) and (20).
  3. Perform histogram matching between PN and each Ik to obtain the matched version PNk of the PAN image.
  4. Obtain the reduced-resolution version PNLk of each PNk by using a low-pass filter that matches the modulation transfer function of the given satellite [42], and calculate Pk = PNk − PNLk, where k = 1, …, K.
  5. Set k = 1.
  6. Update the internal activity Uk by running the PPCNN model with the external stimuli Ik and Pk (a code sketch of the whole procedure is given after this list).
  7. If k < K, let k = k + 1 and return to Step 6; otherwise, end the loop.
  8. Obtain the fusion result R by the inverse normalization Rk = φ × Uk.
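A hedged end-to-end sketch of this procedure is given below. The even cubic interpolation kernel of [41], the histogram matching step and the MTF-matched low-pass filter of [42] are replaced by simple stand-ins (cubic spline zoom, mean/std matching, Gaussian blur), and the ppcnn_fuse sketch from Section 2.3 is assumed; none of these stand-ins are the authors' exact choices.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def ppcnn_pansharpen(ms, pan, ratio=4, sigma=2.0):
    """ms: (K, h, w) low-resolution MS cube; pan: (H, W) PAN image with H = ratio * h."""
    K = ms.shape[0]
    mi = np.stack([zoom(band, ratio, order=3) for band in ms])   # step 1: upsample MS to PAN size
    phi = float(max(mi.max(), pan.max()))                        # Eq. (18)
    I = mi / phi                                                 # Eq. (19)
    pn = pan / phi                                               # Eq. (20)

    fused = np.empty_like(I)
    for k in range(K):                                           # steps 5-7: band-by-band fusion
        # step 3: crude histogram matching of PAN to band k (mean/std only)
        pnk = (pn - pn.mean()) / pn.std() * I[k].std() + I[k].mean()
        pnl = gaussian_filter(pnk, sigma)                        # step 4: low-pass stand-in for the MTF filter
        Pk = pnk - pnl                                           # spatial detail image P_k
        fused[k] = ppcnn_fuse(I[k], Pk, pnk)                     # step 6: PPCNN fusion
    return phi * fused                                           # step 8: inverse normalization
```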

4. Experiments

In this section, several experiments are designed to evaluate the effectiveness of the proposed model. Since fusing remote sensing images with high spatial resolution is more difficult, the experimental datasets were selected mainly from high-resolution images. The evaluation criteria for assessing the fusion results are briefly reviewed first. Subsequently, several experimental datasets are prepared to verify the capability of the PPCNN model for remote sensing image fusion. Because the proposed PPCNN model is a laterally connected feedback network with no training, the parameter settings of the PPCNN are also given in this section.

4.1. Quality Evaluation Criteria

The experimental results need to be evaluated with quantitative statistical criteria, which focus on both the spectral and spatial quality of the fusion results. Here, the measures used include the spectral angle mapper (SAM) [43], the relative dimensionless global error in synthesis (ERGAS) [44], the quaternion index (Q4) [45,46] and the spatial correlation coefficient (SCC) [47]. They are mathematically described as follows:
\mathrm{SAM}(x, y) = \frac{1}{K} \sum_{k=1}^{K} \arccos\!\left( \frac{\langle x_k, y_k \rangle}{\lVert x_k \rVert\, \lVert y_k \rVert} \right)  (21)
\mathrm{ERGAS}(x, y) = \frac{100}{r} \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \frac{\mathrm{RMSE}^2(x_k, y_k)}{\mu^2(x_k)} }  (22)
Q(x, y) = \frac{4\, \mathrm{Cov}(x, y)\, \mu(x)\, \mu(y)}{\left( \mu^2(x) + \mu^2(y) \right)\left( \sigma^2(x) + \sigma^2(y) \right)}  (23)
where the symbol ⟨·,·⟩ represents the inner product and the symbol ‖·‖ indicates the l2-norm. RMSE is the root mean square error, r stands for the spatial resolution ratio, and σ² and μ represent the variance and mean value, respectively.
SAM measures the global spectral accuracy, while ERGAS can represent the radiometric distortion between the ground-truth image and the fusion result. Q4 is used for overcoming the limitations of the root mean square error (RMSE) [9], which quantifies both spatial and spectral quality. SCC can evaluate the spatial correlation between two images. It should be noticed that the ideal values of SAM, ERGAS, Q4 and SCC are 0, 0, 1 and 1, respectively.
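For reference, a small sketch of how SAM, ERGAS and SCC could be computed is shown below; it transcribes Equations (21) and (22) directly, uses a Laplacian high-pass filter for SCC, and omits Q4, which requires quaternion algebra. The default resolution ratio r and the small epsilon guard are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def sam_deg(x, y):
    """Eq. (21): mean angle between corresponding bands of cubes x and y, in degrees."""
    angles = []
    for xk, yk in zip(x, y):
        xk, yk = xk.ravel(), yk.ravel()
        c = np.dot(xk, yk) / (np.linalg.norm(xk) * np.linalg.norm(yk) + 1e-12)
        angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
    return np.degrees(np.mean(angles))

def ergas(x, y, r=4):
    """Eq. (22): x is the reference (K, H, W) cube, y the fused cube, r the resolution ratio."""
    rmse2 = np.mean((x - y) ** 2, axis=(1, 2))
    mu2 = np.mean(x, axis=(1, 2)) ** 2
    return (100.0 / r) * np.sqrt(np.mean(rmse2 / mu2))

def scc(x, y):
    """Correlation between Laplacian (high-pass) details, averaged over bands."""
    return float(np.mean([np.corrcoef(laplace(xk).ravel(), laplace(yk).ravel())[0, 1]
                          for xk, yk in zip(x, y)]))
```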

4.2. Datasets

The performance of the PPCNN model has been tested on five datasets. The datasets represent different landscapes and are captured by four different high-resolution satellite sensors. The first dataset is captured by the WorldView-2 sensor over an urban area of Washington and mainly consists of buildings, woods, and a river. The second dataset is a landscape of a mountainous area of Sichuan province in China, captured by the IKONOS-2 satellite sensor. The third dataset, captured by the QuickBird satellite, shows a suburban region of Boulder in the United States. The fourth dataset, named the Lanzhou dataset, is captured by the GF-2 sensor of China; it represents a mountainous suburban area of Lanzhou city in northwest China, composed of the Yellow River, buildings and mountains. The fifth dataset is a dam area of Xinjiang province in China, also acquired by a GF-2 sensor. All the multispectral images in the datasets consist of four bands: blue, green, red, and near-infrared. For a convenient comparison, all the original images are degraded according to Wald's protocol [48], so that the original multispectral images can be used as ground-truth references for accurate evaluation. Figure 7 shows the five datasets of the pansharpening experiments, and the detailed parameters of all datasets are listed in Table 1.

4.3. Initialize PPCNN Parameters

Since no training is needed in the PPCNN model, the settings of the parameters and matrices are discussed here. The linking synapses M and W are both set to the matrix [0.707, 1, 0.707; 1, 0, 1; 0.707, 1, 0.707], whose weights are derived from the Euclidean distances between neurons. VE is set to a large value to ensure that each neuron is stimulated only once, and the initial value of the iteration number n is 1. The selection of VF and VL is analyzed in Figure 8, where the PPCNN is run on the Washington dataset; the figure gives the SAM, ERGAS, Q4 and SCC results for different VF and VL when αE equals 1.1. It is found that VL has little influence on the results, while Q4 and SCC reach their maximum values and SAM and ERGAS their minimum values once VF is larger than 0.1. Hence, VF can be set to an arbitrary value larger than 0.1. Figure 9 shows the SAM, ERGAS, Q4 and SCC curves for different αE, from which the optimal value of αE is found to be 1.1.
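The settings described above can be summarized in a short snippet; the numerical values assigned to VF, VL and VE are assumptions consistent with the analysis of Figures 8 and 9 rather than prescribed constants.

```python
import numpy as np

# 3x3 linking matrix from inverse Euclidean distances to the centre neuron
offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
M = np.array([0.0 if (dy, dx) == (0, 0) else 1.0 / np.hypot(dy, dx)
              for dy, dx in offsets]).reshape(3, 3)
# -> [[0.707, 1, 0.707], [1, 0, 1], [0.707, 1, 0.707]] after rounding

params = {
    "M": M, "W": M.copy(),     # identical linking synapses
    "V_E": 1e3,                # large, so every neuron fires only once (value assumed)
    "alpha_E": 1.1,            # optimum found in Figure 9
    "V_F": 0.5,                # any value above 0.1 behaves similarly (Figure 8)
    "V_L": 0.5,                # little influence on the results (Figure 8)
}
```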

5. Experimental Results and Discussions

In this section, fusion experiments between the PAN image and the multispectral images are carried out, the performance of the proposed model is assessed, and the results are discussed in detail. Several classical and state-of-the-art algorithms are used as comparative methods, i.e., the GS method [17], the PCA method [15], the Brovey transform (BT) method [49], the IHS method [13], the ATWT method [20], the additive wavelet luminance proportional (AWLP) method [47], the CBD method [9], the revisited AWLP (RAWLP) method [50], the full scale regression (FSR) method [51] and the MOF method [22]. The pansharpening results obtained from the PPCNN algorithm and the other methods are presented and discussed for the five datasets.
The experimental results for the Washington dataset are shown in Figure 10. Figure 10a shows the low-resolution multispectral image of the Washington dataset, and Figure 10b gives the reference ground-truth multispectral image. The fusion result obtained from the proposed PPCNN method is shown in Figure 10c, while the fused multispectral images produced by the GS, PCA, BT, IHS, ATWT, AWLP, CBD, MOF, RAWLP and FSR methods are shown in Figure 10d–m, respectively. For this WorldView-2 dataset, it can be noticed that the proposed PPCNN method and the ATWT, AWLP, CBD, MOF, RAWLP and FSR methods perform better in spatial detail and spectral preservation, while the contours in the PCA and BT results are blurred and not sharp enough. Wrong colors on the small roofs are produced by the BT and IHS methods. Table 2 provides the quantitative comparison results for the Washington dataset. It can be seen that the PPCNN approach performs better than the others, with less spectral distortion and better detail preservation.
Figure 11 shows the pansharpening results for the Sichuan dataset. As shown in Figure 11, the colors of the BT and IHS results are not correctly synthesized, and the AWLP result looks a little blurry. The CBD method shows correct colors as a whole, but some redundant information is introduced, especially in the white area of Figure 11j. The visual quality of the other seven methods is generally acceptable. Table 3 demonstrates that the PPCNN method obtains the best fusion result for the Sichuan dataset.
From Figure 12, the experimental results for the Boulder dataset also show that the BT and IHS methods produce more spectral distortion, not only in the farmland and tree areas but also in the white roof area. For this dataset, wrong colors of small objects are also produced by the CBD method, such as the spurious red lines on the white roof. As shown in Table 4, the PPCNN method outperforms the others in the quantitative comparison.
For the experiments with the Lanzhou dataset, it is obvious from Figure 13 that large spectral distortion is introduced by the GS, PCA, BT and IHS methods, particularly in the Yellow River area of Lanzhou city. In addition, part of the small island is missing in Figure 13j, obtained by the CBD method, and the tree belt in the same figure looks a little blurred. Table 5 compares the PPCNN algorithm with the others on the Lanzhou dataset and shows that the proposed approach outperforms them in the quantitative evaluation.
Figure 14 shows the pansharpening results of the Xinjiang dataset. Since the Xinjiang dataset does not have a lot of high-resolution tiny textures, the visual quality is acceptable for all algorithms. However, Table 6 demonstrates that the PPCNN algorithm also performs the best.
In conclusion, we tested the effectiveness of the PPCNN method with five datasets of different landscapes and sources. The experimental results of all datasets are summarized in Figure 15. In general, the results of the BT and IHS methods exhibit more spectral distortion, and in some cases the GS and PCA methods are not good at spectral preservation either. More specifically, the contours of small objects obtained by these four methods and the AWLP method are sometimes blurred and not sharp enough, and the CBD method produces spectral distortion on some small objects. In contrast, the PPCNN, ATWT, MOF, RAWLP and FSR methods obtain good visual results. According to the quantitative comparison, the proposed PPCNN method produces images with better spatial detail quality and spectral quality than the other methods.

6. Conclusions

This paper presents a novel PPCNN model and applies it to pansharpening. The PPCNN model has two external stimuli rather than the single one of the standard PCNN, which makes it more convenient for fusion tasks. In addition, the internal activity of the PPCNN is designed to perform the detail injection operation, which makes it easier to preserve spectral fidelity. Five datasets with different characteristics, acquired by four different high-resolution sensors, were used to evaluate the model. The performance of the PPCNN model is thoroughly examined on urban, suburban, mountainous and other complex landform datasets, and the results demonstrate that the PPCNN approach performs better with regard to both detail and spectral preservation.
In fact, the PPCNN model imitates visual cortex neurons by transforming the input images into synchronous electrical pulses with no training, whereas deep learning models such as CNNs and RNNs imitate the mechanism of the brain through training. If the output pulses of the PPCNN model are treated as the input of a CNN model, we believe that helpful results can be obtained; consequently, a PPCNN-based training model will be the focus of our future work. Another interesting direction is to investigate other image processing applications of the PPCNN.

Author Contributions

All the authors have contributed substantially to the manuscript. X.L. and H.Y. proposed the methodology. X.L. and W.X. performed the experiments and software. L.K. and Y.T. wrote the paper. H.Y. and Y.T. analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding

This research was jointly funded by National Key R&D Program of China (No. 2017YFB0504201), National Natural Science Foundation of China (Nos. 41861055, 41671447 and 41761082), China Postdoctoral Science Foundation (No. 2019M653795), and lzjtu EP Program (No. 201806).

Acknowledgments

The authors are grateful to the editor and anonymous reviewers for their helpful and valuable suggestions. We also appreciate Jie Li for her support and help.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86.
  2. Fu, H.; Zhou, T.; Sun, C. Object-based shadow index via illumination intensity from high resolution satellite images over urban areas. Sensors 2020, 20, 1077.
  3. Frick, A.; Tervooren, S. A framework for the long-term monitoring of urban green volume based on multi-temporal and multi-sensoral remote sensing data. J. Geovis. Spat. Anal. 2019, 3, 6–16.
  4. Wang, M.; Zhang, X.; Niu, X.; Wang, F.; Zhang, X. Scene classification of high-resolution remotely sensed image based on ResNet. J. Geovis. Spat. Anal. 2019, 3, 1–9.
  5. Bovolo, F.; Bruzzone, L.; Capobianco, L.; Garzelli, A.; Marchesi, S. Analysis of the effects of pansharpening in change detection on VHR images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 53–57.
  6. Mohammadzadeh, A.; Tavakoli, A.; Zoej, M.J.V. Road extraction based on fuzzy logic and mathematical morphology from pan-sharpened IKONOS images. Photogramm. Rec. 2006, 21, 44–60.
  7. Gašparović, M.; Jogun, T. The effect of fusing Sentinel-2 bands on land-cover classification. Int. J. Remote Sens. 2018, 39, 822–841.
  8. Laporterie-Déjean, F.; Boissezon, H.; Flouzat, G.; Lefèvre-Fonollosaa, M. Thematic and statistical evaluations of five panchromatic/multispectral fusion methods on simulated PLEIADES-HR images. Inf. Fusion 2005, 6, 193–212.
  9. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
  10. Amro, I.; Mateos, J.; Vega, M.; Molina, R.; Katsaggelos, A.K. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process. 2011, 2011, 1–22.
  11. Thomas, C.; Ranchin, T.; Wald, L.; Chanussot, J. Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1301–1312.
  12. Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402.
  13. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186.
  14. Carper, W.; Lillesand, T.; Kiefer, R. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467.
  15. Chavez, P.S., Jr.; Sides, S.C.; Anderson, J.A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 265–303.
  16. Chavez, P.S.; Kwarteng, A.Y. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348.
  17. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6011875, 4 January 2000.
  18. Mallat, S.G. A Theory for Multiresolution Signal Decomposition: The Wavelet Representation; IEEE Computer Society: Washington, DC, USA, 1989.
  19. Nason, G.P.; Silverman, B.W. The Stationary Wavelet Transform and Some Statistical Applications; Springer: New York, NY, USA, 1995.
  20. Vivone, G.; Restaino, R.; Mura, M.D.; Licciardi, G.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2013, 11, 930–934.
  21. Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 2003, 31, 532–540.
  22. Restaino, R.; Vivone, G.; Mura, M.D.; Chanussot, J. Fusion of multispectral and panchromatic images based on morphological operators. IEEE Trans. Image Process. 2016, 25, 2882–2895.
  23. Scarpa, G.; Vitale, S.; Cozzolino, D. Target-adaptive CNN-based pansharpening. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1–15.
  24. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594.
  25. Liu, J.; Ma, J.; Fei, R.; Li, H.; Zhang, J. Enhanced back-projection as postprocessing for pansharpening. Remote Sens. 2019, 11, 712.
  26. Li, H.; Jing, L.; Tang, Y.; Ding, H. An improved pansharpening method for misaligned panchromatic and multispectral data. Sensors 2018, 18, 557.
  27. Aiazzi, B.; Alparone, L.; Baronti, S. Twenty-Five Years of Pansharpening: A Critical Review and New Developments. In Signal and Image Processing for Remote Sensing; CRC Press: Boca Raton, FL, USA, 2012; pp. 533–548.
  28. Wang, N.; Jiang, W.; Lei, C.; Qin, S.; Wang, J. A robust image fusion method based on local spectral and spatial correlation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 454–458.
  29. Ma, Y.; Zhan, K.; Wang, Z. Applications of Pulse-Coupled Neural Networks; Higher Education Press: Beijing, China, 2010.
  30. Wang, Z.; Ma, Y. Medical image fusion using m-PCNN. Inf. Fusion 2008, 9, 176–185.
  31. Zhu, S.; Wang, L.; Duan, S. Memristive pulse coupled neural network with applications in medical image processing. Neurocomputing 2017, 227, 149–157.
  32. Dong, Z.; Lai, C.; Qi, D.; Zhao, X.; Li, C. A general memristor-based pulse coupled neural network with variable linking coefficient for multi-focus image fusion. Neurocomputing 2018, 308, 172–183.
  33. Huang, C.; Tian, G.; Lan, Y.; Peng, Y.; Ng, E.Y.K. A new pulse coupled neural network (PCNN) for brain medical image fusion empowered by shuffled frog leaping algorithm. Front. Neurosci. 2019, 13, 1–10.
  34. Cheng, B.; Jin, L.; Li, G. Infrared and visual image fusion using LNSST and an adaptive dual-channel PCNN with triple-linking strength. Neurocomputing 2018, 310, 135–147.
  35. Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
  36. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
  37. Ma, J.; Liang, P.; Yu, W.; Chen, C.; Guo, X. Infrared and visible image fusion via detail preserving adversarial learning. Inf. Fusion 2020, 54, 85–98.
  38. Shi, C.; Miao, Q.G.; Xu, P.F. A novel algorithm of remote sensing image fusion based on Shearlets and PCNN. Neurocomputing 2013, 117, 47–53.
  39. Johnson, J.L.; Padgett, M.L. PCNN models and applications. IEEE Trans. Neural Netw. 1999, 10, 480–498.
  40. Lindblad, T.; Kinser, J.M. Image Processing Using Pulse-Coupled Neural Networks; Springer: Berlin/Heidelberg, Germany, 2005.
  41. Aiazzi, B.; Baronti, S.; Selva, M.; Alparone, L. Bi-cubic interpolation for shift-free pan-sharpening. ISPRS J. Photogramm. Remote Sens. 2013, 86, 65–76.
  42. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. MTF-tailored multiscale fusion of high-resolution MS and pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
  43. Goetz, A.; Boardman, W.; Yunas, R. Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Proceedings of the Summaries 3rd Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; pp. 147–149.
  44. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des MINES: Paris, France, 2002.
  45. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317.
  46. Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665.
  47. Otazu, X.; Gonzalez-Audicana, M.; Fors, O.; Nunez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
  48. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
  49. Gillespie, A.R.; Kahle, A.B.; Walker, R.E. Color enhancement of highly correlated images. II. Channel ratio and “chromaticity” transformation techniques. Remote Sens. Environ. 1987, 22, 343–365.
  50. Vivone, G.; Alparone, L.; Garzelli, A.; Lolli, S. Fast reproducible pansharpening based on instrument and acquisition modeling: AWLP revisited. Remote Sens. 2019, 11, 2315.
  51. Vivone, G.; Restaino, R.; Chanussot, J. Full scale regression-based injection coefficients for panchromatic sharpening. IEEE Trans. Image Process. 2018, 27, 3418–3431.
Figure 1. Structure of visual cortex neuron.
Figure 2. Structure of standard PCNN model.
Figure 3. Structure of proposed PPCNN.
Figure 4. Example of the β image produced by the PPCNN. (a) MS image. (b) β image.
Figure 5. The exponential attenuation characteristic of the PPCNN model.
Figure 6. Procedure of PPCNN pansharpening approach.
Figure 7. Datasets of pansharpening experiments. (a) Washington dataset. (b) Sichuan dataset. (c) Boulder dataset. (d) Lanzhou dataset. (e) Xinjiang dataset.
Figure 8. Analysis of the parameters VF and VL. (a) Q4. (b) SAM. (c) ERGAS. (d) SCC.
Figure 9. Analysis of the parameter αE. (a) Q4 and SCC. (b) SAM and ERGAS.
Figure 10. Pansharpening results of Washington dataset. (a) MS. (b) Ground-truth image. (c) PPCNN. (d) GS. (e) PCA. (f) BT. (g) IHS. (h) ATWT. (i) AWLP. (j) CBD. (k) MOF. (l) RAWLP. (m) FSR.
Figure 11. Pansharpening results of Sichuan dataset. (a) MS. (b) Ground-truth image. (c) PPCNN. (d) GS. (e) PCA. (f) BT. (g) IHS. (h) ATWT. (i) AWLP. (j) CBD. (k) MOF. (l) RAWLP. (m) FSR.
Figure 12. Pansharpening results of Boulder dataset. (a) MS. (b) Ground-truth image. (c) PPCNN. (d) GS. (e) PCA. (f) BT. (g) IHS. (h) ATWT. (i) AWLP. (j) CBD. (k) MOF. (l) RAWLP. (m) FSR.
Figure 13. Pansharpening results of Lanzhou dataset. (a) MS. (b) Ground-truth image. (c) PPCNN. (d) GS. (e) PCA. (f) BT. (g) IHS. (h) ATWT. (i) AWLP. (j) CBD. (k) MOF. (l) RAWLP. (m) FSR.
Figure 14. Pansharpening results of Xinjiang dataset. (a) MS. (b) Ground-truth image. (c) PPCNN. (d) GS. (e) PCA. (f) BT. (g) IHS. (h) ATWT. (i) AWLP. (j) CBD. (k) MOF. (l) RAWLP. (m) FSR.
Figure 15. Comparison results of five datasets. (a) Q4. (b) SAM. (c) ERGAS. (d) SCC.
Table 1. Datasets for experiments.

Experiments | Datasets | Satellites | Size | Spatial Resolution | Region
Pan and MS | Washington dataset | WV-2 | PAN: 2048 × 2048, MS: 512 × 512 | 0.5 m / 2 m | Washington, D.C., USA
Pan and MS | Sichuan dataset | IKONOS-2 | PAN: 2048 × 2048, MS: 512 × 512 | 1 m / 4 m | Sichuan, China
Pan and MS | Boulder dataset | QuickBird | PAN: 2048 × 2048, MS: 512 × 512 | 0.7 m / 2.8 m | Boulder, USA
Pan and MS | Lanzhou dataset | GF-2 | PAN: 2048 × 2048, MS: 512 × 512 | 0.81 m / 3.24 m | Lanzhou, China
Pan and MS | Xinjiang dataset | GF-2 | PAN: 2048 × 2048, MS: 512 × 512 | 0.81 m / 3.24 m | Xinjiang, China
Table 2. Comparison results for Washington dataset.

Criteria | PPCNN | GS | PCA | BT | IHS | ATWT | AWLP | CBD | MOF | RAWLP | FSR
Q4 | 0.8884 | 0.7822 | 0.7438 | 0.7764 | 0.7841 | 0.8626 | 0.8600 | 0.8696 | 0.8651 | 0.8731 | 0.8795
SAM (°) | 7.7527 | 8.4126 | 9.6313 | 8.1936 | 8.6529 | 7.9265 | 7.9247 | 8.5065 | 7.9248 | 7.7967 | 8.3575
ERGAS | 4.9416 | 6.5617 | 7.5797 | 6.7480 | 6.6710 | 5.5633 | 5.7873 | 5.4875 | 5.4889 | 5.3147 | 5.0657
SCC | 0.8375 | 0.8349 | 0.7933 | 0.8372 | 0.8175 | 0.8317 | 0.8187 | 0.7812 | 0.8309 | 0.8369 | 0.8271
Table 3. Comparison results for Sichuan dataset.

Criteria | PPCNN | GS | PCA | BT | IHS | ATWT | AWLP | CBD | MOF | RAWLP | FSR
Q4 | 0.8959 | 0.8354 | 0.8368 | 0.7086 | 0.5658 | 0.8772 | 0.8714 | 0.8762 | 0.8810 | 0.8917 | 0.8926
SAM (°) | 1.3857 | 1.7843 | 1.7942 | 2.7454 | 4.2208 | 1.5599 | 2.9800 | 1.4627 | 1.5765 | 1.3983 | 1.3927
ERGAS | 1.1769 | 1.4884 | 1.4911 | 1.8771 | 2.8338 | 1.2873 | 1.7519 | 1.3027 | 1.2960 | 1.1865 | 1.1819
SCC | 0.9552 | 0.9525 | 0.9520 | 0.9085 | 0.7201 | 0.9509 | 0.9343 | 0.9515 | 0.9476 | 0.9540 | 0.9542
Table 4. Comparison results for Boulder dataset.

Criteria | PPCNN | GS | PCA | BT | IHS | ATWT | AWLP | CBD | MOF | RAWLP | FSR
Q4 | 0.9126 | 0.8511 | 0.8513 | 0.8291 | 0.8084 | 0.9057 | 0.9050 | 0.8972 | 0.9026 | 0.9062 | 0.9097
SAM (°) | 0.9747 | 1.3354 | 1.5113 | 1.1640 | 1.4448 | 1.0470 | 1.2062 | 1.0305 | 1.1257 | 1.1903 | 1.1528
ERGAS | 0.8049 | 1.0992 | 1.0346 | 1.1104 | 1.2317 | 0.8957 | 0.9731 | 1.0052 | 0.9763 | 0.9312 | 0.9441
SCC | 0.9557 | 0.9411 | 0.9411 | 0.9467 | 0.9328 | 0.9432 | 0.9346 | 0.9369 | 0.9465 | 0.9393 | 0.9392
Table 5. Comparison results for Lanzhou dataset.

Criteria | PPCNN | GS | PCA | BT | IHS | ATWT | AWLP | CBD | MOF | RAWLP | FSR
Q4 | 0.9107 | 0.8230 | 0.7763 | 0.8126 | 0.8097 | 0.8980 | 0.8961 | 0.7957 | 0.8835 | 0.9000 | 0.8982
SAM (°) | 1.4304 | 1.7992 | 2.3502 | 1.4307 | 1.6756 | 1.4545 | 1.4480 | 2.2117 | 1.4583 | 1.6917 | 1.4550
ERGAS | 1.6697 | 2.2794 | 2.8436 | 2.2208 | 2.2514 | 1.7911 | 1.7683 | 3.2100 | 1.9758 | 1.7373 | 1.7963
SCC | 0.8944 | 0.8713 | 0.8516 | 0.8717 | 0.8777 | 0.8844 | 0.8771 | 0.8021 | 0.8829 | 0.8822 | 0.8878
Table 6. Comparison results for Xinjiang dataset.

Criteria | PPCNN | GS | PCA | BT | IHS | ATWT | AWLP | CBD | MOF | RAWLP | FSR
Q4 | 0.9253 | 0.8650 | 0.8479 | 0.8514 | 0.8621 | 0.8956 | 0.8836 | 0.8479 | 0.8802 | 0.9181 | 0.8864
SAM (°) | 0.9940 | 1.2560 | 1.3230 | 1.0468 | 1.0672 | 1.1253 | 1.1277 | 1.5497 | 1.1718 | 1.0129 | 1.1674
ERGAS | 1.1784 | 1.6340 | 1.7619 | 1.5944 | 1.6054 | 1.3765 | 1.3733 | 2.0711 | 1.5450 | 1.2622 | 1.4706
SCC | 0.9195 | 0.8814 | 0.8778 | 0.8820 | 0.8918 | 0.8904 | 0.8830 | 0.8444 | 0.8896 | 0.9142 | 0.8847
