Article

Improving the Accuracy of Two-Color Multiview (2CMV) Advanced Geospatial Information (AGI) Products Using Unsupervised Feature Learning and Optical Flow

Berkay Kanberoglu and David Frakes
1 School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85281, USA
2 School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ 85281, USA
* Author to whom correspondence should be addressed.
Sensors 2019, 19(11), 2605; https://doi.org/10.3390/s19112605
Submission received: 5 May 2019 / Revised: 3 June 2019 / Accepted: 3 June 2019 / Published: 8 June 2019
(This article belongs to the Special Issue Synthetic Aperture Radar (SAR) Techniques and Applications)

Abstract

In two-color multiview (2CMV) advanced geospatial information (AGI) products, temporal changes in synthetic aperture radar (SAR) images acquired at different times are detected, colorized, and overlaid on an initial image such that new features are represented in cyan and features that have disappeared are represented in red. Accurate detection of temporal changes in 2CMV AGI products can be challenging because of susceptibility to speckle noise and false positives that result from small orientation differences between objects imaged at different times. Accordingly, 2CMV products are often dominated by colored pixels when changes are detected via simple pixel-wise cross-correlation. A review of the state-of-the-art in SAR image processing indicates that generating efficient 2CMV products while accounting for these problem cases has not been well addressed. We propose a methodology that addresses both problem cases. Before detecting temporal changes, speckle and smoothing filters mitigate the effects of speckle noise. To detect temporal changes, we propose using unsupervised feature learning algorithms in conjunction with optical flow algorithms that track the motion of objects across time in small regions of interest. The proposed framework for distinguishing between actual motion and misregistration can lead to more accurate and meaningful change detection and improve object extraction from an SAR AGI product.

1. Introduction

One important use of synthetic aperture radar (SAR) imagery is detecting changes between datasets from different imaging passes. Target and coherent change detection in SAR images have been extensively researched [1,2,3,4]. In two-color multiview (2CMV) advanced geospatial information (AGI) products, the changes are colorized and overlaid on an initial image such that new features are represented in cyan and features that have disappeared are represented in red. To create the change maps, images are cross-correlated pixel-by-pixel to detect changes. 2CMV products therefore show changes at the pixel level and are often misleadingly dominated by red and cyan. Figure 1 shows a portion of a sample 2CMV image. In the sample images, an airplane is visibly parked next to a building near the bottom center. Many of the pixels in the 2CMV image are colored either red or cyan even where no actual change has occurred.
Useful interpretation of temporal changes represented in 2CMV AGI products can be challenging because of speckle noise susceptibility and false positives that result from small orientation differences between objects imaged at different times. When every small intensity change creates a colored pixel, it becomes more difficult for operators and/or algorithms to detect meaningful changes and identify corresponding objects of interest.
In this work, we introduce a new framework of image processing methods for the efficient generation of 2CMV products toward extraction of advanced geospatial intelligence. Before false positive and object detection algorithms are performed, speckle and smoothing filters are used to mitigate the effects of speckle noise. Then, the number of false positive detections is reduced by applying: (1) unsupervised feature learning algorithms and (2) optical flow algorithms that track the motion of objects across time in small regions of interest.
There have been a number of change detection studies using thresholding [5,6,7,8], extreme learning machines [9,10], Markov random fields [11,12], and combinations of feature learning and clustering algorithms [13,14,15,16,17,18,19]. Optical flow fields can be used to distinguish between objects that have actually moved between frames and those that are in the same location but are slightly misregistered. Both cases of apparent motion can result in 2CMV detections, but they differ greatly in meaning. Investigation of the state-of-the-art in SAR image processing indicates that differentiating between these two general cases has not been well addressed. Algorithms that effectively mitigate speckle noise and distinguish between actual motion and misregistration can lead to better change detection. The lack of published methods for the efficient generation of 2CMV products from SAR images serves as another motivating factor for this work.
The paper is organized in four sections. Following this introduction, Section 2 gives a brief background on the filtering, unsupervised feature learning, and optical flow techniques that were used and describes the stages of the proposed framework. Section 3 presents simulation results. Section 4 discusses the results and the contributions of the proposed methods.

2. Materials and Methods

In this section, we describe the key methods and steps of our image processing approach for generating change maps that drive the 2CMV representation and eliminating false positives in those maps.

2.1. Speckle Noise Filtering

Speckle noise is an inherent problem in SAR images [20] and causes difficulties for image interpretation by increasing the mean grey level of a local region. In order to mitigate speckle noise effects, we tested different speckle filter designs, including the Frost [21], Enhanced Frost [22], Lee [23], Gamma-MAP [24], SRAD [25], and Non-Local Means [26] filters. In the end, the Enhanced Frost filter was used in the algorithm because of its relatively straightforward implementation and comparable performance.
In [22], it was proposed to divide images into areas of three classes. The first class is comprised of homogeneous areas. The second class is comprised of heterogeneous areas wherein speckle noise is to be reduced, while preserving texture. The third class is comprised of areas containing isolated point targets that filtering should preserve. The Enhanced Frost filter output can be given as:
$$\hat{I}(t_o) = \begin{cases} \bar{I}, & C_l(t_o) < C_u, \\[4pt] I\, K_1 \exp\!\left[ -K\, \dfrac{C_l(t_o) - C_u}{C_{max} - C_l(t_o)}\, |t| \right], & C_u \le C_l(t_o) \le C_{max}, \\[4pt] I, & C_l(t_o) \ge C_{max}, \end{cases}$$
where $t_o = (x_o, y_o)$ is the spatial coordinate, $\bar{I}$ is the mean intensity value inside the kernel, $K$ is the filter parameter, $K_1$ is a normalizing constant, and $|t|$ is the absolute value of the pixel distance from the center of the kernel at $t_o$. The remaining parameters are
$$C_u = \frac{1}{\sqrt{L}}, \qquad C_l(t_o) = \frac{\sigma}{\bar{I}}, \qquad C_{max} = \sqrt{1 + \frac{2}{L}},$$
where $C_u$ is the speckle coefficient of variation of the image, $C_l(t_o)$ is the local coefficient of variation of the filter kernel centered at $t_o$, $C_{max}$ is the upper speckle coefficient of variation of the image, and $L$ is the number of looks. In our implementation, instead of $L$, we used the equivalent number of looks (ENL), defined as $ENL = \mu^2 / \sigma^2$, where $\mu$ is the mean and $\sigma$ is the standard deviation.
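For illustration, the following Python sketch implements an Enhanced Frost filter of this form (the authors' implementation was in MATLAB; the damping factor `K`, window handling, and global ENL estimate here are assumptions):

```python
import numpy as np
from scipy.ndimage import generic_filter

def enhanced_frost(img, win=5, K=1.0, enl=None):
    """Sketch of an Enhanced Frost speckle filter (Lopes et al. [22])."""
    img = img.astype(np.float64)
    if enl is None:
        enl = img.mean() ** 2 / img.var()      # ENL = mu^2 / sigma^2
    Cu = 1.0 / np.sqrt(enl)                    # homogeneous-area threshold
    Cmax = np.sqrt(1.0 + 2.0 / enl)            # point-target threshold
    half = win // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    dist = np.hypot(yy, xx).ravel()            # |t|: distance from kernel center

    def _filter(values):
        mean = values.mean()
        Cl = values.std() / (mean + 1e-12)     # local coefficient of variation
        if Cl <= Cu:                           # class 1: homogeneous, plain mean
            return mean
        if Cl >= Cmax:                         # class 3: point target, keep pixel
            return values[len(values) // 2]
        w = np.exp(-K * (Cl - Cu) / (Cmax - Cl) * dist)
        return np.sum(w * values) / np.sum(w)  # class 2: damped exponential kernel

    return generic_filter(img, _filter, size=win, mode='reflect')
```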

2.2. k-Means Clustering

The k-means clustering algorithm attempts to partition $p$ observations into $k$ clusters such that each observation belongs to the cluster with the nearest mean (centroid) [27]. The algorithm iteratively seeks one centroid per cluster while minimizing the within-cluster sum of squares
$$\underset{S}{\arg\min} \sum_{i=1}^{k} \sum_{x_j \in S_i} \| x_j - \mu_i \|^2,$$
where $x_j$ is the $j$th observation and $\mu_i$ is the mean point (centroid) of cluster $S_i$. The basic steps of the algorithm are given in Algorithm 1:
Algorithm 1 k-means clustering algorithm
  • Initialize the centroids: Assign k points as the initial group centroids.
  • Calculate the distance of each point to the centroids and assign the point to the cluster that has the closest centroid.
  • After the assignment of all the points, recalculate the new values of the centroids.
  • Repeat Steps 2 and 3 until the centroid locations converge to a fixed value.
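A minimal NumPy sketch of these steps follows; the random choice of initial centroids is an illustrative assumption, since Algorithm 1 leaves initialization open:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means sketch. X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize the centroids with k randomly chosen observations.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Step 2: assign each point to the cluster with the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recalculate the centroids from the new assignments
        # (keeping the old centroid if a cluster ends up empty).
        new_centroids = np.array([X[labels == j].mean(axis=0)
                                  if np.any(labels == j) else centroids[j]
                                  for j in range(k)])
        # Step 4: stop when the centroid locations converge.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```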

2.3. K-SVD

K-SVD is a dictionary learning algorithm that is used for training overcomplete dictionaries for sparse representations of signals [28,29]. It is an iterative method that is a generalization of the k-means clustering algorithm. The K-SVD algorithm alternates between two stages: (1) sparse coding stage, and (2) dictionary update stage. In the first stage, a pursuit algorithm is used to sparsely code the input data based on the current dictionary. Based on Ref. [29], the Batch Orthogonal Matching Pursuit (Batch-OMP) algorithm can be used in this step. In the second stage, the dictionary atoms are updated to better fit the data via a singular value decomposition (SVD) approach. The basic steps of the K-SVD algorithm are given in Algorithm 2.
Algorithm 2 K-SVD algorithm
Task: Find the best dictionary to represent the data samples $\{ y_i \}_{i=1}^{N}$, $y_i \in \mathbb{R}^n$, as sparse compositions by solving:
$$\min_{D, X} \| Y - D X \|_F^2 \quad \text{subject to} \quad \forall i,\ \| x_i \|_0 \le T_0.$$
Initialization: Set the dictionary matrix $D^{(0)} \in \mathbb{R}^{n \times K}$ with $\ell_2$-normalized columns. Set $J = 1$.
Iterations: Repeat until convergence:
  • Sparse coding stage: Use any pursuit algorithm to compute the representation vectors $x_i$ for each sample $y_i$ by approximating the solution of
    $$\min_{x_i} \| y_i - D x_i \|_2^2 \quad \text{subject to} \quad \| x_i \|_0 \le T_0, \qquad i = 1, 2, \ldots, N.$$
  • Dictionary update stage: For each column $k = 1, 2, \ldots, K$ in $D^{(J-1)}$,
    - Define the group of samples that use this atom, $\omega_k = \{ i \mid 1 \le i \le N,\ x_T^k(i) \neq 0 \}$.
    - Compute the overall representation error matrix
      $$E_k = Y - \sum_{j \neq k} d_j x_T^j.$$
    - Restrict $E_k$ by choosing only the columns corresponding to $\omega_k$, and obtain $E_k^R$.
    - Apply the SVD decomposition $E_k^R = U \Delta V^T$. Choose the updated dictionary column $\tilde{d}_k$ to be the first column of $U$. Update the coefficient vector $x_R^k$ to be the first column of $V$ multiplied by $\Delta(1,1)$.
  • Set J = J + 1 .
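As a concrete illustration, here is a minimal Python sketch of Algorithm 2, with scikit-learn's orthogonal matching pursuit standing in for the Batch-OMP of Ref. [29]; the initialization from random data columns and the fixed iteration count are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, T0, n_iter=10, seed=0):
    """Sketch of K-SVD. Y is (n, N) with one signal per column."""
    rng = np.random.default_rng(seed)
    # Initialize the dictionary from random data columns, l2-normalized.
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding stage: OMP with at most T0 non-zero coefficients.
        X = orthogonal_mp(D, Y, n_nonzero_coefs=T0)
        # Dictionary update stage: revise each atom via a rank-1 SVD.
        for k in range(n_atoms):
            omega = np.flatnonzero(X[k, :])      # samples that use atom k
            if omega.size == 0:
                continue
            X[k, omega] = 0                      # remove atom k's contribution
            Ek = Y[:, omega] - D @ X[:, omega]   # restricted error matrix E_k^R
            U, S, Vt = np.linalg.svd(Ek, full_matrices=False)
            D[:, k] = U[:, 0]                    # new atom: first column of U
            X[k, omega] = S[0] * Vt[0, :]        # new coefficients
    return D, X
```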

2.4. Optical Flow

Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. In one canonical optical flow paper [30], two kinds of constraints are introduced in order to estimate the optical flow: the smoothness constraint and the brightness constancy constraint. In this section, we give a brief overview of the optical flow algorithm we employ in the proposed methodology.
Optical flow methods estimate the motion between two consecutive image frames that were acquired at times $t$ and $t + \delta t$. A flow vector for every pixel is calculated. The vectors represent approximations of image motion that are based in large part on local spatial derivatives. Since the flow velocity has two components, two constraints are needed to solve for it. The brightness constancy constraint assumes that the brightness of a small area in the image remains constant as the area moves from image to image. Image brightness at the point $(x, y)$ in the image at time $t$ is denoted here as $I(x, y, t)$. If the point moves by $\delta x$ and $\delta y$ in time $\delta t$, then, according to the brightness constancy constraint:
$$\frac{dI}{dt} = 0.$$
This can also be stated as:
$$I(\mathbf{r} + \delta\mathbf{r},\, t + \delta t) = I(\mathbf{r}, t),$$
where $\mathbf{r} = (x, y, 1)^T$ and $\mathbf{r} + \delta\mathbf{r} = (x + \delta x,\, y + \delta y,\, 1)^T$. However, the brightness constancy constraint is restrictive. A less restrictive brightness constraint was chosen to address the intensity changes in SAR images. In Reference [31], it is proposed that the brightness constancy constraint can be replaced with a more general constraint that allows a linear transformation between the pixel brightness values. This way, the brightness change can be non-zero, or:
$$\frac{dI}{dt} \neq 0.$$
The formulation that allows a linear transformation between the pixel brightness values is less restrictive, and can be written as:
$$I(\mathbf{r} + \delta\mathbf{r},\, t + \delta t) = M(\mathbf{r}, t)\, I(\mathbf{r}, t) + C(\mathbf{r}, t).$$
After using the Taylor series, the revised constraint equation can be obtained:
$$I_t + I_{\mathbf{r}} \cdot \mathbf{r}_t - I m_t - c_t = 0,$$
where $m_t = \lim_{\delta t \to 0} \delta m / \delta t$ and $c_t = \lim_{\delta t \to 0} \delta c / \delta t$.
The relaxed brightness constraint error is:
$$\epsilon_I = \iint \left( I_t + I_{\mathbf{r}} \cdot \mathbf{r}_t - I m_t - c_t \right)^2 dx\, dy.$$
This error can be combined with the other constraint errors to produce the final functional to be minimized:
$$\epsilon_{total} = \epsilon_I + \lambda_s \epsilon_s + \lambda_m \epsilon_m + \lambda_c \epsilon_c,$$
where $\lambda_s$, $\lambda_m$, and $\lambda_c$ are error weighting coefficients. The remaining (smoothness) errors are given as:
$$\epsilon_s = \iint \| \nabla \mathbf{r}_t \|_2^2\, dx\, dy, \qquad \epsilon_m = \iint \| \nabla m_t \|_2^2\, dx\, dy, \qquad \epsilon_c = \iint \| \nabla c_t \|_2^2\, dx\, dy.$$
Substituting the approximated Laplacians into the Euler–Lagrange equations, a single matrix equation can be derived:
$$A\,\mathbf{f} = \mathbf{g}(\bar{\mathbf{f}}),$$
where
$$A = \begin{pmatrix} I_x^2 + \lambda_s & I_x I_y & -I_x I & -I_x \\ I_x I_y & I_y^2 + \lambda_s & -I_y I & -I_y \\ -I_x I & -I_y I & I^2 + \lambda_m & I \\ -I_x & -I_y & I & 1 + \lambda_c \end{pmatrix}, \qquad \mathbf{f} = \begin{pmatrix} u \\ v \\ m_t \\ c_t \end{pmatrix}, \qquad \mathbf{g}(\bar{\mathbf{f}}) = \begin{pmatrix} \lambda_s \bar{u} - I_x I_t \\ \lambda_s \bar{v} - I_y I_t \\ \lambda_m \bar{m}_t + I_t I \\ \lambda_c \bar{c}_t + I_t \end{pmatrix}.$$
These equations have to be solved iteratively. The solution is given by:
$$\mathbf{f} = A^{-1} \mathbf{g}(\bar{\mathbf{f}}),$$
where
$$A^{-1} = \frac{1}{\alpha} \begin{pmatrix} \lambda_c \lambda_m \lambda_s + \lambda_m \lambda_s + I^2 \lambda_c \lambda_s + I_y^2 \lambda_c \lambda_m & -I_x I_y \lambda_c \lambda_m & I_x I \lambda_c \lambda_s & I_x \lambda_m \lambda_s \\ -I_x I_y \lambda_c \lambda_m & \lambda_c \lambda_m \lambda_s + \lambda_m \lambda_s + I^2 \lambda_c \lambda_s + I_x^2 \lambda_c \lambda_m & I_y I \lambda_c \lambda_s & I_y \lambda_m \lambda_s \\ I_x I \lambda_c \lambda_s & I_y I \lambda_c \lambda_s & (I_x^2 + I_y^2) \lambda_c \lambda_s + \lambda_c \lambda_s^2 + \lambda_s^2 & -I \lambda_s^2 \\ I_x \lambda_m \lambda_s & I_y \lambda_m \lambda_s & -I \lambda_s^2 & (I_x^2 + I_y^2) \lambda_m \lambda_s + \lambda_m \lambda_s^2 + I^2 \lambda_s^2 \end{pmatrix}$$
and
$$\alpha = \lambda_m \lambda_s^2 + I^2 \lambda_c \lambda_s^2 + (I_x^2 + I_y^2 + \lambda_s)\, \lambda_c \lambda_m \lambda_s.$$
The equations are then solved iteratively with:
$$\mathbf{f}^{\,k+1} = A^{-1} \mathbf{g}(\bar{\mathbf{f}}^{\,k}),$$
where $k$ is the iteration number. This way, the matrix $A^{-1}$ need only be computed once. More details about this optical flow algorithm can be found in Ref. [31].
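To make the iteration concrete, the following NumPy sketch builds the per-pixel 4 × 4 matrix $A$, inverts it once, and then iterates $\mathbf{f}^{k+1} = A^{-1}\mathbf{g}(\bar{\mathbf{f}}^k)$. The derivative approximations, neighborhood-averaging kernel, $\lambda$ values, and iteration count are illustrative assumptions, not the settings of Ref. [31]:

```python
import numpy as np
from scipy.ndimage import convolve

# Horn-Schunck-style kernel for the neighborhood averages (u_bar, v_bar, ...).
AVG = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0

def relaxed_flow(I1, I2, lam_s=100.0, lam_m=1e3, lam_c=1e3, n_iter=100):
    """Solve for f = (u, v, m_t, c_t) per pixel under the relaxed brightness model."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)                   # spatial derivatives
    It = I2 - I1                               # temporal derivative
    I = I1
    H, W = I1.shape
    # Build A at every pixel; it does not change across iterations,
    # so its inverse is computed only once.
    A = np.empty((H, W, 4, 4))
    A[..., 0, :] = np.stack([Ix * Ix + lam_s, Ix * Iy, -Ix * I, -Ix], axis=-1)
    A[..., 1, :] = np.stack([Ix * Iy, Iy * Iy + lam_s, -Iy * I, -Iy], axis=-1)
    A[..., 2, :] = np.stack([-Ix * I, -Iy * I, I * I + lam_m, I], axis=-1)
    A[..., 3, :] = np.stack([-Ix, -Iy, I, 1.0 + lam_c], axis=-1)
    Ainv = np.linalg.inv(A)
    f = np.zeros((H, W, 4))                    # (u, v, m_t, c_t)
    for _ in range(n_iter):
        fbar = np.stack([convolve(f[..., i], AVG) for i in range(4)], axis=-1)
        g = np.stack([lam_s * fbar[..., 0] - Ix * It,
                      lam_s * fbar[..., 1] - Iy * It,
                      lam_m * fbar[..., 2] + It * I,
                      lam_c * fbar[..., 3] + It], axis=-1)
        f = np.einsum('hwij,hwj->hwi', Ainv, g)   # f = A^{-1} g(f_bar)
    return f
```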

2.5. Image Processing Steps

In this section, we describe the image processing approach for extracting change maps. The inputs are two registered SAR images of the same field of view taken at different times: a “reference” image and a “mission” image. Because of their large size, the images were divided into subimages for processing.
In the denoising step, an Enhanced Frost filter, as described in Section 2.1, with a 5 × 5 window size was first used to mitigate the speckle noise effects. Then, a 9 × 9 low pass filter was used to smooth the test areas in order to obtain more uniform flow fields in the optical flow processing step. The remaining steps are grouped in three stages and described in the following subsections. The detailed flow diagram shown in Figure 2 can be used as a guide for the following descriptions.
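A short sketch of this preprocessing, reusing the Enhanced Frost sketch from Section 2.1 and assuming a simple moving average as the 9 × 9 low-pass filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(subimage):
    """Denoising step: Enhanced Frost (5 x 5) followed by a 9 x 9 low-pass.
    `enhanced_frost` refers to the sketch in Section 2.1."""
    despeckled = enhanced_frost(subimage, win=5)
    return uniform_filter(despeckled, size=9)  # smoothing for uniform flow fields

def tiles(image, size=1024):
    """Divide a large scene into size x size subimages (edge tiles may be smaller)."""
    H, W = image.shape
    return [image[r:r + size, c:c + size]
            for r in range(0, H, size) for c in range(0, W, size)]
```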

2.5.1. First Stage: Generation of Change Maps Using Unsupervised Feature Learning

Two change maps are needed for a 2CMV representation of an SAR image pair. Each change map represents the changes that exist in the corresponding SAR image. In this stage, we generate a combined change map and separate it into two change maps. To generate the combined change map, we used an approach similar to the one used in [13]. In the original approach, an eigenvector space is created by performing principal component analysis (PCA) on the difference image, and the k-means algorithm classifies the projections onto the eigenvector space into two classes: change and no-change. The basic steps are given in Algorithm 3. It should be noted that, in our framework, PCA was replaced with K-SVD because one can adjust the dictionary size and the sparsity constraint to obtain change maps with different levels of detail. Figure 3 shows two change map results with different dictionary sizes.
Algorithm 3 Generating change maps
  • Difference Image: Compute
    $$X_{dif} = |\,\text{Reference} - \text{Mission}\,|.$$
  • Training Data: Divide $X_{dif}$ into $h \times h$ non-overlapping blocks.
  • Dictionary Generation: Use the K-SVD algorithm to generate an overcomplete dictionary.
  • Create Feature Space:
    - Generate an $h \times h$ block for each pixel in $X_{dif}$, with the pixel at the center of the block.
    - Use the OMP algorithm to generate the projections of the data onto the dictionary.
  • Clustering: Use the k-means algorithm to classify the feature space into two classes, e.g., change and no-change.
  • Change maps: Use the two classes to generate the combined change map. Divide the combined change map into two separate change maps based on the changes that occur in the images.
After the change maps are generated, object properties such as area and location are calculated and, based on a user-defined area threshold, insignificant change areas are excluded from the change maps. The remaining change areas are then overlaid onto the reference image. In the 2CMV image, the areas that exist only in the reference image (features that have disappeared) are colored in red, and the areas that exist only in the mission image (new features) are colored in cyan. A sample 2CMV image after this stage is shown in Figure 4.
In a previous work, adaptive thresholding was used in place of this stage [32].
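The area thresholding and overlay described above can be sketched as follows; the connected-component labeling and the normalization of the base image are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import label

def area_filter(change_map, min_area):
    """Remove connected change regions smaller than the user-defined area threshold."""
    change_map = change_map.astype(bool)
    labeled, _ = label(change_map)
    areas = np.bincount(labeled.ravel())
    good = np.flatnonzero(areas >= min_area)
    good = good[good != 0]                     # label 0 is the background
    return np.isin(labeled, good)

def colorize_2cmv(reference, ref_only, mis_only):
    """Overlay the change maps on the reference image:
    red = areas only in the reference image (disappeared),
    cyan = areas only in the mission image (new)."""
    base = reference / (reference.max() + 1e-12)
    rgb = np.dstack([base, base, base])
    rgb[ref_only] = (1.0, 0.0, 0.0)            # red
    rgb[mis_only] = (0.0, 1.0, 1.0)            # cyan
    return rgb
```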

2.5.2. Second Stage: Optical Flow

Figure 4 displays a 2CMV image after the first stage; it is clear that additional processing is needed to improve results, because the ridges of the building are slightly misregistered between the two images and therefore appear as changes in both. The primary improvement targeted with additional processing is a reduction in the number of false positives. This goal can be accomplished with the optical flow (OF) method described in Section 2.4. To manage computational complexity, the optical flow algorithm is performed on 256 × 256 pixel image blocks. Note that optical flow is calculated from the original reference and mission images.
After obtaining the flow vectors, the direction of the majority of flow vectors is determined. The flow vectors that are in this direction are applied to the two first stage change maps to find matches. In the reference image, OF vectors are used to move the detected change areas in the flow direction. The destination of an area is then compared with the same location in the mission image. If there is a matching area based on location and size, then the two change areas are excluded from the change maps. The same process is performed in the opposite direction to match mission image change areas in the reference image. Figure 5 illustrates this step.
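A sketch of this matching step follows. Using the median flow vector as the dominant direction and a relative size tolerance are illustrative assumptions; the paper specifies only that matches are judged by location and size:

```python
import numpy as np
from scipy.ndimage import label, shift

def suppress_misregistration(ref_map, mis_map, u, v, size_tol=0.5):
    """Move each reference change region along the dominant flow direction and,
    if a region of similar size sits at the destination in the mission map,
    exclude both regions from the change maps."""
    ref_map = ref_map.astype(bool); mis_map = mis_map.astype(bool)
    if not ref_map.any() or not mis_map.any():
        return ref_map, mis_map
    # Dominant flow direction: median flow vector over the detected change pixels.
    du = float(np.median(u[ref_map])); dv = float(np.median(v[ref_map]))
    labeled, n = label(ref_map)
    mis_labeled, _ = label(mis_map)
    keep_ref, keep_mis = ref_map.copy(), mis_map.copy()
    for i in range(1, n + 1):
        region = labeled == i
        # Move the region to its predicted destination in the mission image.
        moved = shift(region.astype(float), (dv, du), order=0) > 0.5
        hits = np.unique(mis_labeled[moved])
        for j in hits[hits != 0]:
            candidate = mis_labeled == j
            if abs(candidate.sum() - region.sum()) <= size_tol * region.sum():
                keep_ref[region] = False       # exclude the matched pair
                keep_mis[candidate] = False
    return keep_ref, keep_mis
```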

2.5.3. Third Stage: OF Assisted Object Extraction

This stage has two main parts: extraction and elimination. Extraction is performed by an adaptive thresholding method similar to the one used in [32]. In this stage, the thresholding is performed on the original images to extract and label objects. The resulting two thresholded images are processed in two ways. First, OF vectors are used on the images to match objects. The main difference from the second stage is that the flow vectors are applied to the original thresholded images, not to the change maps. Change maps do not necessarily contain whole objects, and the goal is to find objects that moved between the two images. Objects that may have moved are labeled and compared against the areas in the change maps. It should also be noted that sometimes only parts of an object are detected as a change, and these detected parts can be used as a guide to extract the full object.
After this process, the labeled areas in the change maps are overlaid on the reference image and checked to determine whether they are part of a larger object in the image. If a labeled area is found to be part of a larger object, then the same location in the mission image is checked for the same object. If two similar objects are found around the same location, the detected object can be assumed to be a false positive and is excluded from the difference map. After these two checks are performed, the output of this stage is generated by taking the intersection of the two results. Figure 6 shows how this process converts the reference image in (a) to the final output in (e).
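The extraction step can be sketched as below. Since the exact adaptive thresholding of Ref. [32] is not reproduced here, a generic local-mean threshold from scikit-image stands in, and `block_size` and `offset` are assumed values:

```python
import numpy as np
from skimage.filters import threshold_local

def extract_objects(image, block_size=51, offset=0.0):
    """Object extraction by adaptive thresholding (stand-in for Ref. [32])."""
    return image > threshold_local(image, block_size=block_size,
                                   method='mean', offset=offset)

def stage3_output(flow_checked_map, object_checked_map):
    """Final output of the stage: intersection of the two processed results."""
    return flow_checked_map & object_checked_map
```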

3. Results

The proposed algorithm was compared against three change detection methods: PCAKM [13], GaborTLC [18], and NR-ELM [10]. All three methods were run with their default parameters using the publicly available code provided by their authors. The first dataset consisted of 1024 × 1024 regions from an SAR image pair provided by Lockheed Martin (Bethesda, MD, USA). The data were acquired with various Lockheed Martin SAR units, one example of which is an airborne, long-range, all-weather, day/night, X-band SAR unit with a resolution of 1 m. The selected regions contained speckle noise and false positives resulting from registration and perspective problems. 2CMV images were generated for each method, and the visual results are shown in Figure 7. NR-ELM was more susceptible to noise than the other methods. Unsupervised dictionary learning and clustering were effective at removing false positives that did not match object profiles, and optical flow was effective at removing difficult false positives that resulted from registration and perspective problems.
From the ground truth map, the actual numbers of pixels belonging to the unchanged class and the changed class are calculated, denoted as $N_u$ and $N_c$, respectively. With this information, five objective metrics are adopted for quantitative evaluation. False positives (FP) are the number of pixels belonging to the unchanged class but falsely classified as changed. False negatives (FN) are the number of pixels belonging to the changed class but falsely classified as unchanged. The overall error (OE) is calculated as OE = FP + FN. The percentage correct classification (PCC) and Kappa coefficient (KC) are as follows:
$$PCC = \frac{(N_c - FN) + (N_u - FP)}{N_u + N_c} \times 100\%,$$
$$KC = \frac{PCC - PRE}{1 - PRE},$$
where the proportional reduction in error (PRE) is defined as
$$PRE = \frac{(N_c - FN + FP) \cdot N_c + (N_u - FP + FN) \cdot N_u}{(N_u + N_c)^2}.$$
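These metrics can be computed directly from boolean change masks, as in the following sketch:

```python
import numpy as np

def change_metrics(predicted, truth):
    """Compute FP, FN, OE, PCC, and KC as defined above; `predicted` and
    `truth` are boolean change masks of the same shape."""
    predicted = predicted.astype(bool); truth = truth.astype(bool)
    Nc = int(truth.sum()); Nu = truth.size - Nc
    FP = int((predicted & ~truth).sum())   # unchanged pixels classified as changed
    FN = int((~predicted & truth).sum())   # changed pixels classified as unchanged
    OE = FP + FN
    PCC = ((Nc - FN) + (Nu - FP)) / (Nu + Nc)
    PRE = ((Nc - FN + FP) * Nc + (Nu - FP + FN) * Nu) / (Nu + Nc) ** 2
    KC = (PCC - PRE) / (1.0 - PRE)
    return {'FP': FP, 'FN': FN, 'OE': OE, 'PCC': PCC, 'KC': KC}
```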
The results of the quantitative metrics are given in Table 1.
In addition to these results, the proposed framework was tested on an ensemble of 1024 × 1024 regions from the same SAR dataset. In many representative image regions where registration errors were prevalent, false positive detections were reduced by over 60%. Filtering of speckle noise and adaptive thresholding improved the quality of object extraction and helped identify false positives. Establishing false positive motion/error thresholds in accordance with the initial image registration can be key to continued improvement. It also remains challenging to extract only regions with intensity value changes; wavelet-based methods might be more successful at such a task.
For the second test, a more standard dataset was used. The San Francisco dataset has been used in previous change detection studies, and its ground truth change map was provided in [33]. It consists of two SAR images over the city of San Francisco acquired by the ERS-2 C-band SAR sensor with VV polarization. The images were provided by the European Space Agency at a resolution of 25 m and were captured in August 2003 and May 2004, respectively. The images were 256 × 256 pixels for this test. The change maps of the methods can be seen in Figure 8.
The results of the quantitative metrics are given in Table 2. The proposed framework performed comparably to PCAKM as a change detection algorithm. Unlike the first dataset, the San Francisco dataset does not contain registration and perspective errors combined with speckle noise.
It should be noted that the proposed framework provided better results than the other methods when the datasets contained registration and perspective errors along with speckle noise. Otherwise, the performance of the proposed method is comparable to that of PCAKM, since the optical flow processing stage cannot provide matching regions in the images.
Although computational complexity was not a focus of this work, the speckle filtering, optical flow processing, and merging are computationally expensive. On a dual-core computer (Intel Core i7-6500U, Santa Clara, CA, USA) with 16 GB of memory, processing one region takes slightly less than 3.5 min. Many factors contribute to this time; the code was written in MATLAB (R2016a, MathWorks, Natick, MA, USA) and was not optimized for performance.

4. Conclusions

It was shown that unsupervised feature learning algorithms can be used effectively in conjunction with optical flow methods to generate 2CMV AGI products. Other image processing methods, such as noise reduction and adaptive thresholding, were used to improve object extraction in the proposed methodology. Results demonstrated the ability of the techniques to reduce false positives by up to 60% in the provided SAR image pairs.

However, there is still room for further improvement. For example, it was noticed that optical flow object matches close to image block borders can be overlooked because of the inaccuracy of flow vectors near the block borders. This problem can be addressed with a multigrid approach that leverages overlapping image blocks: if an object pair is close to the border in one block, then it will be near the center of an overlapping block. It has also been noted that sometimes only parts of an object are detected as a change, and the detected parts can be used as a guide to segment the full object. Objects that are close to one another can be merged to provide a more holistic analysis of the scene and further reduce the number of false positive object detections. However, it must be concurrently ensured that false positive reduction is not so aggressive that false negatives are generated. More recent optical flow or motion estimation algorithms can also be investigated as alternatives to the one utilized in this work; the chosen method is suitable for the tested dataset and performed adequately, as expected, since it accounts for intensity changes between images.

The choice of K-SVD over PCA increased the computational complexity while allowing flexibility over the level of detail in the change maps by changing the dictionary size and the number of non-zero coefficients; dictionaries with a higher number of non-zero coefficients provided more detailed change maps. For future work, investigating the correlation between the quantitative metrics and the parameters of the framework (e.g., dictionary size) can provide insight into tuning the framework for different types of datasets. Other methods can also be researched as alternatives to K-SVD within the framework.

Author Contributions

Conceptualization, B.K. and D.F.; methodology, B.K.; software, B.K.; validation, B.K. and D.F.; investigation, B.K.; data curation, D.F.; writing—original draft preparation, B.K.; writing—review and editing, D.F.; supervision, D.F.; project administration, D.F.; funding acquisition, D.F.

Funding

This work was supported in part by Lockheed Martin.

Acknowledgments

The authors would like to acknowledge SenSIP for the center’s valuable contributions to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. El-Darymli, K.; McGuire, P.; Power, D.; Moloney, C. Target Detection in Synthetic Aperture Radar Imagery: A State-of-the-Art Survey. J. Appl. Remote Sens. 2013, 7, 071598. [Google Scholar] [CrossRef]
  2. El-Darymli, K.; Gill, E.W.; McGuire, P.; Power, D.; Moloney, C. Automatic Target Detection in Synthetic Aperture Radar Imagery: A State-of-the-Art Review. IEEE Access 2016, 4, 6014–6058. [Google Scholar] [CrossRef]
  3. Ashok, H.G.; Patil, D.R. Survey on Change Detection in SAR Images. In Proceedings of the IJCA Proceedings on National Conference on Emerging Trends in Computer Technology, Shirpur, India, 28–29 March 2014; pp. 4–7. [Google Scholar]
  4. Ren, W.; Song, J.; Tian, S.; Wu, W. Survey on Unsupervised Change Detection Techniques in SAR Images. In Proceedings of the 2014 IEEE China Summit International Conference on Signal and Information Processing (ChinaSIP), Xi’an, China, 9–13 July 2014; pp. 143–147. [Google Scholar]
  5. Bazi, Y.; Bruzzone, L.; Melgani, F. An Unsupervised Approach Based on the Generalized Gaussian Model to Automatic Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887. [Google Scholar] [CrossRef]
  6. Bovolo, F.; Bruzzone, L. A Detail-Preserving Scale-Driven Approach to Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972. [Google Scholar] [CrossRef]
  7. Moser, G.; Serpico, S.B. Generalized Minimum-Error Thresholding for Unsupervised Change Detection from SAR Amplitude Imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982. [Google Scholar] [CrossRef]
  8. Sumaiya, M.N.; Kumari, R.S.S. Logarithmic Mean-Based Thresholding for SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1726–1728. [Google Scholar] [CrossRef]
  9. Jia, L.; Li, M.; Zhang, P.; Wu, Y. SAR Image Change Detection Based on Correlation Kernel and Multistage Extreme Learning Machine. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5993–6006. [Google Scholar] [CrossRef]
  10. Gao, F.; Dong, J.; Li, B.; Xu, Q.; Xie, C. Change Detection from Synthetic Aperture Radar Images Based on Neighborhood-Based Ratio and Extreme Learning Machine. J. Appl. Remote Sens. 2016, 10, 10–14. [Google Scholar] [CrossRef]
  11. Melgani, F.; Bazi, Y. Markovian Fusion Approach to Robust Unsupervised Change Detection in Remotely Sensed Imagery. IEEE Geosci. Remote Sens. Lett. 2006, 3, 457–461. [Google Scholar] [CrossRef]
  12. Yousif, O.; Ban, Y. Improving SAR-Based Urban Change Detection by Combining MAP-MRF Classifier and Nonlocal Means Similarity Weights. IEEE J. Sel. Top. Appl. Earth Observ. 2014, 7, 4288–4300. [Google Scholar] [CrossRef]
  13. Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  14. Li, W.; Chen, J.; Yang, P.; Sun, H. Multitemporal SAR Images Change Detection Based on Joint Sparse Representation of Pair Dictionaries. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 6165–6168. [Google Scholar]
  15. Lu, X.; Yuan, Y.; Zheng, X. Joint Dictionary Learning for Multispectral Change Detection. IEEE Trans. Cybern. 2017, 47, 884–897. [Google Scholar] [CrossRef]
  16. Ghosh, A.; Mishra, N.; Ghosh, S. Fuzzy Clustering Algorithms for Unsupervised Change Detection in Remote Sensing Images. Inf. Sci. 2011, 181, 699–715. [Google Scholar] [CrossRef]
  17. Nguyen, L.H.; Tran, T.D. A Sparsity-Driven Joint Image Registration and Change Detection Technique for SAR Imagery. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 2798–2801. [Google Scholar]
  18. Li, H.; Celik, T.; Longbotham, N.; Emery, W.J. Gabor Feature Based Unsupervised Change Detection of Multitemporal SAR Images Based on Two-Level Clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2458–2462. [Google Scholar]
  19. Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy Clustering With a Modified MRF Energy Function for Change Detection in Synthetic Aperture Radar Images. IEEE Trans. Fuzzy Syst. 2014, 22, 98–109. [Google Scholar] [CrossRef]
  20. Dekker, R.J. Speckle Filtering in Satellite SAR Change Detection Imagery. Int. J. Remote Sens. 1998, 19, 1133–1146. [Google Scholar] [CrossRef]
  21. Frost, V.; Stiles, J.A.; Shanmugan, K.S.; Holtzman, J.C. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982, PAMI-4, 157–166. [Google Scholar] [CrossRef]
  22. Lopes, A.; Touzi, R.; Nezry, E. Adaptive Speckle Filters and Scene Heterogeneity. IEEE Trans. Geosci. Remote Sens. 1990, 28, 992–1000. [Google Scholar] [CrossRef]
  23. Lee, J.S. Digital Image Enhancement and Noise Filtering by Use of Local Statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 165–168. [Google Scholar] [CrossRef]
  24. Lopes, A.; Nezry, E.; Touzi, R.; Laur, H. Maximum a Posteriori Filtering and First Order Texture Models in SAR Images. In Proceedings of the 10th Annual International Symposium on Geoscience and Remote Sensing, College Park, MD, USA, 20–24 May 1990; pp. 2409–2412. [Google Scholar]
  25. Yu, Y.; Acton, S. Speckle Reducing Anisotropic Diffusion. IEEE Trans. Image Process. 2002, 11, 1260–1270. [Google Scholar]
  26. Coupe, P.; Hellier, P.; Kervrann, C.; Barillot, C. NonLocal Means-based Speckle Filtering for Ultrasound Images. IEEE Trans. Image Process. 2009, 18, 2221–2229. [Google Scholar] [CrossRef]
  27. Gonzalez, R.; Woods, R. Digital Image Processing, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
  28. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  29. Rubinstein, R.; Zibulevsky, M.; Elad, M. Efficient Implementation of the K-SVD Algorithm Using Batch Orthogonal Matching Pursuit; Technical Report; Computer Science Department, Technion: Haifa, Israel, 2008. [Google Scholar]
  30. Horn, B.; Schunck, B. Determining Optical Flow. Artif. Intell. 1980, 17, 185–203. [Google Scholar] [CrossRef]
  31. Gennert, M.; Negahdaripour, S. Relaxing the Brightness Constancy Assumption in Computing Optical Flow; A.I. Lab Memo 975; Massachusetts Institute of Technology: Cambridge, MA, USA, 1987. [Google Scholar]
  32. Kanberoglu, B.; Frakes, D. Extraction of Advanced Geospatial Intelligence (AGI) from Commercial Synthetic Aperture Radar Imagery. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XXIV 2017, Anaheim, CA, USA, 9–13 April 2017; Volume 10201, p. 1020106. [Google Scholar]
  33. Gao, F.; Liu, X.; Dong, J.; Zhong, G.; Jian, M. Change Detection in SAR Images Based on Deep Semi-NMF and SVD Networks. Remote Sens. 2017, 9, 435. [Google Scholar] [CrossRef]
Figure 1. (a) reference image; (b) mission image; (c) two-color multiview (2CMV) image. In both images, there is an airplane visibly parked next to an airport building near the bottom center. In the second image (b), the airplane seems rotated by a small degree. The sharp edges of the building are slightly misregistered in the images and these registration errors are false positives in the 2CMV image.
Figure 2. Flow diagram of the proposed framework.
Figure 3. (a) change map with dictionary size = 30 atoms with 30 non-zero coefficients; (b) change map with dictionary size = 15 with three non-zero coefficients. Note that a larger dictionary size with more non-zero coefficients captures more changes.
Figure 4. (a) original 2CMV image; (b) 2CMV image after Stage 1. Note that there are several false positives around the ridges of the building. In the second image, change colors (red and cyan) were made more pronounced to highlight the false positives.
Figure 5. Elimination of false positives using optical flow. Change areas are moved along the flow direction in the reference image change map. Moved areas (shown in red) from the reference image are overlaid onto the mission image change map. The overlapped areas are then removed.
Figure 6. (a) reference image; (b) mission image; (c) original 2CMV image; (d) 2CMV image after using dictionary learning and clustering (Stage 1); (e) final 2CMV image. False positives are reduced.
Figure 7. Results by (a) manual ground truth; (b) NR-ELM; (c) GaborTLC; (d) PCAKM; (e) proposed method.
Figure 8. (a,b) San Francisco dataset; (c) ground truth; (d) NR-ELM; (e) GaborTLC; (f) PCAKM; (g) proposed method.
Table 1. Results for the SAR dataset.
| Methods | FP | FN | OE | PCC | KC | Time (s) |
|---|---|---|---|---|---|---|
| NR-ELM | 97377 | 4276 | 101653 | 0.9031 | 0.2993 | 202.1 |
| GaborTLC | 20160 | 8449 | 28609 | 0.9727 | 0.5809 | 74.6 |
| PCAKM | 35135 | 6251 | 41386 | 0.9605 | 0.51 | 6.4 |
| Proposed method | 3865 | 12852 | 16717 | 0.9841 | 0.6569 | 199.1 |
Table 2. Results for the San Francisco dataset.
| Methods | FP | FN | OE | PCC | KC |
|---|---|---|---|---|---|
| NR-ELM | 328 | 440 | 768 | 0.9883 | 0.9107 |
| GaborTLC | 1376 | 60 | 1436 | 0.9781 | 0.8539 |
| PCAKM | 1855 | 73 | 1928 | 0.9706 | 0.8115 |
| Proposed method | 836 | 685 | 1521 | 0.9768 | 0.8277 |
