1. Introduction
One important use of synthetic aperture radar (SAR) imagery is in detecting changes between datasets from different imaging passes. Target and coherent change detection in SAR images have been extensively researched [1,2,3,4]. In two-color multiview (2CMV) advanced geospatial information (AGI) products, the changes are colorized and overlaid on an initial image such that new features are represented in cyan, and features that have disappeared are represented in red. To create the change maps, images are cross-correlated pixel-by-pixel to detect the changes. 2CMV products show changes at the pixel level and are often misleadingly dominated by red and cyan colors.
Figure 1 shows a portion of a sample 2CMV image. In the sample images, an airplane is visibly parked next to a building near the bottom center. Many of the pixels in the 2CMV image are colored either red or cyan even where no actual change has occurred.
Useful interpretation of temporal changes represented in 2CMV AGI products can be challenging because of speckle noise susceptibility and false positives that result from small orientation differences between objects imaged at different times. When every small intensity change creates a colored pixel, it becomes more difficult for operators and/or algorithms to detect meaningful changes and identify corresponding objects of interest.
In this work, we introduce a new framework of image processing methods for the efficient generation of 2CMV products toward extraction of advanced geospatial intelligence. Before false positive and object detection algorithms are performed, speckle and smoothing filters are used to mitigate the effects of speckle noise. Then, the number of false positive detections is reduced by applying: (1) unsupervised feature learning algorithms and (2) optical flow algorithms that track the motion of objects across time in small regions of interest.
There have been a number of change detection studies using thresholding [5,6,7,8], extreme learning machines [9,10], Markov random fields [11,12] and combinations of feature learning and clustering algorithms [13,14,15,16,17,18,19]. Optical flow fields can be used to distinguish between objects that have actually moved between frames and those that are in the same location but are slightly misregistered. Both cases of apparent motion can result in 2CMV detection, but they differ greatly in meaning. Investigation of the state of the art in SAR image processing indicates that differentiating between these two general cases is a problem that has not been well addressed. Algorithms that effectively mitigate speckle noise and distinguish between actual motion and misregistration can lead to better change detection. There is also a lack of published methods for the efficient generation of 2CMV products from SAR images, which serves as another motivating factor for this work.
The paper is organized in four sections. Following this introduction, Section 2 gives a brief background on the filtering, unsupervised feature learning, and optical flow techniques that were used and describes the stages of the proposed framework. Section 3 presents simulation results. Section 4 discusses the results and the contributions of the proposed methods.
2. Materials and Methods
In this section, we describe the key methods and steps of our image processing approach for generating change maps that drive the 2CMV representation and eliminating false positives in those maps.
2.1. Speckle Noise Filtering
Speckle noise is an inherent problem in SAR images [20] and causes difficulties for image interpretation by increasing the mean grey level of a local region. In order to mitigate speckle noise effects, we tested different speckle filter designs. The filters included in the testing were Frost [21], Enhanced Frost [22], Lee [23], Gamma-MAP [24], SRAD [25] and Non-Local Means [26]. In the end, the Enhanced Frost filter was used in the algorithm due to its relatively straightforward implementation and comparable performance.
In [22], it was proposed to divide images into areas of three classes. The first class consists of homogeneous areas. The second class consists of heterogeneous areas wherein speckle noise is to be reduced, while preserving texture. The third class consists of areas containing isolated point targets that filtering should preserve. The Enhanced Frost filter output can be given as:

$$\hat{R}(t_0) = \begin{cases} \bar{I} & C_I(t_0) \le C_u \\ \dfrac{\sum_{t \in W} I(t)\, m(t)}{\sum_{t \in W} m(t)} & C_u < C_I(t_0) < C_{\max} \\ I(t_0) & C_I(t_0) \ge C_{\max} \end{cases}, \qquad m(t) = K_1 \exp\!\left(-K\, \frac{C_I(t_0) - C_u}{C_{\max} - C_I(t_0)}\, |t|\right)$$

where $t$ is the spatial coordinate, $\bar{I}$ is the mean intensity value inside the kernel $W$, $K$ is the filter parameter, $K_1$ is a normalizing constant, and $|t|$ is the absolute value of the pixel distance from the center of the kernel at $t_0$. The rest of the parameters are

$$C_u = \frac{1}{\sqrt{L}}, \qquad C_I(t_0) = \frac{\sigma(t_0)}{\bar{I}(t_0)}, \qquad C_{\max} = \sqrt{2}\, C_u$$

where $C_u$ is the speckle coefficient of variation of the image, $C_I(t_0)$ is the local coefficient of variation of the filter kernel centered at $t_0$, $C_{\max}$ is the upper speckle coefficient of variation of the image, and $L$ is the number of looks. In our implementation, instead of $L$, we used the "equivalent number of looks" (ENL). It can be defined as $\mathrm{ENL} = (\mu/\sigma)^2$, where $\mu$ is the mean and $\sigma$ is the standard deviation.
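The three-class behavior described above can be sketched compactly. The following is a minimal illustrative implementation, not the code used in this work; the kernel size, the damping parameter `K`, and the default ENL value are assumptions, and the normalizing constant cancels in the weight ratio so it is omitted.

```python
import numpy as np

def enhanced_frost(img, ksize=5, K=1.0, enl=4.0):
    """Minimal Enhanced Frost speckle filter sketch (after Lopes et al.)."""
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    c_u = 1.0 / np.sqrt(enl)            # speckle coefficient of variation
    c_max = np.sqrt(2.0) * c_u          # upper threshold for point targets
    # pixel distances |t| from the kernel center
    yy, xx = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    dist = np.hypot(yy, xx)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + ksize, j:j + ksize]
            mean = win.mean()
            c_i = win.std() / mean if mean > 0 else 0.0
            if c_i <= c_u:                      # homogeneous: plain average
                out[i, j] = mean
            elif c_i < c_max:                   # heterogeneous: damped weights
                m = np.exp(-K * (c_i - c_u) / (c_max - c_i) * dist)
                out[i, j] = (win * m).sum() / m.sum()
            else:                               # isolated point target: keep pixel
                out[i, j] = win[pad, pad]
    return out
```

A homogeneous region is averaged flat, while an isolated bright pixel passes through unchanged, matching the three-class intent.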
2.2. k-Means Clustering
The k-means clustering algorithm attempts to partition $p$ observations into $k$ clusters such that each observation belongs to the cluster with the nearest mean (centroid) [27]. The k-means algorithm iteratively tries to find $k$ centroids, one for each cluster, while minimizing the within-cluster sum of squares

$$J = \sum_{i=1}^{k} \sum_{x_j \in S_i} \left\| x_j - \mu_i \right\|^2$$

where $x_j$ is the $j$th observation and $\mu_i$ is the mean point (centroid) of cluster $S_i$. The basic steps of the algorithm are given in Algorithm 1:

Algorithm 1 k-means clustering algorithm
1. Initialize the centroids: assign $k$ points as the initial group centroids.
2. Calculate the distance of each point to the centroids and assign the point to the cluster that has the closest centroid.
3. After the assignment of all the points, recalculate the new values of the centroids.
4. Repeat Steps 2 and 3 until the centroid locations converge to a fixed value.
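The steps of Algorithm 1 translate directly into NumPy; in this sketch the deterministic choice of the first k points as initial centroids (in place of a random draw) is an illustrative simplification.

```python
import numpy as np

def kmeans(points, k, iters=100):
    """Minimal k-means clustering following Algorithm 1 (Lloyd's iterations)."""
    # Step 1: assign k points as the initial group centroids
    centroids = points[:k].astype(float).copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Step 2: assign each point to the cluster with the closest centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recalculate the centroids from the new assignments
        new = np.array([points[labels == c].mean(axis=0)
                        if np.any(labels == c) else centroids[c]
                        for c in range(k)])
        # Step 4: repeat until the centroid locations converge
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels
```

On two well-separated point groups, the iterations settle on the two group means after a few passes.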
2.3. K-SVD
K-SVD is a dictionary learning algorithm that is used for training overcomplete dictionaries for sparse representations of signals [28,29]. It is an iterative method that is a generalization of the k-means clustering algorithm. The K-SVD algorithm alternates between two stages: (1) a sparse coding stage and (2) a dictionary update stage. In the first stage, a pursuit algorithm is used to sparsely code the input data based on the current dictionary. Based on Ref. [29], the Batch Orthogonal Matching Pursuit (Batch-OMP) algorithm can be used in this step. In the second stage, the dictionary atoms are updated to better fit the data via a singular value decomposition (SVD) approach. The basic steps of the K-SVD algorithm are given in Algorithm 2.
Algorithm 2 K-SVD algorithm

Task: Find the best dictionary $D$ to represent the data samples $Y = \{y_i\}_{i=1}^{N}$ as sparse compositions by solving:

$$\min_{D, X} \| Y - DX \|_F^2 \quad \text{subject to} \quad \forall i, \; \| x_i \|_0 \le T_0$$

Initialization: Set the dictionary matrix $D^{(0)}$ with normalized columns. Set $J = 1$.

Iterations: Repeat until convergence:
1. Sparse coding stage: Use any pursuit algorithm to compute the representation vectors $x_i$ for each sample $y_i$ by approximating the solution of

$$\min_{x_i} \| y_i - D x_i \|_2^2 \quad \text{subject to} \quad \| x_i \|_0 \le T_0$$

2. Dictionary update stage: For each column $k = 1, \dots, K$ in $D^{(J-1)}$:
- Define the group of samples that use this atom, $\omega_k = \{ i : 1 \le i \le N, \; x_T^k(i) \ne 0 \}$.
- Compute the overall representation error matrix, $E_k$, by
$$E_k = Y - \sum_{j \ne k} d_j x_T^j$$
- Restrict $E_k$ by choosing only the columns corresponding to $\omega_k$, and obtain $E_k^R$.
- Apply SVD decomposition $E_k^R = U \Delta V^T$. Choose the updated dictionary column $\tilde{d}_k$ to be the first column of $U$. Update the coefficient vector $x_R^k$ to be the first column of $V$ multiplied by $\Delta(1,1)$.
3. Set $J = J + 1$.
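A compact NumPy sketch of Algorithm 2 follows, using a plain greedy OMP in place of Batch-OMP for brevity; the random initialization, dictionary size, and sparsity level are illustrative choices, not the settings used in this work.

```python
import numpy as np

def omp(D, y, t0):
    """Greedy Orthogonal Matching Pursuit: select up to t0 atoms for sample y."""
    idx, r = [], y.copy()
    for _ in range(t0):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, t0, iters=10, seed=0):
    """Minimal K-SVD: alternate sparse coding and SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                        # normalized columns
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(iters):
        # sparse coding stage
        X = np.column_stack([omp(D, y, t0) for y in Y.T])
        # dictionary update stage
        for k in range(n_atoms):
            omega = np.nonzero(X[k])[0]                   # samples using atom k
            if omega.size == 0:
                continue
            X[k, omega] = 0                               # error without atom k
            E = Y[:, omega] - D @ X[:, omega]
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                             # updated atom
            X[k, omega] = s[0] * Vt[0]                    # updated coefficients
    return D, X
```

Each column of the returned coefficient matrix stays T0-sparse, and the dictionary fit to the data improves monotonically over iterations.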
2.4. Optical Flow
Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. In one canonical optical flow paper [30], two kinds of constraints are introduced in order to estimate the optical flow: the smoothness constraint and the brightness constancy constraint. In this section, we give a brief overview of the optical flow algorithm we employ in the proposed methodology.
Optical flow methods estimate the motion between two consecutive image frames that were acquired at times $t$ and $t + \delta t$. A flow vector is calculated for every pixel. The vectors represent approximations of image motion that are based in large part on local spatial derivatives. Since the flow velocity has two components, two constraints are needed to solve for it. The brightness constancy constraint assumes that the brightness of a small area in the image remains constant as the area moves from image to image. Image brightness at the point $(x, y)$ in the image at time $t$ is denoted here as $E(x, y, t)$. If the point moves by $\delta x$ and $\delta y$ in time $\delta t$, then, according to the brightness constancy constraint:

$$E(x + \delta x, y + \delta y, t + \delta t) = E(x, y, t)$$

This can also be stated as:

$$E_x u + E_y v + E_t = 0$$

where $u = \delta x / \delta t$ and $v = \delta y / \delta t$. However, the brightness constancy constraint is restrictive. A less restrictive brightness constraint was chosen to address the intensity changes in SAR images. In Reference [31], it is proposed that the brightness constancy constraint can be replaced with a more general constraint that allows a linear transformation between the pixel brightness values. This way, the brightness change can be non-zero, or:

$$\frac{dE}{dt} \ne 0$$

The formulation that allows a linear transformation between the pixel brightness values is less restrictive, and can be written as:

$$E(x + \delta x, y + \delta y, t + \delta t) = M(x, y, t)\, E(x, y, t) + C(x, y, t)$$

After using the Taylor series, the revised constraint equation can be obtained:

$$E_x u + E_y v + E_t = E\, m_t + c_t$$

where $m_t = \partial M / \partial t$ and $c_t = \partial C / \partial t$.

The relaxed brightness constraint error is:

$$\mathcal{E}_b = E_x u + E_y v + E_t - E\, m_t - c_t$$

This error can be combined with the other constraint errors to produce the final functional to be minimized:

$$\mathcal{E}^2 = \iint \left( \mathcal{E}_b^2 + \alpha^2 \mathcal{E}_s^2 + \beta^2 \mathcal{E}_m^2 + \gamma^2 \mathcal{E}_c^2 \right) dx\, dy$$

where $\alpha$, $\beta$, and $\gamma$ are error weighting coefficients. The remaining errors are given as:

$$\mathcal{E}_s^2 = \| \nabla u \|^2 + \| \nabla v \|^2, \qquad \mathcal{E}_m^2 = \| \nabla m_t \|^2, \qquad \mathcal{E}_c^2 = \| \nabla c_t \|^2$$

Substituting the approximated Laplacians (e.g., $\nabla^2 u \approx \kappa (\bar{u} - u)$, where $\bar{u}$ is the local average of $u$) into the Euler–Lagrange equations, a single matrix equation can be derived:

$$A \mathbf{z} = \mathbf{b}$$

where $\mathbf{z} = [u, v, m_t, c_t]^T$, $A$ is a $4 \times 4$ per-pixel matrix whose entries depend on $E_x$, $E_y$, $E$, and the weighting coefficients, and $\mathbf{b}$ depends on the local averages $\bar{u}$, $\bar{v}$, $\bar{m}_t$, and $\bar{c}_t$. These equations have to be solved iteratively. The solution is given by:

$$\mathbf{z} = A^{-1} \mathbf{b}$$

The equations can then be solved iteratively for all pixels with:

$$\mathbf{z}^{k+1} = A^{-1} \mathbf{b}^{k}$$

where $k$ is the iteration number and $\mathbf{b}^{k}$ is recomputed from the neighborhood averages of the previous iterate. This way, the matrix $A$ need only be computed once per pixel. More details about this optical flow algorithm can be found in Ref. [31].
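For intuition, here is the classical Horn–Schunck iteration with the standard brightness constancy constraint, i.e., without the relaxed linear-brightness terms $m_t$ and $c_t$ of [31]; the derivative estimates, the weighting alpha, and the iteration count are illustrative simplifications, not the exact scheme used in this work.

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, iters=100):
    """Classical Horn-Schunck optical flow (brightness constancy version)."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # simple derivative estimates averaged over the two frames
    Ex = (np.gradient(im1, axis=1) + np.gradient(im2, axis=1)) / 2
    Ey = (np.gradient(im1, axis=0) + np.gradient(im2, axis=0)) / 2
    Et = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)

    def avg(f):
        # 4-neighbor average used in the Laplacian approximation
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4

    for _ in range(iters):
        ub, vb = avg(u), avg(v)
        # Jacobi-style update from the Euler-Lagrange equations
        num = Ex * ub + Ey * vb + Et
        den = alpha ** 2 + Ex ** 2 + Ey ** 2
        u = ub - Ex * num / den
        v = vb - Ey * num / den
    return u, v
```

For a bright patch shifted one pixel to the right between frames, the recovered flow field is dominated by a positive horizontal component.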
2.5. Image Processing Steps
In this section, we describe the image processing approach for extracting change maps. The inputs are two registered SAR images of the same field of view that were taken at different times: a "reference" image and a "mission" image. Due to their large size, the images were divided into subimages for processing.
In the denoising step, an Enhanced Frost filter with a 5 × 5 window size, as described in Section 2.1, was first used to mitigate the speckle noise effects. Then, a 9 × 9 low-pass filter was used to smooth the test areas in order to obtain more uniform flow fields in the optical flow processing step. The remaining steps are grouped in three stages and described in the following subsections. The detailed flow diagram shown in Figure 2 can be used as a guide for the following descriptions.
2.5.1. First Stage: Generation of Change Maps Using Unsupervised Feature Learning
Two change maps are needed for a 2CMV representation of an SAR image pair. Each change map represents the changes that exist in the corresponding SAR image. In this stage, we generate a combined change map and separate it into two change maps. In order to generate the combined change map, we used an approach similar to the one used in [13]. In the original approach, an eigenvector space is created by performing principal component analysis (PCA) on the difference image, and the k-means algorithm classifies the projections onto the eigenvector space into two classes, e.g., change and no-change. The basic steps are given in Algorithm 3. It should be noted that, in our framework, PCA was replaced with K-SVD because one can adjust the dictionary size and the sparsity constraint to obtain change maps with different levels of detail. Figure 3 shows two change map results with different dictionary sizes.
Algorithm 3 Generating change maps
1. Training Data: Divide the difference image into non-overlapping blocks.
2. Dictionary Generation: Use the K-SVD algorithm to generate an overcomplete dictionary.
3. Create Feature Space:
- Generate a block for each pixel in the difference image, where the pixel is in the center of the block.
- Use the OMP algorithm to generate the projections of the data onto the dictionary.
4. Clustering: Use the k-means algorithm to classify the feature space into two classes, e.g., change and no-change.
5. Change maps: Use the two classes to generate the combined change map. Divide the combined change map into two separate change maps based on the changes that occur in the images.
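The stage can be sketched end-to-end as follows, using PCA features (as in the original approach of [13]) in place of K-SVD and a tiny built-in 2-class k-means for brevity; the block size, number of features, centroid seeding, and the rule for naming the "change" class are all illustrative assumptions.

```python
import numpy as np

def change_map(diff, block=4, n_feat=3):
    """Change map from a difference image: PCA block features + 2-class k-means."""
    h, w = diff.shape
    # training data: non-overlapping blocks of the difference image
    blocks = np.array([diff[i:i + block, j:j + block].ravel()
                       for i in range(0, h - block + 1, block)
                       for j in range(0, w - block + 1, block)])
    mean = blocks.mean(axis=0)
    # eigenvector space of the blocks (PCA via SVD)
    _, _, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
    basis = Vt[:n_feat]
    # feature space: project the block around every pixel onto the basis
    pad = np.pad(diff, block // 2, mode="reflect")
    feats = np.array([(pad[i:i + block, j:j + block].ravel() - mean) @ basis.T
                      for i in range(h) for j in range(w)])
    # 2-class k-means, seeded with the two most extreme feature vectors
    c = feats[[feats.sum(1).argmin(), feats.sum(1).argmax()]]
    lab = np.zeros(len(feats), dtype=int)
    for _ in range(20):
        d = np.linalg.norm(feats[:, None, :] - c[None, :, :], axis=2)
        lab = d.argmin(axis=1)
        c = np.array([feats[lab == i].mean(0) if np.any(lab == i) else c[i]
                      for i in range(2)])
    # call the cluster with the larger mean difference the "change" class
    vals = diff.ravel()
    m0 = vals[lab == 0].mean() if np.any(lab == 0) else -np.inf
    m1 = vals[lab == 1].mean() if np.any(lab == 1) else -np.inf
    return lab.reshape(h, w) == int(m1 > m0)
```

Applied to a difference image containing one bright square, the map marks the square's neighborhood as change and leaves the background untouched.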
After the change maps are generated, object properties such as area and location are calculated and, based on a user-defined area threshold, insignificant change areas are excluded from the change maps. The remaining change areas are then overlaid onto the reference image. In the 2CMV image, the areas that exist only in the reference image are colored in red and the areas that exist only in the mission image are colored in cyan. A sample 2CMV image after this stage is shown in Figure 4.
In a previous work, this stage was replaced by adaptive thresholding [32].
2.5.2. Second Stage: Optical Flow
Figure 4 displays a 2CMV image after the first stage. It is clear that additional processing is needed to improve results because the ridges of the building in the two images are slightly misregistered and are shown as changes in both images. The primary improvement targeted with additional processing is reducing the number of false positives in the image. This goal can be accomplished with the optical flow (OF) method described in Section 2.4. To manage computational complexity, the optical flow algorithm is performed on 256 × 256 pixel image blocks. Note that optical flow is calculated from the original reference and mission images.
After obtaining the flow vectors, the direction of the majority of flow vectors is determined. The flow vectors that are in this direction are applied to the two first stage change maps to find matches. In the reference image, OF vectors are used to move the detected change areas in the flow direction. The destination of an area is then compared with the same location in the mission image. If there is a matching area based on location and size, then the two change areas are excluded from the change maps. The same process is performed in the opposite direction to match mission image change areas in the reference image.
Figure 5 illustrates this step.
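The matching step can be sketched as follows. Representing detected change areas as lists of boolean masks, taking the median flow vector as the "majority direction", and using an intersection-over-union threshold to decide "matching location and size" are illustrative choices, not the exact rules used in this work.

```python
import numpy as np

def match_and_exclude(ref_areas, mis_areas, flow, iou_thresh=0.5):
    """Remove change-area pairs explained by the dominant flow direction.

    ref_areas / mis_areas: lists of boolean masks (one per detected area).
    flow: (H, W, 2) per-pixel flow vectors (row, col), reference to mission.
    Returns the indices of the areas kept in each change map.
    """
    # direction of the majority of flow vectors (median over all pixels)
    d = np.round(np.median(flow.reshape(-1, 2), axis=0)).astype(int)
    keep_ref = list(range(len(ref_areas)))
    keep_mis = list(range(len(mis_areas)))
    for i, r in enumerate(ref_areas):
        # move the reference change area along the flow direction
        moved = np.roll(r, shift=tuple(d), axis=(0, 1))
        for j in list(keep_mis):
            m = mis_areas[j]
            iou = (moved & m).sum() / max((moved | m).sum(), 1)
            if iou >= iou_thresh:          # matching location and size
                keep_ref.remove(i)         # exclude the pair from both maps
                keep_mis.remove(j)
                break
    return keep_ref, keep_mis
```

An area that reappears exactly one flow vector away is excluded from both maps, while unmatched areas survive.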
2.5.3. Third Stage: OF Assisted Object Extraction
This stage has two main parts: extraction and elimination. Extraction is performed by an adaptive thresholding method similar to the one used in [32]. In this stage, the thresholding is performed on the original images to extract/label objects. The resulting two thresholded images are processed in two ways. First, OF vectors are used on the images to match the objects. The main difference from the second stage is that the flow vectors are applied to the original thresholded images, not to the change maps. Change maps do not necessarily contain objects, and the goal is to find objects that moved between the two images. Objects that may have moved are labeled and compared against the areas in the change maps. It should also be noted that only some parts of an object may be detected as a change, and these detected changes can be used as a guide to extract the full object.
After this process, the labeled areas in the change maps are overlaid on the reference image and checked to determine whether they are part of a larger object in the image. If the labeled area is found to be part of a larger object, then the same location in the mission image is checked for the same object. In the case of two similar objects around the same location, it can be assumed that the detected object is a false positive, and it is excluded from the difference map. After these two methods are performed, the output of this stage is generated by simply taking the intersection of the two results.
Figure 6 shows how this process converts the reference image in (a) to the final output in (e).
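The extraction step relies on adaptive thresholding; the exact rule of [32] is not reproduced here, and the blockwise mean-plus-k-sigma criterion below, together with its block size and k, is one plausible stand-in.

```python
import numpy as np

def adaptive_threshold(img, block=32, k=2.0):
    """Blockwise adaptive threshold: a pixel is 'object' if it exceeds the
    local block mean by k local standard deviations."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            win = img[i:i + block, j:j + block]
            # threshold adapts to the statistics of each block
            out[i:i + block, j:j + block] = win > win.mean() + k * win.std()
    return out
```

A bright object on a dark background is extracted exactly, because the local statistics keep the threshold well below the object intensity.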
3. Results
The proposed algorithm was compared against three change detection methods: PCAKM [13], GaborTLC [18], and NR-ELM [10]. All three methods were implemented with their default parameters by using the publicly available code provided by the authors. The first dataset consisted of 1024 × 1024 regions from an SAR image pair provided by Lockheed Martin (Bethesda, MD, USA). The data were acquired with various Lockheed Martin SAR units, one example of which is an airborne long range, all weather, day/night, X-band SAR unit with a resolution of 1 m. The selected regions contained speckle noise and false positives that resulted from registration and perspective problems. 2CMV images were generated for each method. The visual results are shown in Figure 7. NR-ELM was more susceptible to noise compared to the other methods. It was noted that unsupervised dictionary learning and clustering algorithms were effective at removing false positives that did not match object profiles. Optical flow was effective for removing difficult false positives that resulted from registration and perspective problems.
From the ground truth map, the actual numbers of pixels belonging to the unchanged class and the changed class are calculated, denoted as $N_u$ and $N_c$, respectively. With this information, five objective metrics are adopted for quantitative evaluation. False positive (FP) is the number of pixels belonging to the unchanged class but falsely classified as the changed class. False negative (FN) is the number of pixels belonging to the changed class but falsely classified as the unchanged class. The overall error (OE) is calculated as OE = FP + FN. Percentage correct classification (PCC) and the Kappa coefficient (KC) are given as follows:

$$PCC = \frac{N_u + N_c - OE}{N_u + N_c}, \qquad KC = \frac{PCC - PRE}{1 - PRE}$$

where the proportional reduction in error (PRE) is defined as

$$PRE = \frac{(N_c - FN + FP) \cdot N_c + (N_u - FP + FN) \cdot N_u}{(N_u + N_c)^2}$$
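These metric definitions translate directly into code; a minimal sketch for binary change maps (PCC reported as a fraction rather than a percentage):

```python
import numpy as np

def change_metrics(pred, truth):
    """FP, FN, OE, PCC, and Kappa for binary change maps (True = changed)."""
    pred = pred.astype(bool).ravel()
    truth = truth.astype(bool).ravel()
    Nc, Nu = truth.sum(), (~truth).sum()
    FP = (pred & ~truth).sum()       # unchanged classified as changed
    FN = (~pred & truth).sum()       # changed classified as unchanged
    OE = FP + FN
    N = Nu + Nc
    PCC = (N - OE) / N
    PRE = ((Nc - FN + FP) * Nc + (Nu - FP + FN) * Nu) / N ** 2
    KC = (PCC - PRE) / (1 - PRE)
    return {"FP": int(FP), "FN": int(FN), "OE": int(OE),
            "PCC": float(PCC), "KC": float(KC)}
```

A perfect prediction yields PCC = 1 and KC = 1, while an all-unchanged prediction on a mostly-unchanged map keeps PCC high but drops KC to 0, which is why KC is included.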
The results of the quantitative metrics are given in Table 1.
In addition to these results, the proposed framework was tested on an ensemble of 1024 × 1024 regions from the same SAR dataset. In many representative image regions where registration errors were prevalent, false positive detections were reduced by over 60%. Filtering of speckle noise and adaptive thresholds improved the quality of the object extraction and helped identify false positives. Establishing false positive motion/error thresholds, in accordance with initial image registration, can be key for continued improvement. It is also a challenge to extract only regions with intensity value changes. It is possible that wavelet based methods might be more successful with such a task.
For the second test, a more standard dataset was used. The San Francisco dataset has been used in change detection studies, and its ground truth change map was provided in [33]. It consists of two SAR images over the city of San Francisco that were acquired by the ERS-2 C-band SAR sensor with VV polarization. The images were provided by the European Space Agency with a resolution of 25 m. The two images were captured in August 2003 and May 2004, respectively. The images were 256 × 256 pixels for this test. The change maps of the methods can be seen in Figure 8.
The results of the quantitative metrics are given in Table 2. The proposed framework performed comparably to PCAKM as a change detection algorithm. Unlike the first dataset, the San Francisco dataset does not contain registration and perspective errors combined with speckle noise. It should be noted that the proposed framework provided better results than the other methods when the datasets contained registration and perspective errors with speckle noise. Otherwise, the performance of the proposed method is comparable to that of PCAKM as a change detection algorithm, since the optical flow processing stage cannot provide matching regions in the images.
Even though the computational complexity was not an issue during the course of this work, the speckle filtering, optical flow processing, and merging are computationally expensive processes. On a dual core computer (Intel Core i7 6500U, Santa Clara, CA, USA) with 16 GB of memory, it takes slightly less than 3.5 min to process one region. Many factors contribute to this time; the code was written in the MATLAB environment (R2016a, MathWorks, Natick, MA, USA) and was not optimized for performance.
4. Conclusions
It was shown that unsupervised feature learning algorithms can be effectively used in conjunction with optical flow methods to generate 2CMV AGI products. Other image processing methods, such as noise reduction and adaptive thresholding, were used to improve object extraction in the proposed methodology. Results demonstrated the ability of the techniques to reduce false positives by up to 60% in the provided SAR image pairs. However, there is still room for further improvement. For example, it was noticed that optical flow object matches close to image block borders can be overlooked due to the inaccuracy of flow vectors near the block borders. This problem can be addressed with a multigrid approach that leverages overlapping image blocks: if an object pair is close to the border in one block, then it will be near the center of an overlapping block. It has also been noted that only some parts of an object may be detected as a change, and the detected parts can be used as a guide to segment the full object. Objects that are close to one another can be merged to provide a more holistic analysis of the scene and further reduce the number of false positive object detections. However, it must be concurrently ensured that false positive reduction is not so aggressive that false negatives are generated. More recent optical flow or motion estimation algorithms can be investigated as alternatives to the one utilized in this work. The chosen optical flow method is suitable for the tested dataset and performed adequately, as expected, since it takes into account the intensity changes between images. The choice of K-SVD over PCA increased the computational complexity while allowing flexibility over the level of detail in the change maps by changing the dictionary size and the number of non-zero coefficients. Dictionaries with a higher number of non-zero coefficients provided more detailed change maps.
For future work, investigating the correlation between the quantitative metrics and the parameters in the framework (e.g., dictionary size, etc.) can provide insight into tuning the framework for different types of datasets. Other methods can be researched as alternatives to the K-SVD method in the framework.
Author Contributions
Conceptualization, B.K. and D.F.; methodology, B.K.; software, B.K.; validation, B.K. and D.F.; investigation, B.K.; data curation, D.F.; writing—original draft preparation, B.K.; writing—review and editing, D.F.; supervision, D.F.; project administration, D.F.; funding acquisition, D.F.
Funding
This work was supported in part by Lockheed Martin.
Acknowledgments
The authors would like to acknowledge SenSIP for the center’s valuable contributions to this work.
Conflicts of Interest
The authors declare no conflict of interest.
References
- El-Darymli, K.; McGuire, P.; Power, D.; Moloney, C. Target Detection in Synthetic Aperture Radar Imagery: A State-of-the-Art Survey. J. Appl. Remote Sens. 2013, 7, 071598.
- El-Darymli, K.; Gill, E.W.; McGuire, P.; Power, D.; Moloney, C. Automatic Target Detection in Synthetic Aperture Radar Imagery: A State-of-the-Art Review. IEEE Access 2016, 4, 6014–6058.
- Ashok, H.G.; Patil, D.R. Survey on Change Detection in SAR Images. In Proceedings of the IJCA Proceedings on National Conference on Emerging Trends in Computer Technology, Shirpur, India, 28–29 March 2014; pp. 4–7.
- Ren, W.; Song, J.; Tian, S.; Wu, W. Survey on Unsupervised Change Detection Techniques in SAR Images. In Proceedings of the 2014 IEEE China Summit International Conference on Signal and Information Processing (ChinaSIP), Xi'an, China, 9–13 July 2014; pp. 143–147.
- Bazi, Y.; Bruzzone, L.; Melgani, F. An Unsupervised Approach Based on the Generalized Gaussian Model to Automatic Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887.
- Bovolo, F.; Bruzzone, L. A Detail-Preserving Scale-Driven Approach to Change Detection in Multitemporal SAR Images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972.
- Moser, G.; Serpico, S.B. Generalized Minimum-Error Thresholding for Unsupervised Change Detection from SAR Amplitude Imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982.
- Sumaiya, M.N.; Kumari, R.S.S. Logarithmic Mean-Based Thresholding for SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1726–1728.
- Jia, L.; Li, M.; Zhang, P.; Wu, Y. SAR Image Change Detection Based on Correlation Kernel and Multistage Extreme Learning Machine. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5993–6006.
- Gao, F.; Dong, J.; Li, B.; Xu, Q.; Xie, C. Change Detection from Synthetic Aperture Radar Images Based on Neighborhood-Based Ratio and Extreme Learning Machine. J. Appl. Remote Sens. 2016, 10, 10–14.
- Melgani, F.; Bazi, Y. Markovian Fusion Approach to Robust Unsupervised Change Detection in Remotely Sensed Imagery. IEEE Geosci. Remote Sens. Lett. 2006, 3, 457–461.
- Yousif, O.; Ban, Y. Improving SAR-Based Urban Change Detection by Combining MAP-MRF Classifier and Nonlocal Means Similarity Weights. IEEE J. Sel. Top. Appl. Earth Observ. 2014, 7, 4288–4300.
- Celik, T. Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
- Li, W.; Chen, J.; Yang, P.; Sun, H. Multitemporal SAR Images Change Detection Based on Joint Sparse Representation of Pair Dictionaries. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 6165–6168.
- Lu, X.; Yuan, Y.; Zheng, X. Joint Dictionary Learning for Multispectral Change Detection. IEEE Trans. Cybern. 2017, 47, 884–897.
- Ghosh, A.; Mishra, N.; Ghosh, S. Fuzzy Clustering Algorithms for Unsupervised Change Detection in Remote Sensing Images. Inf. Sci. 2011, 181, 699–715.
- Nguyen, L.H.; Tran, T.D. A Sparsity-Driven Joint Image Registration and Change Detection Technique for SAR Imagery. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 2798–2801.
- Li, H.; Celik, T.; Longbotham, N.; Emery, W.J. Gabor Feature Based Unsupervised Change Detection of Multitemporal SAR Images Based on Two-Level Clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2458–2462.
- Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy Clustering With a Modified MRF Energy Function for Change Detection in Synthetic Aperture Radar Images. IEEE Trans. Fuzzy Syst. 2014, 22, 98–109.
- Dekker, R.J. Speckle Filtering in Satellite SAR Change Detection Imagery. Int. J. Remote Sens. 1998, 19, 1133–1146.
- Frost, V.; Stiles, J.A.; Shanmugan, K.S.; Holtzman, J.C. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982, PAMI-4, 157–166.
- Lopes, A.; Touzi, R.; Nezry, E. Adaptive Speckle Filters and Scene Heterogeneity. IEEE Trans. Geosci. Remote Sens. 1990, 28, 992–1000.
- Lee, J.S. Digital Image Enhancement and Noise Filtering by Use of Local Statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 165–168.
- Lopes, A.; Nezry, E.; Touzi, R.; Laur, H. Maximum a Posteriori Filtering and First Order Texture Models in SAR Images. In Proceedings of the 10th Annual International Symposium on Geoscience and Remote Sensing, College Park, MD, USA, 20–24 May 1990; pp. 2409–2412.
- Yu, Y.; Acton, S. Speckle Reducing Anisotropic Diffusion. IEEE Trans. Image Process. 2002, 11, 1260–1270.
- Coupe, P.; Hellier, P.; Kervrann, C.; Barillot, C. NonLocal Means-based Speckle Filtering for Ultrasound Images. IEEE Trans. Image Process. 2009, 18, 2221–2229.
- Gonzalez, R.; Woods, R. Digital Image Processing, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2006.
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
- Rubinstein, R.; Zibulevsky, M.; Elad, M. Efficient Implementation of the K-SVD Algorithm Using Batch Orthogonal Matching Pursuit; Technical Report; Computer Science Department, Technion: Haifa, Israel, 2008.
- Horn, B.; Schunck, B. Determining Optical Flow. Artif. Intell. 1981, 17, 185–203.
- Gennert, M.; Negahdaripour, S. Relaxing the Brightness Constancy Assumption in Computing Optical Flow; A.I. Lab Memo 975; Massachusetts Institute of Technology: Cambridge, MA, USA, 1987.
- Kanberoglu, B.; Frakes, D. Extraction of Advanced Geospatial Intelligence (AGI) from Commercial Synthetic Aperture Radar Imagery. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XXIV 2017, Anaheim, CA, USA, 9–13 April 2017; Volume 10201, p. 1020106.
- Gao, F.; Liu, X.; Dong, J.; Zhong, G.; Jian, M. Change Detection in SAR Images Based on Deep Semi-NMF and SVD Networks. Remote Sens. 2017, 9, 435.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).