Concept Paper

A Supervised Classification Method for Levee Slide Detection Using Complex Synthetic Aperture Radar Imagery

by Ramakalavathi Marapareddy 1,*, James V. Aanstoos 2 and Nicolas H. Younan 3
1 School of Computing, University of Southern Mississippi, Hattiesburg, MS 39406, USA
2 Geosystems Research Institute, Mississippi State University, Mississippi State, MS 39759, USA
3 Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
* Author to whom correspondence should be addressed.
J. Imaging 2016, 2(3), 26; https://doi.org/10.3390/jimaging2030026
Submission received: 10 August 2016 / Revised: 5 September 2016 / Accepted: 7 September 2016 / Published: 12 September 2016

Abstract: The dynamics of surface and sub-surface water events can lead to slope instability, resulting in anomalies such as slough slides on earthen levees. Early detection of these anomalies by a remote sensing approach could save time versus direct assessment. We have implemented a supervised Mahalanobis distance classification algorithm for the detection of slough slides on levees using complex polarimetric Synthetic Aperture Radar (polSAR) data. The classifier output was followed by a spatial majority filter post-processing step that improved the accuracy. The effectiveness of the algorithm is demonstrated using fully quad-polarimetric L-band Synthetic Aperture Radar (SAR) imagery from the NASA Jet Propulsion Laboratory’s (JPL’s) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area is a section of the lower Mississippi River valley in the southern USA. Slide detection accuracy of up to 98 percent was achieved, although the number of available slide examples was small.

1. Introduction

Earthen levees protect large areas of populated and cultivated land in the United States from flooding. The potential loss of life and property associated with the catastrophic failure of levees can be extremely large. Over the entire US, there are more than 150,000 km of levee structures of varying designs and conditions [1]. One type of problem that occurs along these levees, which can lead to complete failure during a high water event if left unrepaired for too long, is a slough slide [1]. Slough slides are slope failures along a levee, which leave areas of the levee vulnerable to seepage and failure during high water events [2]. The roughness and related textural characteristics of the soil in a slide area affect the amount and pattern of radar backscatter. In addition, the type of vegetation that grows in a slide area differs from that on the surrounding levee, a difference that can also be used to detect slides [3].
SAR technology, due to its high spatial resolution and soil penetration capability, is a good choice for identifying problematic areas on earthen levees. PolSAR data include a variety of information related to the physical properties of the target. In polSAR, the transmitted signal is polarized, and different polarizations of the backscattered signal are detected: VV (vertical transmit and vertical receive), HV (horizontal transmit and vertical receive), and HH (horizontal transmit and horizontal receive). Hence, it provides much more information on the form of the scattering elements than single-channel SAR [4]. On the other hand, polSAR classification is challenging due to the complexity of the information available from its multiple polarimetric channels [5,6]. Feature extraction from the polSAR image is one of the main issues in the classification of polarimetric data. Since the elements of the scattering matrix are related to the properties of the target, several decomposition methods based on the scattering matrix have been proposed to identify target scattering characteristics [7,8]. Kong et al. [9] proposed an optimal polarimetric classifier based on the complex Gaussian distribution for single-look data. Lee et al. [10] proposed a maximum likelihood classifier for multi-look SAR data based on the complex Wishart distribution, and later an improved method combining unsupervised classification with the H/alpha decomposition [11]. Cloude and Pottier [12] introduced the entropy–alpha–anisotropy (H/α/A) classification based on the eigenvalues of the polarimetric covariance (or coherency) matrix.
The magnitude data alone may be sufficient for classifying targets, but it may not be enough to describe their complete structure; the phase data also carries useful information about target details. In this paper, we implemented a supervised classification algorithm for the identification of slough slides on levees using the magnitude, phase, and complex data (magnitude and phase) of polSAR imagery. The classification result was further processed with a majority filter, which improved the classification accuracy. Higher classification accuracy is obtained for the complex data than for the magnitude or phase data alone.
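To make the three feature sets concrete, the sketch below (not the authors' implementation) shows one way the magnitude-only, phase-only, and full complex feature stacks could be built from the cross-product bands with NumPy; the array names and the real/imaginary representation of the complex data are assumptions.

```python
import numpy as np

def build_feature_stacks(hhhv, hhvv, hvvv):
    """Build magnitude-only, phase-only, and full complex feature stacks from
    complex cross-product bands (hypothetical names; each a 2-D complex array).
    Each returned stack has shape (rows, cols, n_features)."""
    bands = [hhhv, hhvv, hvvv]

    # Magnitude-only features: |z| of each cross product.
    magnitude = np.stack([np.abs(b) for b in bands], axis=-1)

    # Phase-only features: arg(z) of each cross product, in radians.
    phase = np.stack([np.angle(b) for b in bands], axis=-1)

    # Full complex features: real and imaginary parts of each cross product,
    # so a real-valued classifier sees both magnitude and phase information.
    complex_features = np.stack(
        [part for b in bands for part in (b.real, b.imag)], axis=-1)

    return magnitude, phase, complex_features
```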
Three different sample area segments, each containing at least one active slide, are used for the analysis. The effectiveness of the presented method is demonstrated using fully quad-polarimetric L-band SAR imagery from the NASA JPL’s UAVSAR.

2. Method

The presented method consists of image segmentation of the levee area, training the classifier, testing the area of interest, and validating the results using ground truth data. The classification algorithm adopted here is a supervised Mahalanobis distance classification for the identification of anomalies such as slough or slump slides on the levee. These slides are slope failures along a levee, which leave areas of the levee vulnerable to seepage and failure during high water events. Majority post-classification filtering uses a moving window (kernel) in which each central pixel is assigned to the majority class of the pixels within the window. This filter is applied to a classification image to change isolated pixels within a large single class to the dominant class. The classification is performed using the magnitude, phase, and complex data of the Multi-Look Cross products (MLC) of the acquired UAVSAR data. The MLC data are derived by averaging 3 pixels in range and 12 pixels in azimuth of the single-look complex (SLC) data [13]. The three complex cross-product bands, HHHV, HHVV, and HVVV, are used as features for the classification. The portion of the levee from the center line to its riverside toe is segmented for analysis, since slides are more likely to occur on the river side. The supervised classification method is trained with two classes: slide (anomalous) and nonslide (healthy) areas. We used ground truth reference data to train and test the classification algorithms. A majority filter is applied to the classifier output to further improve the classification accuracy. Finally, the overall, slide, and nonslide accuracies are computed from the confusion matrix. These processing steps for levee slide detection are illustrated in Figure 1.
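As an illustration of the majority post-classification step described above, the following is a minimal sketch built on SciPy's generic_filter; the 3 × 3 window size and the tie-breaking rule are assumptions rather than settings reported here.

```python
import numpy as np
from scipy import ndimage

def majority_filter(class_map, size=3):
    """Assign each pixel the most frequent class label within a size x size
    moving window, flipping isolated pixels to the dominant surrounding class.
    `class_map` is a 2-D integer array of class labels."""
    def window_majority(values):
        labels, counts = np.unique(values.astype(int), return_counts=True)
        return labels[np.argmax(counts)]  # ties resolve to the lowest label

    # generic_filter slides the window across the image and applies
    # window_majority to the labels falling inside each window position.
    return ndimage.generic_filter(class_map, window_majority,
                                  size=size, mode="nearest")
```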

2.1. Data and Study Area

The study area for this work focuses on the mainline levee system of the Mississippi River along the eastern side of the river in Mississippi, USA. This study used airborne L-band polSAR data acquired by NASA JPL’s UAVSAR instrument. L-band radar is capable of penetrating dry soil to a depth of a few centimeters; thus, it is valuable for detecting changes in levees, which are key inputs to a levee condition classification system [13]. The UAVSAR data set consists of the three sets of co-polarized channel multi-look cross products (MLC), HHHH, HVHV, and VVVV, used for the magnitude data classification. In addition, three sets of cross-polarized channel MLC products, HHHV, HHVV, and HVVV, are used to obtain the individual polarization channel magnitude and phase data, and also for the complex data classification.
The MLC data consist of three sets of complex floating-point values. These complex products are ensemble averages derived from 3 pixels in range and 12 pixels in azimuth (i.e., 3 range looks × 12 azimuth looks) of the product of each SLC pixel, corresponding to HHHV, HHVV, and HVVV. The slant-range pixel spacing for the MLC data is 7.2 m × 4.99 m in the azimuth and range directions, respectively. The pixel spacing for the SLC data is 0.6 m × 1.66 m in the azimuth and range directions, respectively. The SLC data sets (HH, HV, and VV) are oversampled and dominated by speckle noise; we chose the MLC data sets to reduce the speckle effects. For the MLC data used, the projected ground sample distance is 5.5 m × 5.5 m.
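The 3 (range) × 12 (azimuth) multi-looking could be sketched as below; this assumes the cross-product SLC array (e.g., HH multiplied by the conjugate of HV) has already been formed and is indexed as (azimuth, range). It illustrates only the block averaging, not the UAVSAR processor itself.

```python
import numpy as np

def multilook(slc_product, range_looks=3, azimuth_looks=12):
    """Ensemble-average a complex SLC cross product over non-overlapping
    blocks of azimuth_looks x range_looks pixels to form MLC pixels.
    `slc_product` is a 2-D complex array indexed as (azimuth, range)."""
    az, rg = slc_product.shape
    az_out, rg_out = az // azimuth_looks, rg // range_looks

    # Trim to a whole number of looks, then average each block.
    trimmed = slc_product[:az_out * azimuth_looks, :rg_out * range_looks]
    blocks = trimmed.reshape(az_out, azimuth_looks, rg_out, range_looks)
    return blocks.mean(axis=(1, 3))
```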
Image sample 1 consists of 66 × 68 pixels, sample 2 of 52 × 54 pixels, and sample 3 of 61 × 89 pixels. The lengths of the levee segments in these samples are 484 m, 381 m, and 633 m, respectively. The locations of each are indicated on the flight segment radar image shown in Figure 2. For multi-polarized SAR imagery, it is useful to create a color composite image by mapping the HH, HV, and VV channels to red, green, and blue, respectively, as shown in Figure 2, which includes both an overview image and a close-up view of the test segments overlaid on the base map. The entire flight segment image has a swath width of 20 km and a total length of 200 km. The radar is fully polarimetric with a bandwidth of 80 MHz (resulting in better than 2 m range resolution) and flies at a nominal altitude of 13,800 m [13]. The radar image was acquired on 25 January 2010.
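A composite similar to Figure 2 can be approximated with a simple sketch like the one below; the percentile stretch is an assumption and not necessarily the rendering used for the published figure.

```python
import numpy as np

def polsar_rgb(hh, hv, vv, pct=98):
    """False-color composite: map |HH|, |HV|, |VV| to red, green, and blue.
    Each channel is clipped at the given percentile and scaled to [0, 1]."""
    def stretch(band):
        mag = np.abs(band)
        top = np.percentile(mag, pct)
        return np.clip(mag / max(top, 1e-12), 0.0, 1.0)

    return np.dstack([stretch(hh), stretch(hv), stretch(vv)])
```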

2.2. Training Data

The availability of ground truth data for training the supervised classification process is a challenge since the targets of interest are portions of the levee that show signs of impending failure; once these are detected, they are quickly repaired depending on their severity [14]. The study area is one in which the levees are managed by the US Army Corps of Engineers (USACE) and are well monitored. The Corps, in association with the local levee boards, maintains a good cumulative history of past problems and has identified particularly problematic sections of levees in the study area, as shown in Table 1. These are used as training samples [13]. In addition to the ground truth data provided by the Corps, we conducted field trips at the time of image acquisition to visually inspect the slide areas and levee condition. The active slides (slides 1, 2, and 5) were present and unrepaired at the radar image acquisition time on 25 January 2010. Although the date of appearance of slide 5 was not identified by the Corps, it is visible in the NAIP (National Agriculture Imagery Program) imagery collected in 2009 and 2010, and it was not repaired until after the image acquisition, as shown in Table 2; hence, it was an active slide at the time of the image. Training masks were created for the slide events and labeled as either repaired or unrepaired at the time of acquisition. The training sample data from slide and nonslide (healthy) parts of the levees were obtained from the radar data using these training masks. Samples from the healthy parts of the levee near the slide events were used to train the nonslide (healthy levee) class.

2.3. Mahalanobis Distance Classification

The Mahalanobis distance is a direction-sensitive distance classifier that uses statistics for each class in a manner similar to the maximum likelihood classifier, but it assumes that all class covariances are equal and does not require weighting factors [15,16]; it is therefore a faster method. The Mahalanobis distance algorithm is similar to the minimum distance algorithm, except that it also uses the covariance matrix. It can be more useful than the minimum distance when statistical criteria must be taken into account, and it assumes that the data in each input band are normally distributed [17,18]. Unlike the minimum distance, this method takes the variability of classes into account. The maximum distance error can be a zero threshold for all classes, a single value (0 to 0.9) for all classes, or different values (0 to 0.9) for individual classes. The distance threshold is the distance within which a pixel must fall from the center (mean) of a class’s distribution to be assigned to that class. We used a zero threshold for all classes. The Mahalanobis distance classification calculates the distance from each pixel in the image to each class using the following equation [15]:
D_i(x) = (x − m_i)^T Σ^{−1} (x − m_i)
where:
  • D_i = Mahalanobis distance for class i
  • i = the i-th class
  • x = n-dimensional data vector (where n is the number of features)
  • Σ^{−1} = the inverse of the covariance matrix of a class
  • m_i = mean vector of a class
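A minimal sketch of this classification rule is given below, assuming the per-pixel features have been flattened into an (n_samples, n_features) array and that a single pooled covariance matrix is shared by the slide and nonslide classes, consistent with the equal-covariance assumption above; it is an illustration, not the ENVI implementation used for the results in this paper. With a zero distance threshold, each pixel is simply assigned to the class with the smallest distance.

```python
import numpy as np

def train_mahalanobis(features, labels):
    """Estimate per-class mean vectors and the inverse of a single pooled
    covariance matrix (all class covariances are assumed equal).
    `features`: (n_samples, n_features); `labels`: class id per sample."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    inv_cov = np.linalg.inv(np.cov(centered, rowvar=False))
    return classes, means, inv_cov

def classify_mahalanobis(features, classes, means, inv_cov):
    """Assign each sample to the class minimizing
    D_i(x) = (x - m_i)^T Sigma^{-1} (x - m_i)."""
    distances = np.empty((features.shape[0], len(classes)))
    for j, c in enumerate(classes):
        d = features - means[c]
        distances[:, j] = np.einsum("ij,jk,ik->i", d, inv_cov, d)
    return classes[np.argmin(distances, axis=1)]
```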

3. Results and Discussion

The Mahalanobis distance supervised classification process was run separately with the magnitude only, phase only, and full complex (magnitude and phase) SAR multi-looked cross product data on each of the three sample images. The cross-polarized products, HHHV, HHVV, and HVVV, are used based on the assumption that they carry more information about relevant surface scattering properties than the co-polarized channels.
Using the reference (ground truth) data, image masks were created bounding the active slide area and a subset of the non-slide area within each sample image. A sample of pixels belonging to each of these two classes was then used to train the classifier. The accuracy of the resulting classification was assessed using the remaining reference data pixels, and the conventional statistics of user, producer, and overall accuracy were computed for each case.
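The reported statistics can be computed from the confusion matrix as in the sketch below; the two-class label encoding is hypothetical.

```python
import numpy as np

def accuracy_report(reference, predicted, classes=("slide", "nonslide")):
    """Producer, user, and overall accuracy from a confusion matrix whose
    rows index the reference (ground truth) class and whose columns index
    the predicted class. `reference` and `predicted` are class-name lists."""
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for ref, pred in zip(reference, predicted):
        cm[classes.index(ref), classes.index(pred)] += 1

    # Producer accuracy: correct pixels / reference total (omission errors).
    producer = {c: cm[i, i] / cm[i, :].sum() for i, c in enumerate(classes)}
    # User accuracy: correct pixels / predicted total (commission errors).
    user = {c: cm[i, i] / cm[:, i].sum() for i, c in enumerate(classes)}
    overall = np.trace(cm) / cm.sum()
    return cm, producer, user, overall
```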
The class maps resulting from applying the classifier to sample image 1 using the full complex data features, both with and without the majority filter applied, are shown in Figure 3. Similarly, the class maps for the magnitude and phase data features, both with and without the majority filter applied, are shown in Figure 4 and Figure 5, respectively. The training masks are shown in Figure 3c for both slide and non-slide classes. These areas cover 48 and 132 pixels for the slide and non-slide area, respectively. Of these, 24% (180 pixels) were used for training the classifier and the remainder was used for testing its accuracy. The accuracy assessment results are tabulated in Table 3 for this case as well as the lower-accuracy magnitude-only and phase-only cases. A graphical summary of the accuracy results for sample 1 is shown in Figure 6. Similarly, the class maps resulting from sample image 2 are shown in Figure 7, Figure 8 and Figure 9. The training masks shown in Figure 7c cover 57 and 124 pixels for the slide and non-slide areas, respectively. Of these, 31% (181 pixels) were used for training the classifier and the remainder was used for testing its accuracy. The accuracy assessment results are tabulated in Table 4 for this case as well as the lower-accuracy magnitude-only and phase-only cases. A graphical summary of the accuracy results for sample 2 is shown in Figure 10. Finally, the class maps for sample image 3 are shown in Figure 11, Figure 12 and Figure 13. The training masks, shown in Figure 11c, cover 78 and 84 pixels for the slide and non-slide areas, respectively. Of these, 17% (162 pixels) were used for training the classifier and the remainder was used for testing its accuracy. The accuracy assessment results are tabulated in Table 5 for this case as well as the lower-accuracy magnitude-only and phase-only cases. A graphical summary of the accuracy results for sample 3 is shown in Figure 14.
All three sample results show good detection of the slide pixels, but numerous false positive detections as well. In each sample, the use of both phase and magnitude data resulted in higher accuracies than either alone, indicating that both of these data components carry useful information relevant to identifying the slides. Furthermore, in each case, the application of a majority filter improved the classification results by eliminating many of the false positives that were isolated pixels or very small groups of pixels. The premise of using the majority filter is that actual slides are unlikely to be as small in area as these isolated detections. Thus, the filter reduced the false positives without hurting the true positive performance.
Sample 3 included, in addition to the one active slide, two slides (numbered 3 and 4) which had been repaired by the time of image acquisition. Many of the false positive pixels fall in this area. Because these slide areas were repaired only two months prior to the time of image acquisition, they still have characteristics more similar to the active slide than the “healthy” areas, in terms of surface roughness and differences in the grass cover. These characteristics likely influenced the classification.

4. Conclusions

A supervised classification method based on the Mahalanobis distance for levee slide detection using complex SAR imagery is presented. In addition, we have implemented a majority filter as a post-processing step in order to improve the accuracy. The effectiveness of the algorithms is demonstrated using fully quad-polarimetric L-band SAR imagery from the NASA JPL’s UAVSAR. The cross-polarized products, HHHV, HHVV, and HVVV, are used based on the assumption that they carry more information about the surface scattering properties. The study area is a section of the lower Mississippi River valley in the southern USA. The classification results obtained for all three cases (magnitude, phase, and full complex data), with accuracies for the complex data being higher, indicate that the use of polarimetric SAR can effectively detect slump slides on earthen levees. In addition to the active slide areas, other anomalous areas are also detected. Some of these are previous slide areas that had been repaired just two months prior to the time of image acquisition and still appear similar enough to the active slide to be detected by the classification technique. Furthermore, although the test study area is small, including only one active slide area for each segment, the methodology presented in this paper shows promising results. Planned future work includes the use of larger test areas consisting of more active slides, seasonal images acquired by the SAR, and different geometrical orientations of the levee. We would also like to extend our work to dual-pol SAR data classification methods based on Wishart classification [19,20].

Acknowledgments

This work was supported by the National Science Foundation grant number: OISE-1243539, and by the NASA Applied Sciences Division under grant number: NNX09AV25G. The authors would like to thank the US Army Corps of Engineers, Engineer Research and Development Center and Vicksburg Levee District for providing ground truth data and expertise; NASA Jet Propulsion Laboratory for providing the UAVSAR image; and the GRI levee team.

Author Contributions

Ramakalavathi Marapareddy implemented the classification methods on image processing tools. James V. Aanstoos supervised the work, provided imagery and data and was the principal investigator for the project. Nicolas H. Younan supervised and provided guidance. Ramakalavathi Marapareddy, James V. Aanstoos, and Nicolas H. Younan analyzed the results and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aanstoos, J.V.; Hasan, K.; O’Hara, C.G.; Prasad, S.; Dabbiru, L.; Mahrooghy, M.; Nobrega, R.; Lee, M.L.; Shrestha, B. Use of remote sensing to screen earthen levees. In Proceedings of the 39th Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 13–15 October 2010; pp. 1–6.
  2. Dunbar, J. Lower Mississippi Valley Engineering Geology and Geomorphology Mapping; Program for Levees; US Army Corps of Engineers: Vicksburg, MS, USA, 16 April 2009.
  3. Hossain, A.K.M.A.; Easson, G.; Hasan, K. Detection of levee slides using commercially available remotely sensed data. Environ. Eng. Geosci. 2006, 12, 235–246.
  4. Lin, S.W.; Ying, K.C.; Chen, S.C.; Lee, Z.J. Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Syst. Appl. 2008, 35, 1817–1824.
  5. Ince, T.; Kiranyaz, S.; Gabbouj, M. Classification of polarimetric SAR images using evolutionary RBF networks. In Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 4324–4327.
  6. Alvarez-Perez, J.L. Coherence, polarization, and statistical independence in Cloude–Pottier’s radar polarimetry. IEEE Trans. Geosci. Remote Sens. 2011, 49, 426–441.
  7. Han, Y.; Shao, Y. Full polarimetric SAR classification based on Yamaguchi decomposition model and scattering parameters. In Proceedings of the 2010 IEEE International Conference on Progress in Informatics and Computing (PIC), Shanghai, China, 10–12 December 2010; pp. 1104–1108.
  8. Lee, J.-S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications, 1st ed.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2009; ISBN 978-1420054972.
  9. Kong, J.A.; Schwartz, A.A.; Yueh, H.A.; Novak, L.M.; Shin, R.T. Identification of terrain cover using the optimal polarimetric classifier. J. Electromagn. Waves Appl. 1988, 2, 171–194.
  10. Lee, J.S.; Grunes, M.R. Classification of multi-look polarimetric SAR imagery based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311.
  11. Lee, J.S.; Grunes, M.R.; Ainsworth, T.L.; Du, L.J.; Schuler, D.L.; Cloude, S.R. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2249–2258.
  12. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78.
  13. Aanstoos, J.V.; Dabbiru, L.; Gokaraju, B.; Hasan, K.; Lee, M.A.; Mahrooghy, M.; Nobrega, R.A.A.; O’Hara, C.G.; Prasad, S.; Shanker, A. Levee Assessment via Remote Sensing SERRI Projects; SERRI Report 80023-02; Southeast Region Research Initiative: Oak Ridge, TN, USA, 2012.
  14. Aanstoos, J.V.; Hasan, K.; O’Hara, C.; Dabbiru, L.; Mahrooghy, M.; Nobrega, R.A.A.; Lee, M.M. Detection of slump slides on earthen levees using polarimetric SAR imagery. In Proceedings of the 2012 IEEE Applied Imagery Pattern Recognition Workshop, Washington, DC, USA, 9–11 October 2012.
  15. Exelis Visual Information Solutions User Guides and Tutorials, ENVI Version 5.1. Available online: http://www.exelisvis.com/Learn/Resources/Tutorials.aspx (accessed on 8 September 2016).
  16. Richards, J.A. Remote Sensing Digital Image Analysis; Springer-Verlag: Berlin, Germany, 1999; p. 240.
  17. Al-Ahmadi, F.S.; Hames, A.S. Comparison of four classification methods to extract land use and land cover from raw satellite images for some remote arid areas, Kingdom of Saudi Arabia. Earth Sci. 2009, 20, 167–191.
  18. Canty, M.J. Image Analysis, Classification and Change Detection in Remote Sensing: With Algorithms for ENVI/IDL and Python, 3rd ed.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2014; pp. 1–576; ISBN 9781466570375.
  19. Pajares, G.; López-Martínez, C.; Sánchez-Lladó, F.J.; Molina, Í. Improving Wishart classification of polarimetric SAR data using the Hopfield neural network optimization approach. Remote Sens. 2012, 4, 3571–3595.
  20. Sánchez-Lladó, F.J.; Pajares, G.; López-Martínez, C. Improving the Wishart synthetic aperture radar image classifications through deterministic simulated annealing. ISPRS J. Photogramm. Remote Sens. 2011, 66, 845–857.
Figure 1. Processing steps for slide detection on a levee.
Figure 2. Study area with a three-band radar color composite (HH, VV, and HV) image overlaid on the base map.
Figure 3. Complex data classification for the segment Sample 1: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 4. Magnitude data classification for the segment Sample 1: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 5. Phase data classification for the segment Sample 1: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 6. Accuracy comparison of the Mahalanobis distance classification, with and without the majority filter, for the segment Sample 1, using the phase, magnitude, and complex data.
Figure 7. Complex data classification for the segment Sample 2: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 8. Magnitude data classification for the segment Sample 2: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 9. Phase data classification for the segment Sample 2: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 10. Accuracy comparison of the Mahalanobis distance classification, with and without the majority filter, for the segment Sample 2, using the phase, magnitude, and complex data.
Figure 11. Complex data classification for the segment Sample 3: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 12. Magnitude data classification for the segment Sample 3: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 13. Phase data classification for the segment Sample 3: (a) without majority filter; (b) with majority filter; (c) optical image overlaid with slide and nonslide class shapes.
Figure 14. Accuracy comparison of the Mahalanobis distance (MD) classification, with and without the majority filter (MDF), for the segment Sample 3, using the phase, magnitude, and complex data.
Table 1. Ground truth data from the Mississippi Levee Board.

Slide Number | Length | Vert. Face | Dist. from Crown | Latitude North | Longitude West | Date Slide Appeared | Date Slide Repaired
1 | 135′ | 15′ | 12′ | N33-07′-44.4″ | W91-04′-46.1″ | October 2009 | March 2010
2 | 230′ | 7′ | 9′ | N32-37′-37.2″ | W90-59′-56.2″ | October 2009 | April 2010
3 | 80′ | 2′ | 30′ | N32-36′-37.7″ | W90-59′-42.3″ | October 2009 | November 2009
4 | 120′ | 3′ | 15′ | N32-36′-32.0″ | W90-59′-46.3″ | August 2008 | November 2009
5 | 200′ | 8′ | 8′ | N32-36′-29.1″ | W90-59′-48.0″ | - | September 2010
Table 2. Updated slides ground truth from the Mississippi Levee Board.

Slide No. | Date Slide Appeared (Levee Board, 8 April 2011) | Date Slide Repaired (Levee Board, 8 April 2011) | Visual Aerial Photo Inspection, NAIP 2009 (May–September) | Visual Aerial Photo Inspection, NAIP 2010 (May–September)
1 | October 2009 | March 2010 | Not Visible (25 July) | Unrepaired (3 August)
2 | October 2009 | April 2010 | Not Visible (25 July) | Unrepaired (22 June)
3 | October 2009 | November 2009 | Not Visible (25 July) | Repaired (22 June)
4 | August 2008 | November 2009 | Unrepaired (25 July) | Repaired (22 June)
5 | - | September 2010 | Unrepaired (25 July) | Unrepaired (22 June)
Table 3. Accuracy analysis of the Mahalanobis distance (MD) classification, and with majority filter (MDF), for slide and nonslide areas of the segment Sample 1, using magnitude, phase, and complex data.

Data Type | Classification Method | Class | Producer Accuracy % | User Accuracy % | Overall Accuracy %
Magnitude Data | MD | slide 1 | 66 | 58 | 78
Magnitude Data | MD | nonslide | 82 | 87 | 78
Magnitude Data | MDF | slide 1 | 75 | 78 | 87
Magnitude Data | MDF | nonslide | 92 | 91 | 87
Phase Data | MD | slide 1 | 52 | 43 | 69
Phase Data | MD | nonslide | 75 | 81 | 69
Phase Data | MDF | slide 1 | 47 | 46 | 71
Phase Data | MDF | nonslide | 79 | 80 | 71
Complex Data | MD | slide 1 | 72 | 61 | 80
Complex Data | MD | nonslide | 83 | 89 | 80
Complex Data | MDF | slide 1 | 81 | 95 | 93
Complex Data | MDF | nonslide | 98 | 93 | 93
Table 4. Accuracy analysis of the Mahalanobis distance (MD) classification, and with majority filter (MDF), for slide and nonslide areas of the segment Sample 2, using magnitude, phase, and complex data.

Data Type | Classification Method | Class | Producer Accuracy % | User Accuracy % | Overall Accuracy %
Magnitude Data | MD | slide 2 | 85 | 71 | 84
Magnitude Data | MD | nonslide | 83 | 92 | 84
Magnitude Data | MDF | slide 2 | 92 | 92 | 95
Magnitude Data | MDF | nonslide | 96 | 96 | 95
Phase Data | MD | slide 2 | 59 | 34 | 51
Phase Data | MD | nonslide | 47 | 71 | 51
Phase Data | MDF | slide 2 | 63 | 36 | 53
Phase Data | MDF | nonslide | 49 | 74 | 53
Complex Data | MD | slide 2 | 85 | 72 | 85
Complex Data | MD | nonslide | 84 | 92 | 85
Complex Data | MDF | slide 2 | 92 | 100 | 97
Complex Data | MDF | nonslide | 100 | 96 | 97
Table 5. Accuracy analysis of the Mahalanobis distance (MD) classification, and with majority filter (MDF), for slide and nonslide areas of the segment Sample 3, using magnitude, phase, and complex data.

Data Type | Classification Method | Class | Producer Accuracy % | User Accuracy % | Overall Accuracy %
Magnitude Data | MD | slide 5 | 85 | 93 | 90
Magnitude Data | MD | nonslide | 94 | 87 | 90
Magnitude Data | MDF | slide 5 | 94 | 100 | 97
Magnitude Data | MDF | nonslide | 100 | 95 | 97
Phase Data | MD | slide 5 | 60 | 71 | 69
Phase Data | MD | nonslide | 77 | 67 | 69
Phase Data | MDF | slide 5 | 69 | 90 | 81
Phase Data | MDF | nonslide | 92 | 76 | 81
Complex Data | MD | slide 5 | 91 | 97 | 94
Complex Data | MD | nonslide | 97 | 92 | 94
Complex Data | MDF | slide 5 | 98 | 100 | 96
Complex Data | MDF | nonslide | 100 | 98 | 96
