Article

Stratified Template Matching to Support Refugee Camp Analysis in OBIA Workflows

Department of Geoinformatics (Z_GIS), University of Salzburg, Schillerstrasse 30, 5020 Salzburg, Austria
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(4), 326; https://doi.org/10.3390/rs9040326
Submission received: 30 December 2016 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 30 March 2017

Abstract

Accurate and reliable information about the situation in refugee or internally displaced person (IDP) camps is essential for planning any kind of assistance, such as health care, infrastructure, or vaccination campaigns. The number and spatial distribution of single dwellings, extracted semi-automatically from very high-resolution (VHR) satellite imagery as an indicator for population estimations, can provide such information. The accuracy of the extracted dwellings can vary considerably depending on various factors. To enhance established single dwelling extraction approaches, we have tested the integration of stratified template matching methods in object-based image analysis (OBIA) workflows. A template library for various dwelling types (template samples were taken from ten different sites using 16 satellite images), incorporating the shadow effect of dwellings, was established. Altogether, 18 template classes were created covering typically occurring dwellings and their cast shadows. The template library is intended to be generally applicable under similar conditions. Compared to pre-existing OBIA classifications, the approach increased the producer’s accuracy by 11.7 percentage points on average and slightly increased the user’s accuracy. These results show that the stratified integration of template matching in OBIA workflows can further improve the results of semi-automated dwelling extraction, especially in complex situations.

Graphical Abstract

1. Introduction

The number of refugees and internally displaced persons (IDPs) hit an all-time high in 2015: by the end of the year, more than 65 million people were forcibly displaced [1]. Natural disasters, changing environmental conditions, and violent conflicts constitute the main reasons for displacement. Camps and temporary settlements provide refuge for many of the displaced people [2,3]. Accurate, reliable, and up-to-date information about the population in refugee or IDP camps is key to health care, infrastructure planning, and vaccination campaigns. Camp management, often carried out by first-responding humanitarian relief organizations, cannot obtain this information by field assessments alone, due to security reasons and other immediate duties. Furthermore, incorrect information is sometimes provided by stakeholders, who may overestimate the number of refugees for various reasons, while other political interests may lead to reduced numbers of reported IDPs. IDPs have not crossed a recognized international border and therefore remain under the legal protection of their home governments, which limits external assistance, such as regulated camp access and management, to a large degree. To cope with these challenges, very high-resolution (VHR) satellite imagery is a critical source for deriving the number and spatial distribution of single camp dwellings [4,5,6,7,8,9,10,11,12]. Most approaches for automated dwelling extraction from VHR data documented in the literature (cf. the comprehensive review by [13]) rely on OBIA [14,15]. Nevertheless, the accuracy of the extracted dwellings can vary considerably depending on factors such as the contrast between dwelling types and their surroundings, the seasonal or actual weather situation, and the dwelling density [16,17].
A specific limitation of existing object-based workflows is related to the initial segmentation of the image. This process creates segments based on the similarity of neighboring pixel values, which can lead to under-segmentation when dwellings and their surroundings have very similar spectral reflectance, for example under dusty conditions. In such situations, the shadows cast by dwellings can help distinguish them from non-elevated ground, but this requires a segmentation that properly separates the dwelling from the shadow area. The upper left image in Figure 1 shows an example where the shadow of a dwelling is less clearly captured by a segment than in the upper right image; here, the use of shadow as an identifier for a dwelling is limited, in particular in automated routines. At the bottom of Figure 1, dark dwellings are shown with very low contrast to their surroundings, where the cast shadow is, in fact, the only indicator for the presence of dwellings at all. In this case, a segmentation based on internal homogeneity criteria fails to extract relevant candidate objects for the subsequent classification.
To enhance single object detection in such difficult situations, we have tested the integration of template matching in object-based image analysis workflows. In the template generation process, the object to be detected and the surroundings can be taken into account, which is in our case different dwelling types and their cast shadows. Thus, matches include information regarding the orientation, and can be used in the analyses to reduce false positives and partly overcome the difficulties in the segmentation process by including the correlation layer in the segmentation steps.
The concept of template matching can be summarized as the process of comparing patterns with regard to their similarity, enabling single object detection. In signal processing, analyzing the correlation of signals is a standard approach, and the same mathematical methods and algorithms are used in image processing for the detection of image features [18,19]. A typical method in image analysis is 2D correlation, one of the simplest template matching techniques, in which the correlation is calculated with a moving window: the template, represented as a vector, is slid over the image, and the correlation between the template and the underlying image window is calculated at each position. Different similarity measures are in use, such as the normalized cross-correlation, the sum of absolute differences, the sum of squared differences, and the Euclidean distance. Depending on the measure, a maximum or a minimum in the correlation matrix represents a match, and multiple matches with the same value can occur [18,20].
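To illustrate the moving-window principle in code, the following minimal Python sketch computes a normalized cross-correlation surface for a single-band image and a grey-scale template (both given as NumPy arrays). It is a didactic sketch of the general technique only, not the implementation used later in this study (eCognition).

```python
import numpy as np

def ncc_surface(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Slide the template over the image and return the normalized
    cross-correlation at every valid position (values near 1 = good match)."""
    img = image.astype(np.float64)
    tpl = template.astype(np.float64)
    th, tw = tpl.shape
    t = tpl - tpl.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((img.shape[0] - th + 1, img.shape[1] - tw + 1))
    for r in range(out.shape[0]):              # moving-window correlation
        for c in range(out.shape[1]):
            win = img[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            out[r, c] = (w * t).sum() / denom if denom > 0 else 0.0
    return out
```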
In the specific field of remote sensing, template matching is widely used to address a variety of problems such as the extraction of road centerlines from high-resolution images [21], and the monitoring of mass movements and their surface velocity for monitoring glaciers, ice shelves, ice caps, and also landslides [22,23,24]. Further examples of object recognition using satellite and aerial images are aircraft and crater detection [25,26], tree crown detection, oil palm and species detection, and counting in forestry [27,28,29,30,31] and dwelling extraction [32].
The literature review as well as a recently published review article by [20] reveal that template matching techniques and OBIA methods for object detection are typically handled separately. There is little research on combining both methods in one workflow. A study by [33], that combined template matching with OBIA methods analyzing the effect of tree growth in camps impairing dwelling detection in a refugee camp, suggested that the overall classification could be improved by applying template matching within an object-based framework.
The combination of the two approaches aims to tackle typical problems of template matching, such as the high number of false positives in complex images, by masking relevant parts of the image, for example focusing on the camp area only or on certain spectrally relevant objects; we term this stratified template matching. We expect that incorporating the shadow effect of dwellings into the templates improves both the segmentation and the dwelling detection rate in an OBIA workflow.
In this research, we aim to establish a template library, which is generally applicable under similar conditions and can be easily integrated into existing workflows for object-based dwelling extraction. A pre-condition is the presence of shadows cast by the dwelling. A visual investigation of different refugee camps in Eastern Africa revealed that the same shelter types commonly occur (bright, blue, and dark/brown dwellings) as well as large and small structures with typical geometrical properties (Figure 2). The template library was developed based on sixteen single VHR scenes. In order to check the accuracy of the approach, it was tested on three sites of different image complexities. The results were compared to a visual image interpretation as well as to pre-existing (independent) OBIA classifications.

2. Material and Methods

2.1. Study Area and Data

For the creation of the dwelling template library, subsets of sixteen VHR optical image scenes were used. The spatial resolution ranges between 0.5 m and 0.6 m ground sample distance (GSD) for pan-sharpened VHR data. For testing purposes, the subsets cover parts of different camps and two small towns, all located in Eastern Africa. The distribution of the sites is shown in Figure 3. Existing semi-automated dwelling classifications were provided by the Department of Geoinformatics—Z_GIS, University of Salzburg, within an operational service for Médecins Sans Frontières (MSF) for three sites (El Redis, Sudan, 3 December 2015; Yida, South Sudan, 10 December 2012; Yida, 4 March 2013; Table 2) in order to compare the outcome of the dwelling template library with existing and verified results.

2.2. Template Matching Library

One of the objectives of this work was to create a template library for camp dwellings that can be integrated into existing workflows for dwelling extraction and is applicable to different kinds of dwellings. The library is best suited for satellite images with a non-zenith sun position, where the depicted objects cast a shadow depending on their height. The samples for the templates were taken from different camps (Figure 3) to capture the variety of dwelling types at different locations, at different acquisition dates, and under different weather conditions, such as rainy and dry seasons or the situation after a sandstorm, as well as different sun azimuth angles. The library is most suitable for resolutions between 0.5 and 0.6 m GSD, but it can also be scaled to other image resolutions. The workflow applied for creating a template and integrating it into the library is presented in Figure 4.
The template library is structured according to the properties of the templates (color intensity, dwelling shape, size, and shadow direction), supporting several combinations of a dwelling type and its cast shadow. Moreover, new dwelling templates can be added to the library to customize it to the dwelling arrangement at hand. The naming convention follows this logic, allowing the right dwelling templates to be found without screening all available template images. The following subsections describe the structure of the template library in detail.
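As an illustration of how such a library could be organized programmatically, the sketch below encodes the template properties described above in a small data structure and filters the library by them; the class, the attribute names, and the example file name are hypothetical and do not reflect the library's actual file format.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class DwellingTemplate:
    name: str                 # hypothetical file name, e.g. "bright_rectangular_1000.tif"
    intensity: str            # "bright" or "dark"
    shape: str                # "square", "rectangular" or "circular"
    shadow_code: str          # four digits, clockwise from the top, "1" = shadow (cf. Figure 5)
    size_px: Tuple[int, int]  # template height and width in pixels

def select_templates(library: List[DwellingTemplate], intensity: str, shape: str) -> List[DwellingTemplate]:
    """Return only the templates matching the dwelling types observed in the image,
    so that not all available template images have to be screened."""
    return [t for t in library if t.intensity == intensity and t.shape == shape]
```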

2.2.1. Dwelling Shape

Small objects in a satellite image, such as camp dwellings, appear as if the image had been acquired from the nadir position, even though VHR imagery is usually acquired off-nadir; the lean effect is only visible on taller buildings [34]. Therefore, only the dwelling roofs are visible, and dwelling shapes are reduced to two-dimensional geometric shapes. Camp dwellings typically have rather simple shapes; the generic types are quadratic or rectangular cuboids and cones. The template library comprises the following three dwelling shapes: square, rectangular, and circular.

2.2.2. Template Size

The template size is defined by the height and width of a rectangle. Templates should cover the dwelling, the cast shadow, and a reasonable share of the surrounding area.

2.2.3. Dwelling Brightness

The templates were calculated on one image band only, i.e., using grey-scale intensities. The template library is divided into two intensity levels: strong intensity (bright dwellings) and weak intensity (dark dwellings). Empirical testing was performed to compare the correlation of the samples of a given template with the pan-sharpened multi-spectral bands (blue, green, red, and NIR). For bright dwellings, the contrast to the surrounding areas proved best in the blue band; for dark dwellings, the best results were achieved using the near-infrared (NIR) band.
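A minimal sketch of this band choice is given below; it assumes a pan-sharpened blue-green-red-NIR stack stored as a (rows, cols, bands) array, and the band order is an assumption that has to be adapted to the actual imagery.

```python
import numpy as np

BAND_INDEX = {"blue": 0, "green": 1, "red": 2, "nir": 3}   # assumed band order

def matching_band(image_stack: np.ndarray, template_intensity: str) -> np.ndarray:
    """Bright-dwelling templates are matched on the blue band, dark-dwelling
    templates on the NIR band (the empirical choice reported above)."""
    band = "blue" if template_intensity == "bright" else "nir"
    return image_stack[:, :, BAND_INDEX[band]].astype(np.float64)
```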

2.2.4. Shadow Direction

A further property is the shadow cast by a dwelling. It determines the rotation angle of the samples and depends on the shape of the object. Objects in optical VHR satellite images taken in daylight, with the sun as the only relevant source of light, cast simple shadows. For the simplified shapes, cones either cast a visible shadow or none, while rectangular cuboids have three options: no shadow, a shadow on one of the four sides, or shadows on two sides.
Corresponding to the shadow, objects with a circular layout require only one template, square dwellings require two templates, and dwellings with a rectangular layout require three templates, because the shadow can be on one side only, on the short and the long side, or on the long and the short side, which cannot be resolved by rotation.
The library is divided into the following main template classes (Figure 5): (a) dwellings with strong intensity (structures appear bright) and dwellings with low intensity (structures appear dark or grey); (b) dwellings of different shapes that can be a square, rectangular, or circular; (c) the direction of the shadow cast by the dwellings, which can be on one side or on two sides.
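The shadow direction can be stored compactly with the four-digit code used in the template names (cf. Figure 5: counted clockwise from the top, 1 = shadow, 0 = no shadow). The small helper below, which reads the digits as top-right-bottom-left and assumes a clockwise template rotation, only illustrates how such a code changes under 90° rotation steps.

```python
def rotate_shadow_code(code: str, steps: int) -> str:
    """Rotate a four-digit shadow code (top, right, bottom, left) by steps * 90° clockwise."""
    steps %= 4
    return code[-steps:] + code[:-steps] if steps else code

# "1001" (shadow on top and left) rotated by 90° clockwise becomes "1100" (shadow on top and right).
assert rotate_shadow_code("1001", 1) == "1100"
```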

2.3. Application of the Template Matching Library

The application of the templates from the library is an interactive but straightforward process carried out within the eCognition (Trimble Geospatial) OBIA software environment. The template matching algorithm implemented in eCognition is the normalized cross-correlation [35]. The choice of the right templates depends on the occurrence of dwelling types. The rotation steps of the templates can be adjusted; if, for example, four rotation steps are applied, the rotation angles of the template are 0°, 90°, 180°, and 270°. A suitable rotation angle is always the prevailing shadow orientation of dwellings in the image. A second parameter is the correlation threshold: if the correlation between the template and the image at a given location is higher than the predefined threshold, a point (a local maximum within a given neighborhood) is created in addition to the correlation layer to represent a dwelling match. The resulting point layer contains numerous false positives, which can be reduced by adjusting the correlation thresholds and filtering for rotation angles of the templates that correspond to the current shadow direction. The computation time depends on the size of the image and on the number of rotation steps selected (Table 1).
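The sketch below mimics these parameters (number of rotation steps, correlation threshold, local maxima within a neighborhood) outside of eCognition, reusing the ncc_surface() helper sketched in the Introduction and restricting itself to exact 90° rotation steps; the threshold and neighborhood size are arbitrary example values, and the sketch does not reproduce the software's actual implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def template_matches(band, template, threshold=0.7, rotation_steps=4, neighborhood=11):
    """Return candidate dwelling points (row, col, angle in degrees) where the
    correlation exceeds the threshold and is a local maximum in the neighborhood."""
    hits = []
    for k in range(min(rotation_steps, 4)):      # this sketch supports 90° steps only
        rotated = np.rot90(template, k)          # 0°, 90°, 180°, 270°
        corr = ncc_surface(band, rotated)        # helper from the earlier sketch
        peaks = (corr == maximum_filter(corr, size=neighborhood)) & (corr > threshold)
        rows, cols = np.nonzero(peaks)
        hits += [(int(r), int(c), 90 * k) for r, c in zip(rows, cols)]
    return hits
```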

2.4. Integration of Template Matching in an Object-Based Image Analysis Workflow

In order to overcome some of the inherent limitations of template matching (dual or multiple matches, ambiguous targets) that inevitably lead to a large number of false positives, we use a stratification strategy. In other words, we integrate the developed template library in an OBIA workflow (Figure 6) using the following steps:
(1) The area of interest, in this case the camp extent, is derived using an initial rough OBIA classification of the dwellings and a dwelling density calculation [7] in order to stratify the template matching to the camp area only and to save computation time (a minimal sketch of this stratification step is given after this list);
(2) The template library is applied; the templates and their rotations are selected according to a visual inspection of the dwelling types and shadow directions;
(3) The segmentation of the satellite image is improved by including the correlation layer of the template matching in the segmentation process;
(4) Areas that are not relevant for the analysis, such as vegetation and bare soil (based on spectral properties) and elongated structures such as fences or walls (based on geometrical properties), are excluded (masked); see [4,7,17] for a detailed description of the algorithms. This helps to minimize false positives created in the template matching process;
(5) The image objects are classified in two steps: (i) classification of dwellings using the template point layer and (ii) additional classification of dwellings by applying OBIA methods (objects that do not match a template but show other general characteristics of dwellings regarding spectral, form, and spatial features).
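The stratification in step (1), i.e., restricting the matching to the camp extent, can be sketched as follows; the density-based camp mask is only a crude stand-in for the OBIA-based camp-extent delineation of [7], and the radius and count thresholds are arbitrary example values.

```python
import numpy as np

def camp_mask_from_density(initial_dwellings, shape, radius_px=100, min_count=3):
    """Crude camp-extent mask: pixels with at least `min_count` initially classified
    dwellings within `radius_px` pixels (a stand-in for the dwelling density
    calculation based on the rough OBIA classification)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    count = np.zeros(shape, dtype=int)
    for r, c in initial_dwellings:
        count += ((yy - r) ** 2 + (xx - c) ** 2) <= radius_px ** 2
    return count >= min_count

def stratify(candidates, camp_mask):
    """Keep only template matching candidates (row, col, angle) that lie inside the camp area."""
    return [(r, c, a) for r, c, a in candidates if camp_mask[r, c]]
```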

3. Results and Discussion

3.1. Results for the Three Test Images

The template library was applied to three images showing different levels of complexity in terms of dwelling extraction. The camps under investigation were El Redis in Sudan and Yida in South Sudan (Table 2); the latter was captured at two points in time. The provided images cover parts of the Yida camp and the complete El Redis site. In the following, the results of three different methods are presented: (1) grey-scale template matching using the template library (TM); (2) the template library for dwelling extraction within an object-based framework (TMOB); (3) pre-existing OBIA-only classifications (OB).
Applying more than one template sometimes creates several hits on one dwelling. To obtain a comparable result for the TM method, multiple hits were not counted.
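A simple way to enforce this single-count rule programmatically is sketched below: candidate points that fall closer to an already accepted point than a minimum distance are treated as hits on the same dwelling and counted only once (the distance value is an arbitrary example).

```python
def deduplicate(points, min_dist=8.0):
    """Collapse multiple template hits on the same dwelling into a single point."""
    kept = []
    for r, c, *_ in points:
        if all((r - kr) ** 2 + (c - kc) ** 2 >= min_dist ** 2 for kr, kc in kept):
            kept.append((r, c))
    return kept
```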
The complexity of the dwelling extraction for the El Redis camp image, acquired 12 March 2015, can be considered low (see the subset of the camp image in Figure 3). The camp mostly consists of bright dwellings, which can be clearly distinguished from the surrounding areas; some darker dwellings are harder to distinguish. The structure of the camp is quite clear. There is almost no vegetation, such as trees and bushes, that could be mistaken for dwellings within the camp area, and fences, which can look like the shadow cast by a dwelling, rarely occur.
Without a differentiation between dwelling types, the application of the TM approach resulted in the highest number of detected dwellings (2159; for details, see Figure 7), which was expected since TM methods are prone to high numbers of false positives [20]. When integrated into an object-based framework (TMOB approach), the number of classified dwellings was reduced to 1542. In comparison, the existing OBIA classification (OB approach) detected 1426 dwellings. When differentiating between dwelling types, the OB and TMOB approaches show similar distributions (Figure 7). Overall, the new TMOB approach detects 116 dwellings more than the pre-existing classification in the same area.
A reason for the difference is that the pre-existing OBIA classification lacks the capability of creating discrete segments for dwellings that are built close to each other; thus, a segment can include more than one object. If the area of such a segment is greater than the threshold for small structures, it is classified as one large structure instead of two or more smaller structures. The new approach mitigates this shortcoming by integrating the correlation layer in the segmentation process.
The complexity of the Yida image, acquired on 12 October 2012, is classed as moderate (see Figure 8, left). It consists mostly of bright and blue dwellings, typically built with local material and covered by white or blue plastic sheets in the rainy season [36]. Bright structures can be clearly distinguished from the surrounding areas, whereas darker structures are more difficult to detect; the detection rate of blue dwellings depends on the intensity and similarity of the surrounding areas. The camp structure can be described as somewhat chaotic, because the development of the camp in its initial phase was not managed by a humanitarian relief organization [37]. Most of the dwellings are located in a dispersed manner, and between these structures, trees and other vegetation such as grassland as well as rust-colored earthen patches can be found. All of this increases the complexity of dwelling extraction, especially if structures are partially hidden by tree crowns [33] or look similar to the surrounding areas. The image was taken at the end of the rainy season in October; therefore, the vegetation is lush and the dwelling roofs are less dusty, which increases the contrast between the dwellings and the surrounding areas. Occasionally, fences occur that appear similar to the shadows cast by dwellings. Overall, the TM approach detected 5447 dwellings in this image; the TMOB approach extracted 4674 dwellings.
In comparison, the pre-existing OB classification again obtained the lowest detection rate with a total of 4430 dwellings (Figure 7). The differences in bright and blue dwellings result from slightly different thresholds that distinguish these structures. Overall, the TMOB approach detects 217 dwellings more than the pre-existing OBIA classification in the same area.
The third image, also covering the Yida camp, was taken six months later, in early April 2013. The different pixel dimensions of the subsets (Table 2) are due to the different GSDs of the sensors (QuickBird 0.6 m GSD vs. WorldView-2 0.5 m). The scene was chosen in order to compare the effects of the dry versus the rainy season. There were only minor changes in terms of the camp structure, distribution of dwellings, and occurrence of fences. However, the image was taken at the end of the dry season, which increases the complexity of the dwelling extraction to high: dwelling roofs look dusty and are similar to the surroundings, where mainly rust-colored earth and some trees can be found (Figure 8, right). Overall, there is less contrast between the image objects to be distinguished.
Here, the TM approach obtained 4789 dwellings. The pre-existing OB classification detected 4429 dwellings. In comparison, TMOB yielded a total of 5532 dwellings. Broken down by classes, the OB obtains a considerably lower rate for bright and brown dwellings compared to the TMOB approach (Figure 7). Overall, 1103 additional dwellings were extracted by TMOB compared to the OB approach.
It needs to be emphasized that this time the TM approach detected fewer dwellings than the combined TMOB approach. Most likely, the applied templates did not cover the whole variety of occurring dwellings—especially the dusty conditions caused by the dry season resulted in a limited distinction between bright and dark structures, hampering template matching using grey level based templates only.

3.2. Accuracy Assessment

In general, the success of a method coincides with its applicability and accuracy. In order to verify the latter, the results from the three test sites were compared with a visual interpretation based on the satellite images. The visual interpretation was performed on randomly selected 200 m × 200 m sample areas. In the El Redis camp, two sample areas were created because of the small camp size; in the Yida camp, five sample areas were created (Figure 9). The sample areas in the Yida camp are the same for both years (2012 and 2013), which allows a detailed comparison of the development in these parts of the camp. The visual interpretation resulted in a point file containing every dwelling occurring in the sample areas, including the dwelling type as an attribute.
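The accuracy measures reported below follow the usual point-based definitions, summarized in the following sketch, which assumes counts of true positives (TP), false positives (FP), and the number of reference dwellings from the visual interpretation; the example values are the El Redis TMOB totals from Table 3.

```python
def users_accuracy(tp: int, fp: int) -> float:
    """Share of extracted dwellings that are correct (commission side)."""
    return 100.0 * tp / (tp + fp)

def producers_accuracy(tp: int, reference: int) -> float:
    """Share of reference dwellings that were found (omission side)."""
    return 100.0 * tp / reference

# El Redis, TMOB (Table 3): TP = 598, FP = 14, reference = 657
print(round(users_accuracy(598, 14), 1), round(producers_accuracy(598, 657), 1))   # 97.7 91.0
```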

3.2.1. Accuracy Assessment: Not Differentiated between Dwelling Types

For El Redis, the visual interpretation revealed 657 dwellings within the randomly selected sample areas. Results are shown in Figure 10. TM shows a user’s accuracy (UA) of approximately 74% and a producer’s accuracy (PA) of 92%. TMOB extracted the same number of correctly detected dwellings (same PA), but the user’s accuracy increased to approximately 98% due to the reduction of false positives. In comparison, the pre-existing OB classification already has a very high UA of 96% but a lower PA of 83% compared to the other approaches (Figure 10).
For the image of the Yida camp, captured in October 2012, visual interpretation detected 587 dwellings within the sample areas. Applying the TM approach, a UA of 75% and a PA of 70% was reached. TMOB increased the correctly detected structures, resulting in a PA of 77%. False positives are strongly reduced, which is reflected in the enhanced UA of 93%. For comparison, the pre-existing OB classification extracted 397 dwellings with a high UA of 90% and a PA of 68% (Figure 10).
For the same area, but based on the image acquired six months later, the visual interpretation revealed 504 dwellings. TM shows a UA of 65% and a PA of 49%. TMOB increased the extracted number of dwellings and hence also the PA to 74%. Moreover, the UA rises to 77%. The pre-existing OB classification correctly extracted 284 dwellings with a UA of 78% and a quite low PA of 56% (Figure 10).
Across all test sites, the TMOB method increases the average PA by 11.7 percentage points and slightly increases the UA by 0.9 percentage points compared to pre-existing OB classifications (average over all test sites, not differentiated by dwelling type). This corresponds to the initial problem description that OB methods are good in classifying objects if the initial segmentation works well (high contrast of dwellings and surroundings), which leads to the already quite high UA reported. The increased PA of the TMOB method means that the incorporation of the shadow effect of dwellings can improve the dwelling detection rate in an OBIA workflow, thereby improving the segmentation and dwelling detection rate.

3.2.2. Accuracy Assessment: Differentiated between Dwelling Types

Differentiation between dwelling types was conducted for the TMOB and the pre-existing OB classifications only. The reason is that the implemented template matching method frequently results in multiple hits per dwelling: a dark dwelling template can create matches both for structures appearing dark and for structures that appear brighter (but with similar contrast), and vice versa for a bright dwelling template. Consequently, no precise and accurate distinction of dwelling types could be made without intensive post-processing.
Differentiated between dwelling types, TMOB correctly detected 598 dwellings in the El Redis camp test site (for details see Table 3). The UA and PA decreased by less than 1% compared to the result without the differentiation between dwelling types. Looking at the single classes, the PA is very high for bright dwellings (95%) and large structures (100%) but only moderate for brown dwellings (66%). The overall accuracy reduction of the OB classification compared to the result without the differentiation between dwelling types is also less than 1%. It is important to mention the high PA for bright dwellings (90%) and large structures (100%), while there is a very low value for brown dwellings (34%), which are detected much better by the TMOB approach. The low accuracy values for brown dwellings hardly influence the overall accuracy of either classification because their share is only approximately 15%.
For the image of the Yida camp (October 2012), TMOB correctly extracted 452 dwellings, which is fewer than without the distinction between dwelling types (see Table 4); accordingly, the UA and PA are reduced (UA: 91%, PA: 75%). The class-specific UA and PA again depend strongly on the dwelling type, with higher values for the bright dwelling types. For comparison, the overall UA is 88% and the PA is 66% for the OB classification, and its class-specific UA and PA show a similar pattern. Interestingly, the TMOB method detects blue dwellings much better than the OB classification.
For the second image taken of the Yida camp (April 2013), showing the most complex situation between the three test sites, the TMOB method correctly detects 344 dwellings (differentiated by type, see Table 5). The PA drops to 68% and the UA to 71%. A detailed look at the class-specific accuracy again reveals that darker structures, usually with a lower contrast to the surroundings, are harder to extract. The OB classification extracted 238 dwellings in total, with a UA of 65% and a PA of 47%. The class-specific accuracy shows that the detection rate for all classes apart from large structures is low in comparison to the TMOB approach. For both Yida test sites, the accuracy using the OB method is quite weak for the large structures, which is most likely a problem of the object generation in the segmentation process. If the spectral difference of dwellings and their surroundings is weak, or if the dwellings are directly neighboring each other, the segmentation may result in one segment instead of several and the size of the delineated object is biased towards a detection of larger dwellings. The integration of the template matching correlation layer in the segmentation step shows an improvement of the results.
Figure 11 presents the accuracy differentiated by the dwelling type summarized across all test sites; total accuracy values are slightly lower than for the calculation without a differentiation of dwelling types (e.g., blue dwellings recognized as dark dwellings are counted as errors). The figure gives a good impression of how the TMOB approach is especially helpful in improving the detection of additional dark dwellings and other non-bright structures, resulting in a considerably increased PA (by 12.4 percentage points).

4. Conclusions

The main finding of this study is that template matching is suitable for dwelling extraction in refugee and IDP camps using VHR satellite imagery, in particular when stratified by means of OBIA. Several reasons account for combining template matching with object-based image analysis methods. First, the application of two or more templates may result in dual or multiple hits on the same object, leading to double (or multiple) counting. Second, the point layers created by template matching contain many false positives, which results in a low UA, a typical drawback of template matching methods. Third, the templates in the library do not cover all possible types of dwellings. These limitations can be overcome by stratifying the template matching with the help of OBIA methods. Multiple counting of a dwelling can be avoided by segmentation and object resizing using context information. False alarms caused by, for example, trees and bushes can be masked as vegetated area, and the template matching can be restricted to the camp area only. This can enhance the UA of template matching considerably, as shown in this study.
Incorporating the shadow effect of dwellings into the templates helped to improve the detection rate in complex camp situations. Furthermore, the orientation of a template caused by the shadow can be used as a clear identifier for a dwelling. This is an advantage compared to conventional object-based workflows for dwelling extraction, where it is difficult to take the shadows of dwellings into account.
We could show that the combined approach for dwelling extraction in refugee camps, stratifying the template matching within an initial object-based analysis, increases the average PA by 11.7 percentage points and slightly increases the UA by 0.9 percentage points compared to pre-existing conventional OBIA classifications (average over all test sites, not differentiated by dwelling type). This also holds true if the accuracy assessment is further differentiated into different dwelling types.
These results show that the integration of template matching in OBIA workflows is able to further improve the results of the current state-of-the-art methods for semi-automated dwelling extraction, especially in complex situations. In particular, we conclude that:
(i)
The extraction rate in difficult (e.g., low contrast, dense dwellings) situations can be improved by incorporating the shadow effect of a dwelling in a template library;
(ii)
It is possible to establish a general template matching library for dwellings to be applied in similar conditions;
(iii)
The combination of template matching with OBIA methods (stratification) can enhance the accuracy of dwelling extraction compared to template matching solely.
Limitations of this template matching implementation include the missing support for multiband images, the fact that the templates are not scale invariant, and that the rotation angle could not be set directly but only via the number of rotation steps, which increases the computation time through the redundant application of templates.
The accuracy of the greyscale template matching can also be increased if a multiband template matching approach is used. First tests in the software Ciratefi (v.1.05) revealed quite promising results, but the combination with OBIA workflows is still limited due to the lack of geospatial data support and reduced radiometric depth allowed per image band. For a multiband approach, the template library also needs to be extended, which could hamper the general applicability along with increasing computation time.

Acknowledgments

The research received funding from the Austrian Research Promotion Agency (FFG) under the Austrian Space Application Programme (ASAP) within the projects EO4HumEn and EO4HumEn+ (EO-based services to support humanitarian operations: monitoring population and natural resources in refugee/IDP camps; contract no: 840081/854041) and from the Karl-Kahane foundation.

Author Contributions

D.T. had the initial research idea, was involved in the research design and research supervision, contributed to revising and improving the presented research, and wrote significant parts of the manuscript. P.K. conducted major parts of the research, drafted most of the figures, and wrote significant parts of the manuscript. P.F. was involved in the research design and was mainly responsible for the provision of the reference data. She contributed to the implementation of the research as well as to writing the manuscript. S.L. was involved in the research design, the provision of the reference data, and in the revision of the final manuscript. All four authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. UNHCR Global Trends. Forced Displacement in 2015. Available online: http://www.unhcr.org/news/latest/2016/6/5763b65a4/global-forced-displacement-hits-record-high.html (accessed on 26 December 2016).
  2. Füreder, P.; Lang, S.; Rogenhofer, E.; Tiede, D.; Papp, A. Monitoring Displaced People in Crisis Situations Using Multi-temporal VHR Satellite Data During Humanitarian Operations in South Sudan. In Proceedings of the GI_Forum 2015—Geospatial Minds for Society, Salzburg, Austria, 7–10 July 2015; pp. 391–401. [Google Scholar]
  3. UNHCR Mid-Year Trends 2015. Available online: http://www.unhcr.org/statistics/unhcrstats/56701b969/mid-year-trends-june-2015.html (accessed on 26 December 2016).
  4. Lang, S.; Tiede, D.; Hölbling, D.; Füreder, P.; Zeil, P. Earth observation (EO)-based ex post assessment of internally displaced person (IDP) camp evolution and population dynamics in Zam Zam, Darfur. Int. J. Remote Sens. 2010, 31, 5709–5731. [Google Scholar] [CrossRef]
  5. Grundy, C.; Füreder, P.; Siddiqui, R.; Katsuva Sibongwe, D.; Tiede, D.; Lang, S.; Checci, F. Validation of satellite imagery methods to estimate population size. In MSF Scientific Day, 25 May 2012; MSF Association: London, UK, 2012. [Google Scholar]
  6. Knoth, C.; Pebesma, E. Detecting dwelling destruction in Darfur through object-based change analysis of very high-resolution imagery. Int. J. Remote Sens. 2017, 38, 273–295. [Google Scholar] [CrossRef]
  7. Tiede, D.; Füreder, P.; Lang, S.; Hölbling, D.; Zeil, P. Automated Analysis of Satellite Imagery to provide Information Products for Humanitarian Relief Operations in Refugee Camps—From Scientific Development towards Operational Services. PFG Photogramm. Fernerkund. Geoinf. 2013, 2013, 185–195. [Google Scholar] [CrossRef]
  8. Kemper, T.; Jenerowicz, M.; Pesaresi, M.; Soille, P. Enumeration of dwellings in darfur camps from GeoEye-1 satellite images using mathematical morphology. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 8–15. [Google Scholar] [CrossRef]
  9. Checchi, F.; Stewart, B.T.; Palmer, J.J.; Grundy, C. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations. Int. J. Health Geogr. 2013, 12, 12. [Google Scholar] [CrossRef] [PubMed]
  10. Lang, S.; Füreder, P.; Kranz, O.; Card, B.; Roberts, S.; Papp, A. Humanitarian emergencies: Causes, traits and impacts as observed by remote sensing. In Remote Sensing Handbook, Vol III—Water Resources, Disasters, and Urban; Thenkabail, P.S., Ed.; Taylor and Francis: New York, NY, USA, 2015; pp. 483–512. [Google Scholar]
  11. Giada, S.; De Groeve, T.; Ehrlich, D.; Soille, P. Information extraction from very high resolution satellite imagery over Lukole refugee camp, Tanzania. Int. J. Remote Sens. 2003, 24, 4251–4266. [Google Scholar] [CrossRef]
  12. Wang, S.; So, E.; Smith, P. Detecting tents to estimate the displaced populations for post-disaster relief using high resolution satellite imagery. Int. J. Appl. Earth Obs. Geoinf. 2015, 36, 87–93. [Google Scholar] [CrossRef]
  13. Witmer, F.D.W. Remote sensing of violent conflict: Eyes from above. Int. J. Remote Sens. 2015, 36, 2326–2352. [Google Scholar] [CrossRef]
  14. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  15. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  16. Spröhnle, K.; Tiede, D.; Schoepfer, E.; Füreder, P.; Svanberg, A.; Rost, T. Earth Observation-Based Dwelling Detection Approaches in a Highly Complex Refugee Camp Environment—A Comparative Study. Remote Sens. 2014, 6, 9277–9297. [Google Scholar] [CrossRef]
  17. Füreder, P.; Tiede, D.; Lüthje, F.; Lang, S. Object-based dwelling extraction in refugee/IDP camp—Challenges in an operational mode. South-Eastern Eur. J. Earth Obs. Geomat. 2014, 3, 539–543. [Google Scholar]
  18. Brunelli, R. Template Matching Techniques in Computer Vision: Theory and Practice; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2009; p. 338. [Google Scholar]
  19. Moon, T.K.; Stirling, W.C. Mathematical Methods and Algorithms For Signal Processing; Pearson: London, UK, 2000; Volume 204. [Google Scholar]
  20. Cheng, G.; Han, J. A Survey on Object Detection in Optical Remote Sensing Images. ISPRS J. Photogramm. Remote Sens. 2016, 117, 11–28. [Google Scholar] [CrossRef]
  21. Kim, T.; Park, S.S.; Kim, M.M.; Jeong, S.; Kim, K. Tracking road centerlines from high resolution remote sensing images by least squares correlation matching. Photogramm. Eng. Remote Sens. 2004, 70, 1417–1422. [Google Scholar] [CrossRef]
  22. Heid, T.; Kääb, A. Evaluation of existing image matching methods for deriving glacier surface displacements globally from optical satellite imagery. Remote Sens. Environ. 2012, 118, 339–355. [Google Scholar] [CrossRef]
  23. Debella-Gilo, M.; Kääb, A. Sub-pixel precision image matching for measuring surface displacements on mass movements using normalized cross-correlation. Remote Sens. Environ. 2011, 115, 130–142. [Google Scholar] [CrossRef]
  24. Schubert, A.; Faes, A.; Kääb, A.; Meier, E. Glacier surface velocity estimation using repeat TerraSAR-X images: Wavelet- vs. correlation-based image matching. ISPRS J. Photogramm. Remote Sens. 2013, 82, 49–62. [Google Scholar] [CrossRef]
  25. Liu, G.; Sun, X.; Fu, K.; Wang, H. Aircraft recognition in high-resolution satellite images using coarse-to-fine shape prior. IEEE Geosci. Remote Sens. Lett. 2013, 10, 573–577. [Google Scholar] [CrossRef]
  26. Bandeira, L.; Saraiva, J.; Pina, P. Impact crater recognition on mars based on a probability volume created by template matching. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4008–4015. [Google Scholar] [CrossRef]
  27. Olofsson, K.; Wallerman, J.; Holmgren, J.; Olsson, H. Tree species discrimination using Z/I DMC imagery and template matching of single trees. Scand. J. For. Res. 2006, 21, 106–110. [Google Scholar] [CrossRef]
  28. Gomes, M.F.; Maillard, P. Identification of Urban Tree Crown in a Tropical Environment Using WorldView-2 Data: Problems and Perspectives. In Proceedings of the SPIE—The International Society for Optical Engineering, Dresden, Germany, 23 September 2013. [Google Scholar]
  29. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 2011, 32, 4725–4747. [Google Scholar] [CrossRef]
  30. Erikson, M.; Olofsson, K. Comparison of three individual tree crown detection methods. Mach. Vis. Appl. 2005, 16, 258–265. [Google Scholar] [CrossRef]
  31. Shafri, H.Z.M.; Hamdan, N.; Saripan, M.I. Semi-automatic detection and counting of oil palm trees from high spatial resolution airborne imagery. Int. J. Remote Sens. 2011, 32, 2095–2115. [Google Scholar] [CrossRef]
  32. Laneve, G.; Santilli, G.; Lingenfelder, I. Development of Automatic Techniques for Refugee Camps Monitoring using Very High Spatial Resolution (VHSR) Satellite Imagery. In Proceedings of the IEEE International Conference on Geoscience and Remote Sensing Symposium, Denver, CO, USA, 31 July–4 August 2006; pp. 841–845. [Google Scholar]
  33. Lüthje, F.; Tiede, D.; Füreder, P. Don’t See the Dwellings for the Trees: Quantifying the Effect of Tree Growth on Multi-temporal Dwelling Extraction in a Refugee Camp. In Proceedings of the GI_Forum 2015—Geospatial Minds for Society, Salzburg, Austria, 7–10 July 2015; pp. 406–415. [Google Scholar]
  34. Baltsavias, E.P.; Gruen, A.; VanGool, L. Automatic Extraction of Man-Made Objects from Aerial and Satellite Images III; Verità, M., Ed.; Taylor & Francis: New York, NY, USA, 2001. [Google Scholar]
  35. Lewis, J.P. Fast Normalized Cross-Correlation. Vis. Interface 1995, 10, 1–7. [Google Scholar]
  36. United Nations High Commissioner for Refugees. UNHCR Statistical Yearbook 2012, 12th ed.; United Nations High Commissioner for Refugees: Geneva, Switzerland, 2013; p. 172. [Google Scholar]
  37. Nubareports.org. On the Move Again; 70,000 Refugees Compelled to Leave Camp | Nuba Reports. Available online: http://nubareports.org/on-the-move-again-70000-refugees-compelled-to-leave-camp/ (accessed on 26 December 2016).
Figure 1. Top: QuickBird 2 subsets of the Yida camp area (acquisition date: 12 October 2012, band combination NIR-R-G (near infrared-red-green)) showing initial image segmentation of two bright dwellings and the cast shadows. Bottom: dark dwellings with very low contrast to the surroundings, where segmentation fails in extracting meaningful objects (Yida camp, 4 March 2013, WorldView-2, R-G-B (red-green-blue)).
Figure 2. Examples of typical dwelling types detected in refugee and IDP (internally displaced persons) camps in Eastern Africa. Source: [2], adapted.
Figure 3. Spatial distribution of the test sites used for the creation of the dwelling template library and the dwelling extraction analyses (band combination of images: R-G-B). The smaller scale bar is valid for the depicted image subsets.
Figure 4. Generalized workflow of template generation applied in this study.
Figure 5. Examples of different template classes and their structural properties, such as shadow direction, shape, type, and size of dwellings, in the template library. In this visualization, the shadows are simulated for the location of the camp Yida, which is located in South Sudan, to illustrate how different sun angles can influence the direction of the shadow. The aerial view is north-oriented while the 45° view is southwest-oriented. The orange arrow shows the applied rotation angle and the green rectangle indicates the (relative) size of the samples for a dwelling template. The shadow direction is encoded within the name of the template (e.g., 1001 shows a shadow on two sides. 1 stands for shadow and 0 for no shadow. The cast shadow is counted clockwise starting at the top).
Figure 6. Integrated workflow combining object-based image analysis (OBIA) and template matching for dwelling extraction.
Figure 7. Number of detected dwellings for the three test sites. The template matching (TM) approach was applied without differentiation into dwelling types.
Figure 8. Comparison of the same area of the Yida camp (80 m × 40 m) between October 2012 (left column, QuickBird) and April 2013 (right column, WorldView-2). In the rainy season (left), dwellings show higher contrast to their surroundings compared to the dry season (right). The top row shows the true color image (band combination R-G-B) and the bottom row shows the blue band in grayscale, which was used for the template matching of bright dwellings.
Figure 9. El Redis camp (2015, band combination R-G-B, left) and Yida camp (2012, band combination R-G-B, right) with 200 m × 200 m squares randomly selected for visual interpretation and accuracy assessment.
Figure 10. Accuracy assessment for the randomly selected sample areas. Dwellings are not differentiated by type. UA = User’s accuracy, PA = Producer’s accuracy.
Figure 11. Accuracy assessment for randomly selected sample areas summarized across all test sites. Dwellings are differentiated by three types (bright, dark, and other structures). UA = User’s accuracy, PA = Producer’s accuracy.
Table 1. Computation time of the template matching implementation for different image sizes and different numbers of rotation steps (time in seconds on a standard PC).
Image Size (Pixel) | Number of Template Rotations
                   | 1    | 2    | 4     | 8     | 16     | 32     | 64
6400 × 6400        | 3.21 | 5.98 | 11.05 | 99.16 | 275.14 | 653.06 | 1433.05
3200 × 3200        | 1.16 | 1.56 | 2.75  | 25.80 | 70.156 | 154.09 | 332.72
1600 × 1600        | 0.22 | 0.36 | 0.61  | 5.14  | 14.219 | 32.75  | 69.69
800 × 800          | 0.08 | 0.11 | 0.19  | 1.22  | 3.19   | 7.16   | 15.02
Table 2. Sites to which the template library is applied including information about acquisition date, size, and complexity regarding dwelling extraction.
Site     | Acquisition Date | Size (Pixel) | Sensor | Complexity
El Redis | 3 December 2015  | 1657 × 1658  | WV-2   | Low
Yida     | 10 December 2012 | 2698 × 2337  | QB     | Moderate
Yida     | 4 March 2013     | 3237 × 2805  | WV-2   | High
Table 3. Accuracy assessment for randomly selected sample areas in the El Redis camp. The dwellings are differentiated by type. Number (No.) of true positives (TP) and false positives (FP), user’s accuracy (UA), producer’s accuracy (PA), template library within an object-based framework (TMOB), and pre-existing OBIA classification (Pre-existing OB).
El Redis 2015   |          | Visual Interpretation | TMOB | Pre-Existing OB
Bright dwelling | TP (No.) | 548  | 522  | 491
                | FP (No.) | 0    | 7    | 1
                | UA (%)   | 100  | 98.7 | 99.8
                | PA (%)   | 100  | 95.3 | 89.6
Brown dwelling  | TP (No.) | 96   | 63   | 33
                | FP (No.) | 0    | 7    | 22
                | UA (%)   | 100  | 90   | 60
                | PA (%)   | 100  | 65.6 | 34.4
Large structure | TP (No.) | 13   | 13   | 13
                | FP (No.) | 0    | 0    | 3
                | UA (%)   | 100  | 100  | 81.3
                | PA (%)   | 100  | 100  | 100
Total           | TP (No.) | 657  | 598  | 537
                | FP (No.) | 0    | 14   | 26
                | UA (%)   | 100  | 97.7 | 95.4
                | PA (%)   | 100  | 91   | 81.7
Table 4. Accuracy assessment for randomly selected sample areas in the Yida camp (2012). The dwellings are differentiated by type. Number (No.) of true positives (TP) and false positives (FP), user’s accuracy (UA), producer’s accuracy (PA), template library within an object-based framework (TMOB), and pre-existing OBIA classification (Pre-existing OB).
Yida 2012       |          | Visual Interpretation | TMOB | Pre-Existing OB
Bright dwelling | TP (No.) | 373  | 291  | 287
                | FP (No.) | 0    | 11   | 23
                | UA (%)   | 100  | 96.4 | 92.6
                | PA (%)   | 100  | 78   | 76.9
Blue dwelling   | TP (No.) | 211  | 147  | 101
                | FP (No.) | 0    | 34   | 26
                | UA (%)   | 100  | 81.2 | 79.5
                | PA (%)   | 100  | 69.7 | 47.9
Large structure | TP (No.) | 3    | 2    | 2
                | FP (No.) | 0    | 1    | 4
                | UA (%)   | 100  | 66.7 | 33.3
                | PA (%)   | 100  | 66.7 | 66.7
Total           | TP (No.) | 587  | 452  | 397
                | FP (No.) | 0    | 46   | 53
                | UA (%)   | 100  | 90.5 | 88
                | PA (%)   | 100  | 74.9 | 66.4
Table 5. Accuracy assessment for randomly selected sample areas in the Yida camp (2013). The dwellings are differentiated by type. Number (No.) of true positives (TP) and false positives (FP), user’s accuracy (UA), producer’s accuracy (PA), template library within an object-based framework (TMOB), and pre-existing OBIA classification (Pre-existing OB).
Yida 2013       |          | Visual Interpretation | TMOB | Pre-Existing OB
Bright dwelling | TP (No.) | 148  | 133  | 90
                | FP (No.) | 0    | 61   | 44
                | UA (%)   | 100  | 68.6 | 67.2
                | PA (%)   | 100  | 89.9 | 60.8
Brown dwelling  | TP (No.) | 311  | 185  | 123
                | FP (No.) | 0    | 58   | 45
                | UA (%)   | 100  | 76.1 | 73.2
                | PA (%)   | 100  | 59.5 | 39.6
Blue dwelling   | TP (No.) | 25   | 13   | 11
                | FP (No.) | 0    | 3    | 1
                | UA (%)   | 100  | 81.3 | 91.7
                | PA (%)   | 100  | 52   | 44
Large structure | TP (No.) | 17   | 10   | 12
                | FP (No.) | 0    | 8    | 23
                | UA (%)   | 100  | 55.6 | 34.3
                | PA (%)   | 100  | 58.8 | 70.6
Small structure | TP (No.) | 3    | 3    | 2
                | FP (No.) | 0    | 9    | 11
                | UA (%)   | 100  | 25   | 15.4
                | PA (%)   | 100  | 100  | 66.7
Total           | TP (No.) | 504  | 344  | 238
                | FP (No.) | 0    | 109  | 78
                | UA (%)   | 100  | 71.2 | 64.8
                | PA (%)   | 100  | 68.3 | 47.2

Share and Cite

MDPI and ACS Style

Tiede, D.; Krafft, P.; Füreder, P.; Lang, S. Stratified Template Matching to Support Refugee Camp Analysis in OBIA Workflows. Remote Sens. 2017, 9, 326. https://doi.org/10.3390/rs9040326


