Article

Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa

1 Department of Remote Sensing, University of Wuerzburg, Oswald-Külpe-Weg 86, 97074 Wuerzburg, Germany
2 Institute for Geography and Geology, University of Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany
3 Competency Center, West African Science Service Center on Climate Change and Adapted Land Use, Ouagadougou BP 9507, Burkina Faso
* Author to whom correspondence should be addressed.
Remote Sens. 2014, 6(7), 6472-6499; https://doi.org/10.3390/rs6076472
Submission received: 14 April 2014 / Revised: 7 July 2014 / Accepted: 8 July 2014 / Published: 15 July 2014

Abstract:
Crop mapping in West Africa is challenging, due to the unavailability of adequate satellite images (as a result of excessive cloud cover), small agricultural fields and a heterogeneous landscape. To address this challenge, we integrated high spatial resolution multi-temporal optical (RapidEye) and dual polarized (VV/VH) SAR (TerraSAR-X) data to map crops and crop groups in northwestern Benin using the random forest classification algorithm. The overall goal was to ascertain the contribution of the SAR data to crop mapping in the region. A per-pixel classification result was overlaid with vector field boundaries derived from image segmentation, and a crop type was determined for each field based on the modal class within the field. A per-field accuracy assessment was conducted by comparing the final classification result with reference data derived from a field campaign. Results indicate that the integration of RapidEye and TerraSAR-X data improved classification accuracy by 10%–15% over the use of RapidEye only. The VV polarization was found to better discriminate crop types than the VH polarization. The research has shown that if optical and SAR data are available for the whole cropping season, classification accuracies of up to 75% are achievable.

1. Introduction

In recent years, agricultural land use has experienced high expansion rates in many parts of the world [1]. This expansion is mainly due to high population growth (especially in developing countries) and the need to grow more food to meet the rising food demand. Accurate and up-to-date information on agricultural land use is essential to appropriately monitor these changes and assess their impacts on water and soil quality, biodiversity and other environmental factors at various scales [2–4]. This is particularly important considering the looming effects of climate change and variability. Updated information on agricultural land use can help in monitoring changes in cropping systems and in gauging farmers' reactions to the changing climate. Additionally, a wide range of biophysical and economic models can benefit from this information and improve decision-making based on their results.
Remotely sensed (RS) data provide useful information for agricultural land use mapping. Periodic acquisition of RS data enables analysis to be conducted at regular intervals, which aids in identifying changes. Optical systems, which have largely been relied upon for agricultural land use mapping [5,6], measure reflectance from objects in the visible and infrared portions of the electromagnetic spectrum. The amount of reflectance is a function of the bio-physical characteristics of the reflecting feature (e.g., canopy moisture, leaf area and level of greenness of vegetation). Since different crops at varying vegetative stages exhibit different bio-physical characteristics, optical images have been useful in previous crop mapping studies [7–9].
However, the reliance of optical systems on the Sun’s energy limits image acquisition in cloudy or hazy conditions. Images acquired during these periods are normally of little use in mapping due to high cloud/haze cover. Whereas on irrigated land under arid conditions, the entire growing period can be easily covered by optical data [10,11], agricultural land use mapping efforts in rainfed dominated agricultural regions, like West Africa (WA), are hampered, because the rainfall season coincides with the cropping season. Consequently, little or no in-season images are available for agricultural land use mapping, leading to challenges in discriminating between different crop types or crop groups [12–14]. For example, a number of land use studies [15–17] in WA have had to lump all crop classes into one thematic class (cropland), due to a poor image temporal sequence.
Synthetic aperture radar (SAR) systems are nearly independent of weather conditions. Unlike optical sensors, active radar systems have their own source of energy, transmitting radio waves and receiving the reflected echoes from objects on the Earth’s surface. The longer wavelengths of radio waves enable transmitted signals to penetrate clouds and other atmospheric conditions [18], which make radar systems highly reliable in terms of data provision, especially during periods in which optical sensors fail [19–21].
Moreover, the information content of radar imagery differs from that of optical data owing to differences in how transmitted signals from the two systems interact with features on the ground. A radar sensor transmits an electromagnetic signal to an object and receives/records a reflected echo (backscatter) from the object. Backscatter intensities recorded by radar systems are largely a function of the size, shape, orientation and dielectric constant of the scatterer [22]. Thus, in vegetation studies, radar backscatter intensities will differ based on the size, shape and orientation of the canopy components (e.g., leaves, stalks, fruit, etc.). Crops with different canopy architecture and cropping characteristics (e.g., planting in mounds) can be distinguished based on their backscatter intensities [23–25]. The recent introduction of dual and quad-polarization acquisition modes in many radar satellites (e.g., Radarsat-2, PALSAR, TerraSAR-X) further increases the information content in radar data.
Owing to the differences in imaging and information content, data from optical and radar systems have been found to be complementary [26]. Several studies have shown that integrating data from the two sources improves classification accuracies over the use of either of them [27]. The authors of [23] tested the integration of Landsat TM and SAR data (Radarsat, ENVISAT ASAR) for five regions in Canada. They concluded that in the absence of a good time series of optical imagery, the integration of two SAR images and a single optical image is sufficient to deliver operational accuracies (>85% overall accuracy). The authors of [28,29] noted an increase of 20% and 25%, respectively, in overall accuracy when radar and optical imagery were integrated in crop mapping. Other studies found percentage increases between 5% and 8% when the two data sources were merged [13,30–34].
In this study, high resolution multi-temporal optical (RapidEye) and dual polarimetric (VV and VH) radar data (TerraSAR-X) have been combined to map crops and crop groups in northwestern Benin, West Africa. Excessive cloud cover during the main cropping season in West Africa has, for many years, hindered crop mapping efforts in the sub-region due to the unavailability of satellite images. A recent study [12] conducted in the sub-region with multi-temporal RapidEye images identified poor image temporal coverage as the limiting factor in accurately discriminating between certain crop types. A further limiting factor is the heterogeneity (small patches of different land use and land cover types) of the landscape [35], which leads to spectral confusion between classes, especially when per-pixel approaches are employed [36]. In order to reduce this confusion, a field-based classification approach was employed [37,38]. Vector field boundaries were derived through image segmentation. A per-pixel classification result was then overlaid and the modal class within each field assigned to it.
The aim of this study was to combine optical and radar data to ascertain the contribution of radar data to crop mapping in WA. The specific research question addressed is: can dual polarized radar images acquired during peak cropping season months complement optical data to improve classification accuracies in crop mapping?

2. Study Area

The study was conducted in a catchment located in the northwestern part of the republic of Benin (Figure 1). Like other parts of West Africa, agriculture here is mainly rainfed. The rainfall distribution in the area is uni-modal and lasts from May to October [39]. Annual rainfall ranges from 800 mm to 1100 mm [40], while the mean monthly temperature for the past 35 years has ranged between 25 °C and 30 °C [41].
The catchment is located in the Materi commune, which administratively falls under the jurisdiction of the Atacora Department. It has a flat terrain with slopes less than 5°. It is a rural catchment with scattered villages in and around it. Dassari is the biggest village, with an estimated population of about 20,000 as of the year 2002 [42]. The northeastern part of the catchment forms part of the Pendjari National Park in West Africa. The main source of employment for inhabitants of the catchment is agriculture. Major crops cultivated are cotton, maize, sorghum, millet, yam and rice. Sorghum and millet may be intercropped, while yam is sometimes intercropped with rice, maize, okra, agushie, etc. Cotton is cultivated exclusively for export (the Government of Benin purchases the produce). The remaining crops are cultivated either for subsistence or for commercial purposes. Millet and sorghum are mostly for home consumption, while maize, rice and yam are normally sold in part to raise income for the household. Farm sizes are small. The authors of [43] estimated that about 50% of farms in northwestern Benin are less than 1.25 ha in size. Due to the ease of marketing and the financial benefits associated with it, cotton fields dominate in this area and are normally bigger than those of other crops. It is estimated that about 50% of farm land in northwestern Benin is under cotton cultivation [43]. Cotton farmers receive support from the government in the form of seeds, fertilizer and pesticides during the cropping season.

3. Data and Image Pre-Processing

3.1. RapidEye (RE)

Multi-temporal RapidEye (RE) images were obtained from the RapidEye Science Archive Team (RESA) of the German Aerospace Center (DLR). Six monthly time-steps acquired on 4 April, 2 May, 13 June, 19 September, 12 October and 15 November 2013, were analyzed. In addition to the traditional multi-spectral bands of blue, green, red and near-infrared (NIR), RE provides data in the red edge channel. Level 3A data (i.e., orthorectified with a spatial resolution of 5 m and georeferenced to the Universal Transverse Mercator (UTM) projection) were used in this study. Atmospheric correction was performed for all images using ENVI ATCOR 2 (atmospheric correction) [44]. This application provides a sensor-specific (e.g., RapidEye, Landsat, SPOT) atmospheric database of look-up-tables (LUT), which contains the results of pre-calculated radiative transfer calculations based on MODTRAN 5 (MODerate resolution atmospheric TRANsmission) [45]. Parameters, such as satellite azimuth, illumination elevation and azimuth and incidence angle, required for the atmospheric correction were obtained from the associated metadata files of the images. A cloud mask was manually created. All images were co-registered (image-to-image) to ensure the alignment of corresponding pixels. A root mean square error of less than one pixel was obtained for all co-registrations. Spectral analysis was conducted for each image by deriving band ratios (NIR/green, NIR/red edge), differences (NIR-green, NIR-red, NIR-red edge) and normalized difference vegetation indices (NDVI, NDVI-red edge). For each RE time step, the original bands were used together with the indices mentioned above.
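The derived features above can be sketched as follows (a minimal Python illustration; the study performed these steps in ENVI, and the function and variable names here are assumptions):

```python
import numpy as np

def spectral_indices(green, red, red_edge, nir):
    """Band ratios, differences and normalized indices derived per
    RapidEye time step (Section 3.1). Inputs are reflectance arrays."""
    eps = 1e-10  # guard against division by zero over dark pixels
    return {
        "ratio_nir_green": nir / (green + eps),
        "ratio_nir_rededge": nir / (red_edge + eps),
        "diff_nir_green": nir - green,
        "diff_nir_red": nir - red,
        "diff_nir_rededge": nir - red_edge,
        "ndvi": (nir - red) / (nir + red + eps),
        "ndvi_rededge": (nir - red_edge) / (nir + red_edge + eps),
    }
```

Each of these seven derived layers, together with the five original bands, would then enter the classification as a predictor for its time step.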

3.2. TerraSAR-X (TSX)

Multi-temporal dual polarimetric (VV/VH) TerraSAR-X (TSX) images acquired in StripMap (SM) mode were obtained from the German Aerospace Center (DLR). TSX provides high spatial resolution SAR data owing to its operation in the X-band (frequency of 9.6 GHz and 31-mm wavelength). The SM product of TSX achieves a spatial resolution of approximately 3 m (6–7 m for dual polarization), which makes it a suitable product for integration with RE images. VV/VH polarizations were selected in line with the results of previous studies that found these polarizations useful in crop classification [8,23]. Images were acquired in May, June, July and August (Table 1). Due to the limited width of dual polarization SM data (i.e., 15 km), two acquisitions, taken in an interval of 11 days (TSX revisit time), were made monthly in order to cover the study area. Data were supplied in both Single Look Slant Range Complex (SSC) and Multi Look Ground Range Detected (MGD) formats.

3.2.1. Polarimetric Analysis

Analysis of the polarimetric information from the two channels (VV and VH) is necessary for discriminating different targets based on the type of backscattering they produce. In polarimetry, scattering matrices (e.g., Sinclair matrix, covariance matrix, Müller M-matrix, Kennaugh K-matrix, etc.) are used to describe the polarization state of electromagnetic waves under different scattering conditions [46]. The fundamental quantities measured by a polarimetric SAR are the scattering matrix elements S_ij, where the subscripts i and j denote the transmitted and received polarizations, respectively [47]. These matrices contain relevant information about the scattering processes [46]. Thus, the use of these matrices can assist in the development of unique scattering signatures for different features and improve their discrimination.
The dual polarimetric information was analyzed using the Kennaugh scattering matrix [48]. The Kennaugh matrix is a symmetric matrix, where the single elements of the matrix are real and linear combinations of the Sinclair matrix elements [49,50]. It is also referred to as the Stokes matrix and can be converted to a covariance or coherency matrix [50]. The Kennaugh matrix elements for the VV/VH cross-polarization (Equations (1)–(5)) were implemented in the “NEST ESA SAR toolbox” application [51]. Equations (2) and (3) represent the total backscatter intensities from both polarizations and their difference, respectively. Equations (4) and (5) represent the information from the real and imaginary parts of the SSC data, respectively. Terrain correction was performed for the four Kennaugh intensity bands with the Range Doppler Terrain Correction (RDTC) routine implemented in NEST [52,53]. Elevation data required for the terrain correction was obtained from the Shuttle Radar Topographic Mission (SRTM) Digital Elevation Model (DEM). The raw digital numbers (DNs) of the Kennaugh intensity bands were converted to sigma nought by applying radiometric normalization. To enable integration with the RE data, the data were resampled to 5-m resolution using bilinear interpolation and georeferenced to the UTM projection (Zone 31N (north)). The two images acquired per month were then mosaicked and subsetted to match the dimensions of the RE data. Visual inspection of the Kennaugh intensity bands revealed a high level of noise in the elements “K5” and “K6” compared to the other two elements. For this reason, elements “K5” and “K6” were not considered in subsequent analysis.
$$K = \begin{pmatrix} K_0 & 0 & K_5 & K_6 \\ 0 & K_1 & 0 & 0 \\ K_5 & 0 & 0 & 0 \\ K_6 & 0 & 0 & 0 \end{pmatrix}, \text{ with} \qquad (1)$$

$$K_0 = 0.5\left(|S_{XX}|^2 + 2|S_{XY}|^2\right) \qquad (2)$$

$$K_1 = 0.5\left(|S_{XX}|^2 - 2|S_{XY}|^2\right) \qquad (3)$$

$$K_5 = \Re\left(S_{XX} S_{XY}^{*}\right) \qquad (4)$$

$$K_6 = \Im\left(S_{XX} S_{XY}^{*}\right) \qquad (5)$$
Apart from the Kennaugh intensity bands, backscatter intensities from the individual polarizations (VV/VH) were processed by performing terrain and radiometric correction. Again, the RDTC routine in NEST was used to convert the raw DNs to sigma nought and georeferenced to UTM Zone 31N. For each monthly time-step, the two Kennaugh intensity bands (K0 and K1) and the backscatter intensities of the two polarizations (VV/VH) were stacked together (i.e., four bands per time step) for subsequent analysis.
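As a sketch, the four Kennaugh elements can be computed directly from the complex SSC channels (a NumPy stand-in for the NEST processing chain; array and function names are assumptions):

```python
import numpy as np

def kennaugh_elements(s_vv, s_vh):
    """Kennaugh elements for a dual-pol VV/VH acquisition, following
    Equations (2)-(5); s_vv and s_vh are complex scattering amplitudes
    taken from the SSC product."""
    k0 = 0.5 * (np.abs(s_vv) ** 2 + 2 * np.abs(s_vh) ** 2)  # total intensity
    k1 = 0.5 * (np.abs(s_vv) ** 2 - 2 * np.abs(s_vh) ** 2)  # intensity difference
    cross = s_vv * np.conj(s_vh)
    k5 = np.real(cross)  # real part of the cross-product
    k6 = np.imag(cross)  # imaginary part of the cross-product
    return k0, k1, k5, k6
```

Terrain correction, radiometric normalization and resampling would then be applied to these intensity layers, as described above.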

3.2.2. SAR Data Filtering

Filtering is an important pre-step to analyzing SAR images. Traditionally, local mean filters (e.g., Lee, Frost, etc.) have been used. However, non-local means (NLM) filters have an advantage over mean filters in that they improve the preservation of structure and texture [54]. The use of NLM filters for SAR images has been demonstrated in recent years [55]. NLM filters work with the assumption that, for every small window (patch) in an image, there are similar patches (i.e., similar in terms of grey-level intensity) elsewhere in the image or within a defined search window. Thus, the estimated value of a pixel under consideration is based on a weighted average of all pixels in the image or a defined search window [54].
An NLM filter implemented with ENVI’s Interactive Data Language (IDL) was used for post filtering of the processed TSX data. The algorithm estimates the similarity (weight) between two pixels using the squared Hellinger distance [56]. A similarity window of 9 × 9 pixels was used, while the search window was set at 21 × 21 pixels. The algorithm was run twice on the data (i.e., the first result as input for the second run) to achieve enough averaging. Figure 2 demonstrates the advantages of using NLM filters on SAR data by comparing a portion of the July TSX image in its unfiltered state, a corresponding filtered image using the Lee adaptive filter (window size 7 × 7) [57] and an NLM filtered image. Like in the case of the NLM filter, the adaptive Lee filter was applied twice on the raw SAR data. The red ellipses show that the NLM filter better preserves the structure of agricultural fields than the other two methods.
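The idea behind the NLM filter can be illustrated with a small NumPy sketch. Note that the study's IDL implementation weights patches by a squared Hellinger distance; for brevity this sketch uses plain squared intensity differences, so it is illustrative only, and all parameter names are assumptions:

```python
import numpy as np

def nlm_filter(img, patch=3, search=7, h=0.1):
    """Minimal non-local means: each pixel becomes a weighted average of
    pixels whose surrounding patches look similar to its own patch."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            # patch centred on the pixel under consideration
            p_ref = padded[i:i + patch, j:j + patch]
            num, den = 0.0, 0.0
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    p_cmp = padded[di:di + patch, dj:dj + patch]
                    d2 = np.mean((p_ref - p_cmp) ** 2)  # patch dissimilarity
                    w = np.exp(-d2 / h ** 2)            # similarity weight
                    num += w * img[di, dj]
                    den += w
            out[i, j] = num / den
    return out
```

In practice, the 9 × 9 similarity and 21 × 21 search windows reported above would replace the small defaults used here, and the filter would be applied twice, feeding the first result into the second run.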

3.3. Training and Validation Data

Field campaigns were organized in July and October 2013, to collect training and validation data for classification and accuracy assessment, respectively. Crops that were mapped and considered in this study are: cotton, maize, millet, sorghum, rice and yam. Figure 3 presents a cropping calendar for the various crops investigated. In each campaign, focal areas, each about 1 km2, were identified for mapping. Within each focal area, representative fields for all crop types were mapped using a handheld Global Positioning System (GPS) device. Occasionally, fields outside these focal areas were mapped due to the absence of certain crop types in the area. For example, rice and yam fields were not always available in the focal areas. As much as possible, trees were avoided in mapping the fields. Five photographs were taken per field (i.e., one each to north, south, east, west and one from north position to the middle of the field). In all, eighty-four fields were mapped in July for training the classifier, while seventy-six fields were mapped in October for accuracy assessment. Table 2 details the number of fields per crop that were used for training and validation.

4. Methodological Approach

The methodological approach adopted in this study includes four main steps (Figure 4). In Step 1, a crop mask (i.e., separation between cropped and non-cropped areas) was derived. This step was necessary to reduce confusion between crops and surrounding natural/semi-natural vegetation, due to high similarities between the phenological cycles of these two classes [36,58]. In the second step, a per-pixel crop classification was conducted on the derived crop mask (i.e., cropped areas only) using a hierarchical classification scheme and the random forest classification algorithm. Crop classification using per-pixel approaches often results in a “speckled” output due to high spectral within-field heterogeneity [8]. In West Africa, this situation is further aggravated by a heterogeneous landscape [12]. Recent studies have overcome this challenge by overlaying per-pixel classification results on parcel/field boundaries and assigning the modal class within each field as its class [5,23]. This approach has been found to improve classification accuracies [32,37]. In line with this, the third step of the methodological approach involved the derivation of field boundaries in the study area using the RE images and a segmentation algorithm. These boundaries were combined with the results of Step 2 to produce a per-field crop map. In Step 4, the accuracy assessment was conducted on the per-field crop map using independently surveyed fields (Table 2). The sections below detail each of the four steps.

4.1. Classification Algorithm

The random forest (RF) classification algorithm [59], which belongs to the class of ensemble classifiers, was used for classification. The RF package in the statistics software “R” was used [60,61]. This algorithm automatically generates a large set of classification trees (forest), each tree based on a random selection of training samples and predictors. Predictors are the spectral bands of RE (i.e., original + indices) and TSX (see Sections 3.1 and 3.2). Training samples are derived by overlaying training areas/polygons on the predictors and extracting the corresponding pixel values. By building several classification trees, RF overcomes the generalization errors associated with single classification trees and, thus, increases the classification accuracy [62]. Each tree in the forest casts a unit vote for the most popular class. The classification output is determined by a majority vote of the trees. RF conducts an internal validation (out-of-bag error rate) based on training samples that are not used in the generation of the trees [63]. This error rate served as an initial assessment of classification accuracy and as a guide to the selection of appropriate parameters for each run. For all classifications, a maximum of five hundred trees were generated, while the default number of predictors (i.e., square root of total number of predictors) to be tried at each node [60] was used. The RF variable importance measure [60] was used to identify the most important predictors in all classifications. The mean decrease in the Gini coefficient served as a measure of variable importance.
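A comparable setup in Python's scikit-learn (rather than the R package the authors used) might look like the following; the synthetic data is purely illustrative and stands in for the extracted training pixels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for training pixels: rows are samples, columns are
# predictors (RE bands/indices and TSX intensities in the study).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy two-class target

rf = RandomForestClassifier(
    n_estimators=500,     # forest size used in the study
    max_features="sqrt",  # default: sqrt(p) predictors tried at each node
    oob_score=True,       # internal out-of-bag validation
    random_state=0,
).fit(X, y)

oob_accuracy = rf.oob_score_          # guide for parameter selection
importance = rf.feature_importances_  # mean decrease in impurity (Gini)
```

Here `feature_importances_` corresponds to the mean decrease in the Gini coefficient used as the variable importance measure in the study.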

4.2. Derivation of a Crop Mask

Derivation of a crop mask prior to crop classification has been found to improve classification accuracies [64]. This is particularly important in heterogeneous landscapes, such as West Africa, where farming is done around hamlets and in bushes. The practice of integrated crop and livestock systems [65] also results in grasslands that are close to fields, which are often left for animal grazing. Consequently, crop mapping on full-image scenes results in considerable confusion between crop/non-crop areas.
Ploughed fields or fields at early vegetative stages have unique spectral characteristics compared to surrounding natural/semi-natural vegetation, due to high reflectance from the background soil. Thus, an image acquired during the ploughing or early crop stages is important for accurately discriminating cropland from surrounding land uses and covers. Since ploughing in the study area begins in late April/early May, the RE image acquired on 13 June was first classified to identify fields that had been ploughed as of the time the image was acquired. Two classes (early ploughed/non-crop) were considered at this stage. The areas identified were masked out from the RE image time series. Due to variable planting dates in the study area and the fact that some crops are cultivated a bit later after the onset of the rainy season (e.g., maize), a considerable number of fields in the study area had not been ploughed at the time of the June acquisition. Therefore, a second classification was performed to identify these fields. This classification was performed using all six available RE images, with only two classes (late ploughed/non-crop) considered as previously. Cropped areas identified in both classifications (early and late ploughed) were combined to derive a crop mask. A per-pixel accuracy assessment was performed by comparing the final results (crop/non-crop) with reference data obtained from the field campaign. Overall accuracy, producer’s accuracy and user’s accuracy [66] were computed.
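The two-stage masking and its per-pixel assessment can be sketched as follows (illustrative Python; function names are assumptions, not from the study):

```python
import numpy as np

def combine_masks(early_ploughed, late_ploughed):
    """Union of the early- and late-ploughed classifications (boolean
    arrays) into a single crop mask."""
    return early_ploughed | late_ploughed

def crop_mask_accuracy(pred, ref):
    """Overall, producer's and user's accuracy for a binary crop mask
    against reference pixels."""
    tp = np.sum(pred & ref)    # crop mapped as crop
    fp = np.sum(pred & ~ref)   # non-crop mapped as crop
    fn = np.sum(~pred & ref)   # crop mapped as non-crop
    tn = np.sum(~pred & ~ref)  # non-crop mapped as non-crop
    overall = (tp + tn) / pred.size
    producers = tp / (tp + fn)  # share of reference crop correctly mapped
    users = tp / (tp + fp)      # share of mapped crop that is truly crop
    return overall, producers, users
```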

4.3. Crop Classification

4.3.1. Experimental Design

The objective of this research was to investigate whether SAR data acquired during the cropping season can complement optical data to improve classification accuracies in the study area. In order to achieve this objective, four experiments were conducted with different image combinations (Table 3). In Experiment (A), four RE images acquired in April, May, October and November were used for classification. This selection was made based on analysis of historical Landsat acquisitions (1984–2011) in the region. Historical acquisitions reveal a high possibility of obtaining optical imagery for these months. This is mainly due to the fact that these months fall largely outside the peak rainfall season, during which there is relatively lower cloud cover with better chances of obtaining cloud-free optical images. Thus, this experiment was conducted to determine the accuracies that can be obtained with such a time-series. Experiment (B) assessed the improvement in classification accuracy when SAR imagery acquired during the peak cropping season (May, June, July, August) was added to the RE time series in (A). Experiment (C) assessed the accuracy of classification when all available RE images were used for crop classification, while Experiment (D) considered the use of all available RE and TSX images.

4.3.2. Classification Approach

Crop classification was performed on the generated crop mask to discriminate five crop types/groups. These are cotton, maize, rice, yam and millet/sorghum. Millet and sorghum were combined into one class (cereals) due to similarities in their structure, planting dates and the fact that they are often intercropped [67]. The initial classification of all the five classes using different image combinations resulted in high levels of confusion between the classes.
A study of the RE NDVI temporal profiles of the training data revealed that variable planting dates for the same crops, which lead to temporal within-class variability, were possibly the cause of the confusion. As depicted in Figure 5, two cotton fields (Cotton 1 and 2) exhibit different temporal profiles, with one having a peak in September and the other in October. Maize 1 has a temporal profile similar to that of Cotton 1, with both having a peak in September. Farmers in the study region subjectively decide on when to plough and seed. Some farmers plant late in the season, due to poor rains, while others still follow the traditional cropping calendar regardless of the amount of rainfall received. This situation could lead to different crops (e.g., Cotton 1 and Maize 1) exhibiting similar phenological profiles, while the same crops (e.g., Cotton 1 and Cotton 2) would exhibit different phenological profiles. The authors of [68] identified similar challenges (temporal within-class variability), especially for rice cultivation, in the Khorezm region in Uzbekistan, Central Asia. They noted that temporal segmentation of MODIS time series results in a better representation of crops that exhibit temporal variability in phenology. However, temporal aggregation of information was impossible for this study, due to the heterogeneity of the time series available here (SAR and optical data, irregular acquisitions).
In order to reduce the effect of this confusion, two separate masks, October and September peak, were created from the crop mask based on the NDVI images of the September, October and November RE images (Figure 4). Mask 1 included all fields that have an NDVI peak in September, and Mask 2 included fields with an NDVI peak in October. The October and September peak masks constituted 65% and 35% of the crop mask, respectively, suggesting that the majority of the crops in the study area reach their peak (full development) in October. Separate classifications were performed on the two masks to reduce confusion due to variable planting dates. Fifty-four out of the eighty-four training samples (see Section 3.3) were used to classify the October peak mask, while thirty samples were used for the September peak mask.
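The splitting of the crop mask by NDVI peak month can be sketched as below. The text states only that the masks were built from the September, October and November NDVI images; the per-pixel argmax rule used here is an assumed implementation, not taken from the study:

```python
import numpy as np

def split_by_ndvi_peak(ndvi_sep, ndvi_oct, ndvi_nov, crop_mask):
    """Split the crop mask into September-peak and October-peak subsets
    based on which late-season NDVI image is highest per pixel."""
    stack = np.stack([ndvi_sep, ndvi_oct, ndvi_nov])
    peak = np.argmax(stack, axis=0)     # 0 = Sep, 1 = Oct, 2 = Nov
    sep_mask = crop_mask & (peak == 0)  # NDVI highest in September
    oct_mask = crop_mask & (peak != 0)  # NDVI highest in October or later
    return sep_mask, oct_mask
```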
Figure 6 details the classification approach adopted to classify the five crop types on each of the masks described above. A three-level hierarchical scheme was implemented to sequentially differentiate the different crop types. At each level, several band/image combinations were tested (depending on the experiment being conducted; Section 4.3.1) during classification to determine the optimal combination for discriminating the classes under consideration. At the first level, an RF classification was performed to separate two broad crop groups (rice/yam and cotton/maize/cereals). These two crop groups were determined based on the results of an initial one-time classification involving all crops, which revealed little confusion between the two groups. A mask was created for each group for subsequent analysis. At the second level, different RF classifications were performed to separate yam from rice and cotton from maize, millet and sorghum. A final classification was conducted at the third level to separate maize from millet/sorghum (cereals). Results obtained for individual crops at Levels 2 and 3 were combined into a final crop map (at the pixel level). A corresponding per-field crop map was produced by overlaying the per-pixel crop classification results with field boundaries derived through image segmentation (Section 4.4). The modal crop class within each field boundary was assigned to it.
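The modal-class overlay at the end of this step can be sketched as follows (illustrative Python; `field_ids` is an assumed raster of segment labels, and class codes are assumed to be non-negative integers):

```python
import numpy as np

def per_field_classes(pixel_classes, field_ids):
    """Assign every field its modal per-pixel class, mirroring the
    per-field overlay: pixel_classes holds the per-pixel classification,
    field_ids labels each pixel with its field/segment."""
    out = np.zeros_like(pixel_classes)
    for fid in np.unique(field_ids):
        inside = field_ids == fid
        counts = np.bincount(pixel_classes[inside])
        out[inside] = np.argmax(counts)  # modal class within the field
    return out
```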

4.4. Derivation of Field Boundaries

A cadastral map showing the field boundaries in the study area does not exist. Therefore, field boundaries were derived from the RE image acquired on 19 September. This image was chosen because it presented the best contrast between fields, which can be attributed to structural differences between the different crops at the time of acquisition. For example, maize fields, which are generally cultivated later in the season (late July/early August), will, by mid-September, be at the mid-vegetative stage, while millet/sorghum, which are planted much earlier in the season (May/June), would be at the seed development/senescence stage.
The eCognition Developer Software (8.7) [69] was used to conduct a multi-resolution segmentation of the image. Due to a higher between-field contrast in the NIR and red edge bands, the weights of these bands were doubled. Different parameter sets of scale, shape and compactness were tested in segmenting the image. The result of each test was validated against twenty-four manually-digitized fields (from the September image) by comparing their corresponding areas and calculating the mean absolute error (MAE) and the mean error. The result of the parameter set with the best statistics was selected.
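The validation statistics for each parameter set can be computed as below (a simple Python sketch; function names are assumptions):

```python
import numpy as np

def segmentation_errors(digitized_ha, segmented_ha):
    """Mean absolute error and mean error (bias) between manually
    digitized field areas and the matching segment areas, in hectares."""
    diff = np.asarray(segmented_ha) - np.asarray(digitized_ha)
    mae = float(np.mean(np.abs(diff)))
    mean_error = float(np.mean(diff))  # < 0 indicates underestimation
    return mae, mean_error
```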
Separation between crop and non-crop segments was achieved by overlaying the segmentation results with the per-pixel crop mask derived in Step 1 (Section 4.2) and assigning the modal class in each segment to it [5,37,38]. For the crop segments, the percentage of crop pixels in each segment was extracted. This was to provide a reliability measure for the derived crop segments.

4.5. Accuracy Assessment

Accuracy assessment was conducted on the per-field crop maps with a total of 76 fields evenly spread over the study area (Figure 1). The overall accuracy, producer’s accuracy and user’s accuracy [66] were determined. Additionally, the F1 score (Equation (6)) [70,71], which combines producer’s and user’s accuracy into a composite measure, was computed for each class. This measure enables a better assessment of class-wise accuracies. The score has a theoretical range between “0” and “1”, where “0” represents the worst results, while “1” is the best.
F1 score = (2 × precision × recall) / (precision + recall) = (2 × user’s accuracy × producer’s accuracy) / (user’s accuracy + producer’s accuracy)
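These accuracy measures can be sketched from a confusion matrix (rows as reference, columns as classified; the example matrix is hypothetical, not the paper's data):

```python
import numpy as np

def accuracies(confusion):
    """Overall, producer's and user's accuracy and F1 score per class
    from a square confusion matrix (rows: reference, cols: classified)."""
    cm = np.asarray(confusion, dtype=float)
    producers = np.diag(cm) / cm.sum(axis=1)   # recall per class
    users = np.diag(cm) / cm.sum(axis=0)       # precision per class
    f1 = 2 * users * producers / (users + producers)   # Equation (6)
    overall = np.diag(cm).sum() / cm.sum()
    return overall, producers, users, f1

# Hypothetical two-class confusion matrix
oa, pa, ua, f1 = accuracies([[40, 10],
                             [5, 45]])
```

For the per-field assessment, each of the 76 reference fields contributes one entry to such a matrix.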

5. Results and Discussion

5.1. Derivation of Crop Mask

Table 4 presents the confusion matrix for the per-pixel evaluation of the crop mask. The approach adopted (mapping plowed fields on the June RE image and the remaining fields on the available time series) reduced the confusion between crop and non-crop areas. An overall accuracy of 94% was achieved, while the producer’s and user’s accuracies were consistently above 90%.

5.2. Image Segmentation

The segmentation results of the different parameter sets (scale, shape, compactness) were tested against twenty-four manually digitized fields from the September RE image. The manually digitized fields ranged in size from 0.5 to 4 ha, which is representative of farm sizes in the study area, although most fields are under 2 ha [43]. MAE was computed for each segmentation result based on the areas (ha) of the corresponding polygons (i.e., the manually digitized fields and their segments). The best parameter set was found to be 75, 0.5 and 0.5 for scale, shape and compactness, respectively. Figure 7a shows a plot of the manually digitized fields against corresponding fields from the best segmentation. An MAE of 0.46 ha was obtained.
There were more cases of underestimation than overestimation. These errors can be attributed to several factors. First is the irregularity in field sizes and shapes in the study area. Fields vary in size depending on whether the cultivation is for subsistence or commercial purposes. Cotton and maize fields, for instance, tend to be relatively larger than millet/sorghum fields, due to the commercial benefits farmers derive from these crops. Additionally, some fields tend to be very irregular in shape, because of the use of manual approaches to land clearing. Intra-field color variation, which could be caused by spatial variation in soil fertility or differences in fertilizer application, was found to be one of the causes of the underestimation observed. This situation occasionally resulted in multiple segments within a field. The occurrence of natural/semi-natural vegetation (e.g., trees) on or at the boundaries of fields also resulted in under- or over-estimation of segments, since the field boundaries change depending on the position of the tree(s).
The results of the segmentation were divided into crop and non-crop segments by overlaying them with the per-pixel crop mask (Section 4.1) and assigning the majority class (from the crop mask) to the corresponding segment (Figure 8). For each segment labeled as cropland, the percentage of cropland pixels in it was noted. Figure 7b presents a plot of the crop segments and the percentage of cropland pixels in each (percentages were sorted in ascending order). Segments that had less than sixty percent cropland were found to be mainly farms around hamlets. These were mostly over-segmented and sometimes included the hamlets themselves. Cultivation around hamlets is common in West Africa. In this watershed, however, there are not many hamlets, hence the relatively small number of fields in this category. Thirty percent of all segments were found to be pure cropland (i.e., 100% cropland pixels). These were found to be in areas of intensive cultivation, with little or no natural/semi-natural vegetation.
Segments with a crop percentage of between eighty and one hundred percent were found to have varying numbers of trees in the polygon. Sub-canopy cultivation is common in West Africa, which often leads to a highly fragmented landscape. The trees serve as resting places for farmers when they are on the farms. Crop segments with a cropland percentage of between sixty and eighty percent were found to be close to or in the midst of natural/semi-natural vegetation. Thus, the relatively low percentage of cropland pixels (60%–80%) in these segments can be attributed to confusion between the two classes (crop and natural/semi-natural vegetation) or to over-segmented crop fields that extended into the natural/semi-natural vegetation. For most of these fields, manual corrections were made.

5.3. Crop Classification

5.3.1. Accuracy Assessment

A per-field accuracy assessment was performed for each of the experiments outlined in Section 4.2.1. Tables 5–8 present the results of each experiment, while Figure 9 is a plot of the class-wise accuracies (F1 scores) for the different experiments.
Experiment (A), which was conducted with only the RE images acquired in April, May, October and November, achieved an overall accuracy of 52%. There was considerable confusion between all classes, especially between rice and yam, which had F1 scores of 0.47 and 0.25, respectively. The relatively high confusion between the two classes can be explained by the intercropping of yam and rice, mostly on yam fields. Yam is cultivated in mounds (heaps of soil). This practice creates gullies between adjacent mounds, where farmers, in their bid to maximize the utilization of their land, cultivate rice. Some farmers also cultivate maize, okra and agushie on the same field. During flooding months, water collects in the gullies and provides the water needed by the rice. This practice is believed to be the main source of confusion between the two classes. Cereals (millet/sorghum) and maize had F1 scores of 0.5 and 0.52, respectively. Four cereal fields were misclassified as maize and vice versa. This can be attributed to the image time series analyzed in this experiment. The NDVI image of the May acquisition was used to separate these two classes. Since most maize fields were plowed in July/August, the NDVI of these fields was higher in May than that of plowed cereal fields, allowing for separation between the classes. However, not all cereal fields had been plowed at the time of the May RE acquisition. This means some cereal fields behaved spectrally similarly to maize, hence the confusion between the two classes. Cotton had the highest F1 score of 0.74 (owing to a high user’s accuracy of 81%). There was, however, some confusion between cotton and cereals, which can be attributed to similarities in their cropping calendars and the inability of the analyzed temporal sequence to achieve a complete separation between the two.
The overall accuracy achieved in Experiment (B) was 62%, an increase of 10% over that of (A) (Table 6). This experiment considered the RE images used in (A) plus the available TSX time series. With the exception of maize, all classes improved in accuracy compared to the results of Experiment (A). Notable are rice and yam, whose F1 scores increased from 0.47 to 0.69 and from 0.25 to 0.42, respectively. The F1 score of cotton also increased by about 10%, from 0.74 to 0.81. The producer’s accuracy of maize decreased from 53% to 47%, while the user’s accuracy remained the same, resulting in a slight decrease in the F1 score from 0.52 to 0.48. This was due to an increase in confusion between maize and cotton compared to the results of Experiment (A).
In Experiment (C), the use of all available RE time series (April, May, June, September, October and November) resulted in an overall accuracy of 60%. With respect to Experiments (A) and (B), the cereals class increased in F1 score by 26% and 9%, respectively, while the corresponding increases for maize were 25% and 35%. These improvements in class accuracies are attributable to the inclusion of the June RE image in this experiment. As previously explained, the late cultivation of maize was the best way of separating it from the cereals class. Since most cereal fields had been plowed at the time of the June acquisition, while most maize fields had not, a better separation of the two classes was possible using the June NDVI image. As in Experiment (A), rice and yam performed poorly in this experiment, with yam having an F1 score of 0.21. The F1 score of cotton increased slightly over that of Experiment (A), but decreased marginally compared to the results of Experiment (B).
Table 8 shows the results obtained for Experiment (D), in which all available RE and TSX time series were considered in the classification. An overall accuracy of 75% was achieved. Class-wise accuracies (producer’s, user’s, F1 score) were better than in all other experiments. An F1 score of at least 0.7 was achieved for all classes except yam. Cotton, as in all previous experiments, had the best class accuracy (F1 score = 0.86), followed by cereals, rice and maize. These improvements can be attributed to the use of all the available RE and TSX time series, which cover the full cropping season. Figure 10 provides a detailed look at the per-pixel and per-field results obtained for this experiment.
A minor limitation of the hierarchical approach adopted, which could negatively affect reported accuracies, is error propagation [5,72]. First, the commission and omission errors incurred in generating the crop mask are inherent in the reported crop classification accuracies. Second, errors in classifying a crop class/group at any stage of the hierarchical crop classification scheme will be propagated into subsequent classifications. Thus, although the scheme was implemented to reduce confusion between classes, it may have resulted in some errors not being accounted for in the presented accuracies.

5.3.2. Contribution of TSX Data to Crop Mapping

Results obtained for Experiments (B) and (D) indicate that the inclusion of TSX data increased classification accuracies by 10% and 15%, respectively. Owing to the classification approach adopted, it was possible to identify the contribution of radar to the improved classification accuracies. For each RF classification performed at the various levels of the hierarchical scheme, the variable importance measure, which indicates the relative importance of the variables/predictors used [73], was extracted. Table 9 shows the various levels of the classification scheme and the five most important predictors (out of all predictors) used to separate the classes at each level. The table indicates that the best separation between rice and yam was achieved by the multi-temporal TSX data. This is also evident in Tables 6 and 8. The class accuracies (F1 scores) of yam and rice increased by at least 40% when TSX data were included in the classification ((B) and (D)) compared to the use of RE images only ((A) and (C)). This can be attributed to the sensitivity of radar systems to land surface characteristics, such as soil moisture and roughness [74]. Due to the cultivation of yam in mounds (soil heaps), these fields have a rougher surface than rice-only fields. Thus, backscatter intensities are expected to be higher for yam fields than for rice fields. Additionally, previous studies that used SAR data for crop mapping have distinguished between “broad leaf” and “fine/narrow leaf” crops and noted the usefulness of radar data in differentiating them based on their canopy architecture [24,25]. Broad-leaved crops have a higher backscatter intensity than fine-leaved crops, due to a higher absorption of the radar signal in the latter [75]. In this regard, yam, which can be categorized as broad leaf, will have higher backscatter intensities than rice, which can be considered fine leaf.
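Extracting a ranked variable importance list can be sketched with scikit-learn's RandomForestClassifier (the study itself used the R randomForest implementation; the feature names and training data below are hypothetical, constructed so that one SAR feature drives the class label):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-field features: multi-temporal RE and TSX predictors
feature_names = ["RE_NDVI_Jun", "RE_NDVI_Sep", "TSX_VV_Jul", "TSX_VH_Jul"]
X = rng.normal(size=(200, len(feature_names)))
# Simulated label mainly driven by the July VV intensity
y = (X[:, 2] + 0.3 * rng.normal(size=200) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank predictors by mean decrease in impurity (RF variable importance)
ranking = sorted(zip(feature_names, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
```

In such a ranking, the driving predictor (here the simulated VV feature) appears at the top, mirroring how the most important predictors at each level of the hierarchical scheme were identified.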
Figure 11a depicts a feature space plot of the July TSX VV and VH intensities for rice and yam. The figure shows higher intensity values for most yam fields compared to rice, although some confusion between the two classes still exists.
The TSX data also contributed to improving the separation between cotton and maize/cereals. For example, the class accuracy (F1 score) of cotton increased by at least 10% when TSX data were included in the classification (Experiments (B) and (D)) compared to the use of only RE data. Of the multi-temporal TSX data, the August acquisition was found to be important for this separation. This could be due to differences in the canopy structure (e.g., leaf shape and size) of cotton, on the one hand, and maize/cereals, on the other. Figure 11b shows a feature space plot of the August VV and VH intensities for cotton and maize/cereals. The plot shows higher intensities for most cotton fields compared to the other classes, although some confusion is still evident. The relatively shorter wavelength of TSX (X-band, compared to, e.g., C-band RADARSAT-2 and ENVISAT ASAR) and its resultant high sensitivity to the vegetation canopy contributed to the improved class separation when TSX was included in the classification. Previous studies that used TSX for the classification of agricultural areas highlighted its capability to observe small-scale vegetation changes, due to its lower penetration depth [19,20,25]. For example, in a multi-frequency SAR integration study to map major crops in two sites in Canada, [34] found that multi-temporal TSX produced a better overall classification accuracy than multi-temporal C-band RADARSAT-2.
In all classifications involving the TSX data, the VV polarization was found to better discriminate crop types than the other TSX bands used in the classification (VH, K0, K1). In the case of cotton and maize/cereals, for instance, the VV polarization was the only TSX band that ranked among the five most important variables (based on the RF variable importance measure) in discriminating the two classes. Previous studies [8,23,34] also noted the superiority of the VV polarization over the VH polarization in separating certain crop types (potatoes and cereals). The sensitivity of the VV polarization to different canopy structures was found to be the main reason for its ability to discriminate different crop types. This applies to this study as well, owing to the differences in canopy architecture between cotton and cereals/maize, as well as between rice and yam.

5.4. Reliability of Modal Class Assignment

Previous studies that incorporated vector field boundaries and per-pixel results by assigning the modal class to each field polygon have noted the superiority of such approaches over purely per-pixel classification results [8,37]. However, the reliability of the results obtained from the modal class assignment depends on the reliability of the per-pixel classification [5]. In instances where the number of classes being considered is high, interclass confusion in the per-pixel result could lead to a particular field having a modal class with a small proportion (e.g., 25%). Thus, the proportional cover of the modal class within each field can provide information about the level of confusion within the field, as well as the reliability of the approach (i.e., modal class assignment) adopted.
In this study, the proportion of the modal class in each correctly classified field was analyzed together with the local/within-field variance (i.e., a measure of the number of classes). The objective was to ascertain the reliability of the approach adopted (modal class assignment) and to gauge the interclass confusion in the per-pixel classification result within each field. This analysis was conducted for Experiments (A) (without radar) and (B) (with radar), due to similar patterns in Experiments (B) and (D). Figure 12a,b presents a plot of the proportion of modal class against local variance for each correctly classified field in the two experiments. The number of correctly classified fields per crop type is indicated in parentheses. The plots reveal that the proportion of modal class for most correctly classified polygons exceeded 50% in both experiments.
In Experiment (A), the cereals class had the lowest average proportion of modal class (57%) and the highest average within-field variance (0.81). This suggests high interclass confusion on cereal fields, which can be attributed to the difficulty of separating cereals from maize and cotton with the time series used. Maize, rice and yam had average proportions of modal class of 70%, 74% and 88% and average local variances of 0.34, 0.32 and 0.3, respectively. This indicates that correctly classified fields in these classes were relatively homogeneous, and the assigned class was indeed the dominant class. Cotton fields had a similar average proportion of modal class (74%), but a slightly higher average local variance of 0.51.
The average proportion of modal class for cereals improved to 62% in Experiment (B), while the average variance reduced to 0.58. This was mainly due to a better separation between cereals and cotton, owing to the inclusion of the TSX data. Likewise, the average proportions of modal class for cotton and maize improved to 78% and 72%, while their average variances reduced to 0.43 and 0.25, respectively. The situation for rice and yam was, however, different. The average proportions of modal class for rice and yam reduced to 68% and 62%, while their average variances increased to 0.42 and 0.91, respectively. This suggests a relatively higher interclass confusion on rice and yam fields. Although the inclusion of the radar data improved the separation between the two classes (by correctly classifying three and two additional rice and yam fields, respectively), the proportion of modal class on these additional fields was typically between 50% and 60% (Figure 12b).

6. Conclusions

This research integrated multi-temporal RapidEye (RE) and multi-temporal dual polarimetric TerraSAR-X (TSX) data (VV/VH) to map crops in northwestern Benin, West Africa. The study demonstrated the ability to map crops and crop groups in a region where the poor availability of optical data, complex cropping systems and a highly fragmented landscape have hindered crop mapping efforts for years. A hierarchical classification scheme that adapts to the challenges highlighted above was implemented to map crops and crop groups using the random forest (RF) classification algorithm. Different image combinations were used to classify crops and crop groups at different levels of the hierarchical scheme. Four experiments were set up to ascertain the contribution of SAR data to improving classification accuracies in crop mapping in the study area.
Results indicate that the integration of RE and TSX data improved classification accuracy by 10%–15% over the use of RE only. The contribution of TSX data was mainly in separating rice and yam, as well as cotton and maize/millet/sorghum. The VV polarization was found to better discriminate crop types than VH polarization. The research has shown that if optical and SAR data are available for the whole cropping season, classification accuracies of up to 75% are achievable. This result is promising for West Africa, where accurate and up-to-date information on agricultural land use is urgently required to develop adaptation and mitigation strategies against the looming effects of climate change and variability. The methodology developed in this paper can be applied to other parts of the region to map crops and crop groups with comparable accuracies.
Varying planting and harvesting dates were found to be a major source of misclassification. In future studies, fields used for training and validation will be monitored continuously throughout the cropping season (from plowing to harvest) to gain a better understanding of the dynamics in the phenological cycles of the same crops planted/harvested at different stages of the season. Continuous (year-to-year) monitoring of fields in this manner is necessary to understand the dynamics in cropping patterns and will benefit future attempts at operationalizing agricultural land use mapping in the region.
The soon-to-be-launched Sentinel-1 satellite, which will provide free and open-access SAR data in dual-polarization mode (VV/VH), will greatly enhance crop mapping efforts in West Africa and other tropical regions worldwide. Day-and-night, all-weather acquisitions will ensure the availability of data throughout the cropping season, which, when combined with freely available optical data (e.g., Landsat 8), can deliver comparable or better classification accuracies than those achieved in this study.

Acknowledgments

This work was supported by the German Federal Ministry of Education and Research (BMBF) through the research project: West African Science Service Centre on Climate Change and Adapted Land Use—WASCAL (01LG1001A). The German Research Foundation (DFG) and the University of Wuerzburg provided funds for publication through their Open Access Publishing programme.

Author Contributions

Gerald Forkuor designed this study and conducted the image analysis with guidance from Christopher Conrad, Michael Thiel and Tobias Ullmann. Evence Zoungrana assisted with field data collection. The manuscript was written and revised by Gerald Forkuor with valuable inputs from all co-authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lambin, E.F.; Meyfroidt, P. Global land use change, economic globalization, and the looming land scarcity. Proc. Natl. Acad. Sci. USA 2011, 108, 3465–3472. [Google Scholar]
  2. Ramankutty, N.; Graumlich, L.; Achard, F.; Alves, D.; Chhabra, A.; DeFries, R.S.; Foley, J.A.; Geist, H.; Houghton, R.A.; Goldewijk, K.K.; et al. Global land-cover change: Recent progress, remaining challenges. In Land Use and Land-Cover Change: Local Processes and Global Impacts; Lambin, E.F., Geist, H., Eds.; Springer-Verlag: Berlin/Heidelberg, Germany, 2006; pp. 9–39. [Google Scholar]
  3. DeFries, R.S.; Foley, J.A.; Asner, G.P. Land use choices: Balancing human needs and ecosystem function. Front Ecol. Environ 2001, 2, 249–257. [Google Scholar]
  4. Foley, J.A.; DeFries, R.; Asner, G.P.; Barford, C.; Bonan, G.; Carpenter, S.R.; Chapin, F.S.; Coe, M.T.; Daily, G.C.; Gibbs, H.K.; et al. Global consequences of land use. Science 2005, 309, 570–574. [Google Scholar]
  5. Turker, M.; Arikan, M. Sequential masking classification of multi-temporal Landsat7 ETM+ images for field-based crop mapping in Karacabey, Turkey. Int. J. Remote Sens 2005, 26, 3813–3830. [Google Scholar]
  6. Fisette, T.; Maloley, M.; Chenier, R.; White, L.; Huffman, T.; Ogston, R.; Pacheco, A.; Gasser, P.Y. Towards a national agricultural land cover classification-evaluating decision tree approach. Proceedings of the 26th Canadian Symposium on Remote Sensing, Wolfville, NS, Canada, 14–16 June 2005.
  7. Conrad, C.; Fritsch, S.; Zeidler, J.; Rücker, G.; Dech, S. Per-field irrigated crop classification in arid central Asia using SPOT and ASTER data. Remote Sens 2010, 2, 1035–1056. [Google Scholar]
  8. De Wit, A.J.W.; Clevers, J.G.P.W. Efficiency and accuracy of per-field classification for operational crop mapping. Int. J. Remote Sens 2004, 25, 4091–4112. [Google Scholar]
  9. Förster, S.; Kaden, K.; Foerster, M.; Itzerott, S. Crop type mapping using spectral-temporal profiles and phenological information. Comput. Electron. Agric 2012, 89, 30–40. [Google Scholar]
  10. Conrad, C.; Dech, S.; Dubovyk, O.; Fritsch, S.; Klein, D.; Löw, F.; Schorcht, G.; Zeidler, J. Derivation of temporal windows for accurate crop discrimination in heterogeneous croplands of Uzbekistan using multitemporal RapidEye images. Comput. Electron. Agric 2014, 103, 63–74. [Google Scholar]
  11. Löw, F.; Michel, U.; Dech, S.; Conrad, C. Impact of feature selection on the accuracy and spatial uncertainty of per-field crop classification using support vector machines. ISPRS J. Photogramm. Remote Sens 2013, 85, 102–119. [Google Scholar]
  12. Forkuor, G.; Conrad, C.; Thiel, M.; Landmann, T. Possibilities of using multi-temporal RapidEye data to map crops and crop groups in West Africa. Environ. Monit. Assess 2014. under review. [Google Scholar]
  13. Blaes, X.; Vanhalle, L.; Defourny, P. Efficiency of crop identification based on optical and SAR image time series. Remote Sens. Environ 2005, 96, 352–365. [Google Scholar]
  14. McNairn, H.; Ellis, J.; van der Sanden, J.J.; Hirose, T.; Brown, R.J. Providing crop information using RADARSAT-1 and satellite optical imagery. Int. J. Remote Sens 2002, 23, 851–870. [Google Scholar]
  15. Forkuor, G.; Cofie, O. Dynamics of land-use and land-cover change in Freetown, Sierra Leone and its effects on urban and peri-urban agriculture—A remote sensing approach. Int. J. Remote Sens 2011, 32, 1017–1037. [Google Scholar]
  16. Ruelland, D.; Levavasseur, F.; Tribotte, A. Patterns and dynamics of land-cover changes since the 1960s over three experimental areas in Mali. Int. J. Appl. Earth Observ. Geoinf 2010, 12, s11–s17. [Google Scholar]
  17. Tappan, G.G.; Sall, M.; Wood, E.C.; Cushing, M. Ecoregions and land cover trends in Senegal. J. Arid Environ 2004, 59, 427–462. [Google Scholar]
  18. Henderson, F.; Chasan, R.; Portolese, J.; Hart, J. Evaluation of SAR-optical imagery synthesis techniques in a complex coastal ecosystem. Photogramm. Eng. Remote Sens 2002, 68, 839–846. [Google Scholar]
  19. Lopez-Sanchez, J.M.; Ballester-Berman, J.D.; Hajnsek, I. First results of rice monitoring practices in Spain by means of time series of TerraSAR-X dual-pol images. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens 2010, 4, 412–422. [Google Scholar]
  20. Baghdadi, N.; Cresson, R.; Todoroff, P.; Moinet, S. Multitemporal observations of sugarcane by TerraSAR-X images. Sensors 2010, 10, 8899–8919. [Google Scholar]
  21. Schuster, C.; Ali, I.; Lohmann, P.; Frick, A.; Foerster, M.; Kleinschmit, B. Towards detecting swath events in TerraSAR-X time series to establish NATURA 2000 grassland habitat swath management as monitoring parameter. Remote Sens 2011, 3, 1308–1322. [Google Scholar]
  22. Haack, B.N. A comparison of land use/cover mapping with varied radar incident angles and seasons. GISci. Remote Sens. 2007, 44, 1–15. [Google Scholar]
  23. McNairn, H.; Champagne, C.; Shang, J.; Holmstrom, D.; Reichert, G. Integration of optical and synthetic aperture radar (SAR) imagery for delivering operational annual crop inventories. ISPRS J. Photogramm. Remote Sens 2009, 64, 434–449. [Google Scholar]
  24. Soria-Ruiz, J.; Fernandez-Ordonez, Y.; McNairn, H. Crop monitoring and crop yield using optical and microwave remote sensing. In Geoscience and Remote Sensing; Ho, P.G.P, Ed.; Intech: Rijeka, Croatia, 2009; pp. 405–419. [Google Scholar]
  25. Bargiel, D.; Hermann, S. Multi-temporal land-cover classification of agricultural areas in two european regions with high resolution spotlight TerraSAR-X data. Remote Sens 2011, 3, 859–877. [Google Scholar]
  26. Gerstl, S.A. Physics concepts of optical and radar reflectance signatures. Int. J. Remote Sens 1990, 11, 1109–1117. [Google Scholar]
  27. Hong, G.; Zhang, A.; Zhou, F.; Townley-Smith, L.; Brisco, B.; Olthof, I. Crop-type identification potential of Radarsat-2 and MODIS images for the Canadian prairies. Can. J. Remote Sens 2011, 37, 45–54. [Google Scholar]
  28. Rosenthal, W.D.; Blanchard, B.J. Active microwave responses: An aid in improved crop classification. Photogramm. Eng. Remote Sens 1984, 50, 461–468. [Google Scholar]
  29. Brisco, B.; Brown, R.J.; Manore, M.J. Early season crop discrimination with combined SAR and TM data. Can. J. Remote Sens 1989, 15, 44–54. [Google Scholar]
  30. Brisco, B.; Brown, R.J. Multidate SAR/TM synergism for crop classification in western Canada. Photogramm. Eng. Remote Sens 1995, 61, 1009–1014. [Google Scholar]
  31. Gauthier, Y.; Bernier, M.; Fortin, J.P. Aspect and incident angle sensitivity in ERS-1 SAR data. Int. J. Remote Sens 1998, 19, 2001–2006. [Google Scholar]
  32. Ban, Y. Synergy of multitemporal ERS-1 SAR and Landsat TM data for classification of agricultural crops. Can. J. Remote Sens 2003, 29, 518–526. [Google Scholar]
  33. Sheoran, A.; Haack, B. Classification of California agriculture using quad polarization radar data and Landsat Thematic Mapper data. GISci. Remote Sens 2013, 50, 50–63. [Google Scholar]
  34. Shang, J.; McNairn, H.; Champagne, C.; Jiao, X. Application of multi-frequency synthetic aperture radar (SAR) in crop classification. In Advances in Geosciences and Remote Sensing; Jedlovec, G, Ed.; Intech: Rijeka, Croatia, 2009; pp. 557–568. [Google Scholar]
  35. Laurin, G.V.; Liesenberg, V.; Chen, Q.; Guerrieroa, L.; del Frate, F.; Bartolini, A.; Coomes, D.; Wileborec, B.; Lindsell, J.; Valentini, R. Optical and SAR sensor synergies for forest and land cover mapping in a tropical site in West Africa. Int. J. Appl. Earth Observ. Geoinf 2012, 21, 7–16. [Google Scholar]
  36. Cord, A.; Conrad, C.; Schmidt, M.; Dech, S. Standardized FAO-LCCS land cover mapping in heterogeneous tree savannas of West Africa. J. Arid Environ 2010, 74, 1083–1091. [Google Scholar]
  37. Tso, B.; Mather, P.M. Crop discrimination using multi-temporal SAR imagery. Int. J. Remote Sens 1999, 20, 2443–2460. [Google Scholar]
  38. Aplin, P.; Atkinson, P.M. Predicting missing field boundaries to increase per-field classification accuracy. Photogramm. Eng. Remote Sens 2004, 70, 141–149. [Google Scholar]
  39. Aregheore, E.M. Climate and agro-ecological zones. In Country Pasture/Forage Resource Profiles: The Republic of Benin; Food and Agriculture Organization (FAO): Rome, Italy, 2009; Chapter 3; pp. 11–12. [Google Scholar]
  40. Sow, P.; Adaawen, S.A.; Scheffran, J. Migration, social demands and environmental change amongst the Frafra of northern Ghana and the Biali in northern Benin. Sustainability 2014, 6, 375–398. [Google Scholar]
  41. Avohou, H.T.; Sinsin, B. The effects of Topographic factors on aboveground biomass production of grasslands in the Atacora Mountains in northwestern Benin. Mount. Res. Dev 2009, 29, 250–254. [Google Scholar]
  42. Institut National de la Statistique et de l’Analyse Economique (INSAE), Cashier des villages et quartiers de ville Départment de l’ATACORA; Direction des Etudes Démographiques: Cotonou, Benin, 2004.
  43. Igue, A.M.; Floquet, A.; Stahr, K. Land use and farming systems in Benin. In Adapted Farming in West Africa: Issues, Potentials and Perspectives; Graef, F., Lawrence, P., von Oppen, M, Eds.; Verlag Ulrich E. Grauer: Stuttgart, Germany, 2007; pp. 227–238. [Google Scholar]
  44. Richter, R.; Schläpfer, D. Atmospheric/Topographic Correction for Satellite Imagery: ATCOR-2/3 User Guide, Version 8.2.1; ReSe Applications Schläpfer: Wil, Switzerland, 2012; pp. 12–15. [Google Scholar]
  45. Berk, A.; Anderson, G.P.; Acharya, P.K.; Shettle, E.P. MODTRAN 5.2.0.0 User’s Manual; Spectral Sciences, Inc.: Burlington, MA, USA, 2008; pp. 5–74. [Google Scholar]
  46. Boerner, W.M. Basics of SAR polarimetry I. Radar polarimetry and interferometry. Proceedings of the RTO SET Lecture Series, Brussels, Belgium, 14–15 October 2004. Washington, DC, USA, 18–19 October 2004/Ottawa, ON, Canada, 21–22 October 2004.
  47. Souissi, B.; Ouarzeddine, M.; Belhadj-Aissa, A. Investigation of the capability of the compact polarimetry mode to reconstruct full polarimetry mode using RADARSAT2 data. Adv. Electromagnet 2012, 1, 19–28. [Google Scholar]
  48. Guissard, A. Mueller and Kennaugh matrices in radar polarimetry. IEEE Trans. Geosci. Remote Sens 1994, 32, 590–597. [Google Scholar]
  49. Schmitt, A.; Hogg, A.; Roth, A.; Duffe, J. Shoreline classification using dual-polarized TerraSAR-X images. Proceedings of the Synthetic Aperture Radar, EUSAR 9th European Conference, Nuremburg, Germany, 23–26 April 2012; pp. 239–242.
  50. Cloude, S.R. Polarisation—Applications in Remote Sensing; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
  51. Engdahl, M.; Minchella, A.; Marinkovic, P.; Veci, L.; Lu, J. NEST: An esa open source toolbox for scientific exploitation of SAR data. Proceedings of IEEE Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 5322–5324.
  52. Deutsches Zentrum für Luft- und Raumfahrt (DLR). TerraSAR-X Ground Segment Basic Product Specification Document. V. 1.6; TX-GS-DD-3302. 2009, pp. 1–108. Available online: file:///C:/Users/WASCAL/Downloads/TX-GS-DD-3302_Basic-Products-Specification-Document_V1.6%20(1).pdf (accessed on 14 March 2014).
  53. Infoterra. Radiometric Calibration of TerraSAR-X Data: Beat Nought and Sigma Nought Coefficient Calculation. TSXX-ITD-TN-0049. 2008, pp. 1–16. Available online: file:///C:/Users/WASCAL/Downloads/TSXX-ITD-TN-0049-radiometric_calculations_I1.00.pdf (accessed on 14 March 2014).
  54. Buades, A.; Coll, B.; Morel, J.M. A review of image denoising algorithms, with a new one. Multisc. Model. Simul 2005, 4, 490–530. [Google Scholar]
  55. Deledalle, C.A.; Tupin, F.; Denis, L. Polarimetric SAR estimation based on non-local means. Proceedings of IEEE Geoscience and Remote Sensing Symposium (IGARSS), Honolulu, HI, USA, 25–30 July 2010; pp. 2515–2518.
  56. Ullmann, T.; Schmitt, A.; Roth, A.; Banks, S.; Baumhauer, R.; Dech, S. Classification of coastal arctic land cover by means of TerraSAR-X dual co-polarized data (HH/VV). Proceedings of the 5th TerraSAR-X Science Team Meeting, Munich, Germany, 10–11 June 2013.
  57. Wang, X.; Gi, L.; Li, X. Evaluation of filters for ENVISAT ASAR speckle suppression in pasture area. Proceedings of the ISPRS Annals of the XXII ISPRS Congress—Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, VIC, Australia, 25 August–1 September 2012; 1–7, pp. 341–346.
  58. Pringle, M.J.; Denham, R.J.; Devadas, R. Identification of cropping activity in central and southern Queensland, Australia, with the aid of MODIS MOD13Q1 imagery. Int. J. Appl. Earth Observ. Geoinf 2012, 19, 276–285. [Google Scholar]
  59. Breiman, L. Random forests. Mach. Learn 2001, 45, 5–32. [Google Scholar]
  60. Liaw, A.; Wiener, M. Classification and regression by random Forest. R. News 2002, 2, 18–22. [Google Scholar]
  61. R Core Team, R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2013.
  62. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Patt. Recogn. Lett 2006, 27, 294–300. [Google Scholar]
  63. Watts, J.; Lawrence, R. Merging random forest classification with an object-oriented approach for analysis of agricultural lands. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2008, 37, 579–582. [Google Scholar]
  64. Wardlow, B.D.; Egbert, S.L. Large-area crop mapping using time-series MODIS 250 m NDVI data: An assessment for the U.S. Central Great Plains. Remote Sens. Environ 2008, 112, 1096–1116. [Google Scholar]
  65. Bationo, A.; Kimetu, J.; Vanlauwe, B.; Bagayoko, M.; Koala, S.; Mokwunye, A.U. Comparative analysisof the current and potential role of legumes in integrated soil fertility management in West and Central Africa. In Fighting Poverty in Sub-Saharan Africa: The Multiple Roles of Legumes in Integrated Soil Fertility Management; Bationo, A., Waswa, B., Okeyo, J.M., Maina, F., Mokwunye, U., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 117–150. [Google Scholar]
  66. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data. Principles and Practices, 2nd ed; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar]
  67. Vierich, H.I.D.; Stoop, W.A. Changes in West African savanna agriculture in response to growing population and continuing low rainfall. Agric. Ecosyst. Environ 1990, 31, 115–132. [Google Scholar]
  68. Conrad, C.; Colditz, R.R.; Dech, S.; Klein, D.; Vlek, P.L.G. Temporal segmentation of MODIS time series for improving crop classification in Central Asian irrigation systems. Int. J. Remote Sens 2011, 32, 8763–8778. [Google Scholar]
  69. Definiens, eCognition Developer 8.7; Trimble Germany GmbH: Munich, Germany, 2011.
  70. Van Rijsbergen, C.J. Information Retrieval, 2nd ed.; Butterworths: London, UK, 1979. [Google Scholar]
  71. Schuster, C.; Foerster, M.; Kleinschmit, B. Testing the red edge channel for improving land-use classifications based on high resolution multi-spectral satellite data. Int. J. Remote Sens 2012, 33, 5583–5599. [Google Scholar]
  72. Van Niel, T.G.; McVicar, T.R. Determining temporal windows for crop discrimination with remote sensing: A case study in South-East Australia. Comput. Electron. Agric 2004, 45, 91–108. [Google Scholar]
  73. Genuer, R.; Poggi, J.M.; Tuleau-Malot, C. Variable selection using random forests. Patt. Recogn. Lett 2010, 31, 2225–2236. [Google Scholar]
  74. Aubert, M.; Baghdadi, N.; Zribi, M.; Douaoui, A.; Loumagne, C.; Baup, F.; El Hajj, M.; Garrigues, S. Analysis of TerraSAR-X data sensitivity to bare soil moisture, roughness, composition and soil crust. Remote Sensi. Environ 2011, 115, 1801–1810. [Google Scholar] [Green Version]
  75. Macelloni, G.; Paloscia, S.; Pampaloni, P.; Marliani, F.; Gai, M. The relationship between the backscattering coefficient and the biomass of narrow and broad leaf crops. Geosci. Remote Sens 2001, 39, 873–884. [Google Scholar]
Figure 1. Map of the study catchment in northwestern Benin.
Figure 2. Comparison between (a) a raw TSX image, (b) a corresponding image filtered with the Lee adaptive filter (window size of 7 × 7) and (c) a non-local means (NLM) filtered image (similarity window of 9 × 9 and search window of 21 × 21).
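The NLM filter compared in Figure 2 averages pixels weighted by the similarity of their surrounding patches rather than by spatial proximity alone. The following is a minimal, illustrative sketch of patch-based non-local means on a single-band intensity image, not the implementation used in the paper; the window sizes and the smoothing parameter `h` here are free choices kept small so this brute-force version runs quickly.

```python
import numpy as np

def nonlocal_means(img, sim=3, search=7, h=1.0):
    """Naive non-local means for a 2-D intensity image.

    sim    : side length of the (odd) similarity window
    search : side length of the (odd) search window
    h      : smoothing parameter controlling weight decay
    """
    hs, hw = sim // 2, search // 2
    pad = hs + hw
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - hs:ci + hs + 1, cj - hs:cj + hs + 1]
            num = den = 0.0
            # compare the reference patch with every patch in the search window
            for di in range(-hw, hw + 1):
                for dj in range(-hw, hw + 1):
                    ni, nj = ci + di, cj + dj
                    patch = padded[ni - hs:ni + hs + 1, nj - hs:nj + hs + 1]
                    d2 = np.mean((ref - patch) ** 2)   # patch dissimilarity
                    w = np.exp(-d2 / (h * h))          # similarity weight
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```

The filtering shown in Figure 2 used a 9 × 9 similarity window and a 21 × 21 search window; passing `sim=9, search=21` to the sketch above mirrors that configuration, at a much higher computational cost than optimized implementations.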
Figure 3. Cropping calendar for each of the crops considered in the study, based on 2013 field surveys. Each bar spans from the start of land preparation to the harvest period. Actual start and harvest dates may differ by two weeks or more.
Figure 4. Schematic of the methodological approach. Analysis was conducted in the order indicated by the steps. RE, RapidEye; RF, random forest.
Figure 5. Differences and similarities in the phenological cycles of the same and of different crops in the study area. Cotton 1 and Cotton 2 exhibit different phenological cycles, while Cotton 1 and Maize 1 have similar phenological cycles. Each profile represents the mean signature of a field.
Figure 6. Flowchart of the hierarchical scheme adopted to discriminate the crop classes. Different image sets (optical with or without SAR) were used to classify crops at different levels of the hierarchical scheme.
Figure 7. Proportion of cropland pixels in segments classified as cropland for (a) manually digitized (reference) fields and (b) segmented fields. Percentages are sorted in ascending order.
Figure 8. Detailed view of the segmentation results overlaid on the derived crop mask.
Figure 9. Comparison of the F1 score achieved for the various crops in the four experiments.
Figure 10. Detailed look at the per-pixel and per-field results obtained for Experiment (D), in which all available optical and SAR images were used in the classification.
Figure 11. (a) Feature space plot of yam and rice using VV and VH polarizations of the July TSX acquisition; (b) similar plot as in (a) for cotton and maize/cereals using VV and VH polarizations of the August TSX acquisition.
Figure 12. Proportion of the modal class for each correctly classified field versus within-field variance for (a) Experiment (A) and (b) Experiment (B).
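As described in the study's approach, each field polygon is assigned the modal (most frequent) class of the per-pixel labels it contains, and Figure 12 relates that modal proportion to within-field variance. A minimal sketch of the per-field decision, assuming the field's pixel labels have already been extracted from the classified raster:

```python
from collections import Counter

def field_modal_class(pixel_labels):
    """Return the modal class of a field's pixels and its proportion."""
    counts = Counter(pixel_labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(pixel_labels)

# e.g. a field covered by 6 'cotton' and 4 'maize' pixels
label, proportion = field_modal_class(["cotton"] * 6 + ["maize"] * 4)
# label == "cotton", proportion == 0.6
```

A high modal proportion indicates a field that the per-pixel classifier labeled almost uniformly; low proportions flag the mixed fields examined in Figure 12.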
Table 1. Acquisition dates and incidence angle of the TerraSAR-X (TSX) images analyzed.
| Date of Acquisition | Incidence Angle (°) | Ground Range Resolution (m) | Azimuth Resolution (m) |
|---|---|---|---|
| 4 May 2013 | 44.0 | 1.31 | 3.15 |
| 15 May 2013 | 44.0 | 1.29 | 2.59 |
| 6 June 2013 | 44.6 | 1.31 | 3.15 |
| 17 June 2013 | 44.6 | 1.29 | 2.59 |
| 9 July 2013 | 43.5 | 1.31 | 3.15 |
| 20 July 2013 | 43.5 | 1.29 | 2.59 |
| 11 August 2013 | 44.6 | 1.31 | 3.15 |
| 22 August 2013 | 44.6 | 1.29 | 2.59 |
Table 2. Number of training and validation fields used in crop classification. Millet and sorghum were subsequently merged into one group (cereals).
| Crop | Training | Validation |
|---|---|---|
| Cotton | 19 | 19 |
| Maize | 19 | 15 |
| Millet | 13 | 10 |
| Sorghum | 11 | 8 |
| Rice | 12 | 13 |
| Yam | 10 | 11 |
Table 3. Experimental design for crop classification. Blue cells indicate the use of RE only; green cells indicate the use of TSX only; and orange cells indicate the use of both RE and TSX.

| Experiment | April | May | June | July | August | September | October | November |
|---|---|---|---|---|---|---|---|---|
| A |  |  |  |  |  |  |  |  |
| B |  |  |  |  |  |  |  |  |
| C |  |  |  |  |  |  |  |  |
| D |  |  |  |  |  |  |  |  |
Table 4. Accuracy estimates for the derived crop mask. Overall Accuracy = 94.02%; Kappa = 0.88.
| Reference Class | Cropland | Non-Crop | Total | Producer's Acc. (%) | User's Acc. (%) | F1 Score |
|---|---|---|---|---|---|---|
| Cropland | 2024 | 176 | 2200 | 92.0 | 95.8 | 0.94 |
| Non-crop | 87 | 2113 | 2200 | 96.0 | 92.3 | 0.94 |
| Total | 2111 | 2289 | 4400 |  |  |  |
Table 5. Confusion matrix for Experiment (A). Overall Accuracy = 52%.
| Reference Class | Cereals | Cotton | Maize | Rice | Yam | Total | Prod. Acc. | User Acc. | F1 Score |
|---|---|---|---|---|---|---|---|---|---|
| Cereals | 10 | 1 | 4 | 3 | – | 18 | 0.56 | 0.45 | 0.50 |
| Cotton | 5 | 13 | 1 | – | – | 19 | 0.68 | 0.81 | 0.74 |
| Maize | 4 | 2 | 8 | 1 | – | 15 | 0.53 | 0.50 | 0.52 |
| Rice | 1 | – | 2 | 7 | 3 | 13 | 0.54 | 0.41 | 0.47 |
| Yam | 2 | – | 1 | 6 | 2 | 11 | 0.18 | 0.40 | 0.25 |
Table 6. Confusion matrix for Experiment (B). Overall Accuracy = 62%.

| Reference Class | Cereals | Cotton | Maize | Rice | Yam | Total | Prod. Acc. | User Acc. | F1 Score |
|---|---|---|---|---|---|---|---|---|---|
| Cereals | 11 | – | 4 | 2 | 1 | 18 | 0.61 | 0.55 | 0.58 |
| Cotton | 3 | 15 | 1 | – | – | 19 | 0.79 | 0.83 | 0.81 |
| Maize | 4 | 3 | 7 | 1 | – | 15 | 0.47 | 0.50 | 0.48 |
| Rice | – | – | – | 10 | 3 | 13 | 0.77 | 0.63 | 0.69 |
| Yam | 2 | – | 2 | 3 | 4 | 11 | 0.36 | 0.50 | 0.42 |
Table 7. Confusion matrix for Experiment (C). Overall Accuracy = 60%.
| Reference Class | Cereals | Cotton | Maize | Rice | Yam | Total | Prod. Acc. | User Acc. | F1 Score |
|---|---|---|---|---|---|---|---|---|---|
| Cereals | 12 | 2 | 2 | 2 | – | 18 | 0.67 | 0.60 | 0.63 |
| Cotton | 3 | 13 | 3 | – | – | 19 | 0.68 | 0.87 | 0.76 |
| Maize | 3 | – | 11 | 1 | – | 15 | 0.73 | 0.58 | 0.65 |
| Rice | – | – | – | 7 | 6 | 13 | 0.54 | 0.50 | 0.52 |
| Yam | 2 | – | 3 | 4 | 2 | 11 | 0.18 | 0.25 | 0.21 |
Table 8. Confusion matrix for Experiment (D). Overall Accuracy = 75%.
| Reference Class | Cereals | Cotton | Maize | Rice | Yam | Total | Prod. Acc. | User Acc. | F1 Score |
|---|---|---|---|---|---|---|---|---|---|
| Cereals | 14 | – | 2 | 1 | 1 | 18 | 0.78 | 0.78 | 0.78 |
| Cotton | – | 16 | 3 | – | – | 19 | 0.84 | 0.89 | 0.86 |
| Maize | 2 | 2 | 11 | – | – | 15 | 0.73 | 0.69 | 0.71 |
| Rice | – | – | – | 10 | 3 | 13 | 0.77 | 0.71 | 0.74 |
| Yam | 2 | – | – | 3 | 6 | 11 | 0.55 | 0.60 | 0.57 |
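The accuracy measures reported in Tables 4–8 follow the standard confusion-matrix definitions: producer's accuracy is a class's diagonal count over its row (reference) total, user's accuracy is the diagonal over its column (classified) total, the F1 score is their harmonic mean, and overall accuracy is the trace over the grand total. A minimal sketch that reproduces the Experiment (D) figures from its confusion counts (rows = reference, columns = classified):

```python
import numpy as np

def accuracy_metrics(cm):
    """Per-class producer's/user's accuracy, F1 score and overall accuracy
    from a confusion matrix (rows = reference, columns = classified)."""
    diag = np.diag(cm).astype(float)
    prod_acc = diag / cm.sum(axis=1)   # producer's accuracy (recall)
    user_acc = diag / cm.sum(axis=0)   # user's accuracy (precision)
    f1 = 2 * prod_acc * user_acc / (prod_acc + user_acc)
    overall = diag.sum() / cm.sum()
    return prod_acc, user_acc, f1, overall

# Confusion counts from Table 8: cereals, cotton, maize, rice, yam
cm = np.array([
    [14,  0,  2,  1,  1],
    [ 0, 16,  3,  0,  0],
    [ 2,  2, 11,  0,  0],
    [ 0,  0,  0, 10,  3],
    [ 2,  0,  0,  3,  6],
])
prod_acc, user_acc, f1, overall = accuracy_metrics(cm)
# overall accuracy = 57/76 ≈ 0.75, matching Experiment (D)
```

The same function applied to the matrices of Tables 5–7 yields the per-class columns shown there.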
Table 9. Top five important variables used in discriminating different crop types/groups at the various levels of the hierarchical classification scheme.
| Classes to Separate | Top Five Important Variables |
|---|---|
| Rice, Yam vs. Cotton, Maize, Cereals | Green band, Sept RE; green band, April RE; green band, June RE; green band, May RE; NIR band, April RE |
| Cotton vs. Maize, Cereals | NIR band, Oct RE; red edge band, Oct RE; VV intensity, Aug TSX; red edge band, Sept RE; green band, Sept RE |
| Maize vs. Cereals | NDVI, June RE; NDVI, April RE; NDVI, May RE |
| Rice vs. Yam | VV intensity, July TSX; VH intensity, July TSX; VV intensity, June TSX; VH intensity, May TSX; VV intensity, Aug TSX |
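Several of the top-ranked variables in Table 9 are NDVI layers derived from the RapidEye red and NIR bands. NDVI is the usual normalized difference of the two bands; a short sketch, with the reflectance arrays as placeholder inputs:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red)

# e.g. reflectances typical of dense vegetation vs. bare soil
values = ndvi(np.array([0.05, 0.30]), np.array([0.50, 0.35]))
# high for vegetation (~0.82), low for bare soil (~0.08)
```

NDVI is computed per acquisition date, so each RapidEye scene contributes one NDVI layer (e.g., "NDVI, June RE") to the random forest feature stack.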

Share and Cite

MDPI and ACS Style

Forkuor, G.; Conrad, C.; Thiel, M.; Ullmann, T.; Zoungrana, E. Integration of Optical and Synthetic Aperture Radar Imagery for Improving Crop Mapping in Northwestern Benin, West Africa. Remote Sens. 2014, 6, 6472-6499. https://doi.org/10.3390/rs6076472
