Article

Use of Very High Spatial Resolution Commercial Satellite Imagery and Deep Learning to Automatically Map Ice-Wedge Polygons across Tundra Vegetation Types

by Md Abul Ehsan Bhuiyan 1,*, Chandi Witharana 1 and Anna K. Liljedahl 2
1 Department of Natural Resources and the Environment, University of Connecticut, Storrs, CT 06269, USA
2 Woods Hole Research Center, Falmouth, MA 02540, USA
* Author to whom correspondence should be addressed.
J. Imaging 2020, 6(12), 137; https://doi.org/10.3390/jimaging6120137
Submission received: 2 October 2020 / Revised: 1 December 2020 / Accepted: 4 December 2020 / Published: 11 December 2020
(This article belongs to the Special Issue Image Retrieval in Transfer Learning)

Abstract

We developed a high-throughput mapping workflow, centered on deep learning (DL) convolutional neural network (CNN) algorithms running on high-performance distributed computing resources, to automatically characterize ice-wedge polygons (IWPs) from sub-meter resolution commercial satellite imagery. We applied a region-based CNN object instance segmentation algorithm, the Mask R-CNN, to automatically detect and classify IWPs on the North Slope of Alaska. The central goal of our study was to systematically examine the DLCNN model's interoperability across varying tundra types (sedge, tussock sedge, and non-tussock sedge) and image scene complexities in order to refine the understanding of opportunities and challenges for regional-scale mapping applications. We combined quantitative error statistics with detailed visual inspections to gauge IWP detection accuracies. We found promising model performances (detection accuracies of 89% to 96% and classification accuracies of 94% to 97%) for all candidate image scenes across the varying tundra types. The mapping workflow discerned IWPs with low absolute mean relative error (AMRE) values (0.17–0.23). Results further suggest the importance of increasing the variability of training samples when practicing a transfer-learning strategy to map IWPs across heterogeneous tundra cover types. Overall, our findings demonstrate the robust performance of the IWP mapping workflow in multiple tundra landscapes.

1. Introduction

Ice wedges are common permafrost subsurface features developed by repeated frost cracking and ice-vein growth over centuries to millennia [1,2,3]. These wedge-shaped ice bodies are responsible for creating polygonized land surface patterns (ice-wedge polygons, IWPs) across the Arctic [3,4]. In recent decades, abrupt thaw of ice-rich permafrost has been documented at several locations across the Arctic, altering the microtopography and the type of IWP [5].
Geographical coverage, remoteness, and logistical challenges constrain field-based mapping of IWPs. Very high spatial resolution (VHSR) commercial satellite sensors provide opportunities to observe IWPs at multiple spatial scales and temporal frequencies [6,7,8,9,10,11,12,13]. The bulk of traditional remote sensing image classification methods fail to grapple with sheer data volumes and scene complexities of VHSR imagery [14]. Increasing spectral heterogeneity in VHSR imagery leads to less class variance, which makes it difficult to accurately resolve IWPs using conventional per-pixel-based algorithms [15]. Local-scale analysis based on high-resolution data and regional-scale analysis based on coarse-resolution Landsat data limit our capacity to elucidate the effect of sub-meter scale IWP degradation on regional-scale processes, such as carbon projections [16]. Therefore, there is a need and an opportunity for advanced image analysis approaches for the accurate characterization of ice-wedge polygonal networks [16,17,18,19].
Owing to the upsurge of faster and more affordable hardware resources (GPU/CPU) and easy access to cloud computing environments, deep learning (DL) algorithms are gaining popularity across a wide spectrum of scientific disciplines that rely on artificial intelligence (AI). The application horizon spans from drug discovery through autonomous navigation to earth and environmental modeling [20,21,22,23,24]. Deep learning-based convolutional neural networks (DLCNNs) have successfully outperformed conventional machine learning techniques, such as support vector machines (SVM) and random forests (RF), in everyday image understanding. Proven success in computer vision has been an enticing factor drawing the remote sensing community toward DLCNNs [25,26,27,28,29]. There has been an expeditious uptake of DLCNNs in VHSR image scene understanding [30,31,32,33]. As in everyday image analysis, DLCNNs outperform traditional machine learning classifiers (e.g., RF and SVM) as well as modern paradigms such as object-based image analysis [33]. The application spectrum of DLCNNs in remote sensing image-to-assessment pipelines is broad and multifaceted. DLCNNs have been successfully applied to image processing tasks such as fusion [34,35], segmentation [36,37], and registration [38,39].
A growing body of studies has investigated object detection, semantic segmentation, and semantic object instance segmentation using region-based CNN (RCNN) architectures, such as Fast R-CNN [40], Faster R-CNN [41], RetinaNet [42], R-FCN [43], and Mask R-CNN [44,45]. While object detection finds and classifies objects in an image, semantic image segmentation goes further, identifying objects within a scene and labelling them according to known classes. U-Net and its successor architectures [46] are also capable of performing semantic object instance segmentation [47]. Among other comparison studies, [46] probed into two key semantic object instance segmentation architectures, U-Net and Mask R-CNN, to compare their performances. According to their results, Mask R-CNN produced better recall and precision than U-Net, suggesting that it can detect targets of interest more accurately, although Mask R-CNN struggled to predict a good segmentation mask [46]. In terms of the amount of under- (over-) segmentation, Mask R-CNN performed much better than U-Net [46]. In remote sensing applications specifically, Mask R-CNN has been successfully applied to relatively small, selected areas for mapping IWPs [7,10,11]. The original Mask R-CNN is trained on the COCO image data set, which harbors a massive amount of hand-annotated everyday images [48]. Accordingly, transfer learning with the existing Mask R-CNN architecture (pre-trained on the COCO data set) could be a key solution for the automatic detection and classification of IWPs from sub-meter resolution commercial satellite imagery. We note that our candidate algorithm, its architecture, and its underlying training data comfortably fall under the commonly found use-cases where the user is challenged by limited data, computational resources, and perhaps technical competencies. Therefore, in this study, we used Mask R-CNN for mapping IWPs in satellite remote sensing imagery to examine the transferability of the model across varying tundra types, such as sedge, tussock sedge, and non-tussock sedge.
Translating DLCNNs from computer vision applications to the remote sensing image analysis domain undoubtedly creates rich opportunities, as well as new challenges. Unlike in everyday image understanding, in which the targets in question occupy a constrained space, remote sensing imagery captures nadir views of spatially continuous to discontinuous geo objects. Landscapes are complex. The constituent geo objects exhibit complex spectral, spatial, textural, and contextual characteristics that aggregate across scales [49,50]. The increased resolution of commercial satellite imagery inherits this landscape complexity. The high-level semantics that we seek for a given target can be influenced by the landscape complexity. The intriguing question is what the interoperability of a model is when it is trained in one landscape and applied to another to classify the same object of interest. One could dismiss this concern when the model is trained and applied in the same landscape or in close proximity. However, it is difficult to overlook when thinking about regional-scale mapping applications, where we come across distinct landscapes with their own heterogeneities attributed to ecological and geophysical factors. This holds great validity in our mapping effort, which is not confined to a few square kilometers but extends across the whole Arctic tundra. Although semantically we pursue an abstracted geo object, the polygon, its low-level motifs (spectral, spatial, and textural characteristics) and high-level meanings are greatly influenced by the landscape in which it occurs. Arctic tundra represents a complex and heterogeneous mosaic shaped by earth processes, markedly influencing vegetation, hydrology, and soil characteristics [51,52]. When investigating how particular targets of interest or geo objects are presented in image modality across a region, we essentially require a baseline fabric that decomposes the heterogeneous system into meaningful patches with underlying ecological functions. In this respect, tundra cover types can stand as representative analysis units to understand how certain geo objects, for example ice-wedge polygons, present themselves in different tundra types and how their image representations change from one tundra cover to another. The circumpolar Arctic vegetation map (CAVM) [53] presents a unique opportunity to use baseline data layers to aggregate ice-wedge polygons into different cohorts, because the CAVM classification scheme not only considers vegetation types but prudently takes into account variability in topography, geomorphology, and climatic factors.
Arctic tundra types associated with ice-wedge polygons dominate within the circum-Arctic permafrost region [54]. The broad-scale assemblages of Arctic tundra constitute erect shrublands, graminoid tundra, mountain complexes, barrens, mineral graminoid tundra, prostrate-shrub tundra, and wetlands [55]. Figure 1 presents a snapshot of the tundra types (details in [54]) considered in this study. Understanding tundra distributions provides essential insight for the IWP mapping application. For example, lake-rich regions such as Alaska's North Slope are dominated by sedge tundra, which contains more detailed information for IWP mapping for that tundra type. Alaska represents heterogeneous tundra types such as tussock sedge, dwarf shrub, and moss tundra [55]; such heterogeneous tundra types characterize, for example, the central portion of the Seward Peninsula, while mountain complexes are concentrated in the Brooks Range of northern Alaska [54]. It is noted that ice-wedge degradation is higher in areas with warmer permafrost, such as the Seward Peninsula in Alaska [56]. In addition, the Russian Arctic is mostly covered by low-shrub tundra, a consequence of predominantly wet soil moisture conditions that result from near-surface permafrost [54]. Canada has the most terrain associated with abundant barren types and prostrate dwarf-shrub tundra in the Arctic region [54]. Therefore, it is important to consider the transient nature and spatial heterogeneity [57,58] of tundra types for the IWP mapping application.
Pilot studies, including our own efforts [59] and related work [7,10,11], have demonstrated the adaptability of DLCNNs to automated ice-wedge polygon detection and classification. These works exercised a transfer learning strategy by adapting the Mask R-CNN architecture, a semantic object instance segmentation algorithm that descends from the region-based CNN family. The degree to which a given DLCNN model is interoperable across heterogeneous landscapes, i.e., when the model is trained and validated across tundra types, has been overlooked in the literature. Accordingly, it is unknown how the model performs over a range of tundra cover types, such as sedge, tussock sedge, and non-tussock sedge (Figure 1b), each of which exhibits unique spectral, textural, spatial, and contextual characteristics. Prior to any regional-scale application, the model's invulnerability to landscape perturbations needs to be systematically quantified. These unanswered questions provide the impetus for our study. We are in the process of developing a mapping application for Arctic permafrost land environments, which enables the transformation of large volumes of commercial satellite imagery into Arctic science-ready products. The main goal of the current study is to explore the DLCNN model's interoperability across different tundra types and image scene complexities in order to understand the opportunities and challenges prior to any future circumpolar IWP mapping applications. The migration of landscape complexities into image scenes evidently poses new challenges for automated image processing using DLCNN model predictions. Our experimental design aims to encapsulate low-gradient Arctic upland tundra (sedge, tussock sedge, and non-tussock sedge), including various features such as lakes and vegetated drained thaw lake basins. We aim to (1) examine the transferability of the model in mapping IWPs across tundra types; and (2) evaluate the automatic detection and classification of ice-wedge polygons from sub-meter resolution commercial satellite imagery.

2. Study Area and Data

We conducted our study based on four summer-time multi-spectral images acquired by the WorldView-2 satellite sensor (Figure 1b). Pansharpened multispectral images at 0.5 m resolution were provided by the Polar Geospatial Center as orthorectified, atmospherically corrected data products. The four candidate image scenes and their respective features are presented in Table 1. The candidate scenes cover 1500 km2 of coastal and upland tundra on the North Slope, Alaska (Figure 1b). The training datasets, which were drawn from different image scenes than the evaluation assessment, were established around different tundra covers and included imagery from Alaska, Canada, and Russia (Figure 1a). Table 2 presents the tundra types represented at the training and validation sites. Spectral characteristics vary significantly across the different tundra types [60]. The training sites therefore provide substantial landscape heterogeneity for the model to classify and detect IWPs. Moreover, dominant landcover types (heterogeneity) control the global image statistics [61]. Therefore, choosing image scenes from varying tundra types can greatly influence model training, since the model earns the opportunity to learn different abstractions of the targets of interest.

3. Mapping Application for Permafrost Land Environment

Accurate characterization of IWPs from VHSR imagery directly depends on the segmentation (i.e., isolation of targets from their surroundings) and classification (i.e., assigning the correct label to the targets) processes [62,63]. Semantic object instance segmentation methods are designed to afford both target isolation and labeling into thematic classes. Ideally, a mapping application for the permafrost land environment should consist of candidate DLCNN models tailored to extract different permafrost features of interest from remote sensing imagery. Among the suite of target features, microtopography, thaw features, capillaries, and plant functionality exhibit high priority. Given the diversity of target features and their heterogeneous characteristics coupled with semantic complexities, multiple model architectures better serve the purpose. In our mapping application, one pipeline targets the mapping of ice-wedge polygons, for which we used the Mask R-CNN algorithm. The pipeline is extensible and tailored to work with remote sensing imagery using high-performance computing resources, which allows scalability to larger spatial extents.

3.1. Mapping Workflow, Training and Validation Experiment

We center the current mapping workflow on Mask R-CNN, which uses multi-level features from the training samples for detection, delineation, and classification of targets of interest. Similar to the other members of the RCNN family, Mask R-CNN is, in a generic sense, a two-stage detector. Its architecture comprises two sub-networks: (1) a Region Proposal Network (RPN) that generates candidate object bounding boxes; and (2) a head that predicts the class, bounding box, and binary mask for each region of interest (ROI). Mask R-CNN uses a residual learning network, ResNet (101 layers deep), as its convolutional backbone for feature extraction [44]. The pretrained network can classify images into multiple object categories, which helps deeper networks converge; in a deeper network, the additional layers better approximate the mapping, which reduces the error by a significant margin. Our workflow is modular and consists of several key stages, as depicted in Figure 2. In stage 1, the main input to the workflow is multispectral satellite imagery with three bands and a radiometric depth of 16 bit. Image scenes from the Polar Geospatial Center are typically provided with dimensions of 20 km (40,000 pixels) × 20 km (40,000 pixels) at 0.5 m pixel resolution. To achieve the optimal combination of spectral bands from input multispectral imagery containing more than three bands (for instance, WV02 imagery has 8 spectral bands: coastal blue, blue, green, yellow, red, red edge, NIR1, and NIR2), we used three statistical measures: variance, the probability distribution function (PDF), and the cumulative distribution function (CDF) (details in [59]). Specifically, a systematic experiment was designed to understand the impact of choosing the optimal three-band combination from multispectral datasets on DLCNN model predictions [59]. As the first step in the pipeline, the most effective combination of bands is obtained by estimating variances, where the best three channels present approximately similar spread [59,64,65]. When three bands produce approximately similar reflectance values based on the PDF, we consider those three bands for the proposed model [66,67,68]. We also examined the shape of the CDF and observed the magnitude of the multispectral bands [59,69]. The CDF describes the distribution of reflectance values among multiple spectral bands, and for the workflow we chose the three bands with considerably less deviation. Finally, for each image scene, the best combination of three bands was obtained by jointly considering the three statistical measures: variance, PDF, and CDF. In stage 2, the input image scene was partitioned into tiles of 200 × 200 pixels. A typical satellite image scene produces ~65,000 tiles (depending on the input scene size). Tiles are then streamed to the trained model for inferencing. The model produces detections (IWP predictions) when an input tile contains ice-wedge polygons. The predicted categorical raster is vectorized as a shapefile. In stage 3, all individual shapefiles (corresponding to each tile) are post-processed by omitting duplicates along tile borders and merged to create a single shapefile corresponding to the extent of the input satellite image scene.
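To make the band-screening step of stage 1 concrete, the sketch below computes per-band variance, PDF, and CDF for a multispectral scene and picks the three bands with the most similar spread. It is a minimal illustration assuming a Python environment with rasterio and NumPy; the function names and the exact selection rule are our own simplifications, not the implementation used in [59].

```python
import numpy as np
import rasterio
from itertools import combinations

def band_statistics(image_path):
    """Compute per-band variance, PDF, and CDF for a multispectral scene.

    A minimal sketch of the band-screening step described above; the use of
    rasterio/NumPy and the nodata convention are illustrative assumptions.
    """
    with rasterio.open(image_path) as src:
        bands = src.read().astype(np.float64)  # shape: (n_bands, rows, cols)

    stats = []
    for i, band in enumerate(bands, start=1):
        values = band[band > 0]                 # ignore nodata (assumed to be 0)
        hist, edges = np.histogram(values, bins=256, density=True)
        pdf = hist * np.diff(edges)             # probability per bin
        cdf = np.cumsum(pdf)                    # cumulative distribution
        stats.append({"band": i, "variance": float(values.var()),
                      "pdf": pdf, "cdf": cdf})
    return stats

def pick_three_bands(stats):
    """Pick the three bands whose variances are most similar (smallest spread),
    one plausible reading of 'the best three channels present similar spread'."""
    best = min(combinations(stats, 3),
               key=lambda trio: np.ptp([s["variance"] for s in trio]))
    return [s["band"] for s in best]
```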
For model training purposes, we created annotated data (defining and labelling regions of interest) using the online web tool "VGG Image Annotator" on satellite imagery comprising heterogeneous tundra types. We randomly selected 262 cropped subsets (tiles of 200 × 200 pixels; ~15,000 polygons) from different tundra types (tussock, non-tussock, and sedge), considering their spectral and spatial variability. Datasets were annotated for two classes: low-centered (LC) polygons (8962 objects) and high-centered (HC) polygons (6038 objects). It is also notable that IWPs were delineated along their edges (i.e., if troughs are present, along the trough sides; if no troughs are present, along the rim mid-line). Finally, the annotated tiles were randomly divided into training, validation, and test datasets following an 8:1:1 split rule. We trained the DLCNN with a mini-batch size of two image tiles, 350 steps per epoch, a learning rate of 0.001, a learning momentum of 0.9, and a weight decay of 0.0001 [7,12,13,59]. After scanning the image, Mask R-CNN generates region proposals via the Region Proposal Network (RPN), and subsequently the DLCNN predicts the class, bounding box, and binary mask for each region of interest (ROI) to obtain the mask prediction (the predicted mask is pixel-based). For each ROI, the segmentation mask is predicted using a small fully convolutional network. Finally, Mask R-CNN resizes the predicted masks back to the original dimensions of the input image scene. Training was implemented using NVIDIA V100 GPUs (PSC, Pittsburgh Supercomputing Center, Pittsburgh, PA, USA) on XSEDE supercomputing resources. We trained the DLCNN for 100 epochs. To optimize Mask R-CNN, we examined different losses: (a) the Smooth-L1 loss, which defines box regression in object detection systems and is less sensitive to outliers than other regression losses; (b) the Mask R-CNN bounding box loss, which indicates the difference between the predicted bounding box correction and the true bounding box; (c) the Mask R-CNN classifier loss, which estimates the difference in class labels between prediction and ground truth; (d) the mask binary cross-entropy loss, which measures (as a probability value between 0 and 1) the performance of the classification by comparing the predicted class with the actual class; (e) the RPN bounding box loss, which captures the regression loss of bounding boxes only where an object is present; and (f) the RPN anchor classifier loss, which indicates how well the RPN separates object anchors from background anchors (judged against the closest ground-truth box to each anchor).
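As an illustration of the training configuration just described (mini-batch of two tiles, 350 steps per epoch, learning rate 0.001, momentum 0.9, weight decay 0.0001, two polygon classes, COCO transfer learning), the following sketch assumes the open-source Matterport Mask R-CNN implementation [45]; the class name, log directory, weights path, and the choice of layers to fine-tune are illustrative placeholders rather than the authors' exact setup.

```python
# Minimal training sketch assuming the Matterport Mask R-CNN package [45].
from mrcnn.config import Config
from mrcnn import model as modellib

class IWPConfig(Config):
    NAME = "iwp"
    IMAGES_PER_GPU = 2            # mini-batch of two image tiles
    NUM_CLASSES = 1 + 2           # background + low-centered + high-centered polygons
    STEPS_PER_EPOCH = 350
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    WEIGHT_DECAY = 0.0001

config = IWPConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")

# Transfer learning: start from COCO weights and skip the layers whose shapes
# depend on the number of classes.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# dataset_train / dataset_val are assumed to be mrcnn.utils.Dataset objects
# built from the annotated 200 x 200 pixel tiles (not shown here).
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=100, layers="all")
```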

3.2. Accuracy Estimates

To evaluate the DLCNN performance, several error metrics were computed in the validation experiment. The mean intersection over union (mIoU) between the prediction and the ground truth is defined below:
\[ \mathrm{mIoU} = \frac{A_O}{A_U} \]
Here, $A_O$ indicates the area of overlap between the predicted segmentation and the ground truth, and $A_U$ is the area of union between the predicted segmentation and the ground truth. An mIoU score > 0.5 is considered a "good" prediction, indicating successful delineation [45,70].
Absolute mean relative error (AMRE) is the mean of the relative error, calculated as the normalized average:
\[ \mathrm{AMRE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\hat{y}_i - y_i}{y_i}\right| \]
For the quantitative assessments, $\hat{y}_i$ is the number of predicted polygons in subset $i$, $y_i$ is the number of actual (ground-truth) polygons, and $n$ is the number of subsets (details in [71,72]).
An accurate prediction of ice-wedge polygons is represented by the F1 score, where a score of 1 specifies perfect prediction. Correctness signifies how many of the predicted positives were truly positive; completeness determines what percentage of actual positives were detected. An accurate prediction is represented by all metric values approaching 1. The statistical measures used in the study are shown below.
\[ \mathrm{Correctness} = \frac{TP}{TP + FP} \]
\[ \mathrm{Completeness} = \frac{TP}{TP + FN} \]
\[ \mathrm{F1\ Score} = \frac{2 \times \mathrm{Correctness} \times \mathrm{Completeness}}{\mathrm{Correctness} + \mathrm{Completeness}} \]
Please note that true positives (TP) are polygons correctly identified, false positives (FP) are polygons identified by the model that do not correspond to actual polygons, and false negatives (FN) are undetected polygons.
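The error metrics above reduce to a few lines of code; the snippet below is a minimal Python sketch (function and variable names are our own) showing how correctness, completeness, F1, and AMRE would be computed from polygon counts.

```python
def detection_metrics(tp, fp, fn):
    """Correctness (precision), completeness (recall), and F1 score as defined above."""
    correctness = tp / (tp + fp)
    completeness = tp / (tp + fn)
    f1 = 2 * correctness * completeness / (correctness + completeness)
    return correctness, completeness, f1

def amre(predicted_counts, actual_counts):
    """Absolute mean relative error over n validation subsets (polygon counts per subset)."""
    n = len(actual_counts)
    return sum(abs((p - a) / a) for p, a in zip(predicted_counts, actual_counts)) / n
```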

4. Model Evaluation Results and Discussions

We optimized the DLCNN during the training process over 100 epochs to obtain the full learning curve of the model. Learning curves are a widely used diagnostic tool in machine learning for algorithms that learn incrementally from a training dataset. Overall model learning performance over time is presented by the learning curves shown in Figure 3. The results show the changes in learning performance across epochs, where an epoch is defined as one complete pass of the algorithm over the training dataset (i.e., one forward and one backward pass for every training sample). The validation loss values reached their lowest at the 2nd epoch (Figure 3). Therefore, we chose the Mask R-CNN model with the lowest validation loss (i.e., the 2nd epoch) for our experiments. It is noted that the sample sizes are limited but sufficient to optimize the model within a small number of epochs. Specifically, for the Smooth-L1 loss (target detection loss), the validation loss reached its lowest magnitude at the 2nd epoch, while the training loss continued to decrease substantially (Figure 3a). Similarly, for the other losses (Mask R-CNN bounding box loss; Mask R-CNN classifier loss; mask binary cross-entropy loss; RPN bounding box loss; RPN classifier loss), Figure 3b–f show that the validation loss reached its lowest value around the 2nd epoch, where Mask R-CNN was optimized. In our use case, we practiced transfer learning on the existing Mask R-CNN architecture and optimized the model at a low number of epochs (the 2nd) to evaluate the automatic detection and classification of ice-wedge polygons from sub-meter resolution commercial satellite imagery.
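Selecting the checkpoint in this way amounts to picking the epoch with the minimum total validation loss; a minimal sketch is given below, assuming the per-epoch validation losses have been parsed from the training logs (how the losses are logged and retrieved is an assumption on our part).

```python
import numpy as np

def select_best_epoch(val_losses):
    """Return the 1-indexed epoch whose total validation loss is lowest.

    val_losses: sequence of per-epoch validation losses, e.g., parsed from the
    training history (assumed to be available; not part of the paper itself).
    """
    return int(np.argmin(val_losses)) + 1
```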
We statistically evaluated the performance of the DLCNN in detecting and classifying IWPs. For the quantitative assessments, we randomly selected 40 subsets from each image scene and manually delineated polygons as a reference (ground-truth polygons). The mIoU values varied between 0.85 and 0.91 (Table 3), indicating that the predicted polygons agree well with the ground-truth polygons.
Close-up views of the original imagery, ground truth, and model classification results show that our predicted IWPs closely matched the ground-truth IWPs (Figure 4). We used three quantitative error statistics (correctness, completeness, and F1 score) to assess the performance of the framework (Table 4). Candidate scenes V1, V2, V3, and V4 produced high model detection accuracies, with F1 scores ranging from 0.89 to 0.96 (Figure 5, Table 4). Although all the image scenes are geographically close to each other, they still exhibit different tundra variations in microtopography. The predominance of tussock sedge tundra and the high spatial resolution of the imagery provide landscape-scale variation within the original CAVM map throughout northern Alaska [54,55]. Scene V4, covering tussock sedge, achieved an mIoU of 0.85 (Table 3); model prediction for that tundra region could likely be improved by adding more training data from it. Specifically, V1, V2, and V3 represent the non-tussock sedge and sedge tundra types of Alaska's North Slope. DLCNN performances for image scenes V2–V3 (F1 scores: 0.92–0.94) were consistent, which suggests the training samples were sufficient to predict IWPs for non-tussock sedge and sedge tundra regions (Table 4). This result helped us understand the feasibility and reliability of remote-sensing information extraction for different tundra regions. In terms of elevation, most of the Arctic lies below 500 m, with the lowest elevations (<100 m) dominated by graminoid-erect dwarf-shrub tundra. In our training datasets, we used three tundra types with elevations ranging up to 500 m. Our validation image scenes V1–V4 fall within this elevation range and exhibited relatively consistent performances across the selected validation tundra regions. In the future, more training samples from complex, higher-elevation terrain could increase the variability captured by the model.
In a similar fashion, scenes V1–V4 scored high values for completeness (81–89%). In all four cases, the correctness metric was at or near 1 (0.99–1.00), leaving little room for false alarms. A few recent studies using 0.5 × 0.5 m resolution imagery reported F1 scores of 55% [10] and 72% [11], substantially lower than the results presented here (89% or higher). Moreover, classification F1 scores varied from 0.94 to 0.97 for the candidate scenes, indicating a robust performance of the DLCNN algorithm across different tundra types in northern Alaska (Table 5). Remote sensing data with more than three bands cannot yet be used directly to train typical deep learning networks; most past DL architectures are designed to accept standard RGB bands because they were developed for everyday images [73,74,75]. Moreover, from a multispectral perspective, Arctic tundra vegetation communities are spectrally separable in Arctic mapping applications [76,77]. Tundra types such as wet sedge meadow and tussock tundra show certain diagnostic reflectances that are significantly different from those of other tundra types [76]. Our mapping workflow, on the other hand, optimized the multispectral band combination from the satellite imagery [59], which led to a more robust image classification model than a traditional object-detection model. Moreover, the results showed markedly low systematic errors (AMRE values from 0.17 to 0.23) for all candidate scenes (Table 6). Overall, both quantitative and qualitative evaluations support the possible interoperability of the IWP mapping algorithm across different tundra assemblages in northern Alaska.
In this exploratory study, we primarily investigated the interoperability of deep learning model predictions across heterogeneous tundra landscapes. Arctic tundra vegetation exhibits a high degree of heterogeneity over small spatial scales [52]. Further research is needed to better understand how trained models behave across other tundra vegetation types and regions. Such a study would also benefit from incorporating terrain units, soil types, hydro-climatic regimes, and permafrost characteristics. Furthermore, summer temperature variations can cause major changes to vegetation structure, which in turn impose spectral/textural changes in the acquired imagery. Thus, seasonality could be an important factor determining the appearance of ice-wedge polygons in satellite imagery, because changes to spectral and textural characteristics can alter the overall semantics of the target. The model can therefore be biased if it is only trained on imagery acquired within a particular time window. Operator bias in hand-annotated data production can also negatively influence model performance. Tasking multiple operators to produce a sizeable amount of quality-controlled training data can help improve the variability of training samples and eventually improve model performance. In future research, we aim to systematically probe further into model interoperability considering multi-faceted factors. Moreover, Arctic tundra landscapes contain spatially isolated ponds, lakes, marshes, rivers, and stream-corridor wetlands, which represent highly heterogeneous features varying in soil moisture, vegetation composition, elevation, surficial geology, ground ice content, soil thermal regime, and surface hydrology [51,52]. Fine-scale differences in microtopography limit the ability to comprehend local-scale controls on regional- to global-scale patterns, which is an important factor in characterizing IWPs across varying Arctic tundra areas [62].

5. Conclusions

Here we presented a deep learning CNN-based high-performance mapping application for permafrost environments to automatically characterize ice-wedge polygons from VHSR commercial satellite imagery across three common tundra vegetation types. The DL model exhibited promising performances (detection accuracies of 89% to 96% and classification accuracies of 94% to 97%) across the heterogeneous tundra regions. Consideration of contextual information (e.g., edges, vegetation, shape, area, and the consistency of feature distributions) increased the reliability of the model classification and helped generalize the DL model across tundra vegetation types. Complex topography plays a vital role in controlling the spatial variation in image scenes. In this exploratory study, we used varying tundra types (sedge, tussock sedge, and non-tussock sedge) and image scene complexities to refine the understanding of opportunities and challenges for regional-scale mapping applications. However, Arctic tundra includes additional vegetation types; the model may therefore be biased when applied to other tundra vegetation types, such as prostrate dwarf-shrub, herb, lichen tundra; rush/grass, forb, cryptogam tundra; or graminoid, prostrate dwarf-shrub, forb tundra. In the future, this experiment can be extended to more diverse tundra landscapes, such as graminoid- and shrub-dominated vegetation cover types, to systematically gauge the improvement of the DL model prediction accuracies.
Efforts to further refine model prediction accuracies could include (a) increasing the variability of training samples with additional annotated IWPs from a larger set of tundra vegetation types, and (b) exploring more sophisticated image pre-processing steps, such as different data fusion (pansharpening) approaches. Such model improvements may produce more pronounced IWP edge information and, therefore, improve the DL model prediction accuracies.

Author Contributions

M.A.E.B. designed the study, performed the experiments, and led the manuscript writing and revision. C.W. led the overall project, co-designed the study, and contributed to the development of the paper. A.K.L. co-designed the study and contributed to the interpretation of results and the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the U.S. National Science Foundation Grant Nos.: 1927723, 1927872, 1720875, 1722572, and 1721030. Supercomputing resources were provided by the eXtreme Science and Engineering Discovery Environment (Award No. DPP190001).

Acknowledgments

We would like to thank Torre Jorgenson and Mikhail Kanevskiy for their domain expertise on permafrost feature classification. Geospatial support for this work was provided by the Polar Geospatial Center under NSF-OPP awards 1043681 and 1559691. We also value the contribution of Amber Agnew in training data production.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Black, R.F. Permafrost: A review. Geol. Soc. Am. Bull. 1954, 65, 839–856. [Google Scholar] [CrossRef]
  2. Steedman, A.E.; Lantz, T.C.; Kokelj, S.V. Spatio-temporal variation in high-centre polygons and ice-wedge melt ponds, Tuktoyaktuk coastlands, Northwest Territories. Permafr. Periglac. Process. 2017, 28, 66–78. [Google Scholar] [CrossRef]
  3. Lachenbruch, A.H. Mechanics of Thermal Contraction Cracks and Ice-Wedge Polygons in Permafrost; Geological Society of America: Boulder, CO, USA, 1962; Volume 70. [Google Scholar]
  4. Dostovalov, B.N.; Popov, A.I. Polygonal systems of ice-wedges and conditions of their development. In Proceedings of the Permafrost International Conference, Lafayette, IN, USA, 11–15 November 1963. [Google Scholar]
  5. Liljedahl, A.K.; Boike, J.; Daanen, R.P.; Fedorov, A.N.; Frost, G.V.; Grosse, G.; Hinzman, L.D.; Iijma, Y.; Jorgenson, J.C.; Matveyeva, N.; et al. Pan-Arctic ice-wedge degradation in warming permafrost and its influence on tundra hydrology. Nat. Geosci. 2016, 9, 312. [Google Scholar] [CrossRef]
  6. Witharana, C.; Bhuiyan, M.A.E.; Liljedahl, A.K. Big Imagery and high-performance computing as resources to understand changing Arctic polygonal tundra. Int. Arch. Photogramm. 2020, 44, 111–116. [Google Scholar]
  7. Witharana, C.; Bhuiyan, M.A.E.; Liljedahl, A.K.; Kanevskiy, M.; Epstein, H.E.; Jones, B.M.; Daanen, R.; Griffin, C.G.; Kent, K.; Jones, M.K.W. Understanding the synergies of deep learning and data fusion of multispectral and panchromatic high resolution commercial satellite imagery for automated ice-wedge polygon detection. ISPRS J. Photogramm. Remote Sens. 2020, 170, 174–191. [Google Scholar] [CrossRef]
  8. Lousada, M.; Pina, P.; Vieira, G.; Bandeira, L.; Mora, C. Evaluation of the use of very high resolution aerial imagery for accurate ice-wedge polygon mapping (Adventdalen, Svalbard). Sci. Total Environ. 2018, 615, 1574–1583. [Google Scholar] [CrossRef]
  9. Mahmoudi, F.T.; Samadzadegan, F.; Reinartz, P. Object recognition based on the context aware decision-level fusion in multiviews imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 812–822. [Google Scholar]
  10. Zhang, W.; Witharana, C.; Liljedahl, A.K.; Kanevskiy, M. Deep Convolutional Neural Networks for Automated Characterization of Arctic Ice-Wedge Polygons in Very High Spatial Resolution Aerial Imagery. Remote Sens. 2018, 10, 1487. [Google Scholar] [CrossRef] [Green Version]
  11. Zhang, W.; Liljedahl, A.K.; Kanevskiy, M.; Epstein, H.E.; Jones, B.M.; Jorgenson, M.T.; Kent, K. Transferability of the Deep Learning Mask R-CNN Model for Automated Mapping of Ice-Wedge Polygons in High-Resolution Satellite and UAV Images. Remote Sens. 2020, 12, 1085. [Google Scholar] [CrossRef] [Green Version]
  12. Bhuiyan, M.A.E.; Witharana, C.; Liljedahl, A.K. Harnessing Commercial Satellite Imagery, Artificial Intelligence, and High Performance Computing to Characterize Ice-wedge Polygonal Tundra. In Proceedings of the AGU Fall Meeting 2020, San Francisco, CA, USA, 1–17 December 2020. [Google Scholar]
  13. Witharana, C.; Bhuiyan, M.A.E.; Liljedahl, A.K.; Kanevskiy, M.Z.; Jorgenson, T.; Jones, B.M.; Daanen, R.P.; Epstein, H.E.; Griffin, C.G.; Kent, K.; et al. Automated Mapping of Ice-wedge Polygon Troughs in the Continuous Permafrost Zone using Commercial Satellite Imagery. In Proceedings of the AGU Fall Meeting 2020, San Francisco, CA, USA, 1–17 December 2020. [Google Scholar]
  14. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; Van der Meer, F.; Van der Werff, H.; Van Coillie, F.; et al. Geographic object-based image analysis–towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [Green Version]
  15. Laliberte, A.; Rango, A.; Havstad, K.; Paris, J.; Beck, R.; McNeely, R.; Gonzalez, A. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sens. Environ. 2004, 93, 198–210. [Google Scholar] [CrossRef]
  16. Jones, M.K.W.; Pollard, W.H.; Jones, B.M. Rapid initialization of retrogressive thaw slumps in the Canadian high Arctic and their response to climate and terrain factors. Environ. Res. Lett. 2019, 14, 055006. [Google Scholar] [CrossRef]
  17. Muster, S.; Heim, B.; Abnizova, A.; Boike, J. Water body distributions across scales: A remote sensing based comparison of three arctic tundra wetlands. Remote Sens. 2013, 5, 1498–1523. [Google Scholar] [CrossRef] [Green Version]
  18. Skurikhin, A.N.; Wilson, C.J.; Liljedahl, A.; Rowland, J.C. Recursive active contours for hierarchical segmentation of wetlands in high-resolution satellite imagery of arctic landscapes. In Proceedings of the Southwest Symposium on Image Analysis and Interpretation, San Diego, CA, USA, 6–8 April 2014; pp. 137–140. [Google Scholar]
  19. Ulrich, M.; Grosse, G.; Strauss, J.; Schirrmeister, L. Quantifying wedge-ice volumes in Yedoma and thermokarst basin deposits. Permafr. Periglac. Process. 2014, 25, 151–161. [Google Scholar] [CrossRef] [Green Version]
  20. Wallach, I.; Dzamba, M.; Heifets, A. AtomNet: A deep convolutional neural network for bioactivity prediction in structure-based drug discovery. arXiv 2015, arXiv:1510.02855. [Google Scholar]
  21. Unterthiner, T.; Mayr, A.; Klambauer, G.; Hochreiter, S. Toxicity Prediction Using Deep Learning. arXiv 2015, arXiv:1503.01445. Available online: https://arxiv.org/abs/1503.01445 (accessed on 11 December 2020).
  22. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434. [Google Scholar]
  23. Wei, X.; Fu, K.; Gao, X.; Yan, M.; Sun, X.; Chen, K.; Sun, H. Semantic pixel labelling in remote sensing images using a deep convolutional encoder-decoder model. Remote Sens. Lett. 2018, 9, 199–208. [Google Scholar] [CrossRef]
  24. Yang, J.; Zhu, Y.; Jiang, B.; Gao, L.; Xiao, L.; Zheng, Z. Aircraft detection in remote sensing images based on a deep residual network and super-vector coding. Remote Sens. Lett. 2018, 9, 228–236. [Google Scholar] [CrossRef]
  25. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.; Mohamed, A.; Jaitly, N.; Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  26. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Xing, F.; Xie, Y.; Su, H.; Liu, F.; Yang, L. Deep learning in microscopy image analysis: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 4550–4568. [Google Scholar] [CrossRef] [PubMed]
  28. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. arXiv 2014, arXiv:1408.5093. [Google Scholar]
  29. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition; CoRR: Ithaca, NY, USA, 2015. [Google Scholar]
  30. Cheng, G.; Yang, C.; Yao, X.; Guo, L.; Han, J. When deep learning meets metric learning: Remote sensing image scene classification via learning discriminative CNNs. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2811–2821. [Google Scholar] [CrossRef]
  31. Han, W.; Feng, R.; Wang, L.; Cheng, Y. A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. ISPRS J. Photogramm. Remote Sens. 2017, 145, 23–43. [Google Scholar] [CrossRef]
  32. Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 937–949. [Google Scholar] [CrossRef]
  33. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  34. Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173. [Google Scholar] [CrossRef]
  35. Shao, Z.; Cai, J. Remote sensing image fusion with deep convolutional neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1656–1669. [Google Scholar] [CrossRef]
  36. Liu, Y.; Nguyen, D.M.; Deligiannis, N.; Ding, W.; Munteanu, A. Hourglass-shape network based semantic segmentation for high resolution aerial imagery. Remote Sens. 2017, 9, 522. [Google Scholar] [CrossRef] [Green Version]
  37. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE International Conference Computer Vision and Pattern Recognition CVPR, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  38. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef] [Green Version]
  39. Wang, S.; Quan, D.; Liang, X.; Ning, M.; Guo, Y.; Jiao, L. A deep learning framework for remote sensing image registration. ISPRS J. Photogramm. Remote Sens. 2018, 145, 148–164. [Google Scholar] [CrossRef]
  40. Gkioxari, G.; Girshick, R.; Malik, J. Actions and attributes from wholes and parts. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2470–2478. [Google Scholar]
  41. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  42. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  43. Dai, J.; Li, Y.; He, K.; Sun, J. R-fcn: Object detection via region-based fully convolutional networks. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 379–387. [Google Scholar]
  44. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  45. Abdulla, W. Mask R-Cnn for Object Detection and Instance Segmentation on Keras and Tensorflow. GitHub. Repos. 2017. Available online: https://github.com/matterport/Mask_RCNN (accessed on 10 December 2020).
  46. Vuola, A.O.; Akram, S.U.; Kannala, J. Mask-RCNN and U-net ensembled for nuclei segmentation. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 208–212. [Google Scholar]
  47. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Berlin, Germany; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef]
  48. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft Coco: Common Objects in Context. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
  49. Burnett, C.; Blaschke, T. A multi-scale segmentation/object relationship modeling methodology for landscape analysis. Ecol. Model. 2003, 168, 233–249. [Google Scholar] [CrossRef]
  50. Hay, G.J. Visualizing Scale-Domain Manifolds: A Multiscale Geo-Object-Based Approach. In Scale Issues in Remote Sensing; Wiley: Hoboken, NJ, USA, 2014; pp. 139–169. [Google Scholar]
  51. Lara, M.J.; Nitze, I.; Grosse, G.; McGuire, A.D. Tundra landform and vegetation productivity trend maps for the Arctic Coastal Plain of northern Alaska. Sci. Data 2018, 5, 180058. [Google Scholar] [CrossRef] [PubMed]
  52. Greaves, H.E.; Eitel, J.U.; Vierling, L.A.; Boelman, N.T.; Griffin, K.L.; Magney, T.S.; Prager, C.M. 20 cm resolution mapping of tundra vegetation communities provides an ecological baseline for important research areas in a changing Arctic environment. Environ. Res. Commun. 2019, 10, 105004. [Google Scholar] [CrossRef] [Green Version]
  53. Walker, D.A.; Daniëls, F.J.A.; Matveyeva, N.V.; Šibík, J.; Walker, M.D.; Breen, A.L.; Druckenmiller, L.A.; Raynolds, M.K.; Bültmann, H.; Hennekens, S.; et al. Circumpolar arctic vegetation classification. Phytocoenologia 2018, 48, 181–201. [Google Scholar] [CrossRef]
  54. Raynolds, M.K.; Walker, D.A.; Balser, A.; Bay, C.; Campbell, M.; Cherosov, M.M.; Daniëls, F.J.; Eidesen, P.B.; Ermokhina, K.A.; Frost, G.V.; et al. A raster version of the Circumpolar Arctic Vegetation Map (CAVM). Remote Sens. Environ. 2019, 232, 111297. [Google Scholar] [CrossRef]
  55. Walker, D.A.; Raynolds, M.K.; Daniëls, F.J.; Einarsson, E.; Elvebakk, A.; Gould, W.A.; Katenin, A.E.; Kholod, S.S.; Markon, C.J.; Melnikov, E.S.; et al. The circumpolar Arctic vegetation map. J. Veg. Sci. 2005, 16, 267–282. [Google Scholar] [CrossRef]
  56. Shur, Y.; Kanevskiy, M.; Jorgenson, T.; Dillon, M.; Stephani, E.; Bray, M. Permafrost degradation and thaw settlement under lakes in yedoma environment. In Proceedings of the Tenth International Conference on Permafrost, Salekhard, Russia, 25–29 June 2012; Hinkel, K.M., Ed.; The Northern Publisher: Salekhard, Russia, 2012. [Google Scholar]
  57. Quezel, P. Les grandes structures de végétation en région méditerranéenne: Facteurs déterminants dans leur mise en place post-glaciaire. Geobios 1999, 32, 19–32. [Google Scholar] [CrossRef]
  58. Hellesen, T.; Matikainen, L. An object-based approach for mapping shrub and tree cover on grassland habitats by use of LiDAR and CIR orthoimages. Remote Sens. 2013, 5, 558–583. [Google Scholar] [CrossRef] [Green Version]
  59. Bhuiyan, M.A.E.; Witharana, C.; Liljedahl, A.K.; Jones, B.M.; Daanen, R.; Epstein, H.E.; Kent, K.; Griffin, C.G.; Agnew, A. Understanding the Effects of Optimal Combination of Spectral Bands on Deep Learning Model Predictions: A Case Study Based on Permafrost Tundra Landform Mapping Using High Resolution Multispectral Satellite Imagery. J. Imaging 2020, 6, 97. [Google Scholar] [CrossRef]
  60. Walker, D.A. Toward a new circumpolar arctic vegetation map. Arct. Alp. Res. 1995, 31, 169–178. [Google Scholar]
  61. Davidson, S.J.; Santos, M.J.; Sloan, V.L.; Watts, J.D.; Phoenix, G.K.; Oechel, W.C.; Zona, D. Mapping Arctic tundra vegetation communities using field spectroscopy and multispectral satellite data in North Alaska, USA. Remote Sens. 2016, 8, 978. [Google Scholar] [CrossRef] [Green Version]
  62. Galleguillos, C.; Belongie, S. Context based object categorization: A critical survey. Comput. Vis. Image Underst. 2010, 114, 712–722. [Google Scholar] [CrossRef] [Green Version]
  63. Guo, J.; Zhou, H.; Zhu, C. Cascaded classification of high resolution remote sensing images using multiple contexts. Inf. Sci. 2013, 221, 84–97. [Google Scholar] [CrossRef]
  64. Chuvieco, E. Fundamentals of Satellite Remote Sensing: An Environmental Approach; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  65. Bovolo, F.; Bruzzone, L. A detail-preserving scale-driven approach to change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2963–2972. [Google Scholar] [CrossRef]
  66. Inamdar, S.; Bovolo, F.; Bruzzone, L.; Chaudhuri, S. Multidimensional probability density function matching for preprocessing of multitemporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1243–1252. [Google Scholar] [CrossRef]
  67. Pitié, F.; Kokaram, A.; Dahyot, R. N-dimensional probability density function transfer and its application to color transfer. In Proceedings of the IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1434–1439. [Google Scholar]
  68. Pitié, F.; Kokaram, A.; Dahyot, R. Automated colour grading using colour distribution transfer. Comput. Vis. Image Underst. 2007, 107, 123–137. [Google Scholar] [CrossRef]
  69. Jhan, J.P.; Rau, J.Y. A normalized surf for multispectral image matching and band co-registration. In International Archives of the Photogrammetry; Remote Sensing & Spatial Information Sciences: Enschede, The Netherlands, 2019. [Google Scholar]
  70. Gidaris, S. Effective and Annotation Efficient Deep Learning for Image Understanding. Ph.D. Thesis, Université Paris-Est, Marne-la-Vallée, France, 2018. [Google Scholar]
  71. Bhuiyan, M.A.; Nikolopoulos, E.I.; Anagnostou, E.N. Machine Learning–Based Blending of Satellite and Reanalysis Precipitation Datasets: A Multiregional Tropical Complex Terrain Evaluation. J. Hydrometeor. 2019, 20, 2147–2161. [Google Scholar] [CrossRef]
  72. Bhuiyan, M.A.; Begum, F.; Ilham, S.J.; Khan, R.S. Advanced wind speed prediction using convective weather variables through machine learning application. Appl. Comput. Geosci. 2019, 1, 100002. [Google Scholar]
  73. Diamond, S.; Sitzmann, V.; Boyd, S.; Wetzstein, G.; Heide, F. Dirty pixels: Optimizing image classification architectures for raw sensor data. arXiv 2017, arXiv:1701.06487. [Google Scholar]
  74. Borkar, T.S.; Karam, L.J. DeepCorrect: Correcting DNN models against image distortions. IEEE Trans. Image Process. 2019, 28, 6022–6034. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Ghosh, S.; Shet, R.; Amon, P.; Hutter, A.; Kaup, A. Robustness of Deep Convolutional Neural Networks for Image Degradations. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2916–2920. [Google Scholar]
  76. Buchhorn, M.; Walker, D.A.; Heim, B.; Raynolds, M.K.; Epstein, H.E.; Schwieder, M. Ground-based hyperspectral characterization of Alaska tundra vegetation along environmental gradients. Remote Sens. 2013, 5, 3971–4005. [Google Scholar] [CrossRef] [Green Version]
  77. Bratsch, S.N.; Epstein, E.; Bucchorn, M.; Walker, D.A. Differentiating among four Arctic tundra plant communities at Ivotuk, Alaska using field spectroscopy. Remote Sens. 2016, 8, 51. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Geographical locations for training and independent validation sites: (a) training sites from Russia, Canada, and Alaska, and (b) independent validation sites from Alaska. Tundra vegetation map and the legend are adapted from [54]. Satellite imagery Copyright DigitalGlobe, Inc.
Figure 2. Simplified schematic of the automated ice-wedge polygon mapping workflow (left) and the general architecture of the Mask R-CNN algorithm (right).
Figure 3. Training and validation loss of Mask R-CNN model (a) Smooth-L1 loss; (b) Mask R-CNN bounding box loss; (c) Mask R-CNN classifier loss; (d) Mask binary cross-entropy loss; (e) RPN bounding box loss; (f) RPN classifier loss.
Figure 4. Zoomed-in views of (a) original imagery, (b) ground truth (manual delineation, blue outline) and (c) model result (yellow outline) for candidate scene V4. Imagery © [2016] DigitalGlobe, Inc.
Figure 5. Sample views of original imagery (left) and model classification (right) for candidate scenes. Yellow outlines denote automatically detected IWPs. Imagery © [2010, 2012, 2015, 2016] DigitalGlobe, Inc. Satellite image scenes are obtained from different tundra regions: (a) non-tussock sedge tundra; (b) non-tussock sedge tundra; (c) sedge tundra; (d) tussock sedge tundra.
Table 1. General characteristics of four candidate image scenes.

Site | Sensor | Acquisition Date | Image Off-Nadir | Sun Elevation | Azimuth
V1 | WorldView-2 | 07/29/2010 | 38.6° | 35.8° | 135.5°
V2 | WorldView-2 | 07/04/2012 | 27.2° | 42.2° | 47.6°
V3 | WorldView-2 | 07/05/2015 | 15.4° | 42.4° | 203.8°
V4 | WorldView-2 | 09/03/2016 | 25.9° | 27.8° | 207.6°
Table 2. Different tundra types for training and validation sites.

Dataset | Study Site | Dominant Tundra
Training | Russia (T1) | G1: Rush/grass, forb, cryptogam tundra; G2: Graminoid, prostrate dwarf-shrub, forb tundra; P1: Prostrate dwarf-shrub, herb, lichen tundra; P2: Prostrate/hemiprostrate dwarf-shrub
Training | Alaska (T2) | G4: Tussock-sedge, dwarf-shrub, moss tundra
Training | Canada (T3) | G4: Tussock-sedge, dwarf-shrub, moss tundra; G3: Non-tussock sedge, dwarf-shrub, moss tundra; W2: Sedge-wetland complexes
Validation | Alaska (V1) | G3: Non-tussock sedge, dwarf-shrub, moss tundra; W2: Sedge-wetland complexes
Validation | Alaska (V2) | G3: Non-tussock sedge, dwarf-shrub, moss tundra; W2: Sedge-wetland complexes
Validation | Alaska (V3) | W2: Sedge-wetland complexes
Validation | Alaska (V4) | G4: Tussock-sedge, dwarf-shrub, moss tundra
Table 3. Summary statistics of mean intersection over union (mIoU) for candidate image scenes.

Validation Site | mIoU
V1 | 0.91
V2 | 0.87
V3 | 0.86
V4 | 0.85
Table 4. Accuracy assessment of detection for candidate image scenes.

Validation Site | Number of Reference Polygons | Correctness | Completeness | F1 Score
V1 | 582 | 0.99 | 89% | 0.96
V2 | 567 | 1.00 | 85% | 0.94
V3 | 579 | 1.00 | 83% | 0.92
V4 | 573 | 1.00 | 81% | 0.89
Table 5. Accuracy assessment of classification for candidate image scenes.

Validation Site | Number of Reference Polygons | Correctness | Completeness | F1 Score
V1 | 582 | 0.98 | 99% | 0.97
V2 | 567 | 0.99 | 96% | 0.95
V3 | 579 | 0.98 | 97% | 0.96
V4 | 573 | 0.99 | 95% | 0.94
Table 6. Absolute mean relative error (AMRE) for candidate scenes.

Validation Site | AMRE
V1 | 0.17
V2 | 0.18
V3 | 0.21
V4 | 0.23
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
