Article

Coastal Aquaculture Mapping from Very High Spatial Resolution Imagery by Combining Object-Based Neighbor Features

1 Institute of Agricultural Remote Sensing and Information Technology, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058, China
2 Department of Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou 310058, China
3 Zhejiang Mariculture Research Institute, Wenzhou 325005, China
* Author to whom correspondence should be addressed.
Sustainability 2019, 11(3), 637; https://doi.org/10.3390/su11030637
Submission received: 20 December 2018 / Revised: 19 January 2019 / Accepted: 22 January 2019 / Published: 26 January 2019

Abstract

Coastal aquaculture plays an important role in the provision of seafood, the sustainable development of regional and global economies, and the protection of coastal ecosystems. Inappropriate planning of disordered and intensive coastal aquaculture, however, may cause serious environmental problems and socioeconomic losses. Precise delineation and classification of different kinds of aquaculture areas are therefore vital for coastal management, yet coastal aquaculture areas are difficult to extract using conventional spectral, shape, or texture information. Here, we propose an object-based method that combines multi-scale segmentation and object-based neighbor features to delineate existing coastal aquaculture areas. We adopted multi-scale segmentation to generate semantically meaningful image objects for different land cover classes and then utilized object-based neighbor features for classification. Our results show that the proposed approach effectively identified different types of coastal aquaculture areas, with 96% overall accuracy, and achieved considerably higher classification accuracy than conventional methods (e.g., single-scale classification with conventional features). The results also suggest that multi-scale segmentation and neighbor features markedly improve the classification performance for the extraction of cage culture areas and raft culture areas, respectively. Our approach lays a solid foundation for intelligent monitoring and management of coastal ecosystems.

1. Introduction

Marine aquaculture, particularly coastal aquaculture, offers important potential for food production and the sustainable development of many coastal areas, especially in regions with limited land, coastal space, and freshwater resources [1,2]. However, inappropriate planning of disordered and intensive coastal aquaculture may cause serious environmental problems and socioeconomic losses. The potential issues include water pollution [3,4], impacts on the surrounding sediment [5], damage to coral reefs, and biodiversity loss in coastal sea areas [6]. Globally, from 2000 to 2016, the total production of marine aquaculture increased from 14.2 to 28.7 million tons [7,8]. As the largest producer, China has contributed more than 50% of global aquaculture production since 1991 [8]. In China, a series of laws and regulations at the national and local levels, such as the Marine Environmental Protection Law, overall marine functional zonation, and nature reserve schemes, have been formulated for the management of coastal areas. However, comprehensive coastal management in China remains a major challenge. The monitoring and management of coastal aquaculture are therefore imperative to ensure the sustainable development of the marine aquaculture industry.
Remote sensing technology has substantially improved our ability to observe remote or inaccessible areas at a fraction of the cost of traditional surveys [9]. Mapping and monitoring of aquaculture facilities provide decision-makers with important baseline data on production, cultivated area boundaries, and environmental impacts [10,11]. Remote sensing can provide consistent and wide-range monitoring using various sensors to support aquaculture management [12,13,14], which means the mapping of aquaculture facilities can be performed accurately and periodically at selected scales.
Previous studies have used visual interpretation, spatial structure enhancement, and object-based image analysis (OBIA) to extract coastal aquaculture areas from remotely sensed imagery. Although visual interpretation can achieve the highest accuracy, it is time- and labor-intensive and is therefore rarely used now. Spatial structure enhancement techniques, such as neighborhood and texture analysis, are the most commonly adopted methods in the classification process [15,16]. OBIA has become a widely used method over the past decades, especially for the classification of very high spatial resolution (VHSR) imagery [17]. The basic processing unit in OBIA is the image object, generated by grouping pixels with similar features rather than treating each pixel individually [18]. OBIA thus avoids the “salt-and-pepper” effects of pixel-based methods [19] and performs better with VHSR imagery [20].
However, due to the complexity of the sea environment, coastal aquaculture areas often differ greatly from one another in traditional spectral, shape, and texture features. In addition, they may be spectrally confused with the surrounding water in VHSR imagery. As a result, accurate classification is much harder when it relies only on the information of individual segments. Furthermore, objects of different scales, such as islands and watercraft of various sizes, coexist in the sea area. These unfavorable factors challenge accurate coastal aquaculture mapping from VHSR imagery.
More recent studies highlight a new trend that uses multi-scale segmentation and spatial contextual information for classification. Kim et al. [21] used an OBIA method with VHSR aerial imagery (0.3 m spatial resolution) to extract marsh vegetation, channels, and bare mud from a salt marsh area; their results suggest that the multi-scale OBIA method produced the highest classification accuracy. Zhou and Troy [22] developed a multi-scale object-based classification method, which provided an effective and flexible way to classify and inventory forest cover from digital aerial imagery (0.6 m spatial resolution). Liu et al. [23] investigated the shape of segments and the topological relations among them, and proposed a framework for the extraction of roads and moving vehicles from an aerial photo (0.3 m spatial resolution). Zheng et al. [24] extracted different types of rural settlements by combining a multi-scale segmentation strategy with a landscape analysis method, achieving high classification accuracy. Wang et al. [25] proposed the region-line primitive association framework (RLPAF) for discriminating raft aquaculture areas, built mainly on the direction and topology relationships between region and line primitives.
Here, we present a framework to solve the problems mentioned above by synthesizing multi-scale segmentation and spatial contextual information. We adopted a multi-scale segmentation strategy to generate semantically meaningful image objects for different land cover classes. Subsequently, we utilized object-based neighbor features to extract coastal aquaculture areas in the complex sea environment. We employed neighbor features because almost all coastal aquaculture areas appear darker or brighter than the surrounding sea water; such features are therefore more robust than statistics of spectrum, shape, or texture computed on a single image segment. The main objective of this paper is to establish a framework and methodology to extract different types of existing coastal aquaculture areas by exploiting object-based neighbor information.

2. Study Area

We selected a coastal area of approximately 110 km² around Sandu Island as the study area, located in Ningde City, Fujian Province, China (119°40′54″E, 26°39′21″N, Figure 1). The location offers unique hydrological and geographical conditions: a semi-enclosed natural harbor with several small islands at its mouth weakens typhoons and accumulates nutrients in the seawater, providing an ideal environment for coastal aquaculture. Consequently, a large number of aquaculture areas, mainly raft culture areas (RCA) and cage culture areas (CCA), have developed in this region (Figure 2).
CCA are composed of numerous fish cages and several simply constructed accommodation units. The materials used for these cages include easily obtained wooden boards, bamboo, nylon nets, and foam. Almost none of the CCA are assembly-line products built to engineering standards, so their structures vary and they present very complex and diverse spectral, shape, and texture characteristics.
RCA are generally widely and sparsely distributed, and are cultivated with agar or kelp. The plants are attached to cultivation belts that are fixed on rope-linked styrofoam floats. The color and shape of RCA therefore differ greatly from one another owing to the varying density of the cultivation belts. In addition, the complex sea environment, such as the unstable and irregular distribution of waves and silt, makes the extraction of RCA difficult.

3. Materials and Methods

Figure 3 shows the overall methodological framework of this research. Following preprocessing and pan-sharpening, the land surface, which was not classified in this study, was first masked using a rule-based approach. We obtained the optimal features and thresholds by applying an automatic feature selection method. Subsequently, we performed a coarser-scale segmentation of the sea area. Based on the coarser-scale segments, we separated the submerged and unsubmerged areas; this separation was performed so that semantically meaningful objects could be created for CCA in the unsubmerged area and for RCA in the submerged area. After that, we defined and calculated several object-based neighbor features from the multi-scale segments. We then conducted the final classification to identify the RCA and CCA by combining these neighbor features. To verify whether our proposed multi-scale based neighbor information classification (MNIC) method can effectively improve classification accuracy, we adopted a single-scale based conventional information classification (SCIC) scheme for comparison. In addition, we studied the effects of multi-scale segmentation and neighbor features separately by using the multi-scale based conventional information classification (MCIC) method and the single-scale based neighbor information classification (SNIC) method, respectively.

3.1. Data and Preprocessing

We selected the WorldView-2 (WV-2) image as the data source because of its very high spatial resolution compared with similar satellites, such as IKONOS, GF-2, QuickBird, and GeoEye-1. WV-2 is the first commercial satellite that provides eight multispectral bands with sub-meter resolution. It delivers a spatial resolution of approximately 2 m for eight multispectral (MSS) bands: coastal (400–450 nm), blue (450–510 nm), green (510–580 nm), yellow (585–625 nm), red (630–690 nm), red edge (705–745 nm), near infrared-1 (770–895 nm), and near infrared-2 (860–1040 nm), as well as a panchromatic band (PAN, 450–800 nm) at about 0.5 m spatial resolution [26]. A WV-2 image of the study area was acquired on 20 May 2011 under cloud-free and haze-free atmospheric conditions over the whole aquaculture area, so atmospheric correction was not necessary in the preprocessing step [27]. The MSS and PAN images were orthorectified into the Universal Transverse Mercator (UTM) projection system and fused using the Gram–Schmidt pan-sharpening method in ENVI (v5.1, Exelis Visual Information Solutions, Boulder, CO, USA, 2014).
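To illustrate the fusion step, the sketch below outlines the general Gram–Schmidt pan-sharpening procedure in NumPy. It is a simplified sketch, not ENVI's implementation: equal band weights for the simulated panchromatic band, bilinear upsampling, and an exact integer MS-to-PAN resolution ratio are assumptions made for illustration.

```python
# Simplified sketch of Gram-Schmidt pan-sharpening (after Laben & Brower); NOT ENVI's
# exact implementation. Equal band weights, bilinear upsampling, and an exact integer
# MS-to-PAN resolution ratio are illustrative assumptions.
import numpy as np
from scipy.ndimage import zoom


def gs_pansharpen(ms, pan):
    """ms: (bands, h, w) multispectral array; pan: (h*r, w*r) panchromatic array."""
    n_bands, h, w = ms.shape
    r = pan.shape[0] // h                       # resolution ratio (assumed exact integer)

    # 1. Simulate a low-resolution panchromatic band from the MS bands.
    sim_pan = ms.mean(axis=0)

    # 2. Forward Gram-Schmidt transform on mean-centred bands (simulated pan first).
    bands = [sim_pan] + [ms[i] for i in range(n_bands)]
    means = [b.mean() for b in bands]
    gs = [bands[0] - means[0]]
    coeffs = []                                  # projection coefficients for the inverse
    for i in range(1, len(bands)):
        centred = bands[i] - means[i]
        phis = [(centred * g).mean() / (g * g).mean() for g in gs]
        gs.append(centred - sum(phi * g for phi, g in zip(phis, gs)))
        coeffs.append(phis)

    # 3. Match the real PAN band to the simulated one, substitute it for the first GS
    #    component, and upsample the remaining components to PAN resolution.
    pan_matched = (pan - pan.mean()) / pan.std() * gs[0].std()
    comps = [pan_matched] + [zoom(g, r, order=1) for g in gs[1:]]

    # 4. Inverse Gram-Schmidt transform at PAN resolution.
    out = np.empty((n_bands,) + pan.shape)
    for i in range(n_bands):
        out[i] = comps[i + 1] + sum(phi * comps[j] for j, phi in enumerate(coeffs[i])) + means[i + 1]
    return out
```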

3.2. Land and Sea Areas Separation

Following preprocessing and pan-sharpening, we adopted the widely used multiresolution segmentation (MRS) algorithm [28], as implemented in the eCognition software (v9.0, Trimble Germany GmbH, Munich, Germany, 2014), to produce semantically meaningful image objects. This algorithm is based on a bottom-up region-merging approach and is mainly controlled by three key parameters: scale parameter (SP), shape, and compactness.
As the land area occupies a large and continuous area with high heterogeneity, we determined the optimal SP by trial-and-error optimization. The ideal segmentation result should have clear boundaries, high separability between classes, and enough segments for the selection of representative samples. We tested eight different SPs (scales: 100, 300, 700, 1000, 3000, 5000, 7000, and 9000) to obtain a segmentation that could help discriminate between land and sea areas. The shape criterion was given low weight (0.1) to accommodate the varied shapes of the coastline, and compactness was set to 0.5 to weight compactness and smoothness equally.
Based on the segmentation results, we used a total of 34 spectral and geometrical features (Table 1) for analysis. We then employed the SEparability and THresholds (SEaTH) method [29] to find an optimal feature that can effectively discriminate between land and sea areas. In this method, the Jeffries–Matusita distance J measures the separability between two classes on a scale of 0–2. J = 2 indicates complete discrimination: based on the selected samples, the two classes can be separated without misclassification using the selected feature, whereas the number of misclassified objects increases as J decreases. It is calculated as:
$$B = \frac{1}{8}\left(m_1 - m_2\right)^2 \frac{2}{\sigma_1^2 + \sigma_2^2} + \frac{1}{2}\ln\left[\frac{\sigma_1^2 + \sigma_2^2}{2\sigma_1\sigma_2}\right]$$
$$J = 2\left(1 - e^{-B}\right)$$
where mi and σi (i = 1, 2) are the mean and standard deviation of the feature distributions of the land and sea classes, respectively.
After selecting the optimal feature (that with the highest J value), the threshold T for separating the two classes is calculated as:
$$A = \ln\left(\frac{\sigma_1}{\sigma_2}\times\frac{n_2}{n_1}\right)$$
$$T = \frac{m_2\sigma_1^2 - m_1\sigma_2^2 \pm \sigma_1\sigma_2\sqrt{\left(m_1 - m_2\right)^2 + 2A\left(\sigma_1^2 - \sigma_2^2\right)}}{\sigma_1^2 - \sigma_2^2}$$
where ni (i = 1, 2) are the numbers of land and sea samples, respectively.
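As an illustration of this procedure, the following sketch computes J and T for a single candidate feature with NumPy. The per-object feature arrays for the land and sea training samples are hypothetical inputs, and the implementation assumes unequal class standard deviations.

```python
# Illustrative sketch of the SEaTH separability and threshold computation for one
# feature, following the formulas above; the input arrays are hypothetical per-object
# feature values for the land and sea training samples (assumes sigma1 != sigma2).
import numpy as np


def seath(feature_land, feature_sea):
    m1, s1, n1 = feature_land.mean(), feature_land.std(ddof=1), feature_land.size
    m2, s2, n2 = feature_sea.mean(), feature_sea.std(ddof=1), feature_sea.size

    # Bhattacharyya distance B and Jeffries-Matusita separability J (0-2).
    B = ((m1 - m2) ** 2 / 8.0) * (2.0 / (s1 ** 2 + s2 ** 2)) \
        + 0.5 * np.log((s1 ** 2 + s2 ** 2) / (2.0 * s1 * s2))
    J = 2.0 * (1.0 - np.exp(-B))

    # Decision threshold T between the two Gaussian class models; of the two roots,
    # keep the one lying between the class means (assumed to exist for a usable feature).
    A = np.log((s1 / s2) * (n2 / n1))
    root = s1 * s2 * np.sqrt((m1 - m2) ** 2 + 2.0 * A * (s1 ** 2 - s2 ** 2))
    candidates = [((m2 * s1 ** 2 - m1 * s2 ** 2) + sign * root) / (s1 ** 2 - s2 ** 2)
                  for sign in (+1.0, -1.0)]
    T = next(t for t in candidates if min(m1, m2) <= t <= max(m1, m2))
    return J, T


# Hypothetical usage: evaluate each candidate feature, keep the one with the largest J,
# and use its threshold T in the classification rule.
# J, T = seath(land_samples_mean_layer_6, sea_samples_mean_layer_6)
```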

3.3. Two-Level Hierarchical Segmentation

After the separation of land and sea areas, we applied a two-level hierarchical segmentation scheme in the sea area to generate coarser and finer scale image objects, intended to represent semantically meaningful objects of CCA and RCA, respectively. Instead of the commonly used trial-and-error process, we adopted an objective method, the Estimation of Scale Parameter (ESP) tool [30], to select candidate SPs. The ESP tool iteratively performs segmentation at fixed step sizes and calculates the local variance (LV) for each scale. Figure 4 plots the LV values against the corresponding SPs. Peaks in the LV curve indicate appropriate SPs, at which the segments are expected to represent semantically meaningful objects characterized by relatively equal degrees of homogeneity. The graph shows that the scale of 144 marks an obvious sharp break after a continuous and abrupt increase, so we set 144 as the finer segmentation scale. After a visual evaluation of the candidate SPs near 180, we selected 186 as the coarser segmentation scale.
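The core of this LV-based scale selection can be sketched as follows. The per-scale label images are assumed to be exported from the MRS segmentation (the segmentation itself is performed in eCognition), and the simple peak-detection rule follows the description above rather than the exact ESP implementation.

```python
# Illustrative sketch of the local-variance (LV) logic behind the ESP tool: for each
# candidate scale parameter we compute the mean per-object standard deviation as LV and
# flag local peaks in the LV curve as candidate scales. The label images per scale are
# hypothetical inputs exported from the MRS segmentation.
import numpy as np


def local_variance(band, labels):
    """Mean of the per-object standard deviations of `band` for one segmentation."""
    stds = [band[labels == obj_id].std() for obj_id in np.unique(labels)]
    return float(np.mean(stds))


def candidate_scales(band, label_images_by_scale):
    """label_images_by_scale: {scale_parameter: label image} (hypothetical input)."""
    scales = sorted(label_images_by_scale)
    lv = [local_variance(band, label_images_by_scale[s]) for s in scales]
    # A scale is a candidate if its LV is a local peak relative to its neighbors.
    return [scales[i] for i in range(1, len(scales) - 1)
            if lv[i] > lv[i - 1] and lv[i] > lv[i + 1]]
```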
We gave the shape factor less importance by assigning a value of 0.1 at the finer scale to accommodate the varied shapes of RCA, and 0.5 at the coarser scale for the regular shape of CCA. We assigned compactness a weight of 0.5 at each level to treat compactness and smoothness equally. Testing confirmed that these parameters were suitable for the corresponding targets. Meanwhile, we used the eight bands of the WV-2 image as input raster layers for the MRS algorithm and assigned them the same weight of 1 at each level.

3.4. Creating Semantically Meaningful Objects for CCA and RCA

With the optimal SPs selected, we adopted a two-level segmentation strategy to create semantically meaningful objects for CCA and RCA. First, we ran the MRS algorithm at the coarser scale of 186 in the sea area, generating 7594 objects. We then applied a rule-based method, again using the SEaTH method, to separate the submerged and unsubmerged areas: segments of sea water and RCA were classified as submerged area, while segments of CCA and watercraft were classified as unsubmerged area. After this separation, we merged neighboring image objects of the unsubmerged area to create semantically meaningful objects for CCA. To create semantically meaningful objects for RCA, we segmented the submerged area again at the finer scale of 144, generating 9602 objects.

3.5. Neighbor Features Calculation and Final Classification

3.5.1. Features Based on Neighborhood Relationship

Let object O be a set of pixels within image I: O = {oi = (xi, yi) | i ∈ [1, k], k = |O|}, where xi and yi are the image coordinates of pixel oi and |·| is the cardinality of a set. Let BO be the boundary pixels of O. Two image objects u and v are considered neighbors if they contain pixels that neighbor each other:
$$B_u \cap B_v \neq \varnothing$$
Let D(u, c) be the set of darker/brighter neighbor image objects of image object u with respect to a given feature c, let b(u, v) be the length of the border that u shares with a neighboring object v, and let b(u) be the total border length of u. We define the Real border to darker/brighter objects feature (RBDs/RBBs) of an image object as follows:
$$\mathrm{RBDs}(u)\,/\,\mathrm{RBBs}(u) = \frac{\sum_{v \in D(u,c)} b(u, v)}{b(u)}$$
The RBDs/RBBs measures the extent to which an image object is surrounded by darker/brighter neighbor image objects, on a scale of 0–1. A value of 1 means the image object is completely surrounded by darker/brighter neighbors; the lower the RBDs, the larger the share of the border adjacent to brighter image objects.
Let N (u, c) be a set of neighbor image objects for image object u at a given feature c. The Mean difference to neighbors feature (MDNs) is defined as follows:
$$\mathrm{MDNs}(u) = \sum_{v \in N(u,c)} \big(c(u) - c(v)\big)$$
The MDNs measures the difference between an image object and its neighboring image objects: a negative/positive value means the image object is darker/brighter than its neighborhood, and a larger absolute value indicates a greater difference from the neighboring environment.
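A minimal sketch of how these neighbor features can be computed from a segmentation label image is given below. The 4-adjacency border counting and the dictionary-based inputs are assumptions made for illustration, not the exact implementation used in eCognition.

```python
# Illustrative sketch of the object-based neighbor features defined above, computed from
# a segmentation label image and a per-object feature value c(u) (e.g., the mean of one
# band). Shared-border lengths are approximated by counting 4-adjacent pixel pairs that
# belong to different objects; the input structures are illustrative assumptions.
import numpy as np
from collections import defaultdict


def neighbor_features(labels, value):
    """labels: (h, w) integer label image; value: {object_id: feature value c(u)}."""
    border = defaultdict(float)          # (u, v) -> shared border length (pixel edges)
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        for u, v in zip(a[diff], b[diff]):
            border[(u, v)] += 1.0
            border[(v, u)] += 1.0

    rbds, rbbs, mdns = {}, {}, {}
    for u in np.unique(labels):
        neighbors = [v for (uu, v) in border if uu == u]
        total = sum(border[(u, v)] for v in neighbors)
        darker = sum(border[(u, v)] for v in neighbors if value[v] < value[u])
        brighter = sum(border[(u, v)] for v in neighbors if value[v] > value[u])
        rbds[u] = darker / total if total else 0.0     # border share to darker neighbors
        rbbs[u] = brighter / total if total else 0.0   # border share to brighter neighbors
        mdns[u] = sum(value[u] - value[v] for v in neighbors)  # MDNs as defined above
    return rbds, rbbs, mdns
```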

3.5.2. CCA and RCA Mapping

In this classification procedure, we identified CCA, RCA, sea water, and watercraft using the nearest neighbor classification (NNC) method based on the multi-scale segments. The NNC method generally performs well with carefully chosen, informative features [31]. It suits this study because we wanted to fully explore the potential value of neighbor features alongside traditional features such as spectrum, shape, and texture. In addition, NNC is straightforward to implement and does not require hyperparameter tuning. Thus, NNC is an appropriate method for our classification and comparison purposes.
In the feature selection phase, we employed feature space optimization (FSO), a tool available in eCognition, to select the optimal feature combination. Based on selected samples, FSO calculates the Euclidean distance in feature space between classes and chooses the best combination of features, resulting in the largest minimum distances between the least separable classes [32]. Eventually, the best feature combination included RBBs (bands 1–8), MDNs (bands 3–8), NDVI, NDWI, mean of bands 1–3, standard deviation of band 1, and geometrical features (length, density, asymmetry, roundness, compactness, shape index, border index, and rectangular fit). Finally, we selected a total of 96 samples from 9992 segments, which is approximately 1% of the whole image objects, including 24 samples of CCA and 25 samples of RCA.
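The sketch below illustrates the two steps of this subsection: an FSO-style search for the feature combination that maximizes the minimum distance between the least separable classes, followed by nearest neighbor classification. The table layout, class labels, subset-size limit, and the use of scikit-learn are illustrative assumptions; eCognition's FSO and nearest neighbor classifier are not reproduced exactly.

```python
# Minimal sketch of FSO-style feature selection (largest minimum distance between the
# least separable classes in standardized feature space) followed by 1-nearest-neighbor
# classification. Inputs are hypothetical NumPy arrays of per-object features and labels.
from itertools import combinations

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler


def fso_select(X_train, y_train, feature_names, max_dim=5):
    """Return the feature subset whose class centroids have the largest minimum
    pairwise Euclidean distance; y_train is a NumPy array of class labels."""
    Xs = StandardScaler().fit_transform(X_train)
    classes = np.unique(y_train)
    best_subset, best_score = None, -np.inf
    for k in range(1, max_dim + 1):
        for subset in combinations(range(len(feature_names)), k):
            cols = list(subset)
            centroids = [Xs[y_train == c][:, cols].mean(axis=0) for c in classes]
            min_dist = min(np.linalg.norm(a - b) for a, b in combinations(centroids, 2))
            if min_dist > best_score:
                best_score, best_subset = min_dist, cols
    return [feature_names[i] for i in best_subset]


# Hypothetical usage (X arrays are objects x features, aligned with feature_names):
# selected = fso_select(X_samples, y_samples, feature_names)
# cols = [feature_names.index(f) for f in selected]
# clf = KNeighborsClassifier(n_neighbors=1).fit(X_samples[:, cols], y_samples)
# labels_for_all_objects = clf.predict(X_all_objects[:, cols])
```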

3.6. Comparison Methods

3.6.1. Single-Scale Based Conventional Information Classification Method

To provide a comparison, we also applied a conventional object-based method to extract the CCA and RCA. An SCIC scheme was applied to the image objects at the finer scale.
The same sample areas of CCA, RCA, and the other classes employed in our proposed method were selected again, and the NNC method was used for classification. Compared with our proposed MNIC approach, the SCIC method used only traditional features and was performed at a single scale. FSO was again applied to select features from those in Table 1. The best feature subset included the mean of bands 3–8, the standard deviation of band 1 and bands 4–8, brightness, maximum difference, NDVI, NDWI, and geometrical features (density, asymmetry, roundness, compactness, border index, shape index, rectangular fit, and elliptic fit). We found that RCA, CCA, watercraft, and water still differ in these attributes and could therefore, to some extent, be discriminated by this method.

3.6.2. Multi-Scale Based Conventional Information and Single-Scale Based Neighbor Information Classification Methods

To study the effects of multi-scale segmentation and neighbor features, we also applied the MCIC and SNIC methods for further comparison by controlling the segmentation strategy and feature set. For the two methods, we selected the same sample areas of CCA, RCA, and other classes employed in our proposed method, and selected the best feature combination by applying the FSO again. We then trained the nearest neighbor classifier for classification.
Both of the methods were designed based on our proposed MNIC method. In the MCIC method, we studied the effects of neighbor features by removing neighbor features before the feature selection process. The best feature subset included the mean of bands 1–4 and bands 6–8, standard deviation of band 1 and bands 3–8, NDVI, NDWI, and geometrical features (border index, roundness, compactness, shape index, length, rectangular fit, density, elliptic fit, asymmetry).
In the SNIC method, we studied the effects of the multi-scale segmentation strategy by applying the classification to the same single-scale segmentation results used in the SCIC method. The best feature subset included RBBs (bands 1–8), MDNs (bands 2–8), the mean of bands 1, 2, and 6–8, the standard deviation of band 1 and bands 3–8, brightness, maximum difference, NDVI, NDWI, and geometrical features (asymmetry, elliptic fit, density, rectangular fit, shape index, border index, roundness, and compactness).

3.7. Accuracy Assessment and Comparison

In this paper, we compared our proposed MNIC method with the SCIC, MCIC, and SNIC methods. We conducted an accuracy assessment of the final classification maps using a total of 1549 randomly selected segments in the sea area. To construct the error matrix, we checked by visual interpretation whether each segment was correctly identified. Accuracy statistics, including producer’s accuracy (PA), user’s accuracy (UA), overall accuracy (OA), and the kappa coefficient, were then calculated from the error matrix.
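These statistics can be derived from the error matrix as in the following sketch; the matrix values are placeholders rather than the results reported in Section 4.

```python
# Illustrative sketch of the accuracy statistics derived from the error matrix
# (producer's/user's accuracy, overall accuracy, and the kappa coefficient).
import numpy as np


def accuracy_statistics(cm):
    """cm[i, j]: number of validation segments of reference class i mapped as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    producer_acc = np.diag(cm) / cm.sum(axis=1)   # per reference class (omission errors)
    user_acc = np.diag(cm) / cm.sum(axis=0)       # per mapped class (commission errors)
    overall_acc = np.trace(cm) / total
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall_acc - expected) / (1.0 - expected)
    return producer_acc, user_acc, overall_acc, kappa
```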
To compare the accuracies of the classification results between the different methods, we considered only the CCA and RCA accuracies. For CCA and RCA separately, we adopted three commonly used evaluation metrics, precision, recall, and F-measure [33], calculated as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F\text{-}\mathrm{measure} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
where TP, TN, FP, and FN refer to true positives, true negatives, false positives, and false negatives, respectively.
To avoid the influence of different segmentation strategies in these methods, we conducted an accuracy assessment on the final classification results by visual interpretation, with 10,000 randomly selected points in the sea area. In this research, we regarded the extraction of CCA or RCA as binary classification. Therefore, we defined the TP as the number of correctly labeled points of CCA or RCA.
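A minimal sketch of this point-based, binary evaluation is given below; the reference and predicted label arrays for the 10,000 validation points are hypothetical inputs.

```python
# Illustrative sketch of the point-based comparison metrics: the extraction of CCA
# (or RCA) is treated as a binary problem over the validation points, as in the
# formulas above; `reference` and `predicted` are hypothetical label arrays.
import numpy as np


def binary_metrics(reference, predicted, target_class):
    ref = reference == target_class
    pred = predicted == target_class
    tp = np.sum(ref & pred)          # correctly labeled points of the target class
    fp = np.sum(~ref & pred)
    fn = np.sum(ref & ~pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```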

4. Results

4.1. Segmentation and Classification Results of Land and Sea Area Separation

Figure 5 shows the segmentation results with different SP settings. A visual inspection of the output image objects shows that ponds, which should be included in the land area, are delineated as separate objects at scales of 100–3000 and can then easily be misclassified as sea area; these scales therefore over-segment the scene for the purpose of land and sea separation. Although the land and sea areas are delineated accurately at the scales of 7000 and 9000, both produce a very limited number of segments for training (25 and 21 segments in total, respectively). Moreover, small islands in the sea area may be absorbed into a sea segment as the SP increases. We therefore set the scale of 5000 as the optimal SP, generating 47 segments.
Based on the segmentation results, we selected 8 representative samples of land and sea areas. According to the J values (Table 2), an image segment was classified as land area only if it satisfied the rule:
Mean Layer 6 > 248.93.
The classification results for land and sea areas are shown in Figure 6. A visual assessment of the output shows that semantically meaningful image objects are delineated accurately at the scale of 5000 and that both classes are identified successfully. The land areas are completely masked and all the CCA and RCA are included in the sea area, fully consistent with the framework presented in Figure 3.

4.2. Final Classification Results and Accuracy Assessment

To separate the submerged and unsubmerged areas, 16 representative samples were selected from the coarser-level segmentation results in the sea area, including 6 samples of unsubmerged area and 10 samples of submerged area. According to the J values (Table 3), an image segment was classified as unsubmerged area only if it satisfied the rule:
NDWI < −0.53.
The final classification results are shown in Figure 7, together with the multi-scale segmentation results. RCA and CCA of varied sizes are delineated accurately, and both are identified successfully. We also note that a few RCA were misclassified as sea water, because these segments were partly submerged by waves and surrounded by turbid sea water, which reduced their distinguishability from the surrounding environment.
To quantitatively assess the accuracy of the classification results, we randomly selected over 1540 segments, with no fewer than 570 image objects of RCA and CCA. Table 4 shows the confusion matrix of the final classification results. The RCA have the highest UA of 99%, indicating that almost all the identified RCA in the classification results are truly RCA; the UA of the CCA is also over 95%. Sea water has the highest PA, and the RCA and watercraft have similarly high PA values of 92%, meaning that over 90% of the RCA and watercraft were identified successfully. Overall, RCA and CCA are identified successfully, with all PA and UA values greater than 87%. The PA of CCA and the UA of watercraft are the two lowest accuracy values, at 87% and 85%, respectively, because some small CCA are easily confused with cargo-laden ships or clusters of boats.

4.3. Accuracy Comparison

Visual comparison of the classification maps in Figure 8 and Figure 9 shows that our proposed method improves the classification performance. Compared with our proposed method (Figure 8a), CCA and RCA remain misclassified when using the SCIC method (Figure 8b); only the RCA remain obviously misclassified with the MCIC method, especially when surrounded by turbid sea water (Figure 9a); and only the CCA remain obviously misclassified with the SNIC method (Figure 9b). Thus, multi-scale segmentation and neighbor features provide valuable information and improve the classification accuracy.
To further quantitatively assess the performance of the proposed method, the precision, recall, and F-measure of each method were calculated (Table 5). First, our proposed MNIC method achieved a good balance between precision and recall, with the highest precision for RCA at 97.93%, indicating that almost 98% of the RCA in the classification map are truly RCA. The SCIC method achieved a higher recall for CCA than our method, at 94.67%, indicating that 94.67% of the real-world CCA were identified; however, its precision for CCA is the lowest value at 29.28%, meaning that over 70% of the CCA in its classification map are misclassified. Second, our method achieved a good balance between the accuracies of CCA and RCA, with F-measure values for both classes of nearly 95%. In contrast, the MCIC method achieved a high F-measure for CCA at 92.52% but a low F-measure for RCA at 48.67%, and the SNIC method achieved a low F-measure for CCA at 44.59% but a high F-measure for RCA at 92.94%. Finally, our proposed method achieved the highest classification accuracy overall, with the highest F-measure for CCA at 94.56% and for RCA at 95.13%.

5. Discussion

5.1. Extraction of CCA and RCA and Multi-Scale Based Neighbor Features

In this paper, we proposed the MNIC method to generate representative neighbor features for the accurate mapping of CCA and RCA in VHSR imagery. Although VHSR imagery provides more detailed information on small targets, the high within-class variance and low between-class variance that characterize this kind of imagery make land cover detection and classification difficult [34]. Current studies suggest that multi-scale segmentation and spatial contextual information can provide valuable information in VHSR image analysis [35,36], and such methodology can be adopted for CCA and RCA extraction. Although CCA and RCA may have high within-class variance and be spectrally confused with sea water or other land cover classes, most of them are widely and sparsely distributed and show an obviously darker or brighter tone than the surrounding sea water. We can therefore first adopt multi-scale segmentation to generate semantically meaningful image objects for CCA and RCA, and then use neighbor features to capture these characteristics and improve the classification performance. Although our method involves many steps, it is not complex and achieves better classification performance. Since almost all the developed procedures can be performed automatically, our proposed method can be implemented relatively easily in routine analysis and management.

5.2. Related Methods and Advantages

We first compared our proposed method with conventional object-based methods, namely the SCIC, MCIC, and SNIC methods (Figure 3). The differences between our proposed method and the SCIC method lie in two aspects. First, we adopted multi-scale rather than single-scale segmentation; single-scale segmentation has been suggested to be an unrealistically simple scene model [37]. In our approach, we first excluded the land area at a coarse scale and then extracted the CCA and RCA at different optimal scales, so the boundaries of RCA and CCA could be accurately delineated before classification, which also provided a set of robust features for analysis. Second, the two methods adopted different features in the classification procedure. The compared single-scale classification method adopted conventional features, such as spectral, geometric, and texture features, whereas our proposed method adopted neighbor features generated by the multi-scale segmentation strategy, which markedly improved the classification performance.
Compared to the MCIC method, the main difference is that we adopted different feature sets. In our proposed method, we added neighbor features in the feature set instead of using only conventional features that have been used in the compared method. Our results suggest that the adoption of neighbor features shows an obvious improvement for the extraction of RCA, especially when some of the RCA are partly submerged in waves and surrounded by turbid sea water. This is because most of the RCA have an obvious darker or brighter tone than the surrounding sea water in the complex sea environment. Meanwhile, the waves or turbid sea water can easily influence the conventional features of RCA in VHSR images. Thus, the neighbor features are more stable and helpful for the extraction of RCA compared with the conventional features.
Compared to the SNIC method, the main difference is that we adopted multi-scale segmentation, which accurately delineates the boundaries of RCA and CCA and thus provides a set of robust features for analysis. In contrast, the boundaries of CCA are not delineated well in the SNIC method, which affects the classification in two ways. First, the divided segments of CCA may have unstable neighbor features owing to the unpredictable surrounding environment; for example, a segment has totally different neighbor features when it is surrounded by other CCA rather than by sea water. Second, because of the high variance of the construction materials used in each CCA, the different parts of a CCA may have totally different spectral, geometric, and texture characteristics from one another. Thus, our method can markedly improve the classification performance by using multi-scale segmentation.
Compared to conventional pixel-based contextual information computed in a local neighborhood, such as Markov random fields [38], the grey-level co-occurrence matrix [39], or lacunarity [40], our object-based neighbor information is better suited to the extraction of CCA and RCA from VHSR imagery. Conventional contextual information used in pixel-based methods depends largely on the selection of the window size [41,42]; owing to the varied sizes of CCA and RCA in VHSR imagery, it is difficult to find a window size that is optimal for both. Moreover, as this pixel-based contextual information is based on statistical features in a fixed-size local area, it mostly ignores the relationships between semantically meaningful image objects in OBIA, and rarely considers the geographic characteristics between different land covers.
Some image scene classification methods may also benefit from neighbor information. In these methods, each scene image is resized to a rectangular patch for labeling [43,44,45], and previous studies have used sliding windows [46] or chessboard segmentation [47] to create such patches, from which representative features are calculated for classification. However, sliding windows and chessboard segmentation are not the most suitable ways to obtain land cover units: they cannot acquire accurate land cover boundaries, and the neighbor information is largely limited by the fixed patch size, which decreases the classification accuracy. To avoid this problem, we adopted multi-scale segmentation with the MRS algorithm to generate semantically meaningful image objects, which allows the boundaries of CCA and RCA to be delineated accurately and their neighbor information to be utilized efficiently. We consider the multi-scale segmentation strategy with the MRS algorithm a more suitable approach for generating neighbor information for the extraction of CCA and RCA.
There are also related studies on the extraction of RCA from synthetic aperture radar (SAR) images [15,48,49]. Many of them extract features based on pixel-level neighborhood relationships, such as the gray-level co-occurrence matrix [50], local binary patterns, and the Gabor transform [51]. However, speckle noise can easily contaminate the features extracted by these approaches, which may decrease the detection precision [52]. In addition, these features ignore the relationships between CCA or RCA and the surrounding environment, which, if properly used, can help improve the classification performance. We therefore proposed a framework to extract CCA and RCA from VHSR imagery by combining object-based neighbor features, and our results indicate that optical images are also an appropriate data source.

5.3. Scale Effects and Limitations

In our study, generating a set of meaningful segments is crucial, because the contextual features utilized in this paper are based on object-based neighbor information. For example, if a segment of CCA is completely surrounded by other segments of CCA instead of sea water, it will have totally different neighbor features. We therefore tried to improve the segmentation performance in several ways. First, we chose the MRS algorithm, since it follows the region-merging principle and can generate satisfactory segmentation results with our imagery. Second, in our two-level segmentation framework, we employed an objective method, the ESP tool, to find the optimal SPs for RCA and CCA, respectively. We also tried different measures to reduce the influence of over- or under-segmentation; for example, we performed a class merge after the unsubmerged area was extracted, so that all segments of CCA could be surrounded by sea water.
However, our proposed method still has some limitations. First, obtaining optimal SPs with the ESP tool is relatively time consuming for VHSR imagery, because it relies on iterative segmentation and the calculation of LV for each scale. Second, uncertainties still exist in our multi-scale segmentation strategy, such as the selection of an appropriate SP for land and sea separation, and further research on fully automated methods is essential. Improvements and experiments in the selection of appropriate SPs and in segmentation methods that can directly and accurately delineate the boundaries of different land cover classes are therefore still required. Finally, our proposed method applies only to the detection of surface water cover and use; in some places, such as Shandong Province in eastern China, a few submersible cages are used.

6. Conclusions

The mapping of coastal aquaculture areas lays a solid foundation for intelligent monitoring and management of coastal ecosystems. In this study, we proposed a framework to extract different types of coastal aquaculture areas by combining object-based neighbor information. Our proposed approach effectively identified and discriminated two types of coastal aquaculture, with 96% overall accuracy. This method integrates multi-scale segmentation and neighbor features. It firstly applied multi-scale segmentation to generate semantically meaningful image objects for different land covers, and then calculated neighbor features based on the multi-scale segments. These neighbor features were regarded as spatial contextual information, and adopted in the final classification procedure for the extraction of CCA and RCA.
Classification accuracy was markedly improved by using multi-scale segmentation and neighbor features compared with other conventional OBIA methods, such as the SCIC, MCIC, and SNIC methods. Our results show that neighbor features generated by multi-scale segmentation provide valuable information for the extraction of CCA and RCA. Furthermore, multi-scale segmentation and neighbor features improve the classification performance for the extraction of CCA and RCA, respectively. Our approach demonstrates the applicability and effectiveness of combining multi-scale segmentation and neighbor information.
Compared to the widely used conventional pixel-based contextual information or image scene classification methods, object-based neighbor features are more effective in quantifying the contextual information of CCA and RCA. Based on semantically meaningful image objects, the neighbor features take geographic characteristics between different land covers into consideration.
Future studies may apply our developed approach with some minor adjustments to extract other kinds of complex objects in a homogeneous environment, like sea, grassland, or desert. Besides, more segmentation methods should be investigated and refined for effective delineation of CCA and RCA. In addition, more potential neighbor features should be explored for modeling geographic characteristics between different land covers.

Author Contributions

Funding acquisition, J.D., M.G. and W.Y.; investigation, J.W.; methodology, Y.F.; supervision, J.D. and K.W.; validation, Z.Y. and G.X.; writing—original draft, Y.F.; writing—review and editing, J.D., M.G., K.W., and W.Y.

Acknowledgments

Funding for this work was provided by Zhejiang Provincial Natural Science Foundation (LY18G030006), Fundamental Research Funds for the Central Universities (2017QNA6010), National Natural Science Foundation of China (41701171), Programs of Science and Technology Department of Zhejiang Province (2018F10016, 2019C02045), and Basic Public Welfare Research Program of Zhejiang Province (LGN18D010001).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kapetsky, J.M.; Aguilar-Manjarrez, J.; Jenness, J. A global Assessment of Offshore Mariculture Potential from a Spatial Perspective; FAO: Rome, Italy, 2013; ISBN 9789251073896. [Google Scholar]
  2. Forster, J.; Radulovich, R. Seaweed and food security. In Seaweed Sustainability: Food and Non-Food Applications; Tiwari, B.K., Troy, D.J., Eds.; Elsevier: Amsterdam, The Netherlands, 2015; pp. 289–313. ISBN 9780124199583. [Google Scholar]
  3. Islam, M.S. Nitrogen and phosphorus budget in coastal and marine cage aquaculture and impacts of effluent loading on ecosystem: Review and analysis towards model development. Mar. Pollut. Bull. 2005, 50, 48–61. [Google Scholar] [CrossRef]
  4. Boyd, C.E.; Tucker, C.; McNevin, A.; Bostick, K.; Clay, J. Indicators of resource use efficiency and environmental performance in fish and crustacean aquaculture. Rev. Fish. Sci. 2007, 15, 327–360. [Google Scholar] [CrossRef]
  5. Beveridge, M.C.M. Cage Aquaculture, 3rd ed.; Blackwell Publishing: Ames, IA, USA, 2004; ISBN 1405108428. [Google Scholar]
  6. Zanuttigh, B.; Angelelli, E.; Bellotti, G.; Romano, A.; Krontira, Y.; Troianos, D.; Suffredini, R.; Franceschi, G.; Cantù, M.; Airoldi, L.; et al. Boosting blue growth in a mild sea: Analysis of the synergies produced by a multi-purpose offshore installation in the Northern Adriatic, Italy. Sustainability 2015, 7, 6804–6853. [Google Scholar] [CrossRef]
  7. FAO. The State of World Fisheries and Aquaculture; FAO: Rome, Italy, 2004; ISBN 9251051771. [Google Scholar]
  8. FAO. The State of World Fisheries and Aquaculture; FAO: Rome, Italy, 2018; ISBN 9789251305621. [Google Scholar]
  9. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation, 5th ed.; John Wiley & Sons: Hobokan, NJ, USA, 2004; ISBN 0471152277. [Google Scholar]
  10. Li, M.S.; Mao, L.J.; Shen, W.J.; Liu, S.Q.; Wei, A.S. Change and fragmentation trends of Zhanjiang mangrove forests in southern China using multi-temporal Landsat imagery (1977–2010). Estuar. Coast. Shelf Sci. 2013, 130, 111–120. [Google Scholar] [CrossRef]
  11. Volpe, J.P.; Gee, J.L.M.; Ethier, V.A.; Beck, M.; Wilson, A.J.; Stoner, J.M.S. Global aquaculture performance index (GAPI): The first global environmental assessment of marine fish farming. Sustainability 2013, 5, 3976–3991. [Google Scholar] [CrossRef]
  12. Rajitha, K.; Mukherjee, C.K.; Vinu Chandran, R. Applications of remote sensing and GIS for sustainable management of shrimp culture in India. Aquac. Eng. 2007, 36, 1–17. [Google Scholar] [CrossRef]
  13. Carswell, B.; Cheesman, S.; Anderson, J. The use of spatial analysis for environmental assessment of shellfish aquaculture in Baynes Sound, Vancouver Island, British Columbia, Canada. Aquaculture 2006, 253, 408–414. [Google Scholar] [CrossRef]
  14. Alexandridis, T.K.; Topaloglou, C.A.; Lazaridou, E.; Zalidis, G.C. The performance of satellite images in mapping aquacultures. Ocean Coast. Manag. 2008, 51, 638–644. [Google Scholar] [CrossRef]
  15. Fan, J.; Chu, J.; Geng, J.; Zhang, F. Floating raft aquaculture information automatic extraction based on high resolution SAR images. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3898–3901. [Google Scholar]
  16. Lu, Y.; Li, Q.; Du, X.; Wang, H.; Liu, J. A Method of Coastal Aquaculture Area Automatic Extraction with High Spatial Resolution Images. Remote Sens. Technol. Appl. 2015, 30, 486–494. [Google Scholar] [CrossRef]
  17. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  18. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238. [Google Scholar] [CrossRef]
  19. Laliberte, A.S.; Rango, A.; Havstad, K.M.; Paris, J.F.; Beck, R.F.; McNeely, R.; Gonzalez, A.L. Object-oriented image analysis for mapping shrub encroachment from 1937 to 2003 in southern New Mexico. Remote Sens. Environ. 2004, 93, 198–210. [Google Scholar] [CrossRef]
  20. Zheng, Y.; Wu, J.; Wang, A.; Chen, J. Object-and pixel-based classifications of macroalgae farming area with high spatial resolution imagery. Geocarto Int. 2017, 1–16. [Google Scholar] [CrossRef]
  21. Kim, M.; Warner, T.A.; Madden, M.; Atkinson, D.S. Multi-scale GEOBIA with very high spatial resolution digital aerial imagery: Scale, texture and image objects. Int. J. Remote Sens. 2011, 32, 2825–2850. [Google Scholar] [CrossRef]
  22. Zhou, W.; Troy, A. Development of an object-based framework for classifying and inventorying human-dominated forest ecosystems. Int. J. Remote Sens. 2009, 30, 6343–6360. [Google Scholar] [CrossRef]
  23. Liu, Y.; Guo, Q.; Kelly, M. A framework of region-based spatial relations for non-overlapping features and its application in object based image analysis. ISPRS J. Photogramm. Remote Sens. 2008, 63, 461–475. [Google Scholar] [CrossRef]
  24. Zheng, X.; Wu, B.; Weston, M.V.; Zhang, J.; Gan, M.; Zhu, J.; Deng, J.; Wang, K.; Teng, L. Rural settlement subdivision by using landscape metrics as spatial contextual information. Remote Sens. 2017, 9, 486. [Google Scholar] [CrossRef]
  25. Wang, M.; Cui, Q.; Wang, J.; Ming, D.; Lv, G. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features. ISPRS J. Photogramm. Remote Sens. 2017, 123, 104–113. [Google Scholar] [CrossRef]
  26. Wolf, A. Using WorldView 2 Vis-NIR MSI Imagery to Support Land Mapping and Feature Extraction Using Normalized Difference Index Ratios. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery; SPIE: Baltimore, MD, USA, 2012; Volume 8390, p. 83900N. [Google Scholar]
  27. Lin, C.; Wu, C.C.; Tsogt, K.; Ouyang, Y.C.; Chang, C.I. Effects of atmospheric correction and pansharpening on LULC classification accuracy using WorldView-2 imagery. Inf. Process. Agric. 2015, 2, 25–36. [Google Scholar] [CrossRef] [Green Version]
  28. Baatz, M.; Schäpe, A. Multiresolution Segmentation: An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Griesebner, G., Eds.; Wichmann: Heidelberg, Germany, 2000; pp. 12–23. [Google Scholar]
  29. Nussbaum, S.; Niemeyer, I.; Canty, M.J. Seath—A New Tool for Automated Feature Extraction in the Context of Object-Based Image Analysis. In Proceedings of the 1st International Conference on Object-Based Image Analysis, Salzburg, Austria, 4–5 July 2006; pp. 1–6. [Google Scholar]
  30. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  31. Nisbet, R.; Elder, J.; Miner, G. Handbook of Statistical Analysis and Data Mining Applications; Academic Press: Amsterdam, The Netherlands, 2009; ISBN 9780123747655. [Google Scholar]
  32. eCognition Developer. Trimble eCognition Developer 9.0 User Guide; Trimble Germany GmbH: Munich, Germany, 2014. [Google Scholar]
  33. Powers, D.M.W. Evaluation: From precision, recall and f-measure to roc, informedness, markedness and correlation. J. Mach. Learn. Technol. 2011, 2, 37–63. [Google Scholar]
  34. Volpi, M.; Tuia, D.; Bovolo, F.; Kanevski, M.; Bruzzone, L. Supervised change detection in VHR images using contextual information and support vector machines. Int. J. Appl. Earth Obs. Geoinf. 2013, 20, 77–85. [Google Scholar] [CrossRef]
  35. Huang, X.; Zhang, L. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2013, 51, 257–272. [Google Scholar] [CrossRef]
  36. Pacifici, F.; Chini, M.; Emery, W.J. A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sens. Environ. 2009, 113, 1276–1292. [Google Scholar] [CrossRef]
  37. Guo, Q.; Zhang, J.; Li, T.; Lu, X. Change detection for high-resolution remote sensing imagery based on multi-scale segmentation and fusion. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1919–1922. [Google Scholar]
  38. Tso, B.; Olsen, R.C. A contextual classification scheme based on MRF model with improved parameter estimation and multiscale fuzzy line process. Remote Sens. Environ. 2005, 97, 127–136. [Google Scholar] [CrossRef] [Green Version]
  39. Kayitakire, F.; Hamel, C.; Defourny, P. Retrieving forest structure variables based on image texture analysis and IKONOS-2 imagery. Remote Sens. Environ. 2006, 102, 390–401. [Google Scholar] [CrossRef]
  40. Ma, L.; Wu, D.; Deng, J.; Wang, K.; Li, J.; Gu, Q.; Dai, Y. Discrimination of residential and industrial buildings using LiDAR data and an effective spatial-neighbor algorithm in a typical urban industrial park. Eur. J. Remote Sens. 2015, 48, 1–15. [Google Scholar] [CrossRef] [Green Version]
  41. Chen, D.; Stow, D.A.; Gong, P. Examining the effect of spatial resolution and texture window size on classification accuracy: An urban environment case. Int. J. Remote Sens. 2004, 25, 2177–2192. [Google Scholar] [CrossRef]
  42. Puissant, A.; Hirsch, J.; Weber, C. The utility of texture analysis to improve per-pixel classification for high to very high spatial resolution imagery. Int. J. Remote Sens. 2005, 26, 733–745. [Google Scholar] [CrossRef]
  43. Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef] [Green Version]
  44. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, GIS ’10, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  45. Sheng, G.; Yang, W.; Xu, T.; Sun, H. High-resolution satellite scene classification using a sparse coding based multiple feature combination. Int. J. Remote Sens. 2012, 33, 2395–2412. [Google Scholar] [CrossRef]
  46. Sharma, A.; Liu, X.; Yang, X.; Shi, D. A patch-based convolutional neural network for remote sensing image classification. Neural Netw. 2017, 95, 19–28. [Google Scholar] [CrossRef]
  47. Zhang, Z.; Wang, Y.; Liu, Q.; Li, L.; Wang, P. A CNN based functional zone classification method for aerial images. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2153–7003. [Google Scholar]
  48. Sugimoto, M.; Ouchi, K.; Nakamura, Y. Comprehensive contrast comparison of laver cultivation area extraction using parameters derived from polarimetric synthetic aperture radar data. J. Appl. Remote Sens. 2013, 7, 073566. [Google Scholar] [CrossRef]
  49. Huo, Y.; Han, H.; Shi, H.; Wu, H.; Zhang, J.; Yu, K.; Xu, R.; Liu, C.; Zhang, Z.; Liu, K.; et al. Changes to the biomass and species composition of Ulva sp. on Porphyra aquaculture rafts, along the coastal radial sandbank of the Southern Yellow Sea. Mar. Pollut. Bull. 2015, 93, 210–216. [Google Scholar] [CrossRef]
  50. He, C.; Zhuo, T.; Zhao, S.; Yin, S.; Chen, D. Particle filter sample texton feature for SAR image classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1141–1145. [Google Scholar] [CrossRef]
  51. Dumitru, C.O.; Datcu, M. Information Content of Very High Resolution SAR Images: Study of Feature Extraction and Imaging Parameters. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4591–4610. [Google Scholar] [CrossRef] [Green Version]
  52. Geng, J.; Fan, J.; Wang, H. Weighted Fusion-Based Representation Classifiers for Marine Floating Raft Detection of SAR Images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 444–448. [Google Scholar] [CrossRef]
Figure 1. Map of Sandu Island in Ningde City, Fujian Province, China. The study area has an ideal geophysical environment for aquaculture. Shown here is a WorldView-2 image of the study area in true color.
Figure 2. Image examples of aquaculture areas on the ground: (a) Cage culture areas (CCA); (b) Raft culture areas (RCA).
Figure 3. Overall framework for the multi-scale based neighbor information classification (MNIC) method in this paper.
Figure 4. Local variance values against corresponding scale parameters produced by the Estimation of Scale Parameter tool. The gray dotted vertical lines indicate the optimal scale parameters.
Figure 5. Examples of image segmentation results for land and sea areas separation. The numbers of 100, 300, 700, 1000, 3000, 5000, 7000, and 9000 represent eight different segmentation scales. Red lines and blue lines represent segments in land and sea areas, respectively.
Figure 6. Classification results for the separation of land and sea areas by using the SEaTH method (a); some examples for the segmentation results between sea area and islands (b) or continuous land areas (c).
Figure 7. Classification results of CCA and RCA using our proposed method (a); subset examples from segmentation results, including RCA in finer scale (b) and CCA in coarser scale (c).
Figure 8. Classification results of CCA and RCA using our proposed method (a) and the single-scale based conventional information classification (SCIC) method (b). The black arrows indicate that some RCA and CCA can be successfully identified by using multi-scale segmentation and neighbor features instead of single-scale segmentation and conventional features. Typical examples are collected from the final map: classification results of RCA using our proposed method (a1) and the SCIC method (b1), when the RCA are surrounded by turbid sea water; classification results of CCA using our proposed method (a2) and the SCIC method (b2), when the CCA have apparent features similar to those of sea waves.
Figure 8. Classification results of CCA and RCA by using our proposed method (a) and the single-scale based conventional information classification (SCIC) method (b). The black arrows indicate that some RCA and CCA can be successfully identified by using multi-scale segmentation and neighbor features instead of using single-scale segmentation and conventional features. Typical examples are collected from the final map: classification results of RCA using our proposed method (a1) and the SCIC method (b1), when the RCA are surrounded by turbid sea water; classification results of CCA using our proposed method (a2) and the SCIC method (b2), when the CCA have similar apparent features with the sea waves.
Sustainability 11 00637 g008
Figure 9. Classification results of CCA and RCA obtained using the multi-scale based conventional information classification (MCIC) method (a) and the single-scale based neighbor information classification (SNIC) method (b). The black arrows indicate RCA and CCA that are successfully identified only when multi-scale segmentation and neighbor features are combined rather than when only one of them is used. Typical examples collected from the final map: classification results of RCA using the MCIC method (a1) and the SNIC method (b1), where the RCA are surrounded by turbid sea water; classification results of CCA using the MCIC method (a2) and the SNIC method (b2), where the CCA resemble sea waves in appearance.
Table 1. Object features used for image analysis with the separability and threshold (SEaTH) method.

Feature Type | Features | Descriptions
Normalized difference index | Normalized Difference Vegetation Index (NDVI) | (band8 − band5)/(band8 + band5)
Normalized difference index | Normalized Difference Water Index (NDWI) | (band1 − band8)/(band1 + band8)
Spectral features | Mean Layer i (i = 1, 2, …, 8) | Mean of band i (i = 1, 2, …, 8)
Spectral features | Standard deviation Layer i (i = 1, 2, …, 8) | Standard deviation of band i (i = 1, 2, …, 8)
Spectral features | Brightness | Average of the means of bands 1–8
Spectral features | Maximum difference | (Maximum difference of bands 1–8)/brightness
Geometry features | Area | Area of an image object
Geometry features | Asymmetry | Relative length of an image object compared to a regular polygon
Geometry features | Length | Length of an image object
Geometry features | Width | Width of an image object
Geometry features | Length/Width | Length-to-width ratio of an image object
Geometry features | Border index | Jaggedness of the border of an image object
Geometry features | Border length | Sum of the edge lengths of an image object
Geometry features | Compactness | Compactness of an image object
Geometry features | Density | Spatial distribution of the pixels of an image object
Geometry features | Elliptic Fit | Degree to which an image object fits an ellipse of similar size and proportions
Geometry features | Rectangular Fit | Degree to which an image object fits a rectangle of similar size and proportions
Geometry features | Roundness | Similarity of an image object to an ellipse
Geometry features | Shape index | Smoothness of the border of an image object
Geometry features | Volume | Number of voxels of an image object
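To make the object-level features in Table 1 concrete, the following sketch computes per-object NDVI, NDWI, brightness, and the band statistics from an eight-band image and a segmentation label map. The band ordering (band 1 = coastal, band 5 = red, band 8 = near-infrared, as in WorldView-2) and the array names are assumptions for illustration; in the study itself these features were derived in the object-based software rather than with this code.

```python
import numpy as np

def object_features(image, labels, object_id):
    """Sketch of per-object feature extraction for one segment.

    image  : (8, H, W) float array, bands ordered 1..8 as in Table 1
    labels : (H, W) integer segmentation map
    """
    mask = labels == object_id
    means = image[:, mask].mean(axis=1)          # Mean Layer 1..8
    stds = image[:, mask].std(axis=1)            # Standard deviation Layer 1..8
    brightness = means.mean()                    # average of the eight band means
    # Band indices are 0-based: band 1 -> 0, band 5 -> 4, band 8 -> 7.
    ndvi = (means[7] - means[4]) / (means[7] + means[4])
    ndwi = (means[0] - means[7]) / (means[0] + means[7])
    max_diff = (means.max() - means.min()) / brightness
    area = mask.sum()                            # object area in pixels
    return {"NDVI": ndvi, "NDWI": ndwi, "Brightness": brightness,
            "MaxDiff": max_diff, "Area": area, "Means": means, "Stds": stds}
```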
Table 2. Summarized results of the SEaTH analysis for the separation of land and sea areas (top 5).

Features | J-M Distance | Omen | Threshold
Mean Layer 6 | 1.88 | great | 248.93
Mean Layer 7 | 1.74 | great | 147.47
NDWI | 1.73 | great | −0.39
Brightness | 1.71 | great | 281.88
Mean Layer 8 | 1.69 | great | 207.32
Table 3. Summarized results of the SEaTH analysis for the separation of submerged and unsubmerged areas (top 5).

Features | J-M Distance | Omen | Threshold
NDWI | 1.52 | small | −0.53
Mean Layer 6 | 1.50 | great | 218.31
Standard deviation Layer 5 | 1.48 | great | 9.66
Standard deviation Layer 8 | 1.47 | great | 37.38
Standard deviation Layer 7 | 1.46 | great | 23.77
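Tables 2 and 3 rank features by their Jeffries–Matusita (J-M) distance, which the SEaTH method computes under a Gaussian assumption for the two classes, together with a decision threshold taken where the two class distributions intersect. The sketch below reproduces that calculation for a single feature; the function name and the way samples are supplied are assumptions for illustration only.

```python
import numpy as np

def seath_feature(samples_a, samples_b):
    """J-M distance and threshold for one feature, in the spirit of SEaTH.

    Both classes are modeled as 1-D Gaussians; J-M = 2 * (1 - exp(-B)),
    with B the Bhattacharyya distance, and the threshold is the
    intersection of the two class densities between the class means.
    """
    m1, v1 = np.mean(samples_a), np.var(samples_a)
    m2, v2 = np.mean(samples_b), np.var(samples_b)
    # Bhattacharyya distance between two univariate Gaussians.
    b = 0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2) \
        + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))
    jm = 2 * (1 - np.exp(-b))
    # Threshold: solve p1(x) = p2(x), a quadratic in x, and keep the root
    # lying between the two class means.
    a_ = 1 / v1 - 1 / v2
    b_ = 2 * (m2 / v2 - m1 / v1)
    c_ = m1 ** 2 / v1 - m2 ** 2 / v2 + np.log(v1 / v2)
    roots = np.roots([a_, b_, c_])
    lo, hi = sorted((m1, m2))
    threshold = next((r.real for r in roots if lo <= r.real <= hi), roots[0].real)
    return jm, threshold
```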
Table 4. Confusion matrix for the final classification results.

Predicted Class | CCA (truth) | Watercraft (truth) | RCA (truth) | Sea Water (truth) | Sum | UA
CCA | 89 | 3 | 1 | 0 | 93 | 96%
Watercraft | 5 | 35 | 1 | 0 | 41 | 85%
RCA | 0 | 0 | 433 | 4 | 437 | 99%
Sea water | 8 | 0 | 38 | 932 | 978 | 95%
Sum | 102 | 38 | 473 | 936 | |
PA | 87% | 92% | 92% | 99% | |
Overall accuracy: 96%
Kappa coefficient: 0.93
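As a cross-check on Table 4, the short sketch below computes the user's accuracy (UA), producer's accuracy (PA), overall accuracy, and Cohen's kappa directly from the confusion matrix using standard formulas; it reproduces the reported overall accuracy of about 96% and kappa of 0.93.

```python
import numpy as np

# Rows: predicted class; columns: ground truth (CCA, watercraft, RCA, sea water).
cm = np.array([
    [ 89,  3,   1,   0],   # predicted CCA
    [  5, 35,   1,   0],   # predicted watercraft
    [  0,  0, 433,   4],   # predicted RCA
    [  8,  0,  38, 932],   # predicted sea water
], dtype=float)

n = cm.sum()
diag = np.diag(cm)
ua = diag / cm.sum(axis=1)            # user's accuracy (per predicted row)
pa = diag / cm.sum(axis=0)            # producer's accuracy (per truth column)
overall = diag.sum() / n              # overall accuracy
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2   # chance agreement
kappa = (overall - pe) / (1 - pe)     # Cohen's kappa

print(np.round(ua * 100), np.round(pa * 100), round(overall, 2), round(kappa, 2))
```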
Table 5. Quantitative comparison between our method and methods using different segmentation strategies and feature sets at the pixel level. Our-MNIC: our proposed Multi-scale based Neighbor Information Classification method. SCIC: Single-scale based Conventional Information Classification method. MCIC: Multi-scale based Conventional Information Classification method. SNIC: Single-scale based Neighbor Information Classification method.

Evaluation Criteria | Our-MNIC CCA | Our-MNIC RCA | SCIC CCA | SCIC RCA | MCIC CCA | MCIC RCA | SNIC CCA | SNIC RCA
Precision | 96.53% | 97.93% | 29.28% | 58.45% | 94.44% | 33.32% | 29.29% | 95.84%
Recall | 92.67% | 92.48% | 94.67% | 83.95% | 90.67% | 90.21% | 93.33% | 90.21%
F-measure | 94.56% | 95.13% | 44.72% | 68.92% | 92.52% | 48.67% | 44.59% | 92.94%
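The pixel-level precision, recall, and F-measure in Table 5 can be obtained per class by comparing the classified map against the reference map as binary masks; a minimal sketch is given below. The map names and class encoding are illustrative assumptions.

```python
import numpy as np

def pixel_metrics(pred_mask, truth_mask):
    """Precision, recall, and F-measure for one class at the pixel level."""
    tp = np.logical_and(pred_mask, truth_mask).sum()
    fp = np.logical_and(pred_mask, ~truth_mask).sum()
    fn = np.logical_and(~pred_mask, truth_mask).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Example usage for the CCA class, assuming hypothetical integer class maps
# pred_map and truth_map in which CCA is encoded as class 1:
# p, r, f = pixel_metrics(pred_map == 1, truth_map == 1)
```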
