Article

Unsupervised Image Segmentation Parameters Evaluation for Urban Land Use/Land Cover Applications

by Guy Blanchard Ikokou * and Kate Miranda Malale
Geomatics Department, Faculty of Engineering and the Built Environment, Pretoria Campus, Tshwane University of Technology, Pretoria 0183, South Africa
* Author to whom correspondence should be addressed.
Geomatics 2024, 4(2), 149-172; https://doi.org/10.3390/geomatics4020009
Submission received: 25 February 2024 / Revised: 1 May 2024 / Accepted: 7 May 2024 / Published: 12 May 2024
(This article belongs to the Topic Urban Land Use and Spatial Analysis)

Abstract
Image segmentation plays an important role in object-based classification. An optimal image segmentation should result in objects that are internally homogeneous and, at the same time, distinct from one another. Strategies that assess the quality of image segmentation through intra- and inter-segment homogeneity metrics cannot always predict possible under- and over-segmentations of the image. Although the segmentation scale parameter determines the size of the image segments, it cannot simultaneously guarantee that the produced image segments are internally homogeneous and spatially distinct from their neighbors. The majority of image segmentation assessment methods largely rely on a spatial autocorrelation measure that makes the global objective function fluctuate irregularly, resulting in the image variance increasing drastically toward the end of the segmentation. This paper relied on a series of image segmentations to test a more stable image variance measure based on the standard deviation model as well as a more robust hybrid spatial autocorrelation measure based on the current Moran's index and the spatial autocorrelation coefficient models. The results show an inverse relationship between inter-segment heterogeneity and intra-segment variance: the global heterogeneity measure increases as the image variance measure decreases. It was also found that medium-scale parameters produced better quality image segments when used with small color weights, while large-scale parameters produced good quality segments when used with large color factor weights. Moreover, with optimal segmentation parameters, the image autocorrelation measure stabilizes and follows a near-horizontal fluctuation while the image variance drops to values very close to zero, preventing the heterogeneity function from fluctuating irregularly towards the end of the image segmentation process.

1. Introduction

In the past two decades, object-based image analysis has gained momentum in digital image mapping for geographical information systems and remote sensing applications [1]. Object-based image analysis offers the advantage of extracting objects of interest as distinct land use and land cover features, in contrast to pixel-based image analysis, resulting in more accurate thematic mapping. Today, high-resolution aerial photographs and satellite imagery offer accuracy advantages in urban mapping in the sense that it has now become possible to extract individual buildings as objects of interest using object-based image classification. However, due to the high variability of building sizes, shapes, and roof colors, urban areas remain some of the most difficult environments to map using pixel-based image classification algorithms [2]. In object-based image analysis, attributes such as shape, size, and color can play a vital role in improving the accuracy of image classification outcomes. In contrast to pixel-based image classification, object-based image classification offers the advantage of mapping groups of internally homogeneous pixels into distinct classes [3]. A key element in object-based image analysis is image segmentation, which partitions the digital image into distinct objects made of groups of homogeneous pixels. However, achieving image objects that are internally homogeneous remains a challenge in image segmentation because the quality of the produced image segments largely depends on a fair balance of user-defined parameter thresholds that drive the image segmentation process [4,5]. These user-defined thresholds are generally associated with shape compactness, shape smoothness, color factor, and scale parameters [6]. Finding a proper balance between these segmentation parameters has been the subject of great research interest in object-based image analysis for the past decades. Failing to achieve a good balance between these segmentation parameters can result in small segments being submerged by larger ones and larger objects being over-segmented.
Due to the difficulty of finding optimal balances between segmentation parameter thresholds, the supervised trial-and-error strategy has been widely used. Ref. [7], for instance, performed the segmentation of an aerial image of an agricultural field by assigning weights of 0.9 and 0.1 to the color factor and shape compactness parameter, respectively. Meanwhile, [8] cautioned that the subjective selection of large segmentation scale parameters in the expectation of achieving good quality segments can lead to serious segmentation errors, and the authors suggested that an equal weight of 0.5 be assigned to both the color factor and the shape compactness parameter in order to achieve acceptable image segmentation results.
In the last decade, the scale parameter and shape compactness have received considerable attention in research on the optimal selection of segmentation parameters [9,10]. The multi-resolution image segmentation algorithm in eCognition has been widely used for urban mapping applications. A successful image segmentation process is expected to produce image segments with shapes similar to their real-world boundaries [11]. Unsupervised attempts to identify optimal segmentation parameter thresholds have been proposed in the literature [12,13,14,15,16]. Metrics such as the weighted variance and Moran's index have been widely used to study intra-segment homogeneity and inter-segment heterogeneity, respectively. Early studies to determine optimal balances between segmentation parameters relied on a global objective function that combines a local image variance measure and the spatial autocorrelation Moran's index [17]. However, the main limitations of these early approaches include the fact that they only perform well for large homogeneous areas and were designed to search for optimal segmentation parameters at a single segmentation level. These approaches were later improved by [18,19] to include more than one segmentation level. The improved approaches identified optimal segmentation parameters as those defining peak points of the curve of a global objective function built from a spatial autocorrelation model and an image variance measure. Ref. [20] modified the global objective function by restricting it to generate strictly three peak points of the curve, with each peak point corresponding to one segmentation level. This restriction may not be suitable for urban scenes where more than three types of objects may be dominant. This restriction was also reported not to be suitable for high-resolution imagery when the purpose of image classification is to extract features of small sizes, such as individual building units, swimming pools, roads, parking lots, or urban trees [9].
It has been suggested in [21] that methods for selecting optimal segmentation parameters based on the weighted variance and Moran's index in their most commonly used formulations are very limited when it comes to objective searches for optimal segmentation parameters. Ref. [22] also reported that methods that use Moran's index in its commonly used formulation are not suitable for heterogeneous areas such as urban environments, and the authors suggested the use of local variance measures instead. This is because Moran's index in its present formulation makes the global objective function fluctuate irregularly, and this results in the image variance increasing drastically towards the end of the segmentation process, forcing larger objects to merge even though they belong to distinct classes [23]. Ref. [24] also cautioned that when modeling a good local variance curve in order to identify optimal segmentation parameters, it is vital to ensure that the models identify optimal values of parameters, such as shape compactness or color factor weights, that increase or remain equal between successive segmentation levels. Ref. [25] argued that standard deviation measures should replace the current image variance measure since the latter fails to capture all the irregular intra-segment variances throughout the image.
Most segmentation parameter evaluation approaches have mainly focused on the scale parameter and shape compactness and have not given great attention to the influence of color factor weights on the performance of scale parameters in object-based image analysis [26,27]. This paper proposes a nonlinear global heterogeneity objective function based on a non-weighted standard deviation image variance and a hybrid weighted spatial autocorrelation measure based partly on the current formulation of Moran's index and a spatial autocorrelation coefficient, in order to evaluate optimal segmentation parameters that can minimize under- and over-segmentation of urban environments. The rationale behind the use of a non-weighted standard deviation image variance is that the current formulation of the image variance measure depends on segment area measures. The inclusion of individual segment area measures can introduce numerical instabilities into the global objective function, or global heterogeneity function, in occurrences of under- or over-segmentation, since some of the segment area measures do not accurately match their real-world measures, thus undermining the accuracy of the computed image variance. The rationale behind the proposal of the new spatial autocorrelation measure is that Moran's index in its current formulation is expressed as a ratio in which the presence, in the numerator, of the sum of the spatial distance weights scaled by the total number of image segments leads to an increase in the measure while the merging process carries on. This results in objects with low inter-segment heterogeneity not being merged. In addition, the four-nearest-neighbor distance matrix used by the current formulation does not always hold true for all the segments within the image, since it is possible for a large number of image segments to be surrounded by only one distinct neighbor, and this can result in a large number of zeros in the distance matrix, which could undermine the accuracy of Moran's index.

2. Related Work

Several land use and land cover mapping studies have used image segmentation for object-based image classification. Many of these studies based the choice of segmentation parameters on trial-and-error procedures [28,29]. One drawback of such an approach is that it is tedious and does not always guarantee the best results, since the final decision relies on visual interpretation of segmentation results, which can be very subjective when dealing with very complex areas such as urban areas. Ref. [17] proposed a technique to evaluate segmentation parameters in a forest area. The approach relied on the assumption that successful segmentations should produce objects that are internally homogeneous and, at the same time, distinct from their neighbors. Object internal homogeneity is measured through a weighted variance expressed as follows:
$$wVar = \frac{\sum_{i=1}^{n} a_i v_i}{\sum_{i=1}^{n} a_i} \quad (1)$$
with $a_i$ as the area of a segment of interest $i$ and $v_i$ its variance. The measure returns low values for internally homogeneous segments and high values for heterogeneous segments [30]. The inter-segment heterogeneity is, on the other hand, measured through Moran's index, a spatial autocorrelation measure formulated as follows:
$$MI = \frac{n \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}\,(y_i - \bar{y})(y_j - \bar{y})}{\sum_{i=1}^{n} (y_i - \bar{y})^2 \sum_{i \neq j} w_{ij}} \quad (2)$$
where $n$ is the total number of objects, $y_i$ is the spectral brightness of a segment of interest $i$, and $\bar{y}$ is the mean spectral brightness of the image at a given segmentation level. The parameter $w_{ij}$ is a measure of the spatial proximity between two segments $i$ and $j$: a value of one indicates that segments $i$ and $j$ are adjacent regions sharing at least one boundary, and a value of zero indicates that they do not. Moran's index returns low values for segmentations with high inter-segment heterogeneity and high values for segmentations that produce segments with low heterogeneity. Global objective functions are often used to characterize the overall quality of a segmentation outcome, and these functions usually combine the image variance measure in (1) and the spatial autocorrelation measure in (2). The most commonly used formulation of the global score function, or global objective function, is given as follows:
$$G_s = wVar_{norm} + MI_{norm} \quad (3)$$
with $wVar_{norm}$ and $MI_{norm}$ the normalized weighted variance and the normalized Moran's index, respectively. The normalizations of the measures in (1) and (2) are generally determined through the following equation:
$$\gamma_{Normalized} = \frac{\gamma - \gamma_{Min}}{\gamma_{Max} - \gamma_{Min}} \quad (4)$$
where $\gamma$ is either the image-weighted variance measure or the spatial autocorrelation measure, while $\gamma_{Min}$ and $\gamma_{Max}$ represent their respective minimum and maximum values. The normalization equation rescales the values of these measures within the range of 0 to 1. Ref. [19] argued that the global objective function proposed in (3) by [17] does not guarantee optimal segmentation results, a conclusion the authors reached after visually observing segmentation results performed with optimal scale parameters determined by the global score function. To address this limitation, a heterogeneity function was proposed that combined the recalculated weighted variance and Moran's index for under- and over-segmented objects. The proposed index also requires normalized image variance and spatial autocorrelation measures and was formulated as follows:
$$H = \frac{wVar_{norm} - MI_{norm}}{wVar_{norm} + MI_{norm}} \quad (5)$$
where $wVar_{norm}$ and $MI_{norm}$ are the normalized image variance and Moran's index measures, respectively. The measure enables the identification of optimal segmentation scales that can re-segment the under- and over-segmented objects, which are then merged back into the previous segmentation performed under the condition in Equation (3). Furthermore, ref. [31] tested a hybrid approach combining the assumptions in [17,19]. However, their technique did not rely on the global objective function measure in (3) to identify the optimal scale parameters but rather used the different values of the heterogeneity function in (5) to identify scale parameters that produce high inter-segment heterogeneity and high intra-segment homogeneity measures, as illustrated in Figure 1 below.
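For concreteness, the related-work measures in (1)–(5) can be assembled from per-segment statistics as in the minimal Python sketch below; the NumPy data layout, variable names, and illustrative values are our own assumptions for exposition, not the implementation used in the cited studies.

```python
import numpy as np

def weighted_variance(areas, variances):
    # Eq. (1): area-weighted mean of the per-segment variances.
    return np.sum(areas * variances) / np.sum(areas)

def morans_index(brightness, w):
    # Eq. (2): global Moran's index with a binary adjacency matrix w
    # (w[i, j] = 1 for segments sharing a boundary, zero diagonal).
    n = len(brightness)
    d = brightness - brightness.mean()
    return n * np.sum(w * np.outer(d, d)) / (np.sum(d ** 2) * np.sum(w))

def minmax(values):
    # Eq. (4): rescale a series of measures to the range [0, 1].
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

# Illustrative per-level measures for five candidate segmentation levels:
wvar = np.array([12.4, 9.1, 7.8, 8.6, 11.0])
mi = np.array([0.41, 0.18, 0.22, 0.25, 0.37])

wvar_norm, mi_norm = minmax(wvar), minmax(mi)
gs = wvar_norm + mi_norm                           # global score, Eq. (3)
h = (wvar_norm - mi_norm) / (wvar_norm + mi_norm)  # heterogeneity, Eq. (5)
```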
The heterogeneity function, as illustrated in Figure 1, shows some high values of the measure, which correspond to potential optimal scale parameters for the segmentation. Ref. [32] also argued that the global objective function in (3) is not sensitive to under- and over-segmentation and suggested an alternative measure, which has an additional weight compared to the traditional measure and was formulated as follows:
$$G_s = \frac{(1 + a^2)\, MI_{norm} \times wVar_{norm}}{a^2 \times MI_{norm} + wVar_{norm}} \quad (6)$$
where $a$ is a weight that controls the relative contribution of $wVar_{norm}$ and $MI_{norm}$. The model returns low values for a segmentation with more homogeneous objects and high values for a segmentation with more heterogeneous objects. Ref. [21] pointed out that the functions describing the weighted image variance and Moran's index in (1) and (2) still impose trustworthy spatial and spectral constraints on the segmentation outcome for identifying optimal segmentation parameters; however, the authors were more concerned about the normalization equation proposed in (4). According to their investigation, the equation in (4) can produce inconsistencies in the sense that any error in the choice of minimum or maximum values of the weighted variance and Moran's index could compromise the optimal selection of segmentation scale parameters, due to errors that might originate from inaccurate segment area measures in occurrences of over- and under-segmentation scenarios. The authors proposed that the weighted variance and Moran's index should each be rescaled separately with different measures. This means that the variance and Moran's index should be normalized as follows:
$$V_{norm} = \frac{v}{\bar{v}} \quad \text{and} \quad MI_{norm} = \frac{MI + 1}{2} \quad (7)$$
with $v$ the variance of the object and $\bar{v}$ the mean variance of the image at a given segmentation level. It can be noticed that the proposed normalization formulae do not involve a segment area measure. Substituting the expressions in (7) into (3) produces the following global score, or objective, function:
$$G_s = \frac{2v + \bar{v}\,(MI + 1)}{2\bar{v}} \quad (8)$$
The authors argued that the new global score function is largely independent of any under- and over-segmentation errors that may occur. Indeed, the authors' concerns were reasonable since, currently, it is not possible to obtain a segmentation with 100% correctly segmented objects; some segments are always subjected to under- or over-segmentation, even though the main aim of ongoing studies on the selection of optimal segmentation parameters is to minimize the number of segments subjected to under- and over-segmentation errors.
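Under the same illustrative conventions as the sketch above, the separate rescaling in (7) and the resulting global score (8) reduce to a few lines; this is a sketch of the reconstruction given here, not code from [21].

```python
def rescaled_global_score(v, v_mean, mi):
    # Eq. (7): object variance rescaled by the image mean variance,
    # and Moran's index shifted from [-1, 1] into [0, 1].
    v_norm = v / v_mean
    mi_norm = (mi + 1.0) / 2.0
    # The sum is algebraically Eq. (8): (2v + v_mean * (MI + 1)) / (2 * v_mean).
    return v_norm + mi_norm
```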

3. Materials and Methods

3.1. Satellite Imagery and Study Area

GeoEye imagery has been widely used for urban mapping applications [33,34]. The multispectral imagery possesses four spectral bands, including the three bands of the visible light region of the electromagnetic spectrum: the blue band, spanning from 450 nm to 510 nm; the green band, which spans from 510 nm to 580 nm; and the red band, which spans from 655 nm to 690 nm. The fourth band covers the near-infrared region of the electromagnetic spectrum, ranging from 780 nm to 920 nm. The spatial resolution of 1.8 meters classifies the multispectral imagery as high resolution [35,36]. Atmospheric, radiometric, and geometric corrections had already been performed on the imagery. The choice of the City Bowl area of the Cape Town metropolitan area as our study area was driven by the diversity and complexity of this urban environment, which contains grassland fields, urban trees, open spaces, green recreational parks, sports fields, roads, residential and non-residential buildings of various sizes, parking lots, and water bodies. Five urban scenes within the study area were selected to extract spatial and spectral attributes of urban features. These urban scenes include the residential areas of Zonnebloem, Vredehoek, Oranjezicht, the Company's Garden, Woodstock, and the District Six suburbs.

3.2. Feature Selection and Extraction

The satellite image covering the study area was segmented at six distinct scale parameters using the multi-resolution segmentation algorithm in eCognition software version 10.3, with varying color factor weights. The choice of eCognition was mainly based on it being the only object-based image analysis software package available in the laboratory made available for our experiments. The multi-resolution segmentation algorithm in eCognition merges adjacent pixels of similar spectral brightness to form larger, internally homogeneous segments that are clearly distinguishable from their respective neighbors. The process stops when the homogeneity criteria set through the scale parameter are achieved [37]. It has been reported in [38] that from the scale parameter of 40 up to 100, the number of object segments becomes more stable when image segmentations are conducted with optimal parameters. Following this, we selected the first scale parameter to be 50 and extended the range in increments of 20 up to the scale parameter of 150, resulting in six segmentation scale parameters, namely 50, 70, 90, 110, 130, and 150. Each scale parameter was used to perform 8 segmentations with color factor weights ranging from 0.2 to 0.9, making a total of 48 segmentations, so that the various scale parameters could be analyzed at different color factor weights. During these preliminary segmentations, a weight of one was given to each spectral band of the imagery in order to equally retain the spectral information offered by each individual band. From the resulting segments, unique spectral brightness measures describing the various objects were collected at each segmentation level. In addition to the spectral brightness measures, area measures were also collected. Forty-eight (48) image segments were carefully selected per segmentation level, totaling 2160 samples across all the segmentation levels in this study.
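The resulting experimental design is a simple 6 × 8 grid of scale parameters and color factor weights. The sketch below merely enumerates the combinations described above; the segmentations themselves were run in eCognition, so no software API is implied here.

```python
scales = list(range(50, 151, 20))                            # 50, 70, 90, 110, 130, 150
color_weights = [round(0.2 + 0.1 * k, 1) for k in range(8)]  # 0.2, 0.3, ..., 0.9

runs = [(scale, weight) for scale in scales for weight in color_weights]
assert len(runs) == 48  # 6 scale parameters x 8 color factor weights
```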

3.3. Image Variance and Spatial Autocorrelation Modeling

To evaluate the intra-segment homogeneity, we first calculated the image variance using a measure we denoted “standard deviation” image variance due to the closeness of its formulation to that of the traditional standard deviation measure. The model produces higher values for segments with low internal homogeneity, while low values of the measure would indicate segments with high internal homogeneity. The latter hypothesis fits what is generally accepted as a good image segmentation outcome. The proposed image variance model was formulated as follows:
$$\delta Var = \frac{\sum_{i=1}^{n} (a_i - \bar{a})^2}{\sum_{i=1}^{n} c_i} \quad (9)$$
where the quantity $\sum_{i=1}^{n} c_i$ is the sum of the area measures of the segments at a given segmentation level. The variable $a_i$ is the brightness value of a segment of interest $i$, while $\bar{a}$ is the mean brightness value of the image at a given segmentation level. The squared deviation $(a_i - \bar{a})^2$ in the expression minimizes any numerical instability that may be introduced by segments of very low brightness values. It can be observed from (9) that while smaller homogeneous objects are being merged, there is an increase in the quantity in the denominator of the model, leading to a decrease in the intra-segment image variance. The model ensures a positive, numerically stable intra-segment homogeneity measure. Instead of using the Moran's index formulation in Equation (2), we propose the following spatial autocorrelation coefficient index, which we denote $MI$ and formulate as follows:
$$MI = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \left[ (x_i - \bar{x}_i) \times (x_j - \bar{x}_j) - w_{ij}\,(x_i - \bar{x}_i)(x_j - \bar{x}_j) \right]}{\sum_{i=1}^{n} \sum_{j=1}^{n} \left( w_{ij}^2 - w_{ij} \right)} \quad (10)$$
where $x_i$ represents the spectral brightness of a segment of interest $i$, and $x_j$ is the nearest neighbor segment to the segment of interest $i$. The quantities $\bar{x}_i$ and $\bar{x}_j$ represent the mean brightness measures of all the segments of interest and of all the segments spatially connected to the segments of interest, respectively. The parameter $n$ represents the total number of segments at a given segmentation level. The parameter $w_{ij}$ represents the spatial distance weight between segments of interest and their respective nearest neighbors; the weight is assigned a value of one when the segment of interest is spatially connected to another segment, while a value of zero indicates that there is no spatial connection between the segments. High values of the spatial autocorrelation coefficient index $MI$ indicate high inter-object heterogeneity, in contrast to the traditional Moran's index, while low values of the index indicate low inter-segment heterogeneity between adjacent segments.
A high value of the spatial autocorrelation coefficient index and a low value of the image variance are required to achieve segments with low internal variance and high heterogeneity. One advantage of the above-proposed model is that it enforces two spatial autocorrelation constraints. The first constraint is enforced through the distance matrix parameter $w_{ij}$ present in the denominator of the expression. Since each pixel within the image is assigned a spatial location, the difference between the product terms in the numerator enforces a second spatial autocorrelation constraint between two adjacent segments $i$ and $j$. The second advantage of the adopted measure is that when more objects are merged, the distance difference in the denominator decreases, in contrast to the traditional Moran's index, resulting in an increase in the index. This association of the constraints imposed by the models in (9) and (10) prevents larger heterogeneous objects from being merged towards the end of the segmentation process; in other words, only larger homogeneous segments are merged towards the end of the segmentation process.
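The binary spatial weights $w_{ij}$ used in (10) can be derived from a labeled segment raster by flagging label pairs that share a pixel boundary. The 4-connectivity scan below is one possible construction under that assumption; it is not the authors' code.

```python
import numpy as np

def adjacency_matrix(labels):
    # labels: 2-D integer array of segment ids in 0..n-1.
    # Returns w with w[i, j] = 1 if segments i and j share at least one
    # 4-connected pixel boundary, and 0 otherwise (zero diagonal).
    n = int(labels.max()) + 1
    w = np.zeros((n, n), dtype=np.uint8)
    for a, b in ((labels[:, :-1], labels[:, 1:]),   # horizontal neighbors
                 (labels[:-1, :], labels[1:, :])):  # vertical neighbors
        mask = a != b
        w[a[mask], b[mask]] = 1
        w[b[mask], a[mask]] = 1
    return w
```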
To facilitate the graphical projection of the image variance and spatial autocorrelation measures, we adopted the normalization function proposed in Equation (4). The motivation for this choice was that the model proposed in [21] gives dominance to the image variance over the Moran's index measure, since the normalization of the variance would lead to larger values due to the small numerical magnitude of the denominator. In addition, the model proposed to normalize Moran's index prevents the index from reaching certain numerical values, and there are no clear justifications for the increment of Moran's index by a unit measure or for dividing the obtained value by two. From the functions in (9) and (10), we computed a heterogeneity index by modifying the numerator of the expression in (5), since the aim of the parameter selection strategy is to maintain a high value of the spatial autocorrelation measure and a low image variance measure, which cannot be achieved through the original formulation. The motivation for not using the model proposed by [32] is that there are no clear numerical rules for the assignment of values to the weighting parameter $a$ when performing more than two segmentation levels, and this can lead to inaccurate identification of optimal segmentation parameters. For a four-level segmentation, the authors, for instance, assigned weights of 4, 2, 0.5, and 0.25 to the parameter at the first, second, third, and fourth segmentation levels, respectively, while for a three-level segmentation, weights of 3, 1, and 0.33 were assigned to the parameter at the first, second, and third segmentation levels, respectively. The other limitation of their global score model is that even segments with high image variance can undermine the robustness of the function, since a high value of the function would still be achieved. For this study, the following modified heterogeneity model was used:
$$H = \frac{MI_{norm} - \delta Var_{norm}}{MI_{norm} + \delta Var_{norm}} \quad (11)$$
The proposed heterogeneity function returns values close to 1 for internally homogeneous objects that are distinct from their surrounding neighbors, while values of the index nearing zero indicate internally heterogeneous segments. Table 1 presents a subset of the estimated normalized image variance and spatial autocorrelation measures. These measures were estimated across the six segmentation scales with color factor weights ranging from 0.2 to 0.9.
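As a minimal sketch of the reconstructions in (9) and (11), with the normalized spatial autocorrelation measure taken as a given input (the variable names are illustrative):

```python
import numpy as np

def delta_var(brightness, segment_areas):
    # Eq. (9): sum of squared brightness deviations from the image mean,
    # divided by the total area of the segments at this level.
    d = brightness - brightness.mean()
    return np.sum(d ** 2) / np.sum(segment_areas)

def heterogeneity(mi_norm, dvar_norm):
    # Eq. (11): near 1 for internally homogeneous, mutually distinct
    # segments; near zero (or negative) otherwise.
    return (mi_norm - dvar_norm) / (mi_norm + dvar_norm)
```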

3.4. Image Variance Optimization

In order to refine the large number of records of normalized image variance and spatial autocorrelation measures, we computed the average image variance at each individual segmentation level with color factor weights ranging from 0.2 to 0.7. These estimated mean values were useful in the optimization of the selection of segmentation scale parameters. Table 2 presents the optimized image variance per segmentation scale as a function of color factor weights.
Image variance measures give a first indication of the overall internal homogeneity of image segments at a given segmentation level. This first indication can be further analyzed to determine the overall internal homogeneity percentage carried by individual segments at a given segmentation level. The overall internal homogeneity percentage carried by each image object increases as the image variance measure decreases. To quantify such a measure, an inverse ratio of the image variance could be used; however, due to the normalization of the latter measure, this would not yield realistic results. Instead, we propose a homogeneity proportion evaluation model that returns high values when the intra-segment image variance decreases at a given segmentation level. The formulation of the proposed model is as follows:
$$\text{Proportion of image segments' internal homogeneity} = \frac{\sum_{i=1}^{n} c_i - \sum_{i=1}^{n} (a_i - \bar{a})^2}{\sum_{i=1}^{n} c_i} \quad (12)$$
where $\sum_{i=1}^{n} c_i$ is the sum of the area measures of all segments at a given segmentation level, $a_i$ is the brightness of a segment of interest $i$, $\bar{a}$ is the mean brightness of the image, and $n$ is the total number of image segments at a given segmentation level. The estimated metrics were further normalized using the model adopted in (4), and Table 3 presents the image segments' homogeneity levels per segmentation scale parameter as a function of color factor weights.
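Under the reconstruction of (9) above, the proportion in (12) is simply the complement of the un-normalized image variance, as in this illustrative sketch:

```python
import numpy as np

def homogeneity_proportion(brightness, segment_areas):
    # Eq. (12): high when the intra-segment variance is low; algebraically
    # equal to 1 - delta_var for the formulations given above.
    d = brightness - brightness.mean()
    return (np.sum(segment_areas) - np.sum(d ** 2)) / np.sum(segment_areas)
```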
Figure 2 shows the variation of image variance measures per segmentation scale parameter as a function of color factor weights. Figure 2A suggests that the scale parameter of 50 did not produce meaningful image segments when associated with color factor weights of 0.2, 0.4, and 0.7. This failure of the image variance curve to produce a global minimum point at each of these segmentation levels could be due to the fact that image segments produced at these segmentation levels did not exhibit strong internal homogeneity, with the measures estimated at 22%, 78%, and 85%, respectively. In contrast, the global minimum characterizing the image variance curve at the color factor weight of 0.9 is an indication of very good segment quality. Image objects produced at this segmentation level hold an internal homogeneity level of about 97%. Associating the scale parameter of 50 with the color factor weight of 0.5 resulted in the poorest image segment quality, as the segments at this segmentation level exhibit the lowest internal homogeneity level at 3%. Associating the color factor weights of 0.4 and 0.9 with the scale parameter 70 produced very good quality segments, as illustrated by the global minimum points achieved by the image variance curve at these two segmentation levels in Figure 2B. These global minimum points correspond to internal homogeneity levels reaching values close to 98%. In contrast, the association of the scale parameter 70 with the color factor weight of 0.6 resulted in the poorest segment quality, as illustrated by the global maximum achieved by the image variance curve at this segmentation level. This global maximum point originates from the fact that image segments produced at this segmentation level are characterized by only 2% internal homogeneity.
The association of the scale parameter of 90 with the color factor weight of 0.9 resulted in the poorest image segment quality. This poor segmentation quality is illustrated by the global maximum point achieved by the variance curve, as revealed in Figure 2C. At this global maximum point of the curve, image segments only achieved an internal homogeneity level of 27%. The association of the scale parameter of 90 with the color factor weight of 0.2 produced very good image segment quality, as illustrated by the global minimum of the curve achieved at this segmentation level. This global minimum reflects the high internal homogeneity level of segments produced at this segmentation level, which reached a value of 98%.
Associating a color factor weight of 0.7 with the scale parameter of 110 produced the poorest segment quality, as illustrated by the global maximum point of the variance curve in Figure 2D. At this segmentation level, segments only reached an internal homogeneity level of 12% at the end of the segmentation process. However, an adjustment of the color factor to 0.8 enabled us to increase the internal homogeneity of image segments to a level close to 98%. As a consequence of this result, the image variance curve decreased to its global minimum value. In contrast, no color factor weight associated with the scale parameter of 130 produced meaningful image segments. The greatest internal homogeneity level achieved by image segments occurred with a color factor weight of 0.6, although the achieved homogeneity level of 89% remains low for accurate urban mapping applications.
The poorest segment quality was achieved when associating this scale parameter with the color weight of 0.7, and this resulted in the majority of segments only reaching a homogeneity level of about 12%, as illustrated by Figure 2E. A similar result was found when associating the color factor weight of 0.5 with the scale parameter of 150. The global maximum point of the curve achieved at this segmentation level was a consequence of image segments only achieving 3% internal homogeneity. However, adjusting the color factor weight to 0.8 improved segment quality, as the majority of image segments achieved a homogeneity level of about 98% at the end of the segmentation process.

3.5. Image Spatial Autocorrelation Optimization

Spatial autocorrelation measures were computed using Equation (10), and mean values were estimated per segmentation level. The computed data were organized into 48 groups, corresponding to individual segmentation levels. Table 4 presents 48 mean spatial autocorrelation measures computed at each segmentation scale parameter per color factor weight.
Figure 3 presents the fluctuation of the spatial autocorrelation measure at various scale parameters as a function of color factor weights. The association of the scale parameter of 50 with any of the color factor weights did not produce good spatial separation between segments, as the highest dissimilarity level achieved by image segments only reached 0.552, which corresponds to about 55% dissimilarity. However, the poorest inter-segment dissimilarity measure was achieved when associating the scale parameter of 50 with the color factor weight of 0.2. At this segmentation level, image segments only reached a dissimilarity level of 0.13%, as illustrated by the global minimum point achieved by the spatial autocorrelation curve. This is an indication that at this segmentation level there is still a large number of image segments sharing similar spectral properties that can be merged at higher segmentation levels. An adjustment of the color factor weight to 0.4 and its association with the scale parameter of 70 produced image segments that are highly distinct from one another, as illustrated by the global maximum point achieved by the spatial autocorrelation curve in Figure 3B. At this segmentation level, image segments achieved a dissimilarity level of about 98%. In contrast, the association of this scale parameter with the color factor weight of 0.8 resulted in image segments reaching their lowest dissimilarity level at 2.10%.
Similarly poor segmentation performance was achieved when the scale parameter of 90 was associated with the color factor weights of 0.2 and 0.9. At these segmentation levels, the spatial autocorrelation curve reached its two lowest points. Image segmentation at these levels produced segments that were spatially distinct from one another only at 2.37% and 7.68%, respectively. However, setting the color factor weight to 0.3 enabled us to achieve segments that are highly distinct from one another, as illustrated by the global maximum achieved by the spatial autocorrelation curve in Figure 3C. At this segmentation level, the segments produced achieved a dissimilarity level of about 94%. The association of the scale parameter 110 with the color factor weights of 0.2, 0.4, and 0.9 did not produce image segments that are distinct enough from one another, since the spatial autocorrelation curve achieved its lowest values, as illustrated in Figure 3D. Image segments produced at these segmentation levels only achieved dissimilarity levels of 0.67%, 0.15%, and 0.33%, respectively. In contrast, an adjustment of the color factor weight to 0.8 enabled us to achieve image segments that are highly distinct from one another, with a dissimilarity level close to 98%.
The scale parameter of 130 failed to produce very good quality segments when associated with color factor weights of 0.2 and 0.4, as the spatial autocorrelation curve in Figure 3E achieved its lowest values at these segmentation levels. Image segments produced at these segmentation levels only achieved dissimilarity levels of about 2.51% and 4.28%, respectively. In contrast, an adjustment of the color factor weight to 0.6 improved the quality of the image segments, and segments produced at this segmentation level achieved a dissimilarity measure of about 98%. For the scale parameter of 150 (Figure 3F), setting the color factor weights to 0.3, 0.4, and 0.6 only enabled us to achieve local maximum points of the curve.
At these segmentation levels, image segments were found to be distinct from their nearest neighbors at 80%, 82%, and 85%, respectively. However, an adjustment of the color factor weight to 0.8 enabled us to achieve image segments that were about 98% distinct from their nearest neighbors.

3.6. Segmentation Scale Parameter Optimization

From the analyses of the segmentation results, it was revealed that the scale parameter of 50, when associated with the color factor weight of 0.9, produced highly homogeneous image segments. However, the inter-segment dissimilarity level of these segments was revealed to be very low, at a level of 42%. The association of the scale parameter of 70 with color factor weights of 0.4 and 0.9 produced image segments with very high internal homogeneity levels, at about 98%, and dissimilar to their nearest neighbors at 98% and 74%, respectively. From these results, it appears that the scale parameter of 70 could be suitable for the optimal segmentation of a certain type of urban feature. Furthermore, it was revealed that the scale parameter of 90 associated with the color factor weight of 0.2 produced image segments that were 99% internally homogeneous, but these segments were revealed to be dissimilar to their nearest neighbors at only 2%, indicating that they still share 98% of their spectral attributes with their surroundings. It can be suggested that this scale parameter is not suitable for the optimal segmentation of urban scenes within the study area. The scale parameter of 110 associated with the color factor weight of 0.8 produced image segments that were internally homogeneous at 98% and dissimilar to their nearest neighbors at 98%. These results suggest that the scale parameter of 110 can drive an optimal segmentation process that will accurately delineate a certain type of urban feature within the study area. The scale parameter of 130, in association with the color factor weight of 0.6, produced image segments that were distinct from their nearest neighbors at 98% but internally homogeneous at only 89%, which suggests that the scale parameter of 130 may not produce good segment delineation if used in our study area. Finally, the scale parameter of 150 associated with the color factor of 0.8 was found to produce image segments that were internally homogeneous at 98% and distinct from their nearest surrounding neighbors at about 99%. From these latter results, it can be suggested that the scale parameter of 150 could optimally segment a certain type of urban object within the study area.
Consolidating these results in Table 5 enables us to compute the segmentation heterogeneity measures through Equation (11). The estimated heterogeneity function enables us to confirm the potential optimal segmentation scales identified above. High values of the function confirm that the produced segments are internally homogeneous and highly distinct from one another, while low values of the index indicate the presence of segments that are not internally homogeneous and thus not highly distinct from one another.
Figure 4 presents the heterogeneity function curve, showing low and high points. The identified peak points indicate the optimal segmentation scale parameters suitable for segmenting the satellite image and achieving good-quality segments. It can be observed that the scale parameters of 70, 110, and 150 would produce the best image segment quality, as they are associated with peaks of the heterogeneity curve, and this supports our earlier assessment of the suitability of these segmentation scale parameters for successfully segmenting urban features within the study area. In contrast, the scale parameters of 50, 90, and 130 did not qualify as optimal segmentation scale parameters, as they failed to produce peaks of the heterogeneity curve.

4. Results and Discussion

Figure 5 shows that the image variance curve follows a very regular fluctuation towards the end of the segmentation, following a sinusoidal-like shape, revealing the robustness of the newly proposed image variance model. This result is in line with the statement of [22], who are of the view that a good image variance measure should produce a curve that follows a regular fluctuation shape towards the end of the segmentation. These results are also in line with [23], who argued that a good image variance model should not produce an increase in the image variance towards the end of the segmentation.
The fact that the curve of spatial autocorrelation measures follows a nearly linear trend towards the end of the segmentation shows that segments have reached a state of internal stability, which means that large distinct objects are not being forced to merge towards the end of the segmentation process, in contrast to current segmentation parameter selection methods that rely on the weighted image variance and Moran's index. These observations indicate that the proposed image variance and spatial autocorrelation formulations are suitable for urban areas, which are very heterogeneous and complex, and this is in line with the suggestions made by [21]. Figure 6 illustrates the segmentation of a built-up scene at a scale parameter of 50 (A) and a scale parameter of 70 (B). It can be observed that the segmentation at scale 50 has over-segmented most of the built-up features, including regular-sized buildings as well as large buildings and some trees near buildings. In contrast, the segmentation at a scale parameter of 70 has corrected the delineation errors, and all buildings have achieved outlines that are very close to their real-world measurements. This result is coherent with earlier findings by [39], who also found the scale parameter of 70 to be suitable for the segmentation of standard-sized and slightly larger buildings.
Figure 7 shows the segmentation of a large parking lot at scale parameter 90 (A) and the segmentation of the same area at scale parameter 110 (B). It can be observed that the segmentation at scale parameter 90 produced internally homogeneous objects; however, many of these objects seem to still share spectral similarities and thus are not highly heterogeneous with respect to one another, which is in line with the earlier results in Figure 5 showing low heterogeneity between segments at this specific scale parameter. However, the segmentation at a scale parameter of 110 shows an improvement in inter-segment heterogeneity, as smaller segments that share similar spectral attributes were successfully merged to produce the full extent of the parking lot. It can also be observed, along the diagonal from the top right corner and near the bottom of the image, that the scale parameter 110 achieved a good segmentation of elongated urban features, particularly roads, which reached near real-world boundaries. However, the majority of standard-sized buildings were under-segmented at this scale parameter. This result confirms an earlier observation by [39], who suggested that scale parameters larger than 100 are generally not suitable for standard and small-sized buildings.
Figure 8 shows the segmentation results of a large green sports field at scale parameters 130 (A) and 150 (B). Unlike the scale parameter of 90, the scale parameter 130 shows some high heterogeneity between the resulting small segments, as most of the segments at this specific scale parameter seem slightly different in terms of spectral brightness. However, a visual examination of individual segments reveals that many of these segments are not internally homogeneous, as they seem to carry more than one color tone, and this visual result confirms the earlier observation in Figure 5, where distinct segments produced at scale parameter 130 were associated with a high internal segment variance.
Figure 9 shows the segmentation of a large swimming pool structure at scale parameter 130 (A) and at scale parameter 150 (B). As pointed out earlier, segments produced at scale parameter 130 are not yet internally stable in terms of brightness, and this can be extended to the tree cover and some green areas around the swimming pool. However, a segmentation at scale parameter 150 can stabilize the internal variance of the image by further merging smaller segments with the larger ones, and the entire outline of the swimming pool area, the blue roofing structure on the side of the swimming pool as well as some nearby tree patches, were successfully reconstructed.
The segmentation parameter evaluation strategy proposed in this study was further tested on a 0.5 m resolution color aerial photograph covering the same study area as the GeoEye satellite imagery. Radiometric, atmospheric, and geometric corrections had already been performed on the imagery, acquired from the Department of Rural Development, Agriculture and Land Reform in Pretoria. Three urban scenes were selected from the imagery and subjected to several segmentation tasks at varied scale parameters. Figure 10 illustrates subsets of segmentation results achieved with the scale parameters 40 (B), 50 (C), and 70 (D). The segmentation results in Figure 10B,C show poor delineation of building units, which were subjected to over- and under-segmentation errors. However, the segmentation result in Figure 10D shows successful delineation of these building units.
Figure 11 presents subsets of segmentation results obtained from the partitioning of the aerial photograph at scale parameters 90 (B), 100 (C), and 110 (D). The result in (D) shows a successful delineation of roads, while the results in (B) and (C) reveal over-segmentation occurrences on the linear urban objects.
Figure 12 shows subsets of segmentation results for urban vegetation cover at scale parameters 120 (B), 130 (C), and 150 (D). The results reveal that scale parameters 120 and 130 could not produce meaningful image segments of urban trees. However, a segmentation at the scale parameter of 150 successfully merged small tree patches into larger homogeneous tree segments that are distinct from their nearest neighbors.

5. Accuracy Assessment

The segmentation accuracy assessment was performed through five quantitative metrics, namely the quality rate ($QR$), the area fit index ($AFI$), the over-segmentation index ($OS$), the under-segmentation index ($US$), and the root mean square error ($RMS$). The quality rate error quantifies how large the under-segmentation error is [40]. Small values of the measure are achieved in an optimal image segmentation where the outlines of segments are very close to their real-world references. The area fit index quantifies the over-segmentation error; negative values of the index indicate a one hundred percent overlap between the reference polygon and its corresponding image segment and an overall under-segmentation of the image. Values of the $AFI$ greater than zero generally describe an overlap of less than 100 percent between the reference polygon and its corresponding segment. The under-segmentation error ($US$) describes the amount of over-segmentation in the image, while the over-segmentation error ($OS$) characterizes the amount of under-segmentation to which individual segments were subjected. To compute these five assessment measures, we manually digitized building, road, parking lot, and swimming pool polygon samples using ArcGIS version 10.8. Area measures of the digitized polygons were converted into pixels. These measures were used with the associated segment area measures to compute the quality rate, area fit index, over- and under-segmentation errors, as well as the root mean square error. A total of 390 image segments were carefully selected within the study area, with 65 segment polygons considered per urban scene. Table 6 shows the computed values of the quality rate, area fit index, under- and over-segmentation error indices, as well as the root mean square error, for six state-of-the-art segmentation evaluation approaches against our proposed strategy.
From the second column of Table 6, it can be observed that the segmentation parameter evaluation strategies proposed by [41] and [16] produced the largest quality rate errors, followed by the strategies proposed by [5,24,42,43]. However, our proposed segmentation evaluation strategy achieved the lowest quality rate error, which implies that the overall under-segmentation to which the various image segments were subjected is very negligible [40]. When it comes to the area fit index, the segmentation parameter evaluation method proposed by [41] produced a positive measure of the error, which indicates that most image segments were subjected to over-segmentation. The evaluation strategies proposed by [42,43] and our proposed method all achieved negative index values. This indicates that the majority of image segments were subjected to under-segmentation errors. However, in terms of the magnitude of the error, our proposed strategy achieved the best result, at 0.006. The largest under-segmentation errors were produced by the segmentation parameter selection methods proposed by [5,24,42,43,44]. The parameter evaluation strategy proposed by [41] achieved the lowest under-segmentation error, at 0.005, followed by our proposed strategy, at 0.008. This means that some image segments in both evaluation methods were subjected to over-segmentation. A look at the area fit index indicates that the amount of over-segmentation is more prominent in the strategy proposed by [41] than in our proposed strategy [45]. An observation of the fifth column of Table 6 indicates that the newly proposed segmentation parameter evaluation produced the lowest over-segmentation error, followed by the strategies proposed in [41,42]. This means that the amount of under-segmentation error to which image segments were subjected in our strategy was very negligible, at about 0.782% of the size of the reference polygons, which is a very good and acceptable error. In terms of overall performance, the last column confirms that the newly proposed segmentation parameter evaluation strategy produced the lowest over- and under-segmentation errors.
Table 6. Average estimates of five image segmentation accuracy assessment indices from six state-of-the-art methods.
| Segmentation Assessment Metrics | QR | AFI | US | OS | RMSE |
|---|---|---|---|---|---|
| Yang et al. (2014) [24] | 0.257700 | ------- | 0.220500 | 0.127550 | 0.180000 |
| El-Naggar (2018) [42] | 0.386800 | −1.062000 | 0.329000 | 0.058000 | 0.273000 |
| Vamsee et al. (2018) [43] | 0.306700 | −0.145000 | 0.311700 | 0.295000 | 0.368000 |
| Wang et al. (2019) [5] | 0.314500 | ------- | 0.148200 | 0.191500 | 0.171000 |
| Norman et al. (2020) [41] | 0.483200 | 0.004580 | 0.005000 | 0.009730 | 0.014000 |
| Dao et al. (2021) [16] | 0.490000 | ------- | 0.080000 | 0.970000 | 0.688000 |
| He et al. (2024) [44] | 0.321100 | ------- | 0.238000 | 0.246900 | 0.342900 |
| The proposed method | 0.003910 | −0.005740 | 0.007780 | 0.007820 | 0.005510 |
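The paper does not restate the formulas behind these indices; the sketch below uses definitions common in the segmentation evaluation literature (the quality rate as one minus the intersection-over-union, the area fit index of Lucieer and Stein, and the over-/under-segmentation indices with their combined root mean square distance as in Clinton et al.), which may differ in detail from the exact variants computed in this study.

```python
import numpy as np

def segment_accuracy(a_ref, a_seg, a_int):
    # a_ref: reference polygon area, a_seg: matched segment area,
    # a_int: area of their overlap (all expressed in pixels).
    qr = 1.0 - a_int / (a_ref + a_seg - a_int)  # quality rate (1 - IoU)
    afi = (a_ref - a_seg) / a_ref               # area fit index
    os_ = 1.0 - a_int / a_ref                   # over-segmentation index
    us = 1.0 - a_int / a_seg                    # under-segmentation index
    rms = np.sqrt((os_ ** 2 + us ** 2) / 2.0)   # combined RMS distance
    return qr, afi, os_, us, rms
```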

6. Conclusions

Image segmentation remains a very important step in object-based image analysis. However, the development of strategies for selecting optimal segmentation scale parameters for multi-resolution image segmentation is still ongoing and remains a great challenge. This study proposed an unsupervised strategy for the selection of optimal segmentation scale parameters and color factor weights. The new strategy presents alternative image segmentation evaluation paradigms in addition to those presented in the works of [5,16,25,41,42,43,44]. The proposed robust, repeatable strategy for the optimal selection of segmentation parameters (OSSP) consists of three modules. The first module, based on the concept of standard deviation, computes a non-weighted image variance to keep the measure independent of area-size errors that may originate from over- and under-segmentation of the imagery. The second module partially relies on the current formulations of Moran's index and the correlation coefficient to evaluate the spatial autocorrelation of the image in order to reinforce the separation of internally distinct objects during the segmentation process. When the spectral distance between objects increases, the measure levels off, in contrast to the traditional trend of this measure. For our measure, the heterogeneity between objects always increases when the spectral distances between objects increase, which is best for good segmentations. The third module consists of a heterogeneity index that differs from current formulations in terms of its numerator. This module can find multiple optimal segmentation scale parameters as well as associated color factor weights that guarantee a good delineation of the spatial outlines of the resulting image segments. The effectiveness of the optimal selection of segmentation scale parameters strategy in extracting distinct segments of land use and land cover from multispectral imagery was validated with a GeoEye multispectral image and a color aerial photograph. The optimal segmentation scale parameters obtained from the proposed OSSP strategy could successfully outline land use and land cover features of various sizes, including small and standard-sized buildings, large buildings, parking areas, roads, water bodies, and urban trees. Although the proposed approach achieved successful segmentations for most urban land use and land cover features, some objects were slightly over- and under-segmented, but these errors were smaller than those of some state-of-the-art existing methods. Minor under-segmentation is preferable in urban mapping using object-based image analysis, especially when dealing with very complex environments such as urban environments. These minor under-segmentation errors can generally be corrected through manual re-segmentation and merging. Associating the obtained individual optimal segmentation scale parameters with urban features of interest within the imagery was performed through visual observations. Our evaluation strategy produced image segments with outlines very close to their real-world characteristics. Standard-sized building objects were better delineated using low color factor weights, which can be translated into a higher shape compactness measure. This was observed at a scale parameter of 70 with a color weight of 0.4.
However, when increasing the segmentation scale parameter measures, larger color weights seem to play a more significant role than shape compactness in producing image segments of good quality, as observed at scale parameters 110 and 150. The proposed segmentation parameter selection strategy successfully brought out three inherent optimal scale parameters suitable to segment the high-resolution multispectral GeoEye image and the aerial photograph, and the tool has the potential to efficiently improve knowledge-based image analysis of complex urban areas.
It was found in this study that the internal homogeneity of image segments does not necessarily increase with an increase in segmentation scale parameter values or color factor weights. The study found that small values of the color factor weight can optimize the performance of medium-scale parameters, in opposition to larger weights of the parameter, and vice versa. Large color factor weights were also found to optimize the performance of large segmentation scale parameters, and it can be argued that medium-range scale parameters would perform very well with large shape compactness/small color factor weights for the segmentation of urban scenes. In addition, large segmentation scale parameters would perform very well with larger color factor weights or smaller shape compactness values. It was also found that the number of spectral bands carried by the imagery affects the segmentation outcomes; this finding came from the visual comparison of segmentation results from both the satellite image and the aerial photograph, as some minor over- and under-segmentation occurrences were observed on the aerial image segmentation results while they were nonexistent on the satellite image segmentation results. Further investigation could involve an urban environment that contains informal settlements and a large amount of vegetation cover, as well as the consideration of a multispectral image with more spectral bands, to assess the performance of the newly proposed modules in handling larger spectral variability. Other segmentation parameters, such as shape compactness, and an extension of the range of scale parameters could also be considered in further investigations. Combining the developed strategy with deep learning and machine learning concepts could also be investigated in future work. To improve the speed of selection of optimal segmentation parameters, an automated heterogeneity graph interpretation algorithm could also be developed. This study did not directly focus on image classification assessment; however, improvements in image segmentation quality have direct implications for image classification results. It has been advocated that improvements in image segmentation quality would result in improved image classification outcomes [46].

Author Contributions

Conceptualization, methodology, validation, formal analysis, original draft preparation, data curation, visualization: G.B.I. Writing—review & editing: G.B.I. and K.M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable; this study did not involve humans or animals.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author, as they form part of a research laboratory data repository.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Krause, J.R.; Oczkowski, A.J.; Watson, E.B. Improved mapping of coastal salt marsh habitat changes at Barnegat Bay (NJ, USA) using object-based image analysis of high-resolution aerial imagery. Remote Sens. Appl. Soc. Environ. 2023, 29, 100910.
2. Taubenböck, H.; Esch, T.; Roth, A. An urban classification approach based on an object-oriented analysis of high-resolution satellite imagery for a spatial structuring within urban areas. In Proceedings of the First Workshop of the EARSeL Special Interest Group on Urban Remote Sensing "Challenges and Solutions", Berlin, Germany, 2–3 March 2006.
3. Ouchra, H.; Belangour, A. Satellite image classification methods and techniques: A survey. In Proceedings of the 2021 IEEE International Conference on Imaging Systems and Techniques (IST), Kaohsiung, Taiwan, 24–26 August 2021; pp. 1–6.
4. Grybas, H.; Melendy, L.; Congalton, R.G. A comparison of unsupervised segmentation parameter optimization approaches using moderate- and high-resolution imagery. GISci. Remote Sens. 2017, 54, 515–533.
5. Wang, Y.; Qi, Q.; Liu, Y.; Jiang, L.; Wang, J. Unsupervised segmentation parameter selection using the local spatial statistics for remote sensing image segmentation. Int. J. Appl. Earth Obs. Geoinf. 2019, 81, 98–109.
6. Oreti, L.; Giuliarelli, D.; Tomao, A.; Barbati, A. Object oriented classification for mapping mixed and pure forest stands using very-high resolution imagery. Remote Sens. 2021, 13, 2508.
7. Li, M.; Ma, L.; Blaschke, T.; Cheng, L.; Tiede, D. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments. Int. J. Appl. Earth Obs. Geoinf. 2016, 49, 87–98.
8. Bialas, J.; Oommen, T.; Havens, T.C. Optimal segmentation of high spatial resolution images for the classification of buildings using random forests. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101895.
9. Na, J.; Ding, H.; Zhao, W.; Liu, K.; Tang, G.; Pfeifer, N. Object-based large-scale terrain classification combined with segmentation optimization and terrain features: A case study in China. Trans. GIS 2021, 25, 2939–2962.
10. Wojtaszek, M.V.; Ronczyk, L.; Mamatkulov, Z.; Reimov, M. Object-based approach for urban land cover mapping using high spatial resolution data. E3S Web Conf. 2021, 227, 01001.
11. Abdulateef, S.K.; Salman, M.D. A comprehensive review of image segmentation techniques. Iraqi J. Electr. Electron. Eng. 2021, 17, 166–175.
12. Zhu, H.; Cai, L.; Liu, H.; Huang, W. Information extraction of high-resolution remote sensing images based on the calculation of optimal segmentation parameters. PLoS ONE 2016, 11, e0158585.
13. Liu, J.; Du, M.; Mao, Z. Scale computation on high spatial resolution remotely sensed imagery multi-scale segmentation. Int. J. Remote Sens. 2017, 38, 5186–5214.
14. Lv, X.; Ming, D.; Chen, Y.; Wang, M. Very high-resolution remote sensing image classification with SEEDS-CNN and scale effect analysis for superpixel CNN classification. Int. J. Remote Sens. 2019, 40, 506–531.
15. Chen, Y.; Chen, Q.; Jing, C. Multi-resolution segmentation parameters optimization and evaluation for VHR remote sensing image based on mean NSQI and discrepancy measure. J. Spat. Sci. 2021, 66, 253–278.
16. Dao, P.D.; Mantripragada, K.; He, Y.; Qureshi, F.Z. Improving hyperspectral image segmentation by applying inverse noise weighting and outlier removal for optimal scale selection. ISPRS J. Photogramm. Remote Sens. 2021, 171, 348–366.
17. Espindola, G.M.; Câmara, G.; Reis, I.A.; Bins, L.S.; Monteiro, A.M. Parameter selection for region-growing image segmentation algorithms using spatial autocorrelation. Int. J. Remote Sens. 2006, 27, 3035–3040.
18. Martha, T.R.; Kerle, N.; Van Westen, C.J.; Jetten, V.; Kumar, K.V. Segment optimization and data-driven thresholding for knowledge-based landslide detection by object-based image analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4928–4943.
19. Johnson, B.; Xie, Z. Unsupervised image segmentation evaluation and refinement using a multi-scale approach. ISPRS J. Photogramm. Remote Sens. 2011, 66, 473–483.
20. Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127.
21. Böck, S.; Immitzer, M.; Atzberger, C. On the objectivity of the objective function—Problems with unsupervised segmentation evaluation based on global score and a possible remedy. Remote Sens. 2017, 9, 769.
22. Georganos, S.; Grippa, T.; Lennert, M.; Vanhuysse, S.; Johnson, B.A.; Wolff, E. Scale matters: Spatially partitioned unsupervised segmentation parameter optimization for large and heterogeneous satellite images. Remote Sens. 2018, 10, 1440.
23. Hu, Z.; Zhang, Q.; Zou, Q.; Li, Q.; Wu, G. Stepwise evolution analysis of the region-merging segmentation for scale parameterization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2461–2472.
24. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159–182.
25. Yang, J.; Li, P.; He, Y. A multi-band approach to unsupervised scale parameter selection for multi-scale image segmentation. ISPRS J. Photogramm. Remote Sens. 2014, 94, 13–24.
26. Ni, N.; Chen, N.; Ernst, R.E.; Yang, S.; Chen, J. Semi-automatic extraction and mapping of dyke swarms based on multi-resolution remote sensing images: Applied to the dykes in the Kuluketage region in the northeastern Tarim Block. Precambrian Res. 2019, 329, 262–272.
27. Sharma, R.; Kumar, M.; Alam, M.S. Image processing techniques to estimate weight and morphological parameters for selected wheat refractions. Sci. Rep. 2021, 11, 20953.
28. Herold, M.; Scepan, J.; Müller, A.; Günther, S. Object-oriented mapping and analysis of urban land use/cover using IKONOS data. In Proceedings of the 22nd EARSeL Symposium Geoinformation for European-Wide Integration, Prague, Czech Republic, 4–6 June 2002; pp. 4–6.
29. Wang, X.; Liu, S.; Du, P.; Liang, H.; Xia, J.; Li, Y. Object-based change detection in urban areas from high spatial resolution images based on multiple features and ensemble learning. Remote Sens. 2018, 10, 276.
30. Wang, X.; Wang, L.; Tian, J.; Shi, C. Object-based spectral-phenological features for mapping invasive Spartina alterniflora. Int. J. Appl. Earth Obs. Geoinf. 2021, 101, 10.
31. Ikokou, G.B.; Smit, J. A technique for optimal selection of segmentation scale parameters for object-oriented classification of urban scenes. S. Afr. J. Geomat. 2013, 2, 358–369.
32. Johnson, B.A.; Bragais, M.; Endo, I.; Magcale-Macandog, D.B.; Macandog, P.B.M. Image segmentation parameter optimization considering within- and between-segment heterogeneity at multiple scale levels: Test case for mapping residential areas using Landsat imagery. ISPRS Int. J. Geo-Inf. 2015, 4, 2292–2305.
33. Atik, S.O.; Ipbuker, C. Integrating convolutional neural network and multiresolution segmentation for land cover and land use mapping using satellite imagery. Appl. Sci. 2021, 11, 5551.
34. Salah, M. Extraction of road centrelines and edge lines from high-resolution satellite imagery using density-oriented fuzzy C-means and mathematical morphology. J. Indian Soc. Remote Sens. 2022, 50, 1243–1255.
35. Mahdavi Saeidi, A.; Babaie Kafaki, S.; Mattaji, A. Development and improvement of neural network algorithm and forest cover index (FCD) classification methods in GeoEye high resolution satellite data (case study: Ramsar-Safarood Hyrcanian forests). J. Environ. Sci. Technol. 2022, 24, 113–126.
36. Alcaras, E.; Parente, C. The effectiveness of pan-sharpening algorithms on different land cover types in GeoEye-1 satellite images. J. Imaging 2023, 9, 93.
37. Cánovas-García, F.; Alonso-Sarría, F. A local approach to optimize the scale parameter in multiresolution segmentation for multispectral imagery. Geocarto Int. 2015, 30, 937–961.
38. Hao, S.; Cui, Y.; Wang, J. Segmentation scale effect analysis in the object-oriented method of high-spatial-resolution image classification. Sensors 2021, 21, 7935.
39. Frishila, A.A.; Kamal, M. Selection of optimum image segmentation parameters for building extraction using GeoEye-1 image data. In Proceedings of the 2019 5th International Conference on Science and Technology (ICST), Yogyakarta, Indonesia, 30–31 July 2019; Volume 1, pp. 1–6.
40. Mohan Vamsee, A.; Kamala, P.; Martha, T.R.; Vinod Kumar, K.; Jai Sankar, G.; Amminedu, E. A tool assessing optimal multi-scale image segmentation. J. Indian Soc. Remote Sens. 2018, 46, 31–41.
41. Norman, M.; Mohd Shafri, H.Z.; Idrees, M.O.; Mansor, S.; Yusuf, B. Spatio-statistical optimization of image segmentation process for building footprint extraction using very high-resolution WorldView 3 satellite data. Geocarto Int. 2020, 35, 1124–1147.
42. El-naggar, A.M. Determination of optimum segmentation parameter values for extracting building from remote sensing images. Alex. Eng. J. 2018, 57, 3089–3097.
43. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916.
44. He, T.; Chen, J.; Kang, L.; Zhu, Q. Evaluation of global-scale and local-scale optimized segmentation algorithms in GEOBIA with SAM on land use and land cover. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 6721–6738.
45. Clinton, N.; Holt, A.; Scarborough, J.; Yan, L.I.; Gong, P. Accuracy assessment measures for object-based image segmentation goodness. Photogramm. Eng. Remote Sens. 2010, 76, 289–299.
46. Kim, M.; Madden, M.; Warner, T. Estimation of optimal image object size for the segmentation of forest stands with multispectral IKONOS imagery. In Object-Based Image Analysis: Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Springer: Berlin/Heidelberg, Germany, 2008; pp. 291–307.
Figure 1. The segmentation heterogeneity graph portraying optimal segmentation scales (Source: [31]).
Figure 2. Fluctuations of image variance as functions of color factor weight per scale parameter. (A–F) show successful identifications of optimal image variances at color factor weights of 0.7 (A), 0.9 (B), 0.2 (C), 0.8 (D), 0.6 (E), and 0.8 (F).
Figure 3. Spatial autocorrelation curves per segmentation scale parameter and color weight. (A–F) show successful identifications of optimal spatial autocorrelation measures at color factor weights of 0.8 (A), 0.4 (B), 0.3 (C), 0.8 (D), 0.6 (E), and 0.8 (F).
Figure 4. Heterogeneity function measures revealing three optimal segmentation scale parameters.
Figure 5. Overall assessment of the quality of the image segmentation process.
Figure 6. Subsets of regular and large building segmentation results from the satellite imagery. (A) Over-segmentation results at scale parameter 50; (B) optimal segmentation results at scale parameter 70.
Figure 7. Subsets of segmentation results of a large parking lot and some roads from the satellite imagery. (A) Over- and under-segmentation results at scale parameter 90; (B) optimal segmentation results at scale parameter 110.
Figure 8. Segmentation of a large sports field from satellite imagery. (A) Over-segmentation of the sports field at scale parameter 130; (B) optimal segmentation of the sports field at scale parameter 150.
Figure 9. Segmentation of a large swimming pool and its surroundings. (A) Over-segmentation of the swimming pool structure at scale parameter 130; (B) optimal segmentation of the swimming pool structure at scale parameter 150.
Figure 10. Subsets of image segmentation results performed with scale parameters of 40, 50, and 70 on the aerial photograph. (A) The original scene; (B) over-segmentation of building units at scale parameter 40; (C) over- and under-segmentation of building units at scale parameter 50; (D) improved segmentation of building units at scale parameter 70.
Figure 11. Subsets of segmentation results performed on the aerial photograph, showing urban roads segmented with scale parameters 90, 100, and 110. (A) The original scene; (B,C) over-segmentation of roads at scale parameters 90 and 100, respectively; (D) optimal segmentation of roads at scale parameter 110.
Figure 12. Subsets of segmentation results performed on the aerial photograph, showing urban trees segmented with scale parameters of 120, 130, and 150. (A) The original scene; (B,C) over-segmentation results of urban vegetation at scale parameters 120 and 130, respectively; (D) optimal segmentation of urban vegetation at scale parameter 150.
Table 1. A subset of randomly recorded normalized estimates of image variance and spatial autocorrelation measures across the 48 segmentation sublevels.

Normalized Image Variance | Normalized Spatial Autocorrelation | Normalized Image Variance | Normalized Spatial Autocorrelation
0.761112 | 0.001389 | 0.002861 | 0.027271
0.279703 | 0.389601 | 0.548032 | 0.894092
0.293666 | 0.311528 | 0.270968 | 0.182229
0.989752 | 0.455224 | 0.394835 | 0.198282
0.810025 | 0.201453 | 0.654612 | 0.384949
0.150011 | 0.136200 | 0.186838 | 0.632118
0.816667 | 0.551694 | 0.156677 | 0.093594
0.001906 | 0.420993 | 0.698041 | 0.076852
0.308909 | 0.006773 | 0.446272 | 0.874585
0.645545 | 0.085821 | 0.291909 | 0.108591
0.498977 | 0.089321 | 0.349643 | 0.025632
0.003216 | 0.986762 | 0.356380 | 0.230527
0.789301 | 0.326693 | 0.789180 | 0.521224
0.681493 | 0.021069 | 0.085201 | 0.746331
0.383669 | 0.001462 | 0.858952 | 0.787381
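The estimates in Table 1 are reported on a common [0, 1] scale. The exact normalization is not given in this excerpt; a plausible min-max rescaling, shown here as an assumption, would be:

import numpy as np

def min_max_normalize(values):
    """Rescale raw variance or autocorrelation estimates to [0, 1].
    An assumed normalization; the authors' exact scheme may differ."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())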
Table 2. Optimized image variance measures per segmentation level at six segmentation scale parameters.

Color Factor Weight | Normalized Image Variance at Scale Parameter 50 | Normalized Image Variance at Scale Parameter 70 | Normalized Image Variance at Scale Parameter 90
0.2 | 0.783333 | 0.677419 | 0.0027
0.3 | 0.216667 | 0.241935 | 0.529032
0.4 | 0.266667 | 0.003216 | 0.270968
0.5 | 0.975200 | 0.320297 | 0.354839
0.6 | 0.700000 | 0.789300 | 0.451613
0.7 | 0.150000 | 0.516129 | 0.154839
0.8 | 0.816667 | 0.612903 | 0.109677
0.9 | 0.001900 | 0.000112 | 0.724100

Color Factor Weight | Normalized Image Variance at Scale Parameter 110 | Normalized Image Variance at Scale Parameter 130 | Normalized Image Variance at Scale Parameter 150
0.2 | 0.209091 | 0.370968 | 0.248120
0.3 | 0.372727 | 0.479839 | 0.067669
0.4 | 0.363636 | 0.657258 | 0.218045
0.5 | 0.554545 | 0.366935 | 0.961840
0.6 | 0.190909 | 0.110956 | 0.090226
0.7 | 0.875200 | 0.821900 | 0.300752
0.8 | 0.010200 | 0.306452 | 0.002159
0.9 | 0.281818 | 0.173387 | 0.045113
Table 3. Global homogeneity levels of segments per segmentation level.

Color Factor Weight | Proportion of Intra-Segment Homogeneity at Scale Parameter 50 | Proportion of Intra-Segment Homogeneity at Scale Parameter 70 | Proportion of Intra-Segment Homogeneity at Scale Parameter 90
0.2 | 22% | 32% | 99%
0.3 | 78% | 76% | 47%
0.4 | 73% | 98% | 72%
0.5 | 3% | 68% | 64%
0.6 | 30% | 21% | 54%
0.7 | 85% | 48% | 84%
0.8 | 18% | 39% | 89%
0.9 | 97% | 98% | 27%

Color Factor Weight | Proportion of Intra-Segment Homogeneity at Scale Parameter 110 | Proportion of Intra-Segment Homogeneity at Scale Parameter 130 | Proportion of Intra-Segment Homogeneity at Scale Parameter 150
0.2 | 79% | 62% | 75%
0.3 | 62% | 52% | 93%
0.4 | 63% | 34% | 78%
0.5 | 44% | 63% | 3%
0.6 | 80% | 88% | 90%
0.7 | 12% | 17% | 69%
0.8 | 98% | 69% | 98%
0.9 | 71% | 82% | 95%
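Many entries in Table 3 track the complement of the normalized image variance in Table 2; for example, at scale parameter 50, a color weight of 0.7 gives a variance of 0.150000 and a homogeneity of 85%. The quick check below assumes homogeneity is (1 - variance) x 100; the weights 0.5 and 0.9 deviate from this relation, so the authors' exact homogeneity formulation likely differs in detail.

# Normalized image variance at scale parameter 50 (from Table 2).
variance_sp50 = {0.2: 0.783333, 0.3: 0.216667, 0.4: 0.266667,
                 0.6: 0.700000, 0.7: 0.150000, 0.8: 0.816667}
# Hypothesized relation: homogeneity ~ (1 - variance) * 100.
homogeneity = {w: round((1 - v) * 100) for w, v in variance_sp50.items()}
print(homogeneity)  # {0.2: 22, 0.3: 78, 0.4: 73, 0.6: 30, 0.7: 85, 0.8: 18}
# These match the 22%, 78%, 73%, 30%, 85%, and 18% reported in Table 3.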
Table 4. Estimates of mean spatial autocorrelation measures.

Color Factor Weight | Normalized Spatial Autocorrelation at Scale Parameter 50 | Normalized Spatial Autocorrelation at Scale Parameter 70 | Normalized Spatial Autocorrelation at Scale Parameter 90
0.2 | 0.001343 | 0.089300 | 0.023700
0.3 | 0.390010 | 0.025632 | 0.940927
0.4 | 0.311528 | 0.986700 | 0.182229
0.5 | 0.455224 | 0.230520 | 0.198283
0.6 | 0.201453 | 0.326693 | 0.384950
0.7 | 0.136200 | 0.521220 | 0.632119
0.8 | 0.551694 | 0.021000 | 0.093594
0.9 | 0.420993 | 0.746331 | 0.076852

Color Factor Weight | Normalized Spatial Autocorrelation at Scale Parameter 110 | Normalized Spatial Autocorrelation at Scale Parameter 130 | Normalized Spatial Autocorrelation at Scale Parameter 150
0.2 | 0.006773 | 0.025130 | 0.215200
0.3 | 0.874585 | 0.045347 | 0.803675
0.4 | 0.001462 | 0.042762 | 0.816430
0.5 | 0.085822 | 0.142021 | 0.783985
0.6 | 0.108592 | 0.989610 | 0.850612
0.7 | 0.787381 | 0.841753 | 0.820445
0.8 | 0.988785 | 0.943974 | 0.998540
0.9 | 0.003337 | 0.400006 | 0.944788
Table 5. Computed optimal heterogeneity function measures per scale parameter.

Scale Parameter | Lowest Normalized Image Variance Measures | Largest Normalized Spatial Autocorrelation Measures | Heterogeneity Function
50 | 0.150000 | 0.551694 | 0.572463
70 | 0.003216 | 0.986700 | 0.993503
90 | 0.109677 | 0.940927 | 0.791211
110 | 0.010200 | 0.988785 | 0.979579
130 | 0.110956 | 0.989610 | 0.798366
150 | 0.002159 | 0.998540 | 0.995685
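Read programmatically (hypothetical variable names), ranking the scale parameters in Table 5 by their heterogeneity function values puts 150, 70, and 110 on top, the same three optimal scales revealed in Figure 4 and discussed in the conclusions:

heterogeneity = {50: 0.572463, 70: 0.993503, 90: 0.791211,
                 110: 0.979579, 130: 0.798366, 150: 0.995685}
top_three = sorted(heterogeneity, key=heterogeneity.get, reverse=True)[:3]
print(top_three)  # [150, 70, 110]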
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
