Key Takeaways

The benchmarking analysis shows that the proposed algorithm SLIC++ achieves robust performance across diverse test cases. Its results are more predictable than those of the state-of-the-art methods Meanshift and SLIC. Meanshift performs inconsistently: its recall fluctuates from case to case, and lower recall ultimately yields lower scores at the cost of information loss. SLIC, in turn, achieves 7% lower scores and 8% lower precision in boundary retrieval. These results indicate that integrating the proposed content-aware distance measure into base SLIC yields superior segmentation. The significant contribution to existing super-pixel creation research is the hybridization of proximity measures: the comprehensive experiments show that the hybrid measure outperforms the singular proximity measure counterparts of the same algorithm. Such measures substantially control the accuracy of the final super-pixel segmentation. The proposed hybrid proximity measure carefully balances the two existing distance measures and clusters image pixels while retaining content-aware information.
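The precision and recall figures above refer to boundary retrieval. As a rough illustration (not the paper's exact evaluation code), boundary precision and recall can be computed from binary boundary maps with a small matching tolerance; the `tol` value and the 4-connected dilation below are illustrative assumptions:

```python
import numpy as np

def dilate(mask, tol):
    # Grow a boolean boundary mask by `tol` pixels (4-connected)
    # using repeated neighbor shifts, so nearby pixels can match.
    out = mask.copy()
    for _ in range(tol):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def boundary_precision_recall(pred, gt, tol=2):
    # A predicted boundary pixel counts as correct if a ground-truth
    # boundary pixel lies within `tol` pixels, and vice versa.
    precision = (pred & dilate(gt, tol)).sum() / max(pred.sum(), 1)
    recall = (gt & dilate(pred, tol)).sum() / max(gt.sum(), 1)
    return precision, recall
```

A lower recall, as observed for Meanshift, directly lowers any combined score derived from these two quantities.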

#### **Table 6.** Detailed perceptual analysis with increasing parameters.

**Figure 5.** Zoomed-in view of the test case image for content-aware super-pixel analysis, created by SLIC++ (**b**) against SLIC (**a**).

#### **5. Limitations of Content-Aware SLIC++ Super-Pixels**

Super-pixel segmentation algorithms are considered a pre-processing step for a wide range of computer vision applications. To obtain optimal performance in sophisticated applications, the base super-pixel algorithm SLIC uses a set of input parameters that give the user control over different aspects of image segmentation. The idea is to extract uniform super-pixels throughout the image grid so that reliable learning statistics are maintained throughout the process. To this end, SLIC lets the user choose the number of super-pixels '*K*' (values ranging from 500–2000); the parameter '*m*' (where *m* = 10), which decides the extent to which color similarity is enforced over spatial similarity; the number of iterations '*N*' (where *N* = 10), which decides the convergence of the algorithm; and the neighborhood window '*w*' (where *w* = 3) for gradient calculation to relocate cluster centers (if placed on an edge pixel). This makes four input parameters for the base SLIC, whereas the proposed extension SLIC++ introduces two more weight parameters, *w*1 and *w*2 (0.3175 and 0.6825, respectively), to decide the impact of each distance measure within the hybrid distance measure. All these parameters significantly control the accuracy of the segmentation results, and incorrect selection leads to poor overall performance. Hence, for diverse applications an initial parameter search is necessary, which in turn requires several runs. For the reported research, using the state-of-the-art segmentation dataset, i.e., the Berkeley dataset, we chose the parameters as selected by the base SLIC. These parameters offer good performance for an image size of 321 × 481 or 481 × 321, whereas, as we increased the resolution of the images during the extended research, we observed that a higher value of '*K*' is required for better segmentation accuracy.
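A minimal sketch of how these parameters might enter the computation is given below. The SLIC distance term follows the standard published formulation, but the way *w*1 and *w*2 blend the two proximity terms in `hybrid_distance` is an illustrative assumption, since the exact combination formula is not restated here:

```python
import math

# Parameters reported in the text; the blend formula below is an
# assumption for illustration, not the authors' published definition.
K, m, N_ITER, W_WIN = 1000, 10, 10, 3      # base SLIC parameters
W1, W2 = 0.3175, 0.6825                    # SLIC++ hybrid weights

def slic_distance(lab1, xy1, lab2, xy2, S):
    # Standard SLIC distance: CIELAB color term plus a spatial term
    # normalized by the grid interval S and scaled by compactness m.
    d_c = math.dist(lab1, lab2)
    d_s = math.dist(xy1, xy2)
    return math.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)

def hybrid_distance(d_a, d_b):
    # Hypothetical weighted blend of two proximity measures, with
    # W1 + W2 = 1 so the result stays on a comparable scale.
    return W1 * d_a + W2 * d_b
```

Because W1 + W2 = 1, the hybrid value stays within the range spanned by the two input measures, which keeps cluster assignment comparable to the base SLIC case.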

For the existing research, we conducted experiments focused on identifying the gains associated with using the proposed content-aware distance measure instead of the straight-line distance measure. For the extended research, the input parameters will be considered for optimization.
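Such an initial parameter search can be sketched as a simple grid search; `run_slicpp` and `boundary_score` below are hypothetical stand-ins for the segmentation routine and its evaluation metric, not functions from the paper:

```python
import itertools

def best_parameters(image, run_slicpp, boundary_score,
                    k_values=(500, 1000, 1500, 2000),
                    m_values=(5, 10, 20)):
    # Exhaustively evaluate each (K, m) pair and keep the best scorer.
    # Each pair costs one full segmentation run, which is why this
    # search becomes expensive for diverse applications.
    best, best_score = None, float("-inf")
    for K, m in itertools.product(k_values, m_values):
        labels = run_slicpp(image, K=K, m=m)
        score = boundary_score(labels)
        if score > best_score:
            best, best_score = (K, m), score
    return best, best_score
```

For higher-resolution images, widening `k_values` upward would reflect the observation above that larger '*K*' improves segmentation accuracy.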

#### **6. Emerging Successes and Practical Implications**

After several decades of computer vision research aimed at fast and accurate visual decisions, super-pixels remain a long-standing topic of study. Super-pixel segmentation serves as an entry stage, providing pre-processing functionality for sophisticated intelligent workflows such as semantic image segmentation, and super-pixels are likely to speed up the overall training and testing of these intelligent systems. Automated vision systems have critical applications in medicine [46,47], manufacturing [48], surveillance [49], tracking [2], and so on, all of which require fast and accurate visual decisions. Environmental conditions, in the form of visual dynamicity, are a challenging problem for pre-processing modules, which must nevertheless deliver reliable visual results. Many super-pixel creation algorithms have been proposed over time to solve focused problems of image content sparsity [30], initialization optimization [28], and accurate edge detection [38]. However, the topic of lighting conditions in this domain remains untouched and needs attention. Dynamic lighting conditions are a key concern for autonomous vehicles, autonomous robots, and surgical robots. The Berkeley dataset comprises images of different objects, ranging from humans and flowers to mountains and animals, so the conducted research holds for applications in autonomous robots and autonomous vehicles. Moreover, because the proposed algorithm is backed by the core concepts of image segmentation, the presented work can be extended to a variety of applications. Depending on the nature of the application, the ranges of the input parameters would change according to the required sensitivity of the end results; for example, segmentation applications in the medical domain require compact super-pixels with content-aware information.
Consequently, the input values, including the number of super-pixels to be created, must be carefully selected. To handle the pre-processing problems associated with dynamic lighting conditions in autonomous robotics, the proposed extension of SLIC is a good fit. SLIC++ imposes minimal prerequisite conditions, provides direct control over its working functionality, and still achieves optimal information retrieval from visual scenes, not only for normal images but also for semi-dark images.

#### **7. Conclusions and Future Work**

#### *7.1. Conclusions*

In this paper, we introduced a content-aware similarity measure that not only solves the problem of boundary retrieval in semi-dark images but is also applicable to other image types, such as bright and dark. The content-aware measure is integrated into SLIC to create content-aware super-pixels, which other automated applications can then use for fast implementation of high-level vision tasks. We also report the results of integrating SLIC with existing similarity measures and describe the limitations of their applicability to visual image data. To validate our proposed algorithm along with the proposed similarity measure, we conducted qualitative and quantitative evaluations on semi-dark images extracted from the Berkeley dataset. We also compared SLIC++ with state-of-the-art super-pixel algorithms; the comparisons show that SLIC++ outperforms the existing super-pixel algorithms in precision and score by margins of 8% and 7%, respectively. Perceptual experimental results also confirm that the proposed extension of SLIC, i.e., SLIC++, backed by the content-aware distance measure, outperforms the existing super-pixel creation methods. Moreover, SLIC++ delivers consistent and reliable performance across different test image cases, characterizing a generic workflow for super-pixel creation.

#### *7.2. Future Work*

For the extended research, density-based similarity measures integrated with a content-aware nature may lead the future analysis. The density-based feature is expected to aid the overall working functionality with noise-resistant properties against noisy incoming image data. Another aspect is the creation of accurate super-pixels in the presence of non-linearly separable data properties. Finally, the selection of input parameters for the initialization of SLIC variants, depending on the application domain and incoming image size, remains an open area of research.

**Author Contributions:** Project administration, supervision, conceptualization, formal analysis, investigation, review, M.A.H.; conceptualization, methodology, analysis, visualization, investigation, writing—original draft preparation, data preparation and curation, M.M.M.; project administration, resources, review, K.R.; funding acquisition, conceptualization, review, S.H.A.; methodology, software, analysis, visualization, S.S.R.; validation, writing—review and editing, M.U. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research study is a part of the funded project under a matching grant scheme supported by Iqra University, Pakistan, and Universiti Teknologi PETRONAS (UTP), Malaysia. Grant number: 015MEO-227.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The following dataset is used. Berkeley: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/ (accessed on 27 September 2021).

**Conflicts of Interest:** The authors declare no conflict of interest.
