Superpixels with Content-Awareness via a Two-Stage Generation Framework
Abstract
1. Introduction
- Boundary accuracy. When superpixels serve as region-level primitives for image segmentation or edge detection, this criterion evaluates how effectively they delineate object boundaries and how consistent those boundaries are with the ground truth (a metric sketch follows this list);
- Feature quality. For applications such as image reconstruction and data compression, it is important to preserve regional homogeneity, spatial relationships, and other information about the objects contained within an image;
- Running efficiency. As a widely used tool for video and image preprocessing, superpixel segmentation plays a pivotal role in computer vision applications that require real-time performance.
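To make the boundary-accuracy criterion concrete, the sketch below computes boundary recall, one commonly used instance of such a metric. The function name, the tolerance parameter, and the use of SciPy are illustrative choices, not anything specified in this paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundary_recall(superpixel_labels, gt_boundary, tolerance=2):
    """Fraction of ground-truth boundary pixels lying within `tolerance`
    pixels of a superpixel boundary (illustrative definition)."""
    sp = superpixel_labels
    # Superpixel boundary: pixels whose label differs from a right/down neighbor.
    sp_boundary = np.zeros_like(sp, dtype=bool)
    sp_boundary[:, :-1] |= sp[:, :-1] != sp[:, 1:]
    sp_boundary[:-1, :] |= sp[:-1, :] != sp[1:, :]
    # Allow a small localization tolerance by dilating the superpixel boundary.
    dilated = binary_dilation(sp_boundary, iterations=tolerance)
    gt = gt_boundary.astype(bool)
    return np.logical_and(gt, dilated).sum() / max(gt.sum(), 1)
```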
- Based on the area of each region in manifold space, a region-density-based centroid relocation strategy is proposed to adjust the spatial distribution of superpixels;
- Using the redistributed cluster centers, a coarse-to-fine implementation of the online averaging clustering framework is designed to improve feature performance;
- The synergistic CATS superpixels achieve performance comparable to state-of-the-art methods in terms of segmentation accuracy, spatial compactness, and running efficiency.
2. Related Work
2.1. Iteration-Demand Superpixels
2.2. Iteration-Free Superpixels
3. Proposed Method
3.1. Superpixel Candidate Generation
3.1.1. Initialization and Seeding
3.1.2. Clustering and Updating
3.2. Centroid Relocation Strategy
3.2.1. Area of Manifold Surface
3.2.2. Splitting and Merging
3.3. Integrated CATS Framework
Algorithm 1 CATS superpixel generation framework
Input: Source image I, the preset superpixel number K.
Output: Label map L of I.
Initialize cluster seeds by grid sampling with step S.
Initialize the label of each pixel as unassigned.
Initialize a min-root priority queue Q (smallest key at the top).
for each seed do
    Create and push an element onto Q.
end for
while Q is not empty do
    Pop the top-most element from Q.
    if the popped pixel p is still unassigned then
        Set the label of p to the cluster of the popped element.
        Update the cluster centroid by Equation (4).
        Compute the manifold-surface area of the cluster by Equation (10).
        if the splitting criterion based on Equation (10) is satisfied then
            Compute a new seed set by Algorithm 2.
            Create and push the elements of the new seed set onto Q.
        else
            for each 4-neighboring pixel of p do
                if the neighbor is still unassigned then
                    Create and push an element for it onto Q.
                end if
            end for
        end if
    end if
end while
Create a RAG G to depict the spatial relationships of all candidate superpixels.
Refine the label map by Algorithm 3.
Return L.
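Algorithm 1 grows candidate superpixels by online-averaging, priority-queue region growing in the style of SNIC. The sketch below illustrates that coarse loop under simplifying assumptions: it replaces Equations (4) and (10) with a plain color-plus-position distance and an ordinary running mean, and it omits the splitting branch (Algorithm 2) and the RAG refinement (Algorithm 3). All function and variable names are illustrative.

```python
import heapq
import numpy as np

def candidate_superpixels(image, k):
    """Coarse candidate generation: grid-seeded, priority-queue region growing
    with online centroid averaging (splitting and merging omitted)."""
    h, w, _ = image.shape
    step = max(1, int(np.sqrt(h * w / k)))    # grid sampling step S
    labels = np.full((h, w), -1, dtype=int)   # -1 means "not yet assigned"

    # Grid-sampled seeds; each cluster keeps a running (color, y, x) centroid.
    seeds = [(y, x) for y in range(step // 2, h, step)
                    for x in range(step // 2, w, step)]
    centroids = [np.zeros(image.shape[-1] + 2) for _ in seeds]
    counts = [0] * len(seeds)

    heap = []  # min-heap keyed by distance to the owning cluster centroid
    for cid, (y, x) in enumerate(seeds):
        heapq.heappush(heap, (0.0, y, x, cid))

    while heap:
        _, y, x, cid = heapq.heappop(heap)
        if labels[y, x] != -1:
            continue                           # already claimed by some cluster
        labels[y, x] = cid
        # Online average update of the centroid (stands in for Equation (4)).
        feat = np.append(image[y, x].astype(float), [y, x])
        centroids[cid] = (centroids[cid] * counts[cid] + feat) / (counts[cid] + 1)
        counts[cid] += 1
        # Push unassigned 4-neighbors keyed by color-plus-position distance.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                nfeat = np.append(image[ny, nx].astype(float), [ny, nx])
                d = float(np.linalg.norm(nfeat - centroids[cid]))
                heapq.heappush(heap, (d, ny, nx, cid))
    return labels
```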
Algorithm 2 splitting operation within CATS
Input: Current cluster centroid, current label map.
Output: Set of new cluster centroids.
Compute the candidate seed set by Equation (11).
for each candidate seed in the set do
    if the candidate seed satisfies the splitting condition then
        Add it to the set of new cluster centroids.
    end if
end for
Return the set of new cluster centroids.
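Since Equation (11) is not reproduced here, the following sketch only illustrates the general shape of a splitting step: it proposes sub-seeds on a small grid inside the cluster's bounding box and keeps those that actually fall inside the cluster. The placement rule, the n_sub parameter, and the function name are assumptions rather than the paper's exact criterion.

```python
import numpy as np

def split_cluster(labels, cluster_id, n_sub=2):
    """Illustrative splitting: propose sub-seeds on a regular grid inside the
    cluster's bounding box and keep those lying in the cluster (a stand-in
    for the Equation (11) candidate seeds of Algorithm 2)."""
    ys, xs = np.nonzero(labels == cluster_id)
    if ys.size == 0:
        return []
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    new_seeds = []
    # n_sub x n_sub candidate positions spread over the interior of the box.
    for gy in np.linspace(y0, y1, n_sub + 2)[1:-1]:
        for gx in np.linspace(x0, x1, n_sub + 2)[1:-1]:
            y, x = int(round(gy)), int(round(gx))
            if labels[y, x] == cluster_id:      # keep only valid candidates
                new_seeds.append((y, x))
    return new_seeds
```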
Algorithm 3 merging operation within CATS
Input: Current label map of candidate superpixels with the corresponding RAG G.
Output: Refined label map.
while the number of nodes in G is greater than K do
    Compute the area ratio corresponding to the edge with the global minimum weight.
    if the area ratio satisfies the merging condition then
        for each pixel labeled with the smaller superpixel of the pair do
            Relabel it with the adjacent superpixel across the minimum-weight edge.
        end for
    else
        Exclude this edge from further merging.
    end if
    Update G.
end while
Return the refined label map.
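The merging stage can be pictured as the loop below, which repeatedly relabels the smaller of the two regions joined by the globally minimum-weight RAG edge until k regions remain. Mean-color difference is used as the edge weight and the paper's area-ratio test is not reproduced; the function name and all parameters are illustrative assumptions.

```python
import numpy as np

def merge_to_k(labels, image, k):
    """Illustrative RAG-style merging: absorb the smaller of the two regions
    joined by the globally minimum-weight edge until k regions remain."""
    labels = labels.copy()
    while len(np.unique(labels)) > k:
        # Per-region statistics (mean color and area).
        ids = np.unique(labels)
        means = {i: image[labels == i].mean(axis=0) for i in ids}
        areas = {i: int((labels == i).sum()) for i in ids}
        # Adjacent label pairs (4-connectivity) form the RAG edges.
        edges = set()
        for a, b in ((labels[:, :-1], labels[:, 1:]),
                     (labels[:-1, :], labels[1:, :])):
            diff = a != b
            edges |= {tuple(sorted(p)) for p in zip(a[diff], b[diff])}
        # Pick the globally minimum-weight edge and absorb the smaller region.
        u, v = min(edges, key=lambda e: np.linalg.norm(means[e[0]] - means[e[1]]))
        small, big = (u, v) if areas[u] <= areas[v] else (v, u)
        labels[labels == small] = big
    return labels
```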
4. Experiments and Analysis
4.1. Visual Comparison
4.2. Metrical Evaluation
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, S.; Lan, J.; Lin, J.; Liu, Y.; Wang, L.; Sun, Y.; Yin, B. Adaptive hypergraph superpixels. Displays 2023, 76, 102369.
- Ren, X.; Malik, J. Learning a Classification Model for Segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Nice, France, 13–16 October 2003; pp. 10–17.
- Diao, Q.; Dai, Y.; Wang, J.; Feng, X.; Pan, F.; Zhang, C. Spatial-pooling-based graph attention U-Net for hyperspectral image classification. Remote Sens. 2024, 16, 937.
- Huang, S.; Liu, Z.; Jin, W.; Mu, Y. Superpixel-based multi-scale multi-instance learning for hyperspectral image classification. Pattern Recognit. 2024, 149, 110257.
- Mu, Y.; Ou, L.; Chen, W.; Liu, T.; Gao, D. Superpixel-based graph convolutional network for UAV forest fire image segmentation. Drones 2024, 8, 142.
- Chen, G.; He, C.; Wang, T.; Zhu, K.; Liao, P.; Zhang, X. A superpixel-guided unsupervised fast semantic segmentation method of remote sensing images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2506605.
- Hu, K.; He, W.; Ye, J.; Zhao, L.; Peng, H.; Pi, J. Online visual tracking of weighted multiple instance learning via neutrosophic similarity-based objectness estimation. Symmetry 2019, 11, 832.
- Qiu, Y.; Mei, J.; Xu, J. Superpixel-wise contrast exploration for salient object detection. Knowl. Based Syst. 2024, 292, 111617.
- Zhang, D.; Xie, G.; Ren, J.; Zhang, Z.; Bao, W.; Xu, X. Content-sensitive superpixel generation with boundary adjustment. Appl. Sci. 2020, 10, 3150.
- Chuchvara, A.; Gotchev, A. Efficient image-warping framework for content-adaptive superpixels generation. IEEE Signal Process. Lett. 2021, 28, 1948–1952.
- Liao, N.; Guo, B.; Li, C.; Liu, H.; Zhang, C. BACA: Superpixel segmentation with boundary awareness and content adaptation. Remote Sens. 2022, 14, 4572.
- Sun, L.; Ma, D.; Pan, X.; Zhou, Y. Weak-boundary sensitive superpixel segmentation based on local adaptive distance. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 2302–2316.
- Li, C.; He, W.; Liao, N.; Gong, J.; Hou, S.; Guo, B. Superpixels with contour adherence via label expansion for image decomposition. Neural Comput. Appl. 2022, 34, 16223–16237.
- Uziel, R.; Ronen, M.; Freifeld, O. Bayesian Adaptive Superpixel Segmentation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8470–8479.
- Achanta, R.; Marquez, P.; Fua, P.; Susstrunk, S. Scale-Adaptive Superpixels. In Proceedings of the IS&T Color and Imaging Conference (CIC), Vancouver, BC, Canada, 12–16 November 2018; pp. 1–6.
- Pan, X.; Zhou, Y.; Zhang, Y.; Zhang, C. Fast generation of superpixels with lattice topology. IEEE Trans. Image Process. 2022, 31, 4828–4841.
- Zhou, P.; Kang, X.; Ming, A. Vine spread for superpixel segmentation. IEEE Trans. Image Process. 2023, 32, 878–891.
- Kang, X.; Zhu, L.; Ming, A. Dynamic random walk for superpixel segmentation. IEEE Trans. Image Process. 2020, 29, 3871–3884.
- Giraud, R.; Ta, V.; Papadakis, N. Robust superpixels using color and contour features along linear path. Comput. Vis. Image Underst. 2018, 170, 1–13.
- Achanta, R.; Susstrunk, S. Superpixels and Polygons Using Simple Non-Iterative Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4895–4904.
- Liu, Y.; Yu, M.; Li, B.; He, Y. Intrinsic manifold SLIC: A simple and efficient method for computing content-sensitive superpixels. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 653–666.
- J, P.; Kumar, B.V. An extensive survey on superpixel segmentation: A research perspective. Arch. Comput. Method Eng. 2023, 30, 3749–3767.
- Xu, Y.; Gao, X.; Zhang, C.; Tan, J.; Li, X. High quality superpixel generation through regional decomposition. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1802–1815.
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Susstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
- Liu, Y.; Yu, C.; Yu, M.; He, Y. Manifold SLIC: A Fast Method to Compute Content-Sensitive Superpixels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 651–659.
- Hu, Y.; Li, Y.; Song, R.; Rao, P.; Wang, Y. Minimum barrier superpixel segmentation. Image Vis. Comput. 2018, 70, 1–10.
- Rubio, A.; Yu, L.; Simo-Serra, E.; Moreno-Noguer, F. BASS: Boundary-Aware Superpixel Segmentation. In Proceedings of the International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 2824–2829.
- Xiao, X.; Zhou, Y.; Gong, Y. Content-adaptive superpixel segmentation. IEEE Trans. Image Process. 2018, 27, 2883–2896.
- Bobbia, S.; Macwan, R.; Benezeth, Y.; Nakamura, K.; Gomez, R.; Dubois, J. Iterative boundaries implicit identification for superpixels segmentation: A real-time approach. IEEE Access 2021, 9, 77250–77263.
- Zhao, J.; Hou, Q.; Ren, B.; Cheng, M.; Rosin, P. FLIC: Fast Linear Iterative Clustering with Active Search. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA, 2–7 February 2018; pp. 7574–7581.
- Kesavan, Y.; Ramanan, A. One-Pass Clustering Superpixels. In Proceedings of the Conference on Information and Automation for Sustainability, Colombo, Sri Lanka, 22–24 December 2014; pp. 1–5.
- Shen, J.; Hao, X.; Liang, Z.; Liu, Y.; Wang, W.; Shao, L. Real-time superpixel segmentation by DBSCAN clustering algorithm. IEEE Trans. Image Process. 2016, 25, 5933–5942.
- Huang, C.; Wang, W.; Lin, S.; Lin, Y. USEQ: Ultra-Fast Superpixel Extraction via Quantization. In Proceedings of the IEEE International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 1965–1970.
- Huang, C.; Wang, W.; Wang, W.; Lin, S.; Lin, Y. USEAQ: Ultra-fast superpixel extraction via adaptive sampling from quantized regions. IEEE Trans. Image Process. 2018, 27, 4916–4931.
- Grady, L. Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1768–1783.
- Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598.
- Machairas, V.; Faessel, M.; Cardenas, D.; Chabardes, T.; Walter, T.; Decencière, E. Waterpixels. IEEE Trans. Image Process. 2015, 24, 3707–3716.
- Yuan, Y.; Zhu, Z.; Yu, H.; Zhang, W. Watershed-based superpixels with global and local boundary marching. IEEE Trans. Image Process. 2020, 29, 7375–7388.
- Zhong, D.; Li, T.; Dong, Y. An efficient hybrid linear clustering superpixel decomposition framework for traffic scene semantic segmentation. Sensors 2023, 23, 1002.
- Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916.
- Stutz, D.; Hermans, A.; Leibe, B. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst. 2018, 166, 1–27.
- Li, C.; Guo, B.; Wang, G.; Zheng, Y.; Liu, Y.; He, W. NICE: Superpixel segmentation using non-iterative clustering with efficiency. Appl. Sci. 2020, 10, 4415.
- Martin, D.R.; Fowlkes, C.C.; Malik, J. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 530–549.
- Xu, L.; Luo, B.; Pei, Z.; Qin, K. PFS: Particle-filter-based superpixel segmentation. Symmetry 2018, 10, 143.
- Liu, M.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2097–2104.
Columns correspond to the expected superpixel number.

| Method | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
|---|---|---|---|---|---|---|---|---|---|---|
| MSLIC | 166 | 171 | 178 | 180 | 180 | 178 | 186 | 185 | 189 | 190 |
| BASS | 190 | 213 | 235 | 244 | 260 | 272 | 279 | 289 | 295 | 299 |
| IBIS | 21 | 20 | 18 | 18 | 18 | 18 | 18 | 18 | 18 | 18 |
| SNIC | 33 | 33 | 33 | 33 | 34 | 34 | 34 | 34 | 34 | 34 |
| USEQ | 25 | 25 | 25 | 25 | 25 | 25 | 25 | 25 | 25 | 25 |
| DBSCAN | - | - | - | 31 | 31 | 30 | 30 | 29 | 29 | 29 |
| CATS | 40 | 37 | 37 | 37 | 37 | 37 | 37 | 36 | 37 | 38 |