Search Results (2)

Search Parameters:
Keywords = VoVNetV2

13 pages, 2169 KB  
Article
Road Scene Instance Segmentation Based on Improved SOLOv2
by Qing Yang, Jiansheng Peng, Dunhua Chen and Hongyu Zhang
Electronics 2023, 12(19), 4169; https://doi.org/10.3390/electronics12194169 - 8 Oct 2023
Cited by 6 | Viewed by 2759
Abstract
Road instance segmentation is vital for autonomous driving, yet current algorithms struggle in complex city environments, with issues such as poor small-object segmentation, low-quality mask edge contours, slow processing, and limited model adaptability. This paper introduces an enhanced instance segmentation method based on SOLOv2. It integrates the Bottleneck Transformer (BoT) module into VoVNetV2 and replaces the standard convolutions with ghost convolutions; this improved VoVNetV2 backbone then replaces ResNet to enhance feature extraction and segmentation speed. Furthermore, the algorithm employs Feature Pyramid Grids (FPGs) instead of Feature Pyramid Networks (FPNs) to introduce multi-directional lateral connections for better feature fusion. Lastly, it incorporates a Convolutional Block Attention Module (CBAM) into the detection head to refine features by weighting them along both the channel and spatial dimensions. The experimental results demonstrate the algorithm's effectiveness, achieving a 27.6% mAP on Cityscapes, a 4.2% improvement over SOLOv2. It also attains a segmentation speed of 8.9 FPS, a 1.7 FPS increase over SOLOv2, confirming its practicality for real-world engineering applications.
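As a rough illustration of the attention mechanism this abstract describes, the following is a minimal NumPy sketch of CBAM-style channel-then-spatial attention. It is not the authors' implementation: the learned 7×7 spatial convolution is simplified to a fixed sum of the pooled maps, the shared MLP weights (`w1`, `w2`) are placeholders supplied by the caller, and all function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). A shared two-layer MLP (w1: C/r x C, w2: C x C/r) is
    # applied to both the average- and max-pooled channel descriptors.
    avg = x.mean(axis=(1, 2))                                   # (C,)
    mx = x.max(axis=(1, 2))                                     # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0.0)
                  + w2 @ np.maximum(w1 @ mx, 0.0))              # (C,)
    return x * att[:, None, None]

def spatial_attention(x):
    # Pool across channels; a fixed elementwise sum stands in for the
    # learned 7x7 convolution of the real CBAM (simplification).
    avg = x.mean(axis=0)                                        # (H, W)
    mx = x.max(axis=0)                                          # (H, W)
    att = sigmoid(avg + mx)                                     # (H, W)
    return x * att[None, :, :]

def cbam(x, w1, w2):
    # CBAM order: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2))
```

Because both attention maps pass through a sigmoid, every output value is a damped copy of the input, which matches CBAM's role as a lightweight refinement rather than a transformation of the feature map.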
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images)
24 pages, 31677 KB  
Article
Application of Improved Instance Segmentation Algorithm Based on VoVNet-v2 in Open-Pit Mines Remote Sensing Pre-Survey
by Lingran Zhao, Ruiqing Niu, Bingquan Li, Tao Chen and Yueyue Wang
Remote Sens. 2022, 14(11), 2626; https://doi.org/10.3390/rs14112626 - 31 May 2022
Cited by 12 | Viewed by 3341
Abstract
The traditional mine remote sensing pre-survey relies mainly on manual interpretation, with interpreters delineating the mine boundary shapes. This work is laborious and susceptible to subjective judgment because mining complexes exhibit large intra-class variation and small inter-class differences. CondInst-VoV and BlendMask-VoV, two improved instance segmentation models based on VoVNet-v2, are proposed to improve the efficiency of mine remote sensing pre-survey and minimize labor expenses. In Hubei Province, China, Gaofen satellite fusion images, true-color satellite images, false-color satellite images, and Tianditu images were gathered to create a Key Open-pit Mine Acquisition Areas (KOMMA) dataset for assessing the efficacy of mine detection models. In addition, regional detection was carried out in Daye Town. The results show that the improved models exceed both the baseline on the KOMMA dataset and the verification accuracy of manual interpretation in regional mine detection tasks. CondInst-VoV performs best on Tianditu images, reaching 88.816% in positioning recall and 98.038% in segmentation accuracy.
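The VoVNet-v2 backbone that both of these models build on is defined by its one-shot aggregation (OSA) block: convolutions are chained sequentially, and every intermediate output is concatenated exactly once at the end. A minimal NumPy sketch of that aggregation pattern, with the convolutions abstracted as plain callables (an illustrative simplification, not the authors' code):

```python
import numpy as np

def osa_block(x, convs):
    """One-Shot Aggregation: each conv feeds the next, and the input plus
    every intermediate output is concatenated once along the channel axis."""
    feats = [x]
    h = x
    for conv in convs:
        h = conv(h)       # sequential convolution chain
        feats.append(h)   # collect every intermediate feature map
    return np.concatenate(feats, axis=0)  # single concatenation at the end
```

With an input of C channels and n channel-preserving convolutions, the block emits (n + 1) * C channels; in the real network a 1×1 convolution then compresses this back down, which is what keeps OSA cheaper than the dense per-layer concatenations of DenseNet.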