Search Results (5)

Search Parameters:
Keywords = CES-YOLOv8

16 pages, 11407 KB  
Article
YOLOv8-LCNET: An Improved YOLOv8 Automatic Crater Detection Algorithm and Application in the Chang’e-6 Landing Area
by Jing Nan, Yexin Wang, Kaichang Di, Bin Xie, Chenxu Zhao, Biao Wang, Shujuan Sun, Xiangjin Deng, Hong Zhang and Ruiqing Sheng
Sensors 2025, 25(1), 243; https://doi.org/10.3390/s25010243 - 3 Jan 2025
Cited by 6 | Viewed by 2380
Abstract
The Chang’e-6 (CE-6) landing area on the far side of the Moon is located in the southern part of the Apollo basin within the South Pole–Aitken (SPA) basin. The statistical analysis of impact craters in this region is crucial for ensuring a safe landing and supporting geological research. To address existing impact crater identification problems such as complex backgrounds, low identification accuracy, and high computational costs, an efficient automatic crater detection model named YOLOv8-LCNET (YOLOv8 Lunar Crater Net), based on the YOLOv8 network, is proposed. The model first incorporates a Partial Self-Attention (PSA) mechanism at the end of the Backbone, enhancing global perception and reducing missed detections at a low computational cost. A Gather-and-Distribute (GD) mechanism is then integrated into the Neck, enabling the model to fully fuse multi-level feature information and capture global context, improving its ability to detect impact craters of various sizes. Experimental results show that YOLOv8-LCNET performs well in the impact crater detection task, achieving 87.7% Precision, 84.3% Recall, and 92% AP, which were 24.7%, 32.7%, and 37.3% higher than those of the original YOLOv8 model. The improved model was then used for automatic crater detection in the CE-6 landing area (246 km × 135 km, with a DOM resolution of 3 m/pixel), yielding a total of 770,671 craters ranging from 13 m to 19,882 m in diameter. The analysis of this impact crater catalogue provides critical support for landing site selection and characterization in the CE-6 mission and lays the foundation for future lunar geological studies.
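The Precision, Recall, and AP figures above all rest on an IoU threshold for matching detected craters to ground truth. As a minimal, generic sketch (not the paper's code), axis-aligned IoU for boxes in (x1, y1, x2, y2) form is:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping in a 5x5 patch: 25 / (100 + 100 - 25)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 0.14285714285714285
```

A detection counts as a true positive only when its IoU with a ground-truth crater exceeds the chosen threshold; Precision, Recall, and AP follow from those matches.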

18 pages, 6423 KB  
Article
PGE-YOLO: A Multi-Fault-Detection Method for Transmission Lines Based on Cross-Scale Feature Fusion
by Zixuan Cai, Tianjun Wang, Weiyu Han and Anan Ding
Electronics 2024, 13(14), 2738; https://doi.org/10.3390/electronics13142738 - 12 Jul 2024
Cited by 8 | Viewed by 1968
Abstract
To address incorrect and missed detections caused by the complex types, uneven scales, and small sizes of defect targets in transmission lines, this paper proposes a defect-detection method based on cross-scale feature fusion, PGE-YOLO. First, feature extraction is enriched by replacing the cascaded and fused convolutional blocks in the backbone network with the Par_C2f module, which incorporates a parallel network (ParNet). Second, a four-layer efficient multi-scale attention (EMA) mechanism is incorporated into the network's neck to address long- and short-range dependency issues, improving global information retention through parallel substructures and the integration of cross-space feature information. Finally, the generalized feature pyramid network (GFPN) paradigm is introduced and reconfigured into a novel CE-GFPN, which effectively integrates shallow and deep feature information to strengthen feature fusion and improve detection performance. Ablation and comparison experiments on a real multi-defect transmission line dataset from UAV aerial photography and on the CPLID dataset demonstrated that the proposed model achieves superior results. Compared with the initial YOLOv8n model, it increased detection accuracy by 6.6% and 1.2% on the two datasets, respectively, without a surge in the number of parameters, satisfying the industry's real-time and accuracy requirements for defect detection.
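The cross-scale fusion described above ultimately depends on bringing feature maps of different resolutions to a common size before combining them. A toy illustration of that basic operation (nearest-neighbor 2× upsampling plus element-wise addition; not the paper's Par_C2f or CE-GFPN implementation):

```python
def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a 2D feature map (list of lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

def fuse(shallow, deep):
    """Cross-scale fusion: upsample the deeper map, then add element-wise."""
    up = upsample2x(deep)
    return [[s + u for s, u in zip(srow, urow)]
            for srow, urow in zip(shallow, up)]

deep = [[1, 2], [3, 4]]          # low-resolution, semantically deep map
shallow = [[0] * 4 for _ in range(4)]  # high-resolution map (zeros here)
print(fuse(shallow, deep))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Real fusion necks apply learned convolutions and attention around this resize-and-combine step; the sketch only shows the scale alignment that makes cross-scale combination possible.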

14 pages, 20599 KB  
Article
CES-YOLOv8: Strawberry Maturity Detection Based on the Improved YOLOv8
by Yongkuai Chen, Haobin Xu, Pengyan Chang, Yuyan Huang, Fenglin Zhong, Qi Jia, Lingxiao Chen, Huaiqin Zhong and Shuang Liu
Agronomy 2024, 14(7), 1353; https://doi.org/10.3390/agronomy14071353 - 22 Jun 2024
Cited by 20 | Viewed by 4543
Abstract
Automatic harvesting robots are crucial for enhancing agricultural productivity, and precise fruit maturity detection is a fundamental and core technology for efficient and accurate harvesting. Strawberries are distributed irregularly, and their images contain a wealth of characteristic information, ranging from simple, intuitive features to deeper abstract ones; these complex features pose significant challenges for robots determining fruit ripeness. To increase the precision, accuracy, and efficiency of robotic fruit maturity detection, this study developed CES-YOLOv8, a strawberry maturity detection algorithm with an improved network structure based on YOLOv8. First, to reflect actual planting environments, image data were collected under various lighting conditions, degrees of occlusion, and angles. Next, parts of the C2f module in the YOLOv8 backbone were replaced with the ConvNeXt V2 module to better capture features of strawberries at varying ripeness, and the ECA attention mechanism was introduced to further improve feature representation. Finally, the angle and distance compensation of the SIoU loss function were employed to enhance the IoU, enabling rapid localization of the model's prediction boxes. Experimental results show that the improved CES-YOLOv8 model achieves an accuracy, recall rate, mAP50, and F1 score of 88.20%, 89.80%, 92.10%, and 88.99%, respectively, in complex environments, improvements of 4.8%, 2.9%, 2.05%, and 3.88% over the original YOLOv8 network. The algorithm provides technical support for efficient, precise automated harvesting and is adaptable to other fruit crops.
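As a quick consistency check, the reported F1 score is simply the harmonic mean of the reported precision and recall, and the 88.99% figure follows directly from the 88.20% and 89.80% values:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# Reported CES-YOLOv8 figures: precision 88.20%, recall 89.80%
print(round(f1_score(88.20, 89.80), 2))  # → 88.99
```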
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture—2nd Edition)

30 pages, 7046 KB  
Article
Improving Surgical Scene Semantic Segmentation through a Deep Learning Architecture with Attention to Class Imbalance
by Claudio Urrea, Yainet Garcia-Garcia and John Kern
Biomedicines 2024, 12(6), 1309; https://doi.org/10.3390/biomedicines12061309 - 13 Jun 2024
Cited by 6 | Viewed by 2539
Abstract
This article addresses the semantic segmentation of laparoscopic surgery images, placing special emphasis on structures with a smaller number of observations. As a result of this study, adjustment parameters are proposed for deep neural network architectures, enabling robust segmentation of all structures in the surgical scene. The U-Net architecture with five encoder–decoders (U-Net5ed), SegNet-VGG19, and DeepLabv3+ with different backbones are implemented. Three main experiments are conducted with the Rectified Linear Unit (ReLU), Gaussian Error Linear Unit (GELU), and Swish activation functions. The applied loss functions include Cross Entropy (CE), Focal Loss (FL), Tversky Loss (TL), Dice Loss (DiL), Cross Entropy Dice Loss (CEDL), and Cross Entropy Tversky Loss (CETL). The performance of Stochastic Gradient Descent with momentum (SGDM) and Adaptive Moment Estimation (Adam) optimizers is compared. It is qualitatively and quantitatively confirmed that the DeepLabv3+ and U-Net5ed architectures yield the best results. The DeepLabv3+ architecture with the ResNet-50 backbone, Swish activation function, and CETL loss function reports a Mean Accuracy (MAcc) of 0.976 and a Mean Intersection over Union (MIoU) of 0.977. The segmentation of structures with fewer observations, such as the hepatic vein, cystic duct, liver ligament, and blood, shows results that are highly competitive with the consulted literature. The selected parameters were also validated in the YOLOv9 architecture, which showed improved semantic segmentation compared with the original architecture.
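The Tversky loss mentioned above generalizes Dice loss by weighting false positives and false negatives separately, which is one standard way to counter class imbalance for rarely observed structures. A minimal sketch on flat binary masks (the α, β, and smoothing values are illustrative choices, not the paper's settings):

```python
def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss on flat binary masks: 1 - TP / (TP + a*FP + b*FN).

    Setting beta > alpha penalizes false negatives more heavily,
    which helps classes with few positive pixels.
    """
    tp = sum(p * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

# Perfect prediction gives zero loss
print(tversky_loss([1, 0, 1, 0], [1, 0, 1, 0]))  # → 0.0
```

With α = β = 0.5 this reduces to the Dice loss, so the CEDL and CETL combinations in the abstract differ only in how the overlap term weights the two error types.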
(This article belongs to the Section Biomedical Engineering and Materials)

20 pages, 5093 KB  
Article
Strawberry Maturity Recognition Based on Improved YOLOv5
by Zhiqing Tao, Ke Li, Yuan Rao, Wei Li and Jun Zhu
Agronomy 2024, 14(3), 460; https://doi.org/10.3390/agronomy14030460 - 26 Feb 2024
Cited by 16 | Viewed by 2975
Abstract
Strawberry maturity detection plays an essential role in modern strawberry yield estimation and robot-assisted picking and sorting. Owing to the small size and complex growth environment of strawberries, existing recognition systems still have problems with accuracy and maturity classification. This article proposes a strawberry maturity recognition algorithm based on an improved YOLOv5s model, named YOLOv5s-BiCE. The model replaces the upsampling algorithm with a CARAFE module, improving content-aware processing, widening the field of view, and maintaining high efficiency, which yields better object detection. It also introduces a dual attention mechanism named Biformed for small-target detection, optimizing computation allocation and enhancing content-perception flexibility; combined with multi-scale feature fusion, the dual attention mechanism reduces redundant computation. Additionally, the Focal_EIOU optimization was introduced into the loss function to improve accuracy and address uneven sample classification. YOLOv5s-BiCE recognized strawberry maturity better than the original YOLOv5s model, achieving a 2.8% increase in mean average precision and a 7.4% increase in accuracy on the strawberry maturity dataset. The improved algorithm also outperformed other networks, such as YOLOv4-tiny, YOLOv4-lite-e, YOLOv4-lite-s, YOLOv7, and Fast RCNN, with recognition accuracy improvements of 3.3%, 4.7%, 4.2%, 1.5%, and 2.2%, respectively. In addition, a corresponding detection app was developed, and the algorithm was combined with DeepSort for deployment on patrol robots; the detector runs in real time, supports intelligent strawberry yield estimation, and can assist picking robots.
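The exact Focal_EIOU formulation used by the authors is in the cited literature; as a rough sketch under assumed definitions, an EIoU-style loss augments 1 − IoU with center-distance, width, and height penalties normalized by the smallest enclosing box, and the focal variant reweights that loss by IoU^γ so well-aligned boxes contribute less:

```python
def focal_eiou_loss(box_p, box_g, gamma=0.5):
    """Sketch of a Focal-EIoU-style loss for (x1, y1, x2, y2) boxes."""
    # IoU term
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # Smallest enclosing box dimensions
    cw = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    ch = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    # Normalized center-distance, width, and height penalties
    dx = (box_p[0] + box_p[2]) / 2 - (box_g[0] + box_g[2]) / 2
    dy = (box_p[1] + box_p[3]) / 2 - (box_g[1] + box_g[3]) / 2
    dist = (dx * dx + dy * dy) / (cw * cw + ch * ch)
    dw = ((box_p[2] - box_p[0]) - (box_g[2] - box_g[0])) ** 2 / (cw * cw)
    dh = ((box_p[3] - box_p[1]) - (box_g[3] - box_g[1])) ** 2 / (ch * ch)
    eiou = 1.0 - iou + dist + dw + dh
    return iou ** gamma * eiou  # focal reweighting by overlap quality

# Identical boxes incur zero loss
print(focal_eiou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # → 0.0
```

The separate width and height terms are what distinguish EIoU from DIoU/CIoU-style losses, giving the regressor a more direct gradient on box shape.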
