Search Results (100)

Search Parameters:
Keywords = ghost solutions

17 pages, 531 KB  
Article
Black Hole Solution Free of Ghosts in f(R) Gravity Coupled with Two Scalar Fields
by G. G. L. Nashed and A. Eid
Universe 2025, 11(9), 305; https://doi.org/10.3390/universe11090305 - 9 Sep 2025
Abstract
One extension of general relativity, known as f(R) gravity, where R denotes the Ricci scalar, is regarded as a promising candidate for addressing the anomalies observed in conventional general relativity. In this work, we apply the field equations of f(R) gravity to a spherically symmetric spacetime with distinct metric potentials, i.e., g_tt ≠ g_rr. By solving the resulting nonlinear differential equations, we derive a novel black hole solution without imposing constraints on the Ricci scalar or on the specific form of f(R) gravity. This solution does not reduce to the Schwarzschild solution of Einstein's general relativity; it is notable because it includes a gravitational mass and extra terms that make the curvature singularities stronger than those of black holes in Einstein's general relativity. We analyze these black holes within the framework of thermodynamics and demonstrate their consistency with standard thermodynamic quantities. Furthermore, we investigate stability by examining odd-type perturbation modes and show that the resulting black hole is stable. Finally, we derive the coefficients of the two scalar fields and demonstrate that the black hole obtained in this study is free from ghosts.
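For context, the field equations the abstract refers to take the standard metric f(R) form (a general textbook statement, not the specific ansatz of this paper); varying the action with respect to the metric gives

```latex
f_R(R)\, R_{\mu\nu} - \frac{1}{2} f(R)\, g_{\mu\nu}
  + \left( g_{\mu\nu}\,\Box - \nabla_\mu \nabla_\nu \right) f_R(R)
  = \kappa\, T_{\mu\nu},
\qquad f_R \equiv \frac{\mathrm{d}f}{\mathrm{d}R},
```

so that f(R) = R recovers the Einstein equations, since then f_R = 1 and the derivative terms vanish.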

19 pages, 11410 KB  
Article
A Pool Drowning Detection Model Based on Improved YOLO
by Wenhui Zhang, Lu Chen and Jianchun Shi
Sensors 2025, 25(17), 5552; https://doi.org/10.3390/s25175552 - 5 Sep 2025
Abstract
Drowning constitutes the leading cause of injury-related fatalities among adolescents. In swimming pool environments, traditional manual surveillance exhibits limitations, while existing technologies suffer from poor adaptability of wearable devices. Vision models based on YOLO still face challenges in edge deployment efficiency, robustness in complex water conditions, and multi-scale object detection. To address these issues, we propose YOLO11-LiB, a drowning object detection model based on YOLO11n, featuring three key enhancements. First, we design the Lightweight Feature Extraction Module (LGCBlock), which integrates the Lightweight Attention Encoding Block (LAE) and effectively combines Ghost Convolution (GhostConv) with dynamic convolution (DynamicConv). This optimizes the downsampling structure and the C3k2 module in the YOLO11n backbone network, significantly reducing model parameters and computational complexity. Second, we introduce the Cross-Channel Position-aware Spatial Attention Inverted Residual with Spatial–Channel Separate Attention module (C2PSAiSCSA) into the backbone. This module embeds the Spatial–Channel Separate Attention (SCSA) mechanism within the Inverted Residual Mobile Block (iRMB) framework, enabling more comprehensive and efficient feature extraction. Finally, we redesign the neck structure as the Bidirectional Feature Fusion Network (BiFF-Net), which integrates the Bidirectional Feature Pyramid Network (BiFPN) and Frequency-Aware Feature Fusion (FreqFusion). The enhanced YOLO11-LiB model was validated against mainstream algorithms through comparative experiments, and ablation studies were conducted. Experimental results demonstrate that YOLO11-LiB achieves a drowning class mean average precision (DmAP50) of 94.1%, with merely 2.02 M parameters and a model size of 4.25 MB. This represents an effective balance between accuracy and efficiency, providing a high-performance solution for real-time drowning detection in swimming pool scenarios.
(This article belongs to the Section Intelligent Sensors)

21 pages, 4483 KB  
Article
A Lightweight Instance Segmentation Model for Simultaneous Detection of Citrus Fruit Ripeness and Red Scale (Aonidiella aurantii) Pest Damage
by İlker Ünal and Osman Eceoğlu
Appl. Sci. 2025, 15(17), 9742; https://doi.org/10.3390/app15179742 - 4 Sep 2025
Abstract
Early detection of pest damage and accurate assessment of fruit ripeness are essential for improving the quality, productivity, and sustainability of citrus production. Moreover, precisely assessing ripeness is crucial for establishing the optimal harvest time, preserving fruit quality, and enhancing yield. The simultaneous and precise early detection of pest damage and assessment of fruit ripeness greatly enhance the efficacy of contemporary agricultural decision support systems. This study presents a lightweight deep learning model based on an optimized YOLO12n-Seg architecture for the simultaneous detection of ripeness stages (unripe and fully ripe) and pest damage caused by Red Scale (Aonidiella aurantii). The model is based on an improved version of YOLO12n-Seg, in which the backbone and head layers were retained but the neck was modified with a GhostConv block to reduce parameter count and improve computational efficiency. Additionally, a Global Attention Mechanism (GAM) was incorporated to strengthen the model's focus on target-relevant features and reduce background noise. These modifications improved both the model's ability to capture accurate spatial information across multiple scales and its ability to focus on target object regions through the attention mechanism. Experimental results demonstrated high accuracy on test data, with mAP@0.5 = 0.980, mAP@0.95 = 0.960, precision = 0.961, and recall = 0.943, all achieved with only 2.7 million parameters and a training time of 2 h and 42 min. The model offers a reliable and efficient solution for real-time, integrated pest detection and fruit classification in precision agriculture.
(This article belongs to the Section Agricultural Science and Technology)

22 pages, 66579 KB  
Article
Cgc-YOLO: A New Detection Model for Defect Detection of Tea Tree Seeds
by Yuwen Liu, Hao Li, Kefan Yu, Hui Zhu, Binjie Zhang, Wangyu Wu and Hongbo Mu
Sensors 2025, 25(17), 5446; https://doi.org/10.3390/s25175446 - 2 Sep 2025
Abstract
Tea tree seeds are highly sensitive to dehydration and cannot be stored for extended periods, making surface defect detection crucial for preserving their germination rate and overall quality. To address this challenge, we propose Cgc-YOLO, an enhanced YOLO-based model specifically designed to detect small-scale and complex surface defects in tea seeds. A high-resolution imaging system was employed to construct a dataset encompassing five common types of tea tree seeds, capturing diverse defect patterns. Cgc-YOLO incorporates two key improvements: (1) GhostBlock, derived from GhostNetV2, embedded in the Backbone to enhance computational efficiency and long-range feature extraction; and (2) the CPCA attention mechanism, integrated into the Neck, to improve sensitivity to local textures and boundary details, thereby boosting segmentation and localization accuracy. Experimental results demonstrate that Cgc-YOLO achieves 97.6% mAP50 and 94.9% mAP50–95, surpassing YOLO11 by 2.3% and 3.1%, respectively. Furthermore, the model retains a compact size of only 8.5 MB, delivering an excellent balance between accuracy and efficiency. This study presents a robust and lightweight solution for nondestructive detection of tea seed defects, contributing to intelligent seed screening and storage quality assurance.
(This article belongs to the Section Sensing and Imaging)

25 pages, 15988 KB  
Article
YOLO-LCE: A Lightweight YOLOv8 Model for Agricultural Pest Detection
by Xinyu Cen, Shenglian Lu and Tingting Qian
Agronomy 2025, 15(9), 2022; https://doi.org/10.3390/agronomy15092022 - 22 Aug 2025
Abstract
Agricultural pest detection through image analysis is a key technology in automated pest-monitoring systems. However, some existing pest detection models face excessive model complexity. This study proposes YOLO-LCE, a lightweight model based on the YOLOv8 architecture for agricultural pest detection. Firstly, a Lightweight Complementary Residual (LCR) module is proposed to extract complementary features through a dual-branch structure. It enhances detection performance and reduces model complexity. Additionally, Efficient Partial Convolution (EPConv) is proposed as a downsampling operator. It adopts an asymmetric channel splitting strategy to efficiently utilize features. Furthermore, the Ghost module is introduced to the detection head to reduce computational overhead. Finally, WIoUv3 is used to improve detection performance further. YOLO-LCE is evaluated on the Pest24 dataset. Compared to the baseline model, YOLO-LCE achieves mAP50 improvement of 1.7 percentage points, mAP50-95 improvement of 0.4 percentage points, and precision improvement of 0.5 percentage points. For computational efficiency, parameters are reduced by 43.9%, and GFLOPs are reduced by 33.3%. These metrics demonstrate that YOLO-LCE improves detection accuracy while reducing computational complexity, providing an effective solution for lightweight pest detection.
(This article belongs to the Section Pest and Disease Management)

20 pages, 764 KB  
Article
Black Hole Solution in f(R,G) Gravitational Theory Coupled with Scalar Field
by G. G. L. Nashed and A. Eid
Symmetry 2025, 17(8), 1360; https://doi.org/10.3390/sym17081360 - 20 Aug 2025
Abstract
In this work, we explore a class of spherically symmetric black hole (BH) solutions within the framework of modified gravity, focusing on a non-ghost-free f(R,G) theory coupled to a scalar field. We present a novel black hole geometry that arises as a deformation of the Schwarzschild solution and analyze its physical and thermodynamic properties. Our results show that the model satisfies stability conditions, with the Ricci scalar R, as well as its first and second derivatives, remaining positive throughout the spacetime. The solution admits multiple horizons and exhibits stronger curvature singularities than those found in general relativity. Furthermore, it supports a non-trivial scalar field potential. A comprehensive thermodynamic analysis is performed, including evaluations of the entropy, temperature, heat capacity, and quasi-local energy. We find that the black hole exhibits thermodynamic stability within certain ranges of the model parameters. In addition, we investigate geodesic deviation and derive the conditions necessary for stability within the f(R,G) gravitational framework.
(This article belongs to the Section Physics)

25 pages, 6513 KB  
Article
Deployment of CES-YOLO: An Optimized YOLO-Based Model for Blueberry Ripeness Detection on Edge Devices
by Jun Yuan, Jing Fan, Zhenke Sun, Hongtao Liu, Weilong Yan, Donghan Li, Hui Liu, Jingxiang Wang and Dongyan Huang
Agronomy 2025, 15(8), 1948; https://doi.org/10.3390/agronomy15081948 - 13 Aug 2025
Abstract
To achieve efficient and accurate detection of blueberry fruit ripeness, this study proposes a lightweight yet high-performance object detection model—CES-YOLO. Designed for real-world blueberry harvesting scenarios, the model addresses key challenges such as significant visual differences across ripeness stages, complex occlusions, and small object sizes. CES-YOLO introduces three core components: the C3K2-Ghost module for efficient feature extraction and model compression, the SEAM attention mechanism to enhance the focus on critical fruit regions, and the EMA Head for improved detection of small and densely packed targets. Experiments on a blueberry ripeness dataset demonstrated that CES-YOLO achieved 91.22% mAP50, 69.18% mAP95, 89.21% precision, and 85.23% recall, while maintaining a lightweight structure with only 2.1 M parameters and 5.0 GFLOPs, significantly outperforming mainstream lightweight detection models. Extensive ablation and comparative studies confirmed the effectiveness of each component in improving detection accuracy and reducing false positives and missed detections. This research offers an efficient and practical solution for automated recognition of fruit and vegetable maturity, supporting broader applications in smart agriculture, and provides theoretical and engineering insights for the future design of agricultural vision models. To further demonstrate its practical deployment capability, CES-YOLO was successfully deployed on the NVIDIA Jetson Orin Nano platform, where it maintained real-time detection performance, with low power consumption and high inference efficiency, validating its suitability for embedded edge computing scenarios in intelligent agriculture.
(This article belongs to the Section Precision and Digital Agriculture)

19 pages, 6372 KB  
Article
Detecting Planting Holes Using Improved YOLO-PH Algorithm with UAV Images
by Kaiyuan Long, Shibo Li, Jiangping Long, Hui Lin and Yang Yin
Remote Sens. 2025, 17(15), 2614; https://doi.org/10.3390/rs17152614 - 28 Jul 2025
Abstract
The identification and detection of planting holes, combined with UAV technology, provides an effective solution to the challenges posed by manual counting, high labor costs, and low efficiency in large-scale planting operations. However, existing target detection algorithms face difficulties in identifying planting holes based on their edge features, particularly in complex environments. To address this issue, a target detection network named YOLO-PH was designed to efficiently and rapidly detect planting holes in complex environments. Compared to the YOLOv8 network, the proposed YOLO-PH network incorporates the C2f_DyGhostConv module as a replacement for the original C2f module in both the backbone network and neck network. Furthermore, the ATSS label allocation method is employed to optimize sample allocation and enhance detection effectiveness. Lastly, our proposed Siblings Detection Head reduces computational burden while significantly improving detection performance. Ablation experiments demonstrate that compared to baseline models, YOLO-PH exhibits notable improvements of 1.3% in mAP50 and 1.1% in mAP50:95 while simultaneously achieving a reduction of 48.8% in FLOPs and an impressive increase of 26.8 FPS (frames per second) in detection speed. In practical applications for detecting indistinct boundary planting holes within complex scenarios, our algorithm consistently outperforms other detection networks with exceptional precision (F1-score = 0.95), low computational cost, rapid detection speed, and robustness, thus laying a solid foundation for advancing precision agriculture.

14 pages, 1419 KB  
Article
GhostBlock-Augmented Lightweight Gaze Tracking via Depthwise Separable Convolution
by Jing-Ming Guo, Yu-Sung Cheng, Yi-Chong Zeng and Zong-Yan Yang
Electronics 2025, 14(15), 2978; https://doi.org/10.3390/electronics14152978 - 25 Jul 2025
Abstract
This paper proposes a lightweight gaze-tracking architecture named GhostBlock-Augmented Look to Coordinate Space (L2CS), which integrates GhostNet-based modules and depthwise separable convolution to achieve a better trade-off between model accuracy and computational efficiency. Conventional lightweight gaze-tracking models often suffer from degraded accuracy due to aggressive parameter reduction. To address this issue, we introduce GhostBlocks, a custom-designed convolutional unit that combines intrinsic feature generation with ghost feature recomposition through depthwise operations. Our method enhances the original L2CS architecture by replacing each ResNet block with GhostBlocks, thereby significantly reducing the number of parameters and floating-point operations. The experimental results on the Gaze360 dataset demonstrate that the proposed model reduces FLOPs from 16.527 × 10^8 to 8.610 × 10^8 and the parameter count from 2.387 × 10^5 to 1.224 × 10^5 while maintaining comparable gaze estimation accuracy, with MAE increasing only slightly from 10.70° to 10.87°. This work highlights the potential of GhostNet-augmented designs for real-time gaze tracking on edge devices, providing a practical solution for deployment in resource-constrained environments.
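Several of the entries above (GhostConv, GhostBlock, GhostBlock-Augmented L2CS) build on the same GhostNet idea: generate a few "intrinsic" feature maps with an ordinary convolution, then derive additional "ghost" maps from them with cheap per-channel operations. Below is a minimal NumPy sketch of that idea; the function and weight names are hypothetical, and real implementations use learned kernels inside a deep-learning framework:

```python
import numpy as np

def ghost_module(x, primary_w, cheap_w):
    """Ghost-module sketch: a 1x1 convolution produces a few intrinsic
    feature maps, then one cheap 3x3 depthwise filter per intrinsic map
    produces the 'ghost' maps that are concatenated onto them."""
    c_in, h, w = x.shape
    # intrinsic maps: pointwise (1x1) convolution over the channel axis
    intrinsic = np.tensordot(primary_w, x, axes=([1], [0]))   # (C_prim, H, W)
    # ghost maps: depthwise 3x3 filtering of each intrinsic map (zero padding)
    padded = np.pad(intrinsic, ((0, 0), (1, 1), (1, 1)))
    ghosts = np.zeros_like(intrinsic)
    for c in range(intrinsic.shape[0]):
        for i in range(3):
            for j in range(3):
                ghosts[c] += cheap_w[c, i, j] * padded[c, i:i + h, j:j + w]
    return np.concatenate([intrinsic, ghosts], axis=0)

# demo: delta kernels (1 at the center) make each ghost map copy its source
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8, 8))            # (C_in, H, W) input features
pw = rng.normal(size=(3, 4))                  # 3 intrinsic maps from 4 channels
ck = np.zeros((3, 3, 3)); ck[:, 1, 1] = 1.0   # one 3x3 kernel per intrinsic map
out = ghost_module(feats, pw, ck)             # (6, 8, 8)
```

The saving shows up at realistic channel counts: an ordinary convolution producing 2C output maps costs 2C·C_in·k² weights, while the ghost module costs C·C_in·k² + 9C, roughly halving the count once C_in·k² is much larger than the 9-weight depthwise kernels.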

11 pages, 21181 KB  
Article
Parallel Ghost Imaging with Extra Large Field of View and High Pixel Resolution
by Nixi Zhao, Changzhe Zhao, Jie Tang, Jianwen Wu, Danyang Liu, Han Guo, Haipeng Zhang and Tiqiao Xiao
Appl. Sci. 2025, 15(15), 8137; https://doi.org/10.3390/app15158137 - 22 Jul 2025
Abstract
Ghost imaging (GI) facilitates image acquisition under low-light conditions through single pixel measurements, thus holding tremendous potential across various fields such as biomedical imaging, remote sensing, defense and military applications, and 3D imaging. However, in order to reconstruct high-resolution images, GI typically requires a large number of single-pixel measurements, which imposes practical limitations on its application. Parallel ghost imaging addresses this issue by utilizing each pixel of a position-sensitive detector as a bucket detector to simultaneously perform tens of thousands of ghost imaging measurements in parallel. In this work, we explore the non-local characteristics of ghost imaging in depth, and by constructing a large speckle space, we achieve a reconstruction in parallel ghost imaging whose field of view surpasses the limitations of the reference arm detector. Using a computational ghost imaging framework, after pre-recording the speckle patterns, we are able to complete X-ray ghost imaging in 6 min per sample, with image dimensions of 14,000 × 10,000 pixels (4.55 mm × 3.25 mm, a millimeter-scale field of view) and a pixel resolution of 0.325 µm (sub-micron pixel resolution). We present this framework to enhance efficiency, extend resolution, and dramatically expand the field of view, with the aim of providing a solution for the practical implementation of ghost imaging.
(This article belongs to the Special Issue Single-Pixel Intelligent Imaging and Recognition)
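The correlation reconstruction at the heart of computational ghost imaging can be sketched in a few lines: each bucket value is the overlap of a known speckle pattern with the object, and averaging the fluctuation product (B − ⟨B⟩)(I(x) − ⟨I(x)⟩) over many patterns recovers the object. A toy NumPy sketch under an assumed uniform random speckle model (names and the simulation setup are illustrative, not the paper's X-ray apparatus):

```python
import numpy as np

def cgi_reconstruct(patterns, buckets):
    """Correlation reconstruction: G(x) = <(B - <B>) * (I(x) - <I(x)>)>."""
    b = buckets - buckets.mean()
    i = patterns - patterns.mean(axis=0)
    return np.tensordot(b, i, axes=([0], [0])) / len(buckets)

# toy demo: recover a binary object from 5000 random speckle realizations
rng = np.random.default_rng(1)
obj = np.zeros((16, 16)); obj[4:12, 6:10] = 1.0               # transmission mask
patterns = rng.random((5000, 16, 16))                         # known speckle I_i(x)
buckets = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # B_i = sum(I_i * T)
recon = cgi_reconstruct(patterns, buckets)
```

The estimate converges to var(I)·T(x), so adding patterns trades acquisition time for signal-to-noise; a parallel scheme like the one above runs many such estimates at once, one per detector pixel.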

23 pages, 3578 KB  
Article
High-Precision Chip Detection Using YOLO-Based Methods
by Ruofei Liu and Junjiang Zhu
Algorithms 2025, 18(7), 448; https://doi.org/10.3390/a18070448 - 21 Jul 2025
Abstract
Machining chips are directly related to both the machining quality and tool condition. However, detecting chips from images in industrial settings poses challenges in terms of model accuracy and computational speed. We first present a novel framework called GM-YOLOv11-DNMS to track the chips, followed by a video-level post-processing algorithm for chip counting in videos. GM-YOLOv11-DNMS has two main improvements: (1) it replaces the CNN layers with a ghost module in YOLOv11n, significantly reducing the computational cost while maintaining the detection performance, and (2) it uses a new dynamic non-maximum suppression (DNMS) method, which dynamically adjusts the thresholds to improve the detection accuracy. The post-processing method uses a trigger signal from rising edges to improve chip counting in video streams. Experimental results show that the ghost module reduces the FLOPs from 6.48 G to 5.72 G compared to YOLOv11n, with a negligible accuracy loss, while the DNMS algorithm improves the debris detection precision across different YOLO versions. The proposed framework achieves precision, recall, and mAP@0.5 values of 97.04%, 96.38%, and 95.56%, respectively, in image-based detection tasks. In video-based experiments, the proposed video-level post-processing algorithm combined with GM-YOLOv11-DNMS achieves crack–debris counting accuracy of 90.14%. This lightweight and efficient approach is particularly effective in detecting small-scale objects within images and accurately analyzing dynamic debris in video sequences, providing a robust solution for automated debris monitoring in machine tool processing applications.
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
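For reference, the DNMS step described above builds on standard greedy non-maximum suppression. The paper's dynamic threshold rule is not reproduced here; the sketch below shows plain greedy NMS with the IoU threshold exposed as the parameter such a scheme would adjust (NumPy, hypothetical names):

```python
import numpy as np

def iou_one_vs_many(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh):
    """Greedy NMS: keep the best-scoring box, drop overlapping rivals, repeat.
    A dynamic variant would adjust iou_thresh per image or per object cluster."""
    order = np.argsort(scores)[::-1]          # indices, best score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou_one_vs_many(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

# demo: two heavily overlapping detections collapse to one
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores, iou_thresh=0.5)     # [0, 2]
```

Raising the threshold keeps more overlapping boxes (helpful for dense, small objects), while lowering it suppresses more aggressively; a dynamic rule trades between the two per scene.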

28 pages, 4068 KB  
Article
GDFC-YOLO: An Efficient Perception Detection Model for Precise Wheat Disease Recognition
by Jiawei Qian, Chenxu Dai, Zhanlin Ji and Jinyun Liu
Agriculture 2025, 15(14), 1526; https://doi.org/10.3390/agriculture15141526 - 15 Jul 2025
Abstract
Wheat disease detection is a crucial component of intelligent agricultural systems in modern agriculture. However, its detection accuracy still has certain limitations: existing models struggle to capture the irregular and fine-grained texture features of lesions, and the spatial information reconstructed by standard upsampling operations is inaccurate. In this work, the GDFC-YOLO method is proposed to address these limitations and enhance detection accuracy. This method is based on YOLOv11 and encompasses three key improvements: (1) a newly designed Ghost Dynamic Feature Core (GDFC) in the backbone, which improves the efficiency of disease feature extraction and enhances the model's ability to capture informative representations; (2) a redesigned neck structure, the Disease-Focused Neck (DF-Neck), which further strengthens feature expressiveness, improves multi-scale fusion, and refines the feature processing pipeline; and (3) the integration of the Powerful Intersection over Union v2 (PIoUv2) loss function to optimize regression accuracy and convergence speed. The results show that GDFC-YOLO improved the mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5) from 0.86 to 0.90, reaching a precision of 0.899 and a recall of 0.821, while maintaining a structure with only 9.27 M parameters. These results indicate that GDFC-YOLO offers good detection performance and strong practicality, providing a solution that can accurately and efficiently detect crop diseases in real agricultural scenarios.

22 pages, 6645 KB  
Article
Visual Detection on Aircraft Wing Icing Process Using a Lightweight Deep Learning Model
by Yang Yan, Chao Tang, Jirong Huang, Zhixiong Cen and Zonghong Xie
Aerospace 2025, 12(7), 627; https://doi.org/10.3390/aerospace12070627 - 12 Jul 2025
Abstract
Aircraft wing icing significantly threatens aviation safety, causing substantial losses to the aviation industry each year. The high transparency and blurred edges of icing areas in wing images pose challenges for machine-vision-based icing detection. To address these challenges, this study proposes a detection model, Wing Icing Detection DeeplabV3+ (WID-DeeplabV3+), for efficient and precise detection of icing on the aircraft wing leading edge under natural lighting conditions. WID-DeeplabV3+ adopts the lightweight MobileNetV3 as its backbone network to enhance the extraction of edge features in icing areas. Ghost Convolution and Atrous Spatial Pyramid Pooling modules are incorporated to reduce model parameters and computational complexity. The model is optimized using transfer learning, where pre-trained weights are utilized to accelerate convergence and enhance performance. Experimental results show that WID-DeeplabV3+ segments the icing edge of a 1920 × 1080 image within 0.03 s. The model achieves an accuracy of 97.15%, an IoU of 94.16%, a precision of 97%, and a recall of 96.96%, representing respective improvements of 1.83%, 3.55%, 1.79%, and 2.04% over DeeplabV3+. The number of parameters and the computational complexity are reduced by 92% and 76%, respectively. With high accuracy, superior IoU, and fast inference speed, WID-DeeplabV3+ provides an effective solution for wing icing detection.
(This article belongs to the Section Aeronautics)

20 pages, 4488 KB  
Article
OMB-YOLO-tiny: A Lightweight Detection Model for Damaged Pleurotus ostreatus Based on Enhanced YOLOv8n
by Lei Shi, Zhuo Bai, Xiangmeng Yin, Zhanchen Wei, Haohai You, Shilin Liu, Fude Wang, Xuexi Qi, Helong Yu, Chunguang Bi and Ruiqing Ji
Horticulturae 2025, 11(7), 744; https://doi.org/10.3390/horticulturae11070744 - 27 Jun 2025
Abstract
Pleurotus ostreatus, classified under the phylum Basidiomycota, order Agaricales, and family Pleurotaceae, is a prevalent gray edible fungus. Physical damage not only compromises its quality and appearance but also significantly diminishes its market value. This study proposes an enhanced method for detecting Pleurotus ostreatus damage based on an improved YOLOv8n model, aiming to make damage recognition technology more accessible, enhance automation in Pleurotus cultivation, and reduce labor dependency, serving as a step toward agricultural modernization and providing a reference for subsequent research. Utilizing a self-collected, self-organized, and self-constructed dataset, we modified the feature extraction module of the original YOLOv8n by integrating a lightweight GhostHGNetv2 backbone network. During the feature fusion stage, the original YOLOv8 components were replaced with a lightweight SlimNeck network, and an Attentional Scale Sequence Fusion (ASF) mechanism was incorporated into the feature fusion architecture, resulting in the proposed OMB-YOLO model. This model achieves a remarkable balance between parameter efficiency and detection accuracy, attaining a parameter count of 2.24 M and a mAP@0.5 of 90.11% on the test set. To further lighten the model, the DepGraph method was applied to prune OMB-YOLO, yielding the OMB-YOLO-tiny variant. Experimental evaluations on the damaged Pleurotus dataset demonstrate that OMB-YOLO-tiny outperforms mainstream models in both accuracy and inference speed while reducing parameters by nearly half. With a parameter count of 1.72 M and a mAP@0.5 of 90.14%, OMB-YOLO-tiny emerges as an optimal solution for detecting Pleurotus ostreatus damage. These results validate its efficacy and practical applicability in agricultural quality control systems.

24 pages, 1871 KB  
Article
Multi-Agent Framework Utilizing Large Language Models for Solving Capture-the-Flag Challenges in Cybersecurity Competitions
by Zewen Huang, Jinjing Zhuge and Jianwei Zhuge
Appl. Sci. 2025, 15(13), 7159; https://doi.org/10.3390/app15137159 - 25 Jun 2025
Abstract
Capture the Flag (CTF) is an important form of competition in cybersecurity, which tests participants' knowledge and problem-solving abilities. We propose a multi-agent framework based on large language models to simulate human participants and attempt to automate the solution of common CTF problems, especially cryptographic and miscellaneous challenges. We implement collaboration among multiple expert agents with access to external tools, giving the language model a basic level of practical competence in the field of cybersecurity. We primarily test two capabilities of the large model: to analyze, reason about, and determine solutions to CTF problems, and to assist with problem-solving by generating code or utilizing unannotated existing external tools. We construct a benchmark based on the puzzles from the book "Ghost in the Wires" and the THUCTF competition. The experimental results show that our agents performed well on the former and improved significantly with some human hints, compared with related work. We also discuss the challenges that language models face in cybersecurity tasks and the effect of leveraging reasoning models.
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
