Search Results (37)

Search Parameters:
Keywords = ReDet

20 pages, 14055 KB  
Article
TL-Efficient-SE: A Transfer Learning-Based Attention-Enhanced Model for Fingerprint Liveness Detection Across Multi-Sensor Spoof Attacks
by Archana Pallakonda, Rayappa David Amar Raj, Rama Muni Reddy Yanamala, Christian Napoli and Cristian Randieri
Mach. Learn. Knowl. Extr. 2025, 7(4), 113; https://doi.org/10.3390/make7040113 - 1 Oct 2025
Viewed by 450
Abstract
Fingerprint authentication systems encounter growing threats from presentation attacks, making strong liveness detection crucial. This work presents a deep learning-based framework integrating EfficientNetB0 with a Squeeze-and-Excitation (SE) attention approach, using transfer learning to enhance feature extraction. The LivDet 2015 dataset, composed of both real and fake fingerprints taken using four optical sensors and spoofs made using PlayDoh, Ecoflex, and Gelatine, is used to train and test the model architecture. Stratified splitting is performed once the input images have been scaled and normalized to conform to EfficientNetB0’s format. The SE module adaptively improves appropriate features to competently differentiate live from fake inputs. The classification head comprises fully connected layers, dropout, batch normalization, and a sigmoid output. Empirical results exhibit accuracy between 98.50% and 99.50%, with an AUC varying from 0.978 to 0.9995, providing high precision and recall for genuine users, and robust generalization across unseen spoof types. Compared to existing methods like Slim-ResCNN and HyiPAD, the novelty of our model lies in the Squeeze-and-Excitation mechanism, which enhances feature discrimination by adaptively recalibrating the channels of the feature maps, thereby improving the model’s ability to differentiate between live and spoofed fingerprints. This model has practical implications for deployment in real-time biometric systems, including mobile authentication and secure access control, presenting an efficient solution for protecting against sophisticated spoofing methods. Future research will focus on sensor-invariant learning and adaptive thresholds to further enhance resilience against varying spoofing attacks. Full article
(This article belongs to the Special Issue Advances in Machine and Deep Learning)
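The SE recalibration this abstract describes has a simple core: squeeze each channel to one scalar by global average pooling, pass it through a learned gate, and rescale the channel. The sketch below is illustrative only; a real SE block learns a two-layer excitation MLP, while here a fixed per-channel weight (a hypothetical stand-in) plays that role.

```python
import math

def se_recalibrate(feature_map, weights):
    """Squeeze-and-Excitation-style channel gating (toy sketch).

    feature_map: list of C channels, each an HxW grid (list of lists).
    weights: per-channel scalars standing in for the learned excitation
    MLP (a real SE block learns this mapping from the squeezed vector).
    """
    # Squeeze: global average pool each channel down to one scalar.
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_map]
    # Excitation: sigmoid gate per channel.
    gates = [1.0 / (1.0 + math.exp(-w * s)) for w, s in zip(weights, squeezed)]
    # Scale: re-weight every activation in each channel by its gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_map, gates)]

# Two 2x2 channels; the second receives a strongly positive gate.
fmap = [[[1.0, 1.0], [1.0, 1.0]],
        [[2.0, 2.0], [2.0, 2.0]]]
out = se_recalibrate(fmap, weights=[0.0, 10.0])
```

A zero excitation weight yields a neutral 0.5 gate, while a large positive weight lets the channel pass almost unchanged, which is the "adaptive recalibration" effect in miniature.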

14 pages, 3913 KB  
Article
Isolation of Porcine Adenovirus Serotype 5 and Construction of Recombinant Virus as a Vector Platform for Vaccine Development
by Qianhua He, Jun Wu, Zhilong Bian, Yuan Sun and Jingyun Ma
Viruses 2025, 17(9), 1270; https://doi.org/10.3390/v17091270 - 19 Sep 2025
Viewed by 396
Abstract
Porcine adenovirus serotype 5 (PAdV-5) is an emerging viral vector platform for veterinary vaccines; however, its genomic plasticity and essential replication elements remain incompletely characterized. This study reports the isolation and reverse genetic manipulation of a novel PAdV-5 strain (GD84) from diarrheic piglets in China. PCR screening of 167 clinical samples revealed a PAdV-5 detection rate of 38.3% (64/167), with successful isolation on ST cells after three blind passages. The complete GD84 genome is 32,620 bp in length and exhibited 99.0% nucleotide identity to the contemporary strain Ino5, but only 97.0% to the prototype HNF-70. It features an atypical GC content of 51.0% and divergent structural genes—most notably the hexon gene (89% identity to HNF-70)—suggesting altered immunogenicity. Using Red/ET recombineering, we established a rapid (less than 3 weeks) reverse genetics platform and generated four E3-modified recombinants: ΔE3-All-eGFP, ΔE3-12.5K-eGFP, ΔE3-12.5K+ORF4-eGFP, and E3-Insert-eGFP. Crucially, the ΔE3-All-eGFP construct (complete E3 deletion) failed to be rescued, while constructs preserving the 12.5K open reading frame (ORF) yielded replication-competent viruses with sustained eGFP expression over three serial passages and titers over 10^7.0 TCID50/mL. Fluorescence intensity was inversely correlated with genome size, as the full-length E3-Insert-eGFP virus showed reduced expression compared with the ΔE3 variants. Our work identifies the 12.5K ORF as essential for PAdV-5 replication and provides an optimized vaccine engineering platform that balances genomic payload capacity with replicative fitness. Full article
(This article belongs to the Section Animal Viruses)

28 pages, 4317 KB  
Article
Multi-Scale Attention Networks with Feature Refinement for Medical Item Classification in Intelligent Healthcare Systems
by Waqar Riaz, Asif Ullah and Jiancheng (Charles) Ji
Sensors 2025, 25(17), 5305; https://doi.org/10.3390/s25175305 - 26 Aug 2025
Cited by 2 | Viewed by 863
Abstract
The increasing adoption of artificial intelligence (AI) in intelligent healthcare systems has elevated the demand for robust medical imaging and vision-based inventory solutions. For an intelligent healthcare inventory system, accurate recognition and classification of medical items, including medicines and emergency supplies, are crucial for ensuring inventory integrity and timely access to life-saving resources. This study presents a hybrid deep learning framework, EfficientDet-BiFormer-ResNet, that integrates three specialized components: EfficientDet’s Bidirectional Feature Pyramid Network (BiFPN) for scalable multi-scale object detection, BiFormer’s bi-level routing attention for context-aware spatial refinement, and ResNet-18 enhanced with triplet loss and Online Hard Negative Mining (OHNM) for fine-grained classification. The model was trained and validated on a custom healthcare inventory dataset comprising over 5000 images collected under diverse lighting, occlusion, and arrangement conditions. Quantitative evaluations demonstrated that the proposed system achieved a mean average precision (mAP@0.5:0.95) of 83.2% and a top-1 classification accuracy of 94.7%, outperforming conventional models such as YOLO, SSD, and Mask R-CNN. The framework excelled in recognizing visually similar, occluded, and small-scale medical items. This work advances real-time medical item detection in healthcare by providing an AI-enabled, clinically relevant vision system for medical inventory management. Full article
(This article belongs to the Section Intelligent Sensors)
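The triplet loss with Online Hard Negative Mining (OHNM) mentioned in this abstract reduces to two small functions: a margin-based loss over (anchor, positive, negative) embeddings, and a selector that picks the negative closest to the anchor. This is a generic sketch, not the paper's implementation; the margin value is an assumption.

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: push the positive at least `margin`
    closer to the anchor than the negative; zero once satisfied."""
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

def hardest_negative(anchor, negatives):
    """Online hard negative mining: the negative nearest the anchor."""
    return min(negatives, key=lambda n: euclidean(anchor, n))

# Constraint satisfied (positive close, negative far) -> zero loss.
easy = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])
# Negative closer than positive -> positive loss: 1.0 - 0.5 + 0.2.
loss = triplet_loss([0.0, 0.0], [1.0, 0.0], [0.5, 0.0])
```

Mining the hardest negative per batch keeps the loss non-trivial, which is why it helps fine-grained classification of visually similar items.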

27 pages, 5228 KB  
Article
Detection of Surface Defects in Steel Based on Dual-Backbone Network: MBDNet-Attention-YOLO
by Xinyu Wang, Shuhui Ma, Shiting Wu, Zhaoye Li, Jinrong Cao and Peiquan Xu
Sensors 2025, 25(15), 4817; https://doi.org/10.3390/s25154817 - 5 Aug 2025
Viewed by 1257
Abstract
Automated surface defect detection in steel manufacturing is pivotal for ensuring product quality, yet it remains an open challenge owing to the extreme heterogeneity of defect morphologies—ranging from hairline cracks and microscopic pores to elongated scratches and shallow dents. Existing approaches, whether classical vision pipelines or recent deep-learning paradigms, struggle to simultaneously satisfy the stringent demands of industrial scenarios: high accuracy on sub-millimeter flaws, insensitivity to texture-rich backgrounds, and real-time throughput on resource-constrained hardware. Although contemporary detectors have narrowed the gap, they still exhibit pronounced sensitivity–robustness trade-offs, particularly in the presence of scale-varying defects and cluttered surfaces. To address these limitations, we introduce MBY (MBDNet-Attention-YOLO), a lightweight yet powerful framework that synergistically couples the MBDNet backbone with the YOLO detection head. Specifically, the backbone embeds three novel components: (1) HGStem, a hierarchical stem block that enriches low-level representations while suppressing redundant activations; (2) Dynamic Align Fusion (DAF), an adaptive cross-scale fusion mechanism that dynamically re-weights feature contributions according to defect saliency; and (3) C2f-DWR, a depth-wise residual variant that progressively expands receptive fields without incurring prohibitive computational costs. Building upon this enriched feature hierarchy, the neck employs our proposed MultiSEAM module—a cascaded squeeze-and-excitation attention mechanism operating at multiple granularities—to harmonize fine-grained and semantic cues, thereby amplifying weak defect signals against complex textures. 
Finally, we integrate the Inner-SIoU loss, which refines the geometric alignment between predicted and ground-truth boxes by jointly optimizing center distance, aspect ratio consistency, and IoU overlap, leading to faster convergence and tighter localization. Extensive experiments on two publicly available steel-defect benchmarks—NEU-DET and PVEL-AD—demonstrate the superiority of MBY. Without bells and whistles, our model achieves 85.8% mAP@0.5 on NEU-DET and 75.9% mAP@0.5 on PVEL-AD, surpassing the best-reported results by significant margins while maintaining real-time inference on an NVIDIA Jetson Xavier. Ablation studies corroborate the complementary roles of each component, underscoring MBY’s robustness across defect scales and surface conditions. These results suggest that MBY strikes an appealing balance between accuracy, efficiency, and deployability, offering a pragmatic solution for next-generation industrial quality-control systems. Full article
(This article belongs to the Section Sensing and Imaging)
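The Inner-SIoU loss in this abstract refines box regression on top of the ordinary IoU overlap term (adding center-distance and aspect-ratio penalties, which are not shown here). For reference, plain axis-aligned IoU, the base quantity every such loss starts from, is:

```python
def iou(box_a, box_b):
    """Axis-aligned IoU; boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The mAP@0.5 figures quoted for NEU-DET and PVEL-AD count a prediction as correct when this quantity against the ground-truth box is at least 0.5.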

25 pages, 4296 KB  
Article
StripSurface-YOLO: An Enhanced Yolov8n-Based Framework for Detecting Surface Defects on Strip Steel in Industrial Environments
by Haomin Li, Huanzun Zhang and Wenke Zang
Electronics 2025, 14(15), 2994; https://doi.org/10.3390/electronics14152994 - 27 Jul 2025
Cited by 1 | Viewed by 917
Abstract
Recent advances in precision manufacturing and high-end equipment technologies have imposed ever more stringent requirements on the accuracy, real-time performance, and lightweight design of online steel strip surface defect detection systems. To reconcile the persistent trade-off between detection precision and inference efficiency in complex industrial environments, this study proposes StripSurface-YOLO, a novel real-time defect detection framework built upon YOLOv8n. The core architecture integrates an Efficient Cross-Stage Local Perception module (ResGSCSP), which synergistically combines GSConv lightweight convolutions with a one-shot aggregation strategy, thereby markedly reducing both model parameters and computational complexity. To further enhance multi-scale feature representation, this study introduces an Efficient Multi-Scale Attention (EMA) mechanism at the feature-fusion stage, enabling the network to more effectively attend to critical defect regions. Moreover, conventional nearest-neighbor upsampling is replaced by DySample, which produces deeper, high-resolution feature maps enriched with semantic content, improving both inference speed and fusion quality. To heighten sensitivity to small-scale and low-contrast defects, the model adopts Focal Loss, dynamically adjusting to sample difficulty. Extensive evaluations on the NEU-DET dataset demonstrate that StripSurface-YOLO reduces FLOPs by 11.6% and parameter count by 7.4% relative to the baseline YOLOv8n, while achieving respective improvements of 1.4%, 3.1%, 4.1%, and 3.0% in precision, recall, mAP50, and mAP50:95. Under adverse conditions—including contrast variations, brightness fluctuations, and Gaussian noise—StripSurface-YOLO outperforms the baseline model, delivering improvements of 5.0% in mAP50 and 4.7% in mAP50:95, attesting to the model’s robust interference resistance. These findings underscore the potential of StripSurface-YOLO to meet the rigorous performance demands of real-time surface defect detection in the metal forging industry. Full article
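The Focal Loss this abstract adopts down-weights easy examples via a (1 - p_t)^gamma factor so training concentrates on hard, low-contrast defects. A minimal binary form follows; the alpha and gamma values are the common defaults from the focal loss literature, not necessarily this paper's settings.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one sample.

    p: predicted probability of the positive class; y: true label (0/1).
    The (1 - p_t)**gamma factor shrinks the loss of well-classified
    samples, focusing gradient signal on hard ones.
    """
    p_t = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes almost nothing;
# a confident wrong one dominates the batch loss.
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
```

With gamma = 0 and alpha = 1 this reduces to plain cross-entropy, which is the sense in which focal loss "dynamically adjusts to sample difficulty".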

21 pages, 10851 KB  
Article
Intelligent Flood Scene Understanding Using Computer Vision-Based Multi-Object Tracking
by Xuzhong Yan, Yiqiao Zhu, Zeli Wang, Bin Xu, Liu He and Rong Xia
Water 2025, 17(14), 2111; https://doi.org/10.3390/w17142111 - 16 Jul 2025
Cited by 1 | Viewed by 710
Abstract
Understanding flood scenes is essential for effective disaster response. Previous research has primarily focused on computer vision-based approaches for analyzing flood scenes, capitalizing on their ability to rapidly and accurately cover affected regions. However, most existing methods emphasize static image analysis, with limited attention given to dynamic video analysis. Compared to image-based approaches, video analysis in flood scenarios offers significant advantages, including real-time monitoring, flow estimation, object tracking, change detection, and behavior recognition. To address this gap, this study proposes a computer vision-based multi-object tracking (MOT) framework for intelligent flood scene understanding. The proposed method integrates an optical-flow-based module for short-term undetected mask estimation and a deep re-identification (ReID) module to handle long-term occlusions. Experimental results demonstrate that the proposed method achieves state-of-the-art performance across key metrics, with a HOTA of 69.57%, DetA of 67.32%, AssA of 73.21%, and IDF1 of 89.82%. Field tests further confirm its improved accuracy, robustness, and generalization. This study not only addresses key practical challenges but also offers methodological insights, supporting the application of intelligent technologies in disaster response and humanitarian aid. Full article
(This article belongs to the Special Issue AI, Machine Learning and Digital Twin Applications in Water)

13 pages, 3130 KB  
Article
YOLOv8 with Post-Processing for Small Object Detection Enhancement
by Jinkyu Ryu, Dongkurl Kwak and Seungmin Choi
Appl. Sci. 2025, 15(13), 7275; https://doi.org/10.3390/app15137275 - 27 Jun 2025
Cited by 2 | Viewed by 3555
Abstract
Small-object detection in images, a core task in unstructured big-data analysis, remains challenging due to low resolution, background noise, and occlusion. Despite advancements in object detection models like You Only Look Once (YOLO) v8 and EfficientDet, small object detection still faces limitations. This study proposes an enhanced approach combining the content-aware reassembly of features (CARAFE) upsampling module and a confidence-based re-detection (CR) technique integrated with the YOLOv8n model to address these challenges. The CARAFE module is applied to the neck architecture of YOLOv8n to minimize information loss and enhance feature restoration by adaptively generating upsampling kernels based on the input feature map. Furthermore, the CR process involves cropping bounding boxes of small objects with low confidence scores from the original image and re-detecting them using the YOLOv8n-CARAFE model to improve detection performance. Experimental results demonstrate that the proposed approach significantly outperforms the baseline YOLOv8n model in detecting small objects. These findings highlight the effectiveness of combining advanced upsampling and post-processing techniques for improved small object detection. The proposed method holds promise for practical applications, including surveillance systems, autonomous driving, and medical image analysis. Full article

18 pages, 2646 KB  
Article
COP1 Deficiency in BRAFV600E Melanomas Confers Resistance to Inhibitors of the MAPK Pathway
by Ada Ndoja, Christopher M. Rose, Eva Lin, Rohit Reja, Jelena Petrovic, Sarah Kummerfeld, Andrew Blair, Helen Rizos, Zora Modrusan, Scott Martin, Donald S. Kirkpatrick, Amy Heidersbach, Tao Sun, Benjamin Haley, Ozge Karayel, Kim Newton and Vishva M. Dixit
Cells 2025, 14(13), 975; https://doi.org/10.3390/cells14130975 - 25 Jun 2025
Viewed by 1196
Abstract
Aberrant activation of the mitogen-activated protein kinase (MAPK) cascade promotes oncogenic transcriptomes. Despite efforts to inhibit oncogenic kinases, such as BRAFV600E, tumor responses in patients can be heterogeneous and limited by drug resistance mechanisms. Here, we describe patient tumors that acquired COP1 or DET1 mutations after treatment with the BRAFV600E inhibitor vemurafenib. COP1 and DET1 constitute the substrate adaptor of the E3 ubiquitin ligase CRL4COP1/DET1, which targets transcription factors, including ETV1, ETV4, and ETV5, for proteasomal degradation. MAPK-MEK-ERK signaling prevents CRL4COP1/DET1 from ubiquitinating ETV1, ETV4, and ETV5, but the mechanistic details are still being elucidated. We found that patient mutations in COP1 or DET1 inactivated CRL4COP1/DET1 in melanoma cells, stabilized ETV1, ETV4, and ETV5, and conferred resistance to inhibitors of the MAPK pathway. ETV5, in particular, enhanced cell survival and was found to promote the expression of the pro-survival gene BCL2A1. Indeed, the deletion of pro-survival BCL2A1 re-sensitized COP1 mutant cells to vemurafenib treatment. These observations indicate that the post-translational regulation of ETV5 by CRL4COP1/DET1 modulates transcriptional outputs in ERK-dependent cancers, and its inactivation contributes to therapeutic resistance. Full article
(This article belongs to the Special Issue Targeting Hallmarks of Cancer)

27 pages, 11218 KB  
Article
Advanced 3D Depth Imaging Techniques for Morphometric Analysis of Detected On-Tree Apples Based on AI Technology
by Eungchan Kim, Sang-Yeon Kim, Chang-Hyup Lee, Sungjay Kim, Jiwon Ryu, Geon-Hee Kim, Seul-Ki Lee and Ghiseok Kim
Agriculture 2025, 15(11), 1148; https://doi.org/10.3390/agriculture15111148 - 27 May 2025
Cited by 1 | Viewed by 620
Abstract
This study developed non-destructive technology for predicting apple size to determine optimal harvest timing of field-grown apples. RGBD images were collected in field environments with fluctuating light conditions, and deep learning techniques were integrated to analyze morphometric parameters. After training various models, the EfficientDet D4 and Mask R-CNN ResNet101 models demonstrated the highest detection accuracy. Morphometric metrics were measured by linking boundary box information with 3D depth information to determine horizontal and vertical diameters. Without occlusion, mean absolute percentage error (MAPE) using boundary box-based methods was 6.201% and 5.164% for horizontal and vertical diameters, respectively, while mask-based methods achieved improved accuracy with MAPE of 5.667% and 4.921%. Volume and weight predictions showed MAPE of 7.183% and 6.571%, respectively. For partially occluded apples, amodal segmentation was applied to analyze morphometric parameters according to occlusion rates. While conventional models showed increasing MAPE with higher occlusion rates, the amodal segmentation-based model maintained consistent accuracy regardless of occlusion rate, demonstrating potential for automated harvest systems where fruits are frequently partially obscured by leaves and branches. Full article
(This article belongs to the Special Issue Smart Agriculture Sensors and Monitoring Systems for Field Detection)
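The MAPE figures quoted in this abstract come from a simple metric: the mean of absolute errors expressed as a percentage of the true values. A minimal sketch:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.

    Assumes no actual value is zero (division by the true value).
    """
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Two apples: 10% error on the first, 5% on the second -> 7.5% MAPE.
err = mape([100.0, 200.0], [90.0, 210.0])
```

Because each error is normalized by the true diameter, MAPE stays comparable across small and large fruit, which is why it suits size prediction.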

20 pages, 4445 KB  
Article
COVID-19 Severity Classification Using Hybrid Feature Extraction: Integrating Persistent Homology, Convolutional Neural Networks and Vision Transformers
by Redet Assefa, Adane Mamuye and Marco Piangerelli
Big Data Cogn. Comput. 2025, 9(4), 83; https://doi.org/10.3390/bdcc9040083 - 31 Mar 2025
Viewed by 987
Abstract
This paper introduces a model that automates the diagnosis of a patient’s condition, reducing reliance on highly trained professionals, particularly in resource-constrained settings. To ensure data consistency, the dataset was preprocessed for uniformity in size, format, and color channels. Image quality was further enhanced using histogram equalization to improve the dynamic range. Lung regions were isolated using segmentation techniques, which also eliminated extraneous areas from the images. A modified segmentation-based cropping technique was employed to define an optimal cropping rectangle. Feature extraction was performed using persistent homology, deep learning, and hybrid methodologies. Persistent homology captured topological features across multiple scales, while the deep learning model leveraged convolutional transition equivariance, input-adaptive weighting, and the global receptive field provided by Vision Transformers. By integrating features from both methods, the classification model effectively predicted severity levels (mild, moderate, severe). The segmentation-based cropping method showed a modest improvement, achieving 80% accuracy, while stand-alone persistent homology features reached 66% accuracy. Notably, the hybrid model outperformed existing approaches, including SVM, ResNet50, and VGG16, achieving an accuracy of 82%. Full article

18 pages, 3245 KB  
Article
Enhanced DetNet: A New Framework for Detecting Small and Occluded 3D Objects
by Baowen Zhang, Chengzhi Su and Guohua Cao
Electronics 2025, 14(5), 979; https://doi.org/10.3390/electronics14050979 - 28 Feb 2025
Cited by 1 | Viewed by 847
Abstract
To mitigate the impact on detection performance caused by insufficient input information in 3D object detection based on single LiDAR data, this study designs three innovative modules based on the PointRCNN framework. Firstly, addressing the issue of the Multi-Layer Perceptron (MLP) in PointNet++ failing to effectively capture local features during the feature extraction phase, we propose the Adaptive Multilayer Perceptron (AMLP). Secondly, to prevent the problem of gradient vanishing due to the increased parameter scale and computational complexity of AMLP, we introduce the Channel Aware Residual module (CA-Res) in the feature extraction layer. Finally, in the head layer of the subsequent processing stage, we propose the Dynamic Attention Head (DA-Head) to enhance the representation of key features in the process of target detection. A series of experiments conducted on the KITTI validation set demonstrate that in complex scenarios, for the small target “Pedestrian”, our model achieves performance improvements of 2.08% and 3.46%, respectively, at the “Medium” and “Difficult” detection difficulty levels. To further validate the generalization capability of the Enhanced DetNet network, we deploy the trained model on the KITTI server and conduct a comprehensive evaluation of detection performance for the “Car”, “Pedestrian”, and “Cyclist” categories. Full article

18 pages, 4555 KB  
Technical Note
GD-Det: Low-Data Object Detection in Foggy Scenarios for Unmanned Aerial Vehicle Imagery Using Re-Parameterization and Cross-Scale Gather-and-Distribute Mechanisms
by Rui Shi, Lili Zhang, Gaoxu Wang, Shutong Jia, Ning Zhang and Chensu Wang
Remote Sens. 2025, 17(5), 783; https://doi.org/10.3390/rs17050783 - 24 Feb 2025
Cited by 2 | Viewed by 966
Abstract
Unmanned Aerial Vehicles (UAVs) play an extremely important role in real-time object detection for maritime emergency rescue missions. However, marine accidents often occur in low-visibility weather conditions, resulting in poor image quality and a lack of object detection samples, which significantly reduces detection accuracy. To tackle these issues, we propose GD-Det, a low-data object detection model with high accuracy, specifically designed to handle limited sample sizes and low-quality images. The model is primarily composed of three components: (i) A lightweight re-parameterization feature extraction module which integrates RepVGG blocks into multi-concat blocks to enhance the model’s spatial perception and feature diversity during training. Meanwhile, it reduces computational cost in the inference phase through the re-parameterization mechanism. (ii) A cross-scale gather-and-distribute pyramid module, which helps to augment the relationship representation of four-scale features via flexible skip fusion and distribution strategies. (iii) A decoupled prediction module with three branches that implements classification and regression, enhancing detection accuracy by combining the prediction values from tri-level features. In addition, we use a domain-adaptive training strategy with knowledge transfer to handle low-data issues. We conducted low-data training and comparison experiments using our constructed dataset AFO-fog. Our model achieved an overall detection accuracy of 84.8%, which is superior to other models. Full article
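The re-parameterization mechanism in component (i) rests on the linearity of convolution: parallel branches used during training can be folded into a single kernel at inference, cutting compute without changing the output. Below is a 1D, single-channel toy sketch of that identity (real RepVGG fuses 3x3 and 1x1 convolutions plus batch norm in 2D; all values here are illustrative).

```python
def conv1d_same(x, k):
    """1D correlation with zero padding and a 3-tap kernel."""
    pad = [0.0] + list(x) + [0.0]
    return [sum(k[j] * pad[i + j] for j in range(3)) for i in range(len(x))]

def merge_branches(k3, k1):
    """Fuse a 3-tap branch, a 1-tap branch, and an identity skip into
    one 3-tap kernel: pad the 1-tap to the center and add a unit delta."""
    return [k3[0], k3[1] + k1[0] + 1.0, k3[2]]

x = [1.0, 2.0, 3.0, 4.0]
k3, k1 = [0.5, -1.0, 0.25], [2.0]

# Training-time form: three parallel branches summed per position.
multi = [a + b + c for a, b, c in zip(conv1d_same(x, k3),
                                      conv1d_same(x, [0.0, k1[0], 0.0]),
                                      x)]
# Inference-time form: one convolution with the merged kernel.
single = conv1d_same(x, merge_branches(k3, k1))
```

The two outputs match, which is exactly why the fused network is cheaper at inference yet numerically equivalent to the multi-branch one trained earlier.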

19 pages, 4431 KB  
Article
HCT-Det: A High-Accuracy End-to-End Model for Steel Defect Detection Based on Hierarchical CNN–Transformer Features
by Xiyin Chen, Xiaohu Zhang, Yonghua Shi and Junjie Pang
Sensors 2025, 25(5), 1333; https://doi.org/10.3390/s25051333 - 21 Feb 2025
Cited by 3 | Viewed by 980
Abstract
Surface defect detection is essential for ensuring the quality and safety of steel products. While Transformer-based methods have achieved state-of-the-art performance, they face several limitations, including high computational costs due to the quadratic complexity of the attention mechanism, inadequate detection accuracy for small-scale defects due to substantial downsampling, inconsistencies between classification scores and localization confidence, and feature resolution loss caused by simple upsampling and downsampling strategies. To address these challenges, we propose the HCT-Det model, which incorporates a window-based self-attention residual (WSA-R) block structure. This structure combines window-based self-attention (WSA) blocks to reduce computational overhead and parallel residual convolutional (Res) blocks to enhance local feature continuity. The model’s backbone generates three cross-scale features as encoder inputs, which undergo Intra-Scale Feature Interaction (ISFI) and Cross-Scale Feature Interaction (CSFI) to improve detection accuracy for targets of various sizes. A Soft IoU-Aware mechanism ensures alignment between classification scores and intersection-over-union (IoU) metrics during training. Additionally, Hybrid Downsampling (HDownsample) and Hybrid Upsampling (HUpsample) modules minimize feature degradation. Our experiments demonstrate that HCT-Det achieved a mean average precision (mAP@0.5) of 0.795 on the NEU-DET dataset and 0.733 on the GC10-DET dataset, outperforming other state-of-the-art approaches. These results highlight the model’s effectiveness in improving computational efficiency and detection accuracy for steel surface defect detection. Full article
(This article belongs to the Section Industrial Sensors)

16 pages, 1659 KB  
Article
Direct Cloning and Heterologous Expression of the Dmxorosin Biosynthetic Gene Cluster from Streptomyces thermolilacinus SPC6, a Halotolerant Actinomycete Isolated from the Desert in China
by Maoxing Dong, Huyuan Feng, Wei Zhang and Wei Ding
Int. J. Mol. Sci. 2025, 26(4), 1492; https://doi.org/10.3390/ijms26041492 - 11 Feb 2025
Viewed by 1300
Abstract
Streptomyces thermolilacinus SPC6 is a halotolerant strain isolated from the Linze Desert in China. It has a very high growth rate and short life cycle compared to other Streptomycetes, including the model organism Streptomyces coelicolor. The one strain–many compounds fermentation approach and global natural products investigation revealed that Streptomyces thermolilacinus SPC6 exhibits impressive productivity of secondary metabolites. Genome mining uncovered 20 typical secondary metabolic biosynthetic gene clusters (BGCs), with a BGC dmx identified as completely silent. Subsequently, this cryptic BGC was successfully directly cloned and heterologously expressed in Streptomyces hosts, resulting in the discovery of a new lanthipeptide, dmxorosin. Notably, the proposed biosynthetic pathway indicates its potential as a basis for the synthetic biology of new lanthipeptides. Full article
(This article belongs to the Section Molecular Microbiology)

13 pages, 1569 KB  
Article
Dual-Model Synergy for Fingerprint Spoof Detection Using VGG16 and ResNet50
by Mohamed Cheniti, Zahid Akhtar and Praveen Kumar Chandaliya
J. Imaging 2025, 11(2), 42; https://doi.org/10.3390/jimaging11020042 - 4 Feb 2025
Cited by 3 | Viewed by 2254
Abstract
In this paper, we address the challenge of fingerprint liveness detection by proposing a dual pre-trained model approach that combines VGG16 and ResNet50 architectures. While existing methods often rely on a single feature extraction model, they may struggle with generalization across diverse spoofing materials and sensor types. To overcome this limitation, our approach leverages the high-resolution feature extraction of VGG16 and the deep layer architecture of ResNet50 to capture a more comprehensive range of features for improved spoof detection. The proposed approach integrates these two models by concatenating their extracted features, which are then used to classify the captured fingerprint as live or spoofed. Evaluated on the Livedet2013 and Livedet2015 datasets, our method achieves state-of-the-art performance, with an accuracy of 99.72% on Livedet2013, surpassing existing methods like the Gram model (98.95%) and Pre-trained CNN (98.45%). On Livedet2015, our method achieves an average accuracy of 96.32%, outperforming several state-of-the-art models, including CNN (95.27%) and LivDet 2015 (95.39%). Error rate analysis reveals consistently low Bonafide Presentation Classification Error Rate (BPCER) scores with 0.28% on LivDet 2013 and 1.45% on LivDet 2015. Similarly, the Attack Presentation Classification Error Rate (APCER) remains low at 0.35% on LivDet 2013 and 3.68% on LivDet 2015. However, higher APCER values are observed for unknown spoof materials, particularly in the Crossmatch subset of Livedet2015, where the APCER rises to 8.12%. These findings highlight the robustness and adaptability of our simple dual-model framework while identifying areas for further optimization in handling unseen spoof materials. Full article
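The APCER and BPCER figures this abstract reports are the standard ISO/IEC 30107-3 error rates for presentation attack detection: the fraction of attack presentations misclassified as bona fide, and the fraction of bona fide presentations misclassified as attacks. A minimal sketch follows; the 0.5 decision threshold is an assumption, not a value from the paper.

```python
def pad_error_rates(scores, labels, threshold=0.5):
    """APCER / BPCER for a detector whose score is the probability
    of 'attack'. labels: 1 = attack (spoof), 0 = bona fide (live).
    """
    attacks = [s for s, y in zip(scores, labels) if y == 1]
    bona = [s for s, y in zip(scores, labels) if y == 0]
    # APCER: attacks that slipped below the threshold (missed spoofs).
    apcer = sum(1 for s in attacks if s < threshold) / len(attacks)
    # BPCER: bona fide samples flagged as attacks (false rejections).
    bpcer = sum(1 for s in bona if s >= threshold) / len(bona)
    return apcer, bpcer

# Two attacks (one missed) and two live samples (one falsely rejected).
rates = pad_error_rates([0.9, 0.4, 0.2, 0.6], [1, 1, 0, 0])
```

The trade-off between the two rates moves with the threshold, which is why abstracts like this one report both rather than a single accuracy.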
