Search Results (107)

Search Parameters:
Keywords = filter pruning

22 pages, 2758 KB  
Article
GPRGS: Sparse Input New View Synthesis Based on Probabilistic Modeling and Feature Regularization
by Yinshuang Qin, Gen Liu and Jian Wang
Appl. Sci. 2025, 15(17), 9422; https://doi.org/10.3390/app15179422 - 27 Aug 2025
Viewed by 424
Abstract
When the number of available training views is limited, too few Gaussian ellipsoids are generated, leaving the Gaussian model underpopulated and constraining 3DGS. With too few Gaussian ellipsoids, the model is prone to overfitting and may learn incorrect scene geometry. To address this challenge, we propose 3DGS based on Gaussian probabilistic modeling and feature regularization (GPRGS). Our method employs Gaussian probabilistic modeling based on Gaussian distribution features: we capture feature information from images and fit a Gaussian distribution to model the feature probability map. Additionally, feature regularization is introduced to enhance image features and prevent overfitting. Moreover, we introduce scale and densification thresholds and update the multi-scale densification and pruning strategy so that the pruning process does not filter out all low-opacity Gaussian points. We evaluated new view synthesis with both full and sparse inputs on real and synthetic datasets. The results show that GPRGS is on par with other models overall and holds a slight advantage in sparse settings, with an approximately 4% improvement in PSNR among the evaluation metrics. Full article
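The scale-aware pruning idea can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the field names, thresholds, and the rule that large-scale Gaussians are spared are all assumptions.

```python
# Illustrative sketch (not GPRGS itself): opacity-based pruning that spares
# large-scale Gaussians, so low-opacity points are not removed wholesale.
def prune_gaussians(gaussians, opacity_thresh=0.05, scale_thresh=0.8):
    """Keep a Gaussian if its opacity is high enough, or if it is large
    enough that removing it would leave a hole in sparse-view scenes."""
    kept = []
    for g in gaussians:
        if g["opacity"] >= opacity_thresh or g["scale"] >= scale_thresh:
            kept.append(g)
    return kept

points = [
    {"opacity": 0.90, "scale": 0.2},  # opaque: kept
    {"opacity": 0.01, "scale": 0.9},  # faint but large: kept
    {"opacity": 0.01, "scale": 0.1},  # faint and small: pruned
]
survivors = prune_gaussians(points)
```

The point of the second threshold is that a plain opacity cut would delete both faint Gaussians, which is exactly the failure mode the abstract describes.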

25 pages, 7802 KB  
Article
A Hybrid Ensemble Equilibrium Optimizer Gene Selection Algorithm for Microarray Data
by Peng Su, Yuxin Zhao, Xiaobo Li, Zhendi Ma and Hui Wang
Biomimetics 2025, 10(8), 523; https://doi.org/10.3390/biomimetics10080523 - 10 Aug 2025
Viewed by 543
Abstract
As modern medical technology advances, the utilization of gene expression data has proliferated across diverse domains, particularly in cancer diagnosis and prognosis monitoring. However, gene expression data is often characterized by high dimensionality and a prevalence of redundant and noisy information, prompting the need for effective strategies to mitigate issues like the curse of dimensionality and overfitting. This study introduces a novel hybrid ensemble equilibrium optimizer gene selection algorithm in response. In the first stage, a hybrid approach, combining multiple filters and gene correlation-based methods, is used to select an optimal subset of genes, which is achieved by evaluating the redundancy and complementary relationships among genes to obtain a subset with maximal information content. In the second stage, an equilibrium optimizer algorithm incorporating Gaussian Barebone and a novel gene pruning strategy is employed to further search for the optimal gene subset within the candidate gene space selected in the first stage. To demonstrate the superiority of the proposed method, it was compared with nine feature selection techniques on 15 datasets. The results indicate that the ensemble filtering method in the first stage exhibits strong stability and effectively reduces the search space of the gene selection algorithms. The improved equilibrium optimizer algorithm enhances the prediction accuracy while significantly reducing the number of selected features. These findings highlight the effectiveness of the proposed method as a valuable approach for gene selection. Full article
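The first-stage idea of combining several filter criteria can be illustrated with a toy ensemble. Everything below is a hedged sketch under assumed scoring functions (variance plus point-biserial correlation), not the paper's actual filters.

```python
# Toy two-filter ensemble for gene (feature) selection: rank genes under
# each filter, sum the ranks, and keep the k best. Scores are illustrative.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def point_biserial(xs, ys):
    # correlation between a gene's expression and a binary class label
    n = len(xs)
    g1 = [x for x, y in zip(xs, ys) if y == 1]
    g0 = [x for x, y in zip(xs, ys) if y == 0]
    sd = variance(xs) ** 0.5
    if not g1 or not g0 or sd == 0:
        return 0.0
    diff = sum(g1) / len(g1) - sum(g0) / len(g0)
    return diff / sd * ((len(g1) * len(g0)) ** 0.5 / n)

def ensemble_rank(genes, labels, k):
    # lower combined rank = informative under both filters
    var_rank = sorted(genes, key=lambda g: -variance(genes[g]))
    cor_rank = sorted(genes, key=lambda g: -abs(point_biserial(genes[g], labels)))
    score = {g: var_rank.index(g) + cor_rank.index(g) for g in genes}
    return sorted(genes, key=lambda g: score[g])[:k]

labels = [0, 0, 1, 1]
genes = {
    "g1": [0.1, 0.2, 0.9, 1.0],  # separates classes and varies: informative
    "g2": [0.5, 0.5, 0.5, 0.5],  # constant: uninformative
    "g3": [0.9, 0.1, 0.8, 0.2],  # varies but ignores the label
}
top = ensemble_rank(genes, labels, 2)
```

Only a gene that scores well under *both* filters lands at the top of the combined ranking, which is the stability property the abstract attributes to the ensemble stage.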
(This article belongs to the Special Issue Exploration of Bio-Inspired Computing: 2nd Edition)

21 pages, 4875 KB  
Article
Improvement of SAM2 Algorithm Based on Kalman Filtering for Long-Term Video Object Segmentation
by Jun Yin, Fei Wu, Hao Su, Peng Huang and Yuetong Qixuan
Sensors 2025, 25(13), 4199; https://doi.org/10.3390/s25134199 - 5 Jul 2025
Cited by 1 | Viewed by 927 | Correction
Abstract
The Segment Anything Model 2 (SAM2) has achieved state-of-the-art performance in pixel-level object segmentation for both static and dynamic visual content. Its streaming memory architecture maintains spatial context across video sequences, yet struggles with long-term tracking due to its static inference framework. SAM 2’s fixed temporal window approach indiscriminately retains historical frames, failing to account for frame quality or dynamic motion patterns. This leads to error propagation and tracking instability in challenging scenarios involving fast-moving objects, partial occlusions, or crowded environments. To overcome these limitations, this paper proposes SAM2Plus, a zero-shot enhancement framework that integrates Kalman filter prediction, dynamic quality thresholds, and adaptive memory management. The Kalman filter models object motion using physical constraints to predict trajectories and dynamically refine segmentation states, mitigating positional drift during occlusions or velocity changes. Dynamic thresholds, combined with multi-criteria evaluation metrics (e.g., motion coherence, appearance consistency), prioritize high-quality frames while adaptively balancing confidence scores and temporal smoothness. This reduces ambiguities among similar objects in complex scenes. SAM2Plus further employs an optimized memory system that prunes outdated or low-confidence entries and retains temporally coherent context, ensuring constant computational resources even for infinitely long videos. Extensive experiments on two video object segmentation (VOS) benchmarks demonstrate SAM2Plus’s superiority over SAM 2. It achieves an average improvement of 1.0 in J&F metrics across all 24 direct comparisons, with gains exceeding 2.3 points on SA-V and LVOS datasets for long-term tracking. The method delivers real-time performance and strong generalization without fine-tuning or additional parameters, effectively addressing occlusion recovery and viewpoint changes. 
By unifying motion-aware physics-based prediction with spatial segmentation, SAM2Plus bridges the gap between static and dynamic reasoning, offering a scalable solution for real-world applications such as autonomous driving and surveillance systems. Full article
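The motion-prediction component rests on a standard Kalman filter. Below is a minimal one-dimensional constant-velocity predict/update cycle, with illustrative noise values; SAM2Plus's actual state vector and tuning are not specified here.

```python
# A minimal 1-D constant-velocity Kalman filter: the kind of physics-based
# motion prediction layered on top of segmentation. Values are illustrative.
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
    """One predict/update cycle. x = [position, velocity], z = measured position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measurement z
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])
P = np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0]:             # object moving ~1 unit per frame
    x, P = kalman_step(x, P, np.array([z]))
```

After a few frames the state settles near the true position and velocity; during an occlusion one would skip the update and rely on the predict step alone, which is what makes drift recovery possible.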

18 pages, 766 KB  
Article
Multi-Task Sequence Tagging for Denoised Causal Relation Extraction
by Yijia Zhang, Chaofan Liu, Yuan Zhu and Wanyu Chen
Mathematics 2025, 13(11), 1737; https://doi.org/10.3390/math13111737 - 24 May 2025
Viewed by 438
Abstract
Extracting causal relations from natural language texts is crucial for uncovering causality, yet most existing causal relation extraction models are single-task learners that cannot jointly exploit attributes such as part-of-speech tags and chunk analysis. Word characteristics drawn from multiple domains are highly relevant for causal relation extraction, but words such as adjectives and linking verbs introduce noise that limits single-task learning methods. Furthermore, causalities from diverse domains raise an additional challenge, as existing models tend to falter across multiple domains compared to a single one. In light of this, we propose a multi-task sequence tagging model, MPC−CE, which leverages additional information about causality and related tasks to improve causal relation extraction on noisy data. By modeling auxiliary tasks, MPC−CE promotes a hierarchical understanding of linguistic structure and semantic roles, filtering noise and isolating salient entities. Furthermore, its sparse sharing paradigm retains only the most broadly beneficial parameters by pruning redundant ones during training, enhancing model generalization. Empirical results on two datasets show F1 improvements of 2.19% and 3.12%, respectively, over baselines, demonstrating that the proposed model effectively enhances causal relation extraction with semantic features across multiple syntactic tasks and offers the representational power to overcome pervasive noise and cross-domain issues. Full article

21 pages, 4686 KB  
Article
Low-Memory-Footprint CNN-Based Biomedical Signal Processing for Wearable Devices
by Zahra Kokhazad, Dimitrios Gkountelos, Milad Kokhazadeh, Charalampos Bournas, Georgios Keramidas and Vasilios Kelefouras
IoT 2025, 6(2), 29; https://doi.org/10.3390/iot6020029 - 8 May 2025
Viewed by 810
Abstract
The rise of wearable devices has enabled real-time processing of sensor data for critical health monitoring applications, such as human activity recognition (HAR) and cardiac disorder classification (CDC). However, the limited computational and memory resources of wearables necessitate lightweight yet accurate classification models. While deep neural networks (DNNs), including convolutional neural networks (CNNs) and long short-term memory networks, have shown high accuracy for HAR and CDC, their large parameter sizes hinder deployment on edge devices. On the other hand, various DNN compression techniques have been proposed, but exploiting the combination of various compression techniques with the aim of achieving memory efficient DNN models for HAR and CDC tasks remains under-investigated. This work studies the impact of CNN architecture parameters, focusing on the convolutional and dense layers, to identify configurations that balance accuracy and efficiency. We derive two versions of each model—lean and fat—based on their memory characteristics. Subsequently, we apply three complementary compression techniques: filter-based pruning, low-rank factorization, and dynamic range quantization. Experiments across three diverse DNNs demonstrate that this multi-faceted compression approach can significantly reduce memory and computational requirements while maintaining validation accuracy, leading to DNN models suitable for intelligent health monitoring on resource-constrained wearable devices. Full article
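Of the three compression techniques, filter-based pruning is the simplest to sketch: rank convolutional filters by L1 norm and drop the weakest. The shapes and keep-ratio below are invented for illustration, not taken from the paper.

```python
# Sketch of L1-norm filter pruning for a conv layer: rank output filters
# by the sum of absolute weights and keep only the strongest fraction.
import numpy as np

def prune_filters(weights, keep_ratio):
    """weights: (out_channels, in_channels, kh, kw). Returns the kept
    filters and their (sorted) indices, ranked by L1 norm."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
w[2] *= 0.01  # make filter 2 nearly dead so it is pruned first
pruned, kept_idx = prune_filters(w, keep_ratio=0.5)
```

Because whole filters are removed, the next layer's input channels shrink too, which is why filter pruning (unlike unstructured weight pruning) yields real memory and FLOP savings on wearables without sparse-kernel support.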

22 pages, 8831 KB  
Article
YOLOv8n-SMMP: A Lightweight YOLO Forest Fire Detection Model
by Nianzu Zhou, Demin Gao and Zhengli Zhu
Fire 2025, 8(5), 183; https://doi.org/10.3390/fire8050183 - 3 May 2025
Cited by 6 | Viewed by 1712
Abstract
Global warming has driven a marked increase in forest fire occurrences, underscoring the critical need for timely and accurate detection to mitigate fire-related losses. Existing forest fire detection algorithms face limitations in capturing flame and smoke features in complex natural environments, coupled with high computational complexity and inadequate lightweight design for practical deployment. To address these challenges, this paper proposes an enhanced forest fire detection model, YOLOv8n-SMMP (SlimNeck–MCA–MPDIoU–Pruned), based on the YOLO framework. Key innovations include the following: introducing the SlimNeck solution to streamline the neck network by replacing conventional convolutions with Group Shuffling Convolution (GSConv) and substituting the Cross-convolution with 2 filters (C2f) module with the lightweight VoV-based Group Shuffling Cross-Stage Partial Network (VoV-GSCSP) feature extraction module; integrating the Multi-dimensional Collaborative Attention (MCA) mechanism between the neck and head networks to enhance focus on fire-related regions; adopting the Minimum Point Distance Intersection over Union (MPDIoU) loss function to optimize bounding box regression during training; and implementing selective channel pruning tailored to the modified network architecture. The experimental results reveal that, relative to the baseline model, the optimized lightweight model achieves a 3.3% enhancement in detection accuracy (mAP@0.5), slashes the parameter count by 31%, and reduces computational overhead by 33%. These advancements underscore the model’s superior performance in real-time forest fire detection, outperforming other mainstream lightweight YOLO models in both accuracy and efficiency. Full article
(This article belongs to the Special Issue Intelligent Forest Fire Prediction and Detection)

19 pages, 888 KB  
Article
AI-Based Anomaly Detection and Optimization Framework for Blockchain Smart Contracts
by Hassen Louati, Ali Louati, Elham Kariri and Abdulla Almekhlafi
Adm. Sci. 2025, 15(5), 163; https://doi.org/10.3390/admsci15050163 - 27 Apr 2025
Viewed by 1657
Abstract
Blockchain technology has transformed modern digital ecosystems by enabling secure, transparent, and automated transactions through smart contracts. However, the increasing complexity of these contracts introduces significant challenges, including high computational costs, scalability limitations, and difficulties in detecting anomalous behavior. In this study, we propose an AI-based optimization framework that enhances the efficiency and security of blockchain smart contracts. The framework integrates Neural Architecture Search (NAS) to automatically design optimal Convolutional Neural Network (CNN) architectures tailored to blockchain data, enabling effective anomaly detection. To address the challenge of limited labeled data, transfer learning is employed to adapt pre-trained CNN models to smart contract patterns, improving model generalization and reducing training time. Furthermore, Model Compression techniques, including filter pruning and quantization, are applied to minimize the computational load, making the framework suitable for deployment in resource-constrained blockchain environments. Experimental results on Ethereum transaction datasets demonstrate that the proposed method achieves significant improvements in anomaly detection accuracy and computational efficiency compared to conventional approaches, offering a practical and scalable solution for smart contract monitoring and optimization. Full article
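Of the model compression steps mentioned, quantization is easy to make concrete. The sketch below shows symmetric int8 dynamic range quantization in its generic form; it is an assumed mechanism, not the framework's actual code.

```python
# Generic symmetric int8 quantization: store weights as int8 plus one
# float scale per tensor (a 4x size reduction over float32).
import numpy as np

def quantize_int8(w):
    m = float(np.abs(w).max())
    scale = m / 127.0 if m > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

The round-trip error is bounded by half a quantization step, which is why accuracy typically survives this kind of compression.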
(This article belongs to the Special Issue Research on Blockchain Technology and Business Process Design)

18 pages, 12581 KB  
Article
Aggregation and Pruning for Continuous Incremental Multi-Task Inference
by Lining Li, Fenglin Cen, Quan Feng and Ji Xu
Mathematics 2025, 13(9), 1414; https://doi.org/10.3390/math13091414 - 25 Apr 2025
Viewed by 678
Abstract
In resource-constrained mobile systems, efficiently handling incrementally added tasks under dynamically evolving requirements is a critical challenge. To address this, we propose aggregate pruning (AP), a framework that combines pruning with filter aggregation to optimize deep neural networks for continuous incremental multi-task learning (MTL). The approach reduces redundancy by dynamically pruning and aggregating similar filters across tasks, ensuring efficient use of computational resources while maintaining high task-specific performance. The aggregation strategy enables effective filter sharing across tasks, significantly reducing model complexity. Additionally, an adaptive mechanism is incorporated into AP to adjust filter sharing based on task similarity, further enhancing efficiency. Experiments on different backbone networks, including LeNet, VGG, ResNet, and so on, show that AP achieves substantial parameter reduction and computational savings with minimal accuracy loss, outperforming existing pruning methods and even surpassing non-pruning MTL techniques. The architecture-agnostic design of AP also enables potential extensions to complex architectures like graph neural networks (GNNs), offering a promising solution for incremental multi-task GNNs. Full article
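The aggregation half of AP can be illustrated with a greedy cosine-similarity merge. The clustering rule and threshold here are assumptions for illustration; the paper's adaptive, task-similarity-driven mechanism is more elaborate.

```python
# Illustrative filter aggregation: greedily merge filters whose cosine
# similarity to an existing cluster's mean exceeds a threshold.
import numpy as np

def aggregate_filters(filters, sim_thresh=0.95):
    flat = filters.reshape(filters.shape[0], -1)
    clusters = []  # list of lists of filter indices
    for i, f in enumerate(flat):
        for c in clusters:
            rep = flat[c].mean(axis=0)  # cluster representative
            cos = f @ rep / (np.linalg.norm(f) * np.linalg.norm(rep) + 1e-12)
            if cos >= sim_thresh:
                c.append(i)
                break
        else:
            clusters.append([i])
    merged = np.stack([filters[c].mean(axis=0) for c in clusters])
    return merged, clusters

base = np.ones((3, 3))
filters = np.stack([base, 1.01 * base, -base])  # first two near-duplicates
merged, groups = aggregate_filters(filters)
```

Tasks whose filters fall into the same cluster end up sharing one merged filter, which is where the parameter reduction comes from.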
(This article belongs to the Special Issue Research on Graph Neural Networks and Knowledge Graph)

24 pages, 7088 KB  
Article
Ultra-Lightweight and Highly Efficient Pruned Binarised Neural Networks for Intrusion Detection in In-Vehicle Networks
by Auangkun Rangsikunpum, Sam Amiri and Luciano Ost
Electronics 2025, 14(9), 1710; https://doi.org/10.3390/electronics14091710 - 23 Apr 2025
Cited by 1 | Viewed by 856
Abstract
With the rapid evolution toward autonomous vehicles, securing in-vehicle communications is more critical than ever. The widely used Controller Area Network (CAN) protocol lacks built-in security, leaving vehicles vulnerable to cyberattacks. Although machine learning-based Intrusion Detection Systems (IDSs) can achieve high detection accuracy, their heavy computational and power demands often limit real-world deployment. In this paper, we present an optimised IDS based on a Binarised Neural Network (BNN) that employs network pruning to eliminate redundant parameters, achieving up to a 91.07% reduction with only a 0.1% accuracy loss. The proposed approach incorporates a two-stage Coarse-to-Fine (C2F) framework, efficiently filtering normal traffic in the initial stage to minimise unnecessary processing. To assess its practical feasibility, we implement and compare the pruned IDS across CPU, GPU, and FPGA platforms. The experimental results indicate that, with the same model structure, the FPGA-based solution outperforms GPU and CPU implementations by up to 3.7× and 2.4× in speed, while achieving up to 7.4× and 3.8× greater energy efficiency, respectively. Among cutting-edge BNN-based IDSs, our ultra-lightweight FPGA-based C2F approach achieves the fastest average inference speed, showing a 3.3× to 12× improvement, while also outperforming them in accuracy and average F1 score, highlighting its potential for low-power, high-performance vehicle security. Full article
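The BNN building block behind these speedups is worth making explicit. This is a generic sketch of binarization and the XNOR-popcount dot product, assumed rather than taken from the paper's implementation.

```python
# BNN core idea: binarize activations/weights to ±1, so a dot product
# becomes XNOR + popcount in hardware (matches minus mismatches).
def binarize(xs):
    return [1 if x >= 0 else -1 for x in xs]

def xnor_dot(a_bits, w_bits):
    # with ±1 values, elementwise product is +1 exactly on agreement (XNOR)
    matches = sum(1 for a, w in zip(a_bits, w_bits) if a == w)
    return 2 * matches - len(a_bits)

acts = binarize([0.3, -1.2, 0.0, 2.5])   # -> [1, -1, 1, 1]
wts  = binarize([1.0, -0.5, -0.7, 0.9])  # -> [1, -1, -1, 1]
out = xnor_dot(acts, wts)
```

Replacing multiply-accumulates with bitwise operations is what makes the FPGA mapping so energy-efficient, and pruning the binarized network shrinks it further.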
(This article belongs to the Special Issue Recent Advances in Intrusion Detection Systems Using Machine Learning)

12 pages, 1100 KB  
Article
Lightweight U-Net for Blood Vessels Segmentation in X-Ray Coronary Angiography
by Jesus Salvador Ramos-Cortez, Dora E. Alvarado-Carrillo, Emmanuel Ovalle-Magallanes and Juan Gabriel Avina-Cervantes
J. Imaging 2025, 11(4), 106; https://doi.org/10.3390/jimaging11040106 - 30 Mar 2025
Viewed by 922
Abstract
Blood vessel segmentation in X-ray coronary angiography (XCA) plays a crucial role in diagnosing cardiovascular diseases, enabling a precise assessment of arterial structures. However, segmentation is challenging due to a low signal-to-noise ratio, interfering background structures, and vessel bifurcations, which hinder the accuracy of deep learning models. Additionally, deep learning models for this task often require high computational resources, limiting their practical application in real-time clinical settings. This study proposes a lightweight variant of the U-Net architecture using a structured kernel pruning strategy inspired by the Lottery Ticket Hypothesis. The pruning method systematically removes entire convolutional filters from each layer based on a global reduction factor, generating compact subnetworks that retain key representational capacity. This results in a significantly smaller model without compromising the segmentation performance. This approach is evaluated on two benchmark datasets, demonstrating consistent improvements in segmentation accuracy compared to the vanilla U-Net. Additionally, model complexity is significantly reduced from 31 M to 1.9 M parameters, improving efficiency while maintaining high segmentation quality. Full article
(This article belongs to the Section Medical Imaging)

25 pages, 2229 KB  
Article
MIRA-CAP: Memory-Integrated Retrieval-Augmented Captioning for State-of-the-Art Image and Video Captioning
by Sabina Umirzakova, Shakhnoza Muksimova, Sevara Mardieva, Murodjon Sultanov Baxtiyarovich and Young-Im Cho
Sensors 2024, 24(24), 8013; https://doi.org/10.3390/s24248013 - 15 Dec 2024
Cited by 16 | Viewed by 1923
Abstract
Generating accurate and contextually rich captions for images and videos is essential for various applications, from assistive technology to content recommendation. However, challenges such as maintaining temporal coherence in videos, reducing noise in large-scale datasets, and enabling real-time captioning remain significant. We introduce MIRA-CAP (Memory-Integrated Retrieval-Augmented Captioning), a novel framework designed to address these issues through three core innovations: a cross-modal memory bank, adaptive dataset pruning, and a streaming decoder. The cross-modal memory bank retrieves relevant context from prior frames, enhancing temporal consistency and narrative flow. The adaptive pruning mechanism filters noisy data, which improves alignment and generalization. The streaming decoder allows for real-time captioning by generating captions incrementally, without requiring access to the full video sequence. Evaluated across standard datasets like MS COCO, YouCook2, ActivityNet, and Flickr30k, MIRA-CAP achieves state-of-the-art results, with high scores on CIDEr, SPICE, and Polos metrics, underscoring its alignment with human judgment and its effectiveness in handling complex visual and temporal structures. This work demonstrates that MIRA-CAP offers a robust, scalable solution for both static and dynamic captioning tasks, advancing the capabilities of vision–language models in real-world applications. Full article
(This article belongs to the Section Sensing and Imaging)

27 pages, 14999 KB  
Article
Lightweight Implementation of the Signal Enhancement Model for Early Wood-Boring Pest Monitoring
by Juhu Li, Xue Li, Mengwei Ju, Xuejing Zhao, Yincheng Wang and Feng Yang
Forests 2024, 15(11), 1903; https://doi.org/10.3390/f15111903 - 29 Oct 2024
Viewed by 1038
Abstract
Wood-boring pests are among the most destructive forest pests, and their early detection is extremely difficult because the larvae live inside tree trunks and are effectively invisible. Borehole listening is a new and effective way to detect pest larvae: it identifies infested trees by analyzing wood-boring vibration signals. However, the collected vibration signals are often disturbed by the various noises present in the field environment, which reduces the accuracy of pest detection, so the noise must be filtered out and the wood-boring vibration signals enhanced before pests can be identified. Current signal enhancement models are all built on deep learning models that are large in scale, heavy in parameters, demanding in storage, computationally expensive, and slow. They typically run on resource-rich computers or servers and are hard to deploy in resource-limited field environments for real-time pest monitoring, which limits their practical use. This study therefore designs and implements two model lightweighting algorithms: a mask-based pre-training pruning algorithm, and a knowledge distillation algorithm that transfers vibration-signal knowledge and noise-signal knowledge separately. We apply both algorithms to T-CENV, a signal enhancement model with strong performance, and conduct a series of ablation experiments. The results show that the proposed methods effectively reduce the size of the T-CENV model, making it practical to deploy signal enhancement models on embedded devices, improving the model's usability, and helping to realize real-time monitoring of wood-boring pest larvae. Full article
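The distillation half can be grounded with the standard temperature-scaled soft-target loss. This shows only the base mechanism (Hinton-style KD); the paper's separate transfer of vibration-signal and noise-signal knowledge is a more elaborate variant not reproduced here.

```python
# Generic knowledge-distillation loss: KL divergence between the teacher's
# and student's temperature-softened output distributions, scaled by T^2.
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

teacher = [2.0, 0.5, -1.0]
aligned = [2.0, 0.5, -1.0]   # student matches the teacher exactly
off     = [-1.0, 0.5, 2.0]   # student disagrees
zero = kd_loss(aligned, teacher)
gap = kd_loss(off, teacher)
```

The loss is zero when the small student reproduces the teacher's output distribution, so minimizing it transfers the large model's behavior into the compact one.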
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning Applications in Forestry)

14 pages, 4157 KB  
Article
D-Band 4.6 km 2 × 2 MIMO Photonic-Assisted Terahertz Wireless Communication Utilizing Iterative Pruning Deep Neural Network-Based Nonlinear Equalization
by Jingwen Lin, Sicong Xu, Qihang Wang, Jie Zhang, Jingtao Ge, Siqi Wang, Zhihang Ou, Yuan Ma, Wen Zhou and Jianjun Yu
Photonics 2024, 11(11), 1009; https://doi.org/10.3390/photonics11111009 - 26 Oct 2024
Cited by 2 | Viewed by 1398
Abstract
In this paper, we explore the enhancement of a 4.6 km dual-polarization 2 × 2 MIMO D-band photonic-assisted terahertz communication system using iterative pruning-based deep neural network (DNN) nonlinear equalization techniques. The system employs advanced digital signal processing (DSP) methods, including down-conversion, resampling, matched filtering, and various equalization algorithms to combat signal distortions. We demonstrate the effectiveness of DNN and iterative pruning techniques in significantly reducing bit error rates (BERs) across a range of symbol rates (10 Gbaud to 30 Gbaud) and polarization states (vertical and horizontal). Before pruning, at 10 GBaud transmission, the lowest BER was 0.0362, and at 30 GBaud transmission, the lowest BER was 0.1826, both of which did not meet the 20% soft-decision forward error correction (SD-FEC) threshold. After pruning, the BER at different transmission rates was reduced to below the hard decision forward error correction (HD-FEC) threshold, indicating a substantial improvement in signal quality. Additionally, the pruning process contributed to a decrease in network complexity, with a maximum reduction of 85.9% for 10 GBaud signals and 63.0% for 30 GBaud signals. These findings indicate the potential of DNN and pruning techniques to enhance the performance and efficiency of terahertz communication systems, providing valuable insights for future high-capacity, long-distance wireless networks. Full article
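Iterative pruning in its generic form gradually raises sparsity over several passes rather than cutting once. The schedule below is a sketch under assumed parameters; a real pipeline (including this one) retrains the equalizer between steps.

```python
# Gradual magnitude pruning: at each step, zero the smallest-magnitude
# weights until the target sparsity for that step is reached.
import numpy as np

def prune_step(w, sparsity):
    k = int(round(sparsity * w.size))
    if k == 0:
        return w
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def iterative_prune(w, final_sparsity=0.8, steps=4):
    for i in range(1, steps + 1):
        w = prune_step(w, final_sparsity * i / steps)
        # ... a real pipeline would fine-tune / retrain here ...
    return w

rng = np.random.default_rng(1)
w = rng.normal(size=(10, 10))
w_sparse = iterative_prune(w)
sparsity = float((w_sparse == 0).mean())
```

Reaching high sparsity in small increments, with retraining in between, is what lets the pruned DNN equalizer keep its BER performance while shedding most of its complexity.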
(This article belongs to the Special Issue New Advances in Optical Wireless Communication)

38 pages, 6505 KB  
Review
A Survey of Computer Vision Detection, Visual SLAM Algorithms, and Their Applications in Energy-Efficient Autonomous Systems
by Lu Chen, Gun Li, Weisi Xie, Jie Tan, Yang Li, Junfeng Pu, Lizhu Chen, Decheng Gan and Weimin Shi
Energies 2024, 17(20), 5177; https://doi.org/10.3390/en17205177 - 17 Oct 2024
Cited by 8 | Viewed by 3877
Abstract
Within the area of environmental perception, automatic navigation, object detection, and computer vision are crucial and demanding fields with many applications in modern industries, such as multi-target long-term visual tracking in automated production, defect detection, and driverless robotic vehicles. The performance of computer vision has greatly improved recently thanks to developments in deep learning algorithms and hardware computing capabilities, which have spawned the creation of a large number of related applications. At the same time, with the rapid increase in autonomous systems in the market, energy consumption has become an increasingly critical issue in computer vision and SLAM (Simultaneous Localization and Mapping) algorithms. This paper presents the results of a detailed review of over 100 papers published over the course of two decades (1999–2024), with a primary focus on the technical advancement in computer vision. To elucidate the foundational principles, an examination of typical visual algorithms based on traditional correlation filtering was initially conducted. Subsequently, a comprehensive overview of the state-of-the-art advancements in deep learning-based computer vision techniques was compiled. Furthermore, a comparative analysis of conventional and novel algorithms was undertaken to discuss the future trends and directions of computer vision. Lastly, the feasibility of employing visual SLAM algorithms in the context of autonomous vehicles was explored. Additionally, in the context of intelligent robots for low-carbon, unmanned factories, we discussed model optimization techniques such as pruning and quantization, highlighting their importance in enhancing energy efficiency. We conducted a comprehensive comparison of the performance and energy consumption of various computer vision algorithms, with a detailed exploration of how to balance these factors and a discussion of potential future development trends. Full article
(This article belongs to the Section K: State-of-the-Art Energy Related Technologies)

24 pages, 526 KB  
Article
A Petri Net-Based Algorithm for Solving the One-Dimensional Cutting Stock Problem
by Irving Barragan-Vite, Joselito Medina-Marin, Norberto Hernandez-Romero and Gustavo Erick Anaya-Fuentes
Appl. Sci. 2024, 14(18), 8172; https://doi.org/10.3390/app14188172 - 11 Sep 2024
Cited by 2 | Viewed by 1383
Abstract
This paper addresses the one-dimensional cutting stock problem, focusing on minimizing total stock usage. Most procedures for this problem rely on linear programming, heuristics, metaheuristics, or hybrids of these, and they face drawbacks such as handling only low-complexity instances or requiring extensive parameter tuning. To address these limitations, we develop a Petri-net model to construct cutting patterns. Using the filtered beam search algorithm, the reachability tree of the Petri net is built level by level from its root node to find the best solution, pruning nodes that worsen the solution as the search progresses through the tree. Our algorithm is compared with the Least Lost Algorithm and the Generate and Solve algorithm on five datasets of instances; these algorithms share some characteristics with ours and have proven effective and efficient. Experimental results demonstrate that our algorithm finds optimal and near-optimal solutions for both low- and high-complexity instances, confirming that Petri nets are suitable for modeling and solving the one-dimensional cutting stock problem. Full article
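The filtered-beam-search idea can be shown on a toy instance, with the Petri-net state encoding abstracted away: each level assigns the next piece, and only the few best partial solutions (fewest bars, then least waste) survive to the next level. This is a didactic sketch, not the paper's algorithm.

```python
# Toy filtered beam search for 1-D cutting stock: states are partial
# assignments of pieces to stock bars; weak states are pruned each level.
def beam_search_cutting(pieces, stock_len, beam=3):
    states = [((), ())]  # (remaining capacity per open bar, assignment)
    for p in pieces:
        nxt = []
        for caps, asg in states:
            # place the piece in any bar that fits, or open a new bar
            for i, c in enumerate(caps):
                if c >= p:
                    new = list(caps)
                    new[i] = c - p
                    nxt.append((tuple(new), asg + (i,)))
            nxt.append((caps + (stock_len - p,), asg + (len(caps),)))
        # filter step: keep the states with fewest bars, then least waste
        nxt.sort(key=lambda s: (len(s[0]), sum(s[0])))
        states = nxt[:beam]
    caps, assignment = states[0]
    return len(caps), assignment

pieces = [6, 5, 4, 3, 2]          # total length 20
bars, asg = beam_search_cutting(pieces, stock_len=10)
```

On this instance the beam finds the optimal two-bar packing (6+4 and 5+3+2) while expanding only a handful of nodes per level, which is the trade-off beam filtering buys over a full reachability-tree search.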
(This article belongs to the Section Computing and Artificial Intelligence)
