Search Results (606)

Search Parameters:
Keywords = binary segmentation

15 pages, 2574 KB  
Article
Self-Supervised Representation Learning for UK Power Grid Frequency Disturbance Detection Using TC-TSS
by Maitreyee Dey and Soumya Prakash Rana
Energies 2025, 18(21), 5611; https://doi.org/10.3390/en18215611 - 25 Oct 2025
Abstract
This study presents a self-supervised learning framework for detecting frequency disturbances in power systems using high-resolution time series data. Employing data from the UK National Grid, we apply the Temporal Contrastive Self-Supervised Learning (TC-TSS) approach to learn task-agnostic embeddings from unlabelled 60-s rolling window segments of frequency measurements. The learned representations are then used to train four traditional classifiers, Logistic Regression (LR), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Random Forest (RF), for binary classification of frequency stability events. The proposed method is evaluated using over 15 million data points spanning six months of system operation data. Results show that classifiers trained on TC-TSS embeddings performed better than those using raw input features, particularly in detecting rare disturbance events. ROC-AUC scores for MLP and SVM models reach as high as 0.98, indicating excellent separability in the latent space. Visualisations using UMAP and t-SNE further demonstrate the clustering quality of TC-TSS features. This study highlights the effectiveness of contrastive representation learning in the energy domain, particularly under conditions of limited labelled data, and proves its suitability for integration into real-time smart grid applications.
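The 60-s rolling-window segmentation this abstract describes can be sketched as follows. This is a minimal illustration under assumed conditions (uniformly sampled frequency readings); `rolling_windows` and the window/stride values are hypothetical names, not the authors' code:

```python
def rolling_windows(samples, window_len, step):
    """Split a 1-D sequence of frequency samples into overlapping
    rolling-window segments (window_len samples, advancing by step)."""
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]

# Toy example: 10 frequency samples, windows of 4 samples, stride 2
freq = [50.0, 50.01, 49.99, 50.02, 49.98, 50.0, 50.03, 49.97, 50.01, 50.0]
segments = rolling_windows(freq, window_len=4, step=2)
```

Each segment would then be fed to the contrastive encoder; the downstream classifiers see only the learned embeddings, not the raw windows.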

26 pages, 1737 KB  
Article
ECG-CBA: An End-to-End Deep Learning Model for ECG Anomaly Detection Using CNN, Bi-LSTM, and Attention Mechanism
by Khalid Ammar, Salam Fraihat, Ghazi Al-Naymat and Yousef Sanjalawe
Algorithms 2025, 18(11), 674; https://doi.org/10.3390/a18110674 - 22 Oct 2025
Abstract
The electrocardiogram (ECG) is a vital diagnostic tool used to monitor heart activity and detect cardiac abnormalities, such as arrhythmias. Accurate classification of normal and abnormal heartbeats is essential for effective diagnosis and treatment. Traditional deep learning methods for automated ECG classification primarily focus on reconstructing the original ECG signal and detecting anomalies based on reconstruction errors, which represent abnormal features. However, these approaches struggle with unseen or underrepresented abnormalities in the training data. In addition, other methods rely on manual feature extraction, which can introduce bias and limit their adaptability to new datasets. To overcome this problem, this study proposes an end-to-end model called ECG-CBA, which integrates convolutional neural networks (CNNs), bidirectional long short-term memory networks (Bi-LSTM), and a multi-head attention mechanism. The ECG-CBA model learns discriminative features directly from the original dataset rather than relying on feature extraction or signal reconstruction. This enables higher accuracy and reliability in detecting and classifying anomalies. The CNN extracts local spatial features from raw ECG signals, while the Bi-LSTM captures the temporal dependencies in sequential data. An attention mechanism enables the model to focus primarily on critical segments of the ECG, thereby improving classification performance. The proposed model is trained on normal and abnormal ECG signals for binary classification. The ECG-CBA model demonstrates strong performance on the ECG5000 and MIT-BIH datasets, achieving accuracies of 99.60% and 98.80%, respectively. The model surpasses traditional methods across key metrics, including sensitivity, specificity, and overall classification accuracy. This offers a robust and interpretable solution for both ECG-based anomaly detection and cardiac abnormality classification.
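The attention step that lets such a model focus on critical ECG segments reduces to softmax-weighted mixing of value vectors. A single-head, pure-Python sketch (illustrative only; the paper's model uses learned projections and multiple heads):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(queries, keys, values):
    """Each output is a softmax-weighted mix of value vectors, so
    positions whose keys match the query dominate the result."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query aligned with the first key pulls the output toward the first value
out = scaled_dot_product_attention(
    queries=[[1.0, 0.0]],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[1.0, 0.0], [0.0, 1.0]])
```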

22 pages, 7404 KB  
Article
EDAT-BBH: An Energy-Modulated Transformer with Dual-Energy Attention Masks for Binary Black Hole Signal Classification
by Osman Tayfun Bişkin
Electronics 2025, 14(20), 4098; https://doi.org/10.3390/electronics14204098 - 19 Oct 2025
Abstract
Gravitational-wave (GW) detection has become a significant area of research following the first successful observation by the Laser Interferometer Gravitational-Wave Observatory (LIGO). The detection of signals emerging from binary black hole (BBH) mergers is challenging due to the presence of non-Gaussian and non-stationary noise in observational data. Traditional matched filtering techniques for detecting BBH mergers are computationally expensive and may not generalize well to unexpected GW events. As a result, deep learning-based methods have emerged as powerful alternatives for robust GW signal detection. In this study, we propose a novel Transformer-based architecture that introduces energy-aware modulation into the attention mechanism through dual-energy attention masks. In the proposed framework, the Q-transform and discrete wavelet transform (DWT) are employed to extract time–frequency energy representations from gravitational-wave signals, which are fused into energy masks that dynamically guide the Transformer encoder. In parallel, the raw one-dimensional signal is used directly as input and segmented into temporal patches, which enables the model to leverage both learned representations and physically grounded priors. The proposed architecture allows the model to focus on energy-rich and informative regions of the signal, enhancing its robustness under realistic noise conditions. Experimental results on BBH datasets embedded in real LIGO noise show that EDAT-BBH outperforms CNN-based and standard Transformer-based approaches, achieving an accuracy of 0.9953, a recall of 0.9950, an F1-score of 0.9953, and an AUC of 0.9999. These findings demonstrate the effectiveness of energy-modulated attention in improving both the interpretability and performance of deep learning models for gravitational-wave signal classification.
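One way an energy-derived attention mask might be formed (an assumption for illustration, not the paper's implementation) is to compute per-patch signal energy and normalise it, so that attention is biased toward energy-rich patches of the strain series:

```python
def patch_energy_mask(signal, patch_len):
    """Split a 1-D signal into non-overlapping patches and return a
    per-patch mask in [0, 1] proportional to patch energy (sum of
    squares), biasing attention toward energy-rich regions."""
    patches = [signal[i:i + patch_len]
               for i in range(0, len(signal), patch_len)]
    energies = [sum(x * x for x in p) for p in patches]
    peak = max(energies) or 1.0  # avoid division by zero on silence
    return [e / peak for e in energies]

# A quiet stretch followed by a burst: the burst patch gets weight 1.0
sig = [0.0, 0.1, -0.1, 0.0, 2.0, -2.0, 1.5, -1.0]
mask = patch_energy_mask(sig, patch_len=4)
```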

16 pages, 2401 KB  
Article
Thermal Rectification in One-Dimensional Atomic Chains with Mass Asymmetry and Nonlinear Interactions
by Arseny M. Kazakov, Elvir Z. Karimov, Galiia F. Korznikova and Elena A. Korznikova
Computation 2025, 13(10), 243; https://doi.org/10.3390/computation13100243 - 17 Oct 2025
Abstract
Understanding and controlling thermal rectification is pivotal for designing phononic devices that guide heat flow in a preferential direction. This study investigates one-dimensional atomic chains with binary mass asymmetry and nonlinear interatomic potentials, focusing on how energy propagates under thermal and wave excitation. Two potential models—the β-FPU and Morse potentials—were employed to examine the role of nonlinearity and bond softness in energy transport. Simulations reveal strong directional energy transport governed by the interplay of mass distribution, nonlinearity, and excitation type. In FPU chains, pronounced rectification occurs: under “cold-heavy” conditions, energy in the left segment increases from ~1% to over 63%, while reverse (“hot-heavy”) cases show less than 4% net transfer. For wave-driven excitation, the rectification coefficient reaches ~0.58 at 100:1. In contrast, Morse-based systems exhibit weaker rectification (∆E < 1%) and structural instabilities at high asymmetry due to bond breaking. A comprehensive summary and heatmap visualization highlight how system parameters govern rectification efficiency. These findings provide mechanistic insights into nonreciprocal energy transport in nonlinear lattices and offer design principles for nanoscale thermal management strategies based on controlled asymmetry and potential engineering.
(This article belongs to the Section Computational Chemistry)
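The rectification coefficient quoted in the abstract above can be computed under one common convention, R = (J+ − J−)/(J+ + J−), where J+ and J− are forward and reverse heat-flux magnitudes. Conventions vary across the literature, so this sketch is illustrative rather than the authors' exact definition:

```python
def rectification_coefficient(j_forward, j_reverse):
    """One common convention for thermal rectification:
    R = (J+ - J-) / (J+ + J-).  R = 0 means no rectification;
    R approaching 1 means nearly one-way heat transport."""
    jf, jr = abs(j_forward), abs(j_reverse)
    return (jf - jr) / (jf + jr)
```

Under this convention, a forward flux three times the reverse flux gives R = 0.5, and swapping directions flips the sign.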

16 pages, 2334 KB  
Article
A Comprehensive Image Quality Evaluation of Image Fusion Techniques Using X-Ray Images for Detonator Detection Tasks
by Lynda Oulhissane, Mostefa Merah, Simona Moldovanu and Luminita Moraru
Appl. Sci. 2025, 15(20), 10987; https://doi.org/10.3390/app152010987 - 13 Oct 2025
Abstract
Purpose: Luggage X-rays suffer from low contrast, material overlap, and noise; dual-energy imaging reduces ambiguity but creates colour biases that impair segmentation. This study aimed to (1) employ connotative fusion by embedding realistic detonator patches into real X-rays to simulate threats and enhance unattended detection without requiring ground-truth labels; (2) thoroughly evaluate fusion techniques in terms of balancing image quality, information content, contrast, and the preservation of meaningful features. Methods: A total of 1000 X-ray luggage images and 150 detonator images were used for fusion experiments based on deep learning, transform-based, and feature-driven methods. The proposed approach does not need ground truth supervision. Deep learning fusion techniques, including VGG, FusionNet, and AttentionFuse, enable the dynamic selection and combination of features from multiple input images. The transform-based fusion methods convert input images into different domains using mathematical transforms to enhance fine structures. The Nonsubsampled Contourlet Transform (NSCT), Curvelet Transform, and Laplacian Pyramid (LP) are employed. Feature-driven image fusion methods combine meaningful representations for easier interpretation. Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Random Forest (RF), and Local Binary Pattern (LBP) are used to capture and compare texture details across source images. Entropy (EN), Standard Deviation (SD), and Average Gradient (AG) assess factors such as spatial resolution, contrast preservation, and information retention and are used to evaluate the performance of the analysed methods. Results: The results highlight the strengths and limitations of the evaluated techniques, demonstrating their effectiveness in producing sharpened fused X-ray images with clearly emphasized targets and enhanced structural details. Conclusions: The Laplacian Pyramid fusion method emerges as the most versatile choice for applications demanding a balanced trade-off. This is evidenced by its overall multi-criteria balance, supported by a composite (geometric mean) score on normalised metrics. It consistently achieves high performance across all evaluated metrics, making it reliable for detecting concealed threats under diverse imaging conditions.
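Two of the evaluation quantities mentioned above, Shannon entropy (EN) and the composite geometric-mean score, are simple to sketch. The normalisation scheme here is assumed for illustration; the paper's exact normalisation is not given in the listing:

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (EN) of a grayscale image, in bits: higher
    values indicate more information content in the fused result."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def composite_score(normalised_metrics):
    """Geometric mean of metrics pre-normalised to (0, 1]: one
    multi-criteria score, as used to rank the fusion methods."""
    product = math.prod(normalised_metrics)
    return product ** (1.0 / len(normalised_metrics))
```

A uniform four-level image has EN = 2 bits, while a constant image carries no information (EN = 0).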

15 pages, 8859 KB  
Article
A Hybrid Estimation Model for Graphite Nodularity of Ductile Cast Iron Based on Multi-Source Feature Extraction
by Yongjian Yang, Yanhui Liu, Yuqian He, Zengren Pan and Zhiwei Li
Modelling 2025, 6(4), 126; https://doi.org/10.3390/modelling6040126 - 13 Oct 2025
Abstract
Graphite nodularity is a key indicator for evaluating the microstructure quality of ductile iron and plays a crucial role in ensuring product quality and enhancing manufacturing efficiency. Existing research often focuses on only a single type of feature and fails to utilize multi-source information in a coordinated manner. Single-feature methods struggle to comprehensively capture microstructures, which limits the accuracy and robustness of the model. This study proposes a hybrid estimation model for the graphite nodularity of ductile cast iron based on multi-source feature extraction. A comprehensive feature engineering pipeline was established, incorporating geometric, color, and texture features extracted via Hue-Saturation-Value color space (HSV) histograms, gray level co-occurrence matrix (GLCM), Local Binary Pattern (LBP), and multi-scale Gabor filters. Dimensionality reduction was performed using Principal Component Analysis (PCA) to mitigate redundancy. An improved watershed algorithm combined with intelligent filtering was used for accurate particle segmentation. Several machine learning algorithms, including Support Vector Regression (SVR), Multi-Layer Perceptron (MLP), Random Forest (RF), Gradient Boosting Regressor (GBR), eXtreme Gradient Boosting (XGBoost) and Categorical Boosting (CatBoost), are applied to estimate graphite nodularity based on geometric features (GFs) and feature extraction. Experimental results demonstrate that the CatBoost model trained on fused features achieves high estimation accuracy and stability for geometric parameters, with R-squared (R2) exceeding 0.98. Furthermore, introducing geometric features into the fusion set enhances model generalization and suppresses overfitting. This framework offers an efficient and robust approach for intelligent analysis of metallographic images and provides valuable support for automated quality assessment in casting production.
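A simplified, area-weighted notion of graphite nodularity can be sketched from per-particle geometry. The roundness measure 4πA/P², the 0.6 threshold, and the area weighting here are illustrative assumptions, not the paper's exact criteria (production standards use more detailed rules):

```python
import math

def roundness(area, perimeter):
    """Shape roundness 4*pi*A / P**2: exactly 1.0 for a circle,
    smaller for elongated or irregular graphite particles."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def nodularity(particles, threshold=0.6):
    """Simplified area-weighted nodularity: fraction of total graphite
    area in particles whose roundness exceeds the threshold.
    particles: list of (area, perimeter) pairs from segmentation."""
    total = sum(a for a, _ in particles)
    nodular = sum(a for a, p in particles if roundness(a, p) >= threshold)
    return nodular / total if total else 0.0
```

With one unit circle (area π, perimeter 2π) and one ragged particle (area 10, perimeter 40), only the circle counts as nodular, giving a nodularity of π/(π + 10).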

29 pages, 2868 KB  
Article
224-CPSK–CSS–WCDMA FPGA-Based Reconfigurable Chaotic Modulation for Multiuser Communications in the 2.45 GHz Band
by Jose-Cruz Nuñez-Perez, Miguel-Angel Estudillo-Valdez, José-Ricardo Cárdenas-Valdez, Gabriela-Elizabeth Martinez-Mendivil and Yuma Sandoval-Ibarra
Electronics 2025, 14(20), 3995; https://doi.org/10.3390/electronics14203995 - 12 Oct 2025
Abstract
This article presents an innovative chaotic communication scheme that integrates the multiuser access technique known as Wideband Code Division Multiple Access (W-CDMA) with the chaos-based selective strategy Chaos-Based Selective Symbol (CSS) and the unconventional modulation Chaos Parameter Shift Keying (CPSK). The system is designed to operate in the 2.45 GHz band and provides a robust and efficient alternative to conventional schemes such as Quadrature Amplitude Modulation (QAM). The proposed CPSK modulation enables the encoding of information for multiple users by regulating the 36 parameters of a Reconfigurable Chaotic Oscillator (RCO), theoretically allowing the simultaneous transmission of up to 224 independent users over the same channel. The CSS technique encodes each user’s information using a unique chaotic segment configuration generated by the RCO; this serves as a reference for binary symbol encoding. W-CDMA further supports the concurrent transmission of data from multiple users through orthogonal sequences, minimizing inter-user interference. The system was digitally implemented on the Artix-7 AC701 FPGA (XC7A200TFBG676-2) to evaluate logic-resource requirements, while RF validation was carried out using a ZedBoard FPGA equipped with an AD9361 transceiver. Experimental results demonstrate optimal performance in the 2.45 GHz band, confirming the effectiveness of the chaos-based W-CDMA approach as a multiuser access technique for high-spectral-density environments and its potential for use in 5G applications.
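The orthogonal-sequence mechanism W-CDMA relies on can be sketched with Walsh–Hadamard codes: spreading each user's ±1 bits with an orthogonal code lets the receiver recover them from the summed channel. This is a generic CDMA illustration, not the paper's chaotic 2.45 GHz implementation:

```python
def walsh_codes(n):
    """Generate 2**n orthogonal Walsh-Hadamard codes of length 2**n
    via the Sylvester recursion H_2k = [[H, H], [H, -H]]."""
    h = [[1]]
    for _ in range(n):
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

def spread(bits, code):
    """Spread +/-1 data bits with a user's code (chip-wise product)."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate the composite chip stream with one user's code:
    orthogonality cancels the other users' contributions."""
    n = len(code)
    return [1 if sum(ch * c for ch, c in zip(chips[i:i + n], code)) > 0
            else -1
            for i in range(0, len(chips), n)]

# Two users share the channel; each is recovered from the sum
codes = walsh_codes(2)
u1, u2 = codes[1], codes[2]
composite = [a + b for a, b in
             zip(spread([1, -1], u1), spread([-1, -1], u2))]
```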

20 pages, 1853 KB  
Article
Enhanced U-Net for Spleen Segmentation in CT Scans: Integrating Multi-Slice Context and Grad-CAM Interpretability
by Sowad Rahman, Md Azad Hossain Raju, Abdullah Evna Jafar, Muslima Akter, Israt Jahan Suma and Jia Uddin
BioMedInformatics 2025, 5(4), 56; https://doi.org/10.3390/biomedinformatics5040056 - 8 Oct 2025
Abstract
Accurate spleen segmentation in abdominal CT scans remains a critical challenge in medical image analysis due to variable morphology, low tissue contrast, and proximity to similar anatomical structures. This paper presents an enhanced U-Net architecture that addresses these challenges through multi-slice contextual integration and interpretable deep learning. Our approach incorporates three-channel inputs from adjacent CT slices, implements a hybrid loss function combining Dice and binary cross-entropy terms, and integrates Grad-CAM visualization for enhanced model interpretability. Comprehensive evaluation on the Medical Decathlon dataset demonstrates superior performance, with a Dice similarity coefficient of 0.923 ± 0.04, outperforming standard 2D approaches by 3.2%. The model exhibits robust performance across varying slice thicknesses, contrast phases, and pathological conditions. Grad-CAM analysis reveals focused attention on spleen–tissue interfaces and internal vascular structures, providing clinical insight into model decision-making. The system demonstrates practical applicability for automated splenic volumetry, trauma assessment, and surgical planning, with processing times suitable for clinical workflow integration.
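The hybrid loss combining Dice and binary cross-entropy terms can be written per-pixel as below. This is a minimal pure-Python sketch; the mixing weight `alpha` and smoothing `eps` are assumed values, not the authors' settings:

```python
import math

def hybrid_loss(pred, target, alpha=0.5, eps=1e-7):
    """Hybrid segmentation loss: alpha * Dice loss + (1 - alpha) * BCE,
    over flattened per-pixel probabilities (pred) and {0, 1} labels
    (target).  Dice handles class imbalance; BCE gives smooth
    per-pixel gradients."""
    inter = sum(p * t for p, t in zip(pred, target))
    dice = (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)
    dice_loss = 1.0 - dice
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / len(pred)
    return alpha * dice_loss + (1 - alpha) * bce
```

A perfect prediction drives the loss to (numerically) zero, while an inverted prediction is penalised by both terms.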

14 pages, 2414 KB  
Article
An Integrated Analytical and Extended Ponchon–Savarit Graphical Method for Determining Actual and Minimum Boil-Up Ratios in Binary Distillation
by Oualid Hamdaoui
Processes 2025, 13(10), 3031; https://doi.org/10.3390/pr13103031 - 23 Sep 2025
Abstract
A rigorous framework for determining actual and minimum boil-up ratios in binary distillation combining analytical mass and energy balances with an extended Ponchon–Savarit graphical approach was implemented. First, global balances across the enriching and stripping sections yield a closed-form expression of the boil-up ratio (VB) based on enthalpy differences. Second, the VB was directly determined from an enthalpy–composition diagram by measuring the enthalpy segments between the saturated liquid, vapor, and heat-duty points. Applying this method to high-stage columns confirms that the two methods converge on identical VB values. Based on these findings, a unified graphical methodology was developed to determine the minimum boil-up ratio (VBmin). VBmin can be determined on the same diagram by locating the intersections of the extremal tie lines in both the enriching and exhausting sections, analogous to the reflux-pinch points. This procedure was systematically validated across the five canonical feed thermal states. The implemented method is a graphical approach based on the Ponchon–Savarit technique, developed for binary systems.
(This article belongs to the Section Separation Processes)

25 pages, 12760 KB  
Article
Intelligent Face Recognition: Comprehensive Feature Extraction Methods for Holistic Face Analysis and Modalities
by Thoalfeqar G. Jarullah, Ahmad Saeed Mohammad, Musab T. S. Al-Kaltakchi and Jabir Alshehabi Al-Ani
Signals 2025, 6(3), 49; https://doi.org/10.3390/signals6030049 - 19 Sep 2025
Abstract
Face recognition technology utilizes unique facial features to analyze and compare individuals for identification and verification purposes. This technology is crucial for several reasons, such as improving security and authentication, effectively verifying identities, providing personalized user experiences, and automating various operations, including attendance monitoring, access management, and law enforcement activities. In this paper, comprehensive evaluations are conducted using different face detection and modality segmentation methods, feature extraction methods, and classifiers to improve system performance. As for face detection, four methods are proposed: OpenCV’s Haar Cascade classifier, Dlib’s HOG + SVM frontal face detector, Dlib’s CNN face detector, and Mediapipe’s face detector. Additionally, two types of feature extraction techniques are proposed: hand-crafted features (traditional methods: global and local features) and deep learning features. Three global features were extracted: Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Global Image Structure (GIST). Likewise, the following local feature methods are utilized: Local Binary Pattern (LBP), Weber local descriptor (WLD), and Histogram of Oriented Gradients (HOG). On the other hand, the deep learning-based features fall into two categories: convolutional neural networks (CNNs), including VGG16, VGG19, and VGG-Face, and Siamese neural networks (SNNs), which generate face embeddings. For classification, three methods are employed: Support Vector Machine (SVM), a one-class SVM variant, and Multilayer Perceptron (MLP). The system is evaluated on three datasets: in-house, Labelled Faces in the Wild (LFW), and the Pins dataset (sourced from Pinterest), providing comprehensive benchmark comparisons for facial recognition research. The best accuracy among the ten proposed feature extraction methods on the in-house database for the facial recognition task was 99.8%, achieved by the VGG16 model combined with the SVM classifier.
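Verification with face embeddings (such as those produced by the Siamese networks mentioned above) typically reduces to thresholded cosine similarity. A minimal sketch; the 0.8 threshold is hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_identity(emb_a, emb_b, threshold=0.8):
    """Verify a face pair: accept when embedding similarity clears
    the (hypothetical) decision threshold."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

Because cosine similarity ignores vector magnitude, embeddings that point the same way match even if one is scaled.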

20 pages, 55265 KB  
Article
Learning Precise Mask Representation for Siamese Visual Tracking
by Peng Yang, Fen Hu, Qinghui Wang and Lei Dou
Sensors 2025, 25(18), 5743; https://doi.org/10.3390/s25185743 - 15 Sep 2025
Abstract
Siamese network trackers are a prominent paradigm in visual object tracking due to efficient similarity learning. However, most Siamese trackers are restricted to the bounding box tracking format, which often fails to accurately describe the appearance of non-rigid targets with complex deformations. Additionally, since the bounding box frequently includes excessive background pixels, trackers are sensitive to similar distractors. To address these issues, we propose a novel segmentation-assisted model that learns binary mask representations of targets. This model is generic and can be seamlessly integrated into various Siamese frameworks, enabling pixel-wise segmentation tracking instead of the suboptimal bounding box tracking. Specifically, our model features two core components: (i) a multi-stage precise mask representation module composed of cascaded U-Net decoders, designed to predict segmentation masks of targets, and (ii) a saliency localization head based on the Euclidean model, which extracts spatial position constraints to boost the decoder’s discriminative capability. Extensive experiments on five tracking benchmarks demonstrate that our method effectively improves the performance of both anchor-based and anchor-free Siamese trackers. Notably, on GOT-10k, our method increases the AO scores of the baseline trackers SiamRPN++ (anchor-based) and SiamBAN (anchor-free) by 5.2% and 7.5%, respectively, while maintaining speeds exceeding 60 FPS.
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing: 2nd Edition)

19 pages, 8261 KB  
Article
Oil Spill Identification with Marine Radar Using Feature Augmentation and Improved Firefly Optimization Algorithm
by Jin Xu, Boxi Yao, Haihui Dong, Zekun Guo, Bo Xu, Yuanyuan Huang, Bo Li, Sihan Qian and Bingxin Liu
Remote Sens. 2025, 17(18), 3148; https://doi.org/10.3390/rs17183148 - 10 Sep 2025
Abstract
Oil spill accidents pose a grave threat to marine ecosystems, the human economy, and public health. Consequently, expeditious and efficacious oil spill detection technology is imperative for pollution mitigation and health preservation in the marine environment. This study proposed a marine radar oil spill detection method based on Local Binary Patterns (LBP), Histogram of Oriented Gradient (HOG), and an improved Firefly Optimization Algorithm (IFA). In the image pre-processing stage, the oil film features were significantly enhanced through three steps. The LBP features were extracted from the preprocessed image. Then, mean filtering was used to smooth the LBP features. Subsequently, the HOG statistical features were extracted from the filtered LBP feature map. After the feature enhancement, the oil spill regions were accurately extracted using the K-Means clustering algorithm. Next, an IFA model was used to classify oil films. Compared with the traditional Firefly Algorithm (FA), the IFA method is better suited to oil film segmentation tasks in marine radar data. The proposed method achieves accurate segmentation and provides a new technical path for marine oil spill monitoring.

12 pages, 4871 KB  
Article
Construction and Segmental Reconstitution of Full-Length Infectious Clones of Milk Vetch Dwarf Virus
by Aamir Lal, Muhammad Amir Qureshi, Man-Cheol Son, Sukchan Lee and Eui-Joon Kil
Viruses 2025, 17(9), 1213; https://doi.org/10.3390/v17091213 - 5 Sep 2025
Abstract
The construction of infectious clones (ICs) is essential for studying viral replication, pathogenesis, and host interactions. Milk vetch dwarf virus (MDV), a nanovirus with a multipartite, single-stranded DNA genome, presents unique challenges for IC development due to its segmented genome organization. To enable functional analysis of its genome, we constructed full-length tandem-dimer-based ICs for all eight MDV genomic segments. Each segment was cloned into a binary vector and co-delivered into Nicotiana benthamiana, Nicotiana tabacum, Vicia faba, and Vigna unguiculata plants via Agrobacterium-mediated inoculation. Systemic infection was successfully reconstituted in all host plants, with PCR-based detection confirming the presence of all viral segments in the infected leaves of nearly all tested plants. Segmental accumulation in infected plants was quantified using qPCR, revealing non-equimolar distribution across hosts. This study establishes the first complete IC system for MDV, enabling reproducible infection, replication analysis, and quantitative segment profiling. It provides a foundational tool for future molecular investigations into MDV replication, host interactions, and viral movement, advancing our understanding of nanovirus biology and transmission dynamics.
(This article belongs to the Special Issue Application of Genetically Engineered Plant Viruses)

37 pages, 12368 KB  
Article
Machine Learning-Based Analysis of Optical Coherence Tomography Angiography Images for Age-Related Macular Degeneration
by Abdullah Alfahaid, Tim Morris, Tim Cootes, Pearse A. Keane, Hagar Khalid, Nikolas Pontikos, Fatemah Alharbi, Easa Alalwany, Abdulqader M. Almars, Amjad Aldweesh, Abdullah G. M. ALMansour, Panagiotis I. Sergouniotis and Konstantinos Balaskas
Biomedicines 2025, 13(9), 2152; https://doi.org/10.3390/biomedicines13092152 - 5 Sep 2025
Abstract
Background/Objectives: Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly. Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that enables detailed visualisation of retinal vascular layers. However, clinical assessment of OCTA images is often challenging due to high data volume, pattern variability, and subtle abnormalities. This study aimed to develop automated algorithms to detect and quantify AMD in OCTA images, thereby reducing ophthalmologists’ workload and enhancing diagnostic accuracy. Methods: Two texture-based algorithms were developed to classify OCTA images without relying on segmentation. The first algorithm used whole local texture features, while the second applied principal component analysis (PCA) to decorrelate and reduce texture features. Local texture descriptors, including rotation-invariant uniform local binary patterns (LBP2riu), local binary patterns (LBP), and binary robust independent elementary features (BRIEF), were combined with machine learning classifiers such as support vector machine (SVM) and K-nearest neighbour (KNN). OCTA datasets from Manchester Royal Eye Hospital and Moorfields Eye Hospital, covering healthy, dry AMD, and wet AMD eyes, were used for evaluation. Results: The first algorithm achieved a mean area under the receiver operating characteristic curve (AUC) of 1.00±0.00 for distinguishing healthy eyes from wet AMD. The second algorithm showed superior performance in differentiating dry AMD from wet AMD (AUC 0.85±0.02). Conclusions: The proposed algorithms demonstrate strong potential for rapid and accurate AMD diagnosis in OCTA workflows. By reducing manual image evaluation and associated variability, they may support improved clinical decision-making and patient care.
(This article belongs to the Section Molecular and Translational Medicine)
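The segmentation-free pipeline the abstract describes (local texture descriptors fed to a classifier) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the plain 8-neighbour LBP, the 256-bin histogram, and the toy images are all assumptions for the example, and the rotation-invariant uniform (riu2) mapping and classifier stage are left out.

```python
# Minimal sketch of an LBP-based texture feature, as used (in more
# sophisticated form) by segmentation-free OCTA classifiers.

def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c).

    Each neighbour at least as bright as the centre contributes one bit.
    """
    centre = img[r][c]
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes over the image interior.

    The histogram is the texture feature vector handed to a classifier
    (e.g. SVM or KNN); a riu2 mapping would first merge rotated patterns.
    """
    hist = [0] * 256
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            hist[lbp_code(img, r, c)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

On a flat patch every neighbour equals the centre, so all bits are set and the histogram concentrates in bin 255; a vertical intensity gradient instead sets only the bits of the brighter half, which is exactly the local structure the descriptor encodes.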
43 pages, 1021 KB  
Review
A Survey of Cross-Layer Security for Resource-Constrained IoT Devices
by Mamyr Altaibek, Aliya Issainova, Tolegen Aidynov, Daniyar Kuttymbek, Gulsipat Abisheva and Assel Nurusheva
Appl. Sci. 2025, 15(17), 9691; https://doi.org/10.3390/app15179691 - 3 Sep 2025
Viewed by 1209
Abstract
Low-power microcontrollers, wireless sensors, and embedded gateways form the backbone of many Internet of Things (IoT) deployments. However, their limited memory, constrained energy budgets, and lack of standardized firmware make them attractive targets for diverse attacks, including bootloader backdoors, hardcoded keys, unpatched CVE [...] Read more.
Low-power microcontrollers, wireless sensors, and embedded gateways form the backbone of many Internet of Things (IoT) deployments. However, their limited memory, constrained energy budgets, and lack of standardized firmware make them attractive targets for diverse attacks, including bootloader backdoors, hardcoded keys, unpatched CVE exploits, and code-reuse attacks, while traditional single-layer defenses fall short because they often assume abundant resources. This paper presents a Systematic Literature Review (SLR), conducted according to the PRISMA 2020 guidelines, of 196 peer-reviewed studies on cross-layer security for resource-constrained IoT and Industrial IoT environments, and introduces a four-axis taxonomy (system level, algorithmic paradigm, data granularity, and hardware budget) to structure and compare prior work. At the firmware level, we examine static analysis, symbolic execution, and machine learning-based binary similarity detection that operate without source code or a full runtime. At the network and behavioral levels, we review lightweight and graph-based intrusion detection systems (IDS), including single-packet authorization, unsupervised anomaly detection, RF spectrum monitoring, and sensor–actuator anomaly analysis bridging cyber-physical security. At the policy level, we survey identity management, micro-segmentation, and zero-trust enforcement mechanisms supported by blockchain-based authentication and programmable policy enforcement points (PEPs). Our review identifies current strengths, limitations, and open challenges, including scalable firmware reverse engineering, efficient cross-ISA symbolic learning, and practical spectrum anomaly detection under constrained computing environments. By integrating diverse security layers within a unified taxonomy, this SLR highlights both the state of the art and promising research directions for advancing IoT security. Full article
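Single-packet authorization, one of the lightweight network-level mechanisms the survey covers, can be illustrated with a short sketch: a client sends a single authenticated datagram, and the gateway silently drops anything that fails verification or is stale. The packet layout, field sizes, and 30-second freshness window below are illustrative assumptions, not a protocol taken from the paper.

```python
import hmac
import hashlib
import struct

# Hypothetical SPA packet: 16-byte client ID + 8-byte timestamp,
# followed by a 32-byte HMAC-SHA256 tag over those 24 bytes.
PAYLOAD_FMT = "!16sQ"
PAYLOAD_LEN = struct.calcsize(PAYLOAD_FMT)  # 24 bytes

def make_spa_packet(key, client_id, timestamp):
    """Build an authorization datagram for a pre-shared key."""
    payload = struct.pack(PAYLOAD_FMT, client_id, timestamp)
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_spa_packet(key, packet, now, max_skew=30):
    """Accept only packets with a valid tag and a fresh timestamp."""
    payload, tag = packet[:PAYLOAD_LEN], packet[PAYLOAD_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):  # constant-time comparison
        return False
    _, ts = struct.unpack(PAYLOAD_FMT, payload)
    return abs(now - ts) <= max_skew
```

The appeal for constrained devices is that the sender needs only one HMAC per connection attempt and the gateway keeps no per-client state beyond the shared key, which is why SPA appears among the lightweight defenses in this class of surveys.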