Journal Description
Algorithms
Algorithms is a peer-reviewed, open access journal that provides an advanced forum for studies related to algorithms and their applications. Algorithms is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, MathSciNet and other databases.
- Journal Rank: CiteScore - Q2 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the second half of 2023).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.3 (2022)
5-Year Impact Factor: 2.2 (2022)
Latest Articles
Strategic Machine Learning Optimization for Cardiovascular Disease Prediction and High-Risk Patient Identification
Algorithms 2024, 17(5), 178; https://doi.org/10.3390/a17050178 - 26 Apr 2024
Abstract
Despite medical advancements in recent years, cardiovascular diseases (CVDs) remain a major factor in rising mortality rates, challenging predictions despite extensive expertise. The healthcare sector is poised to benefit significantly from harnessing massive data and the insights we can derive from it, underscoring the importance of integrating machine learning (ML) to improve CVD prevention strategies. In this study, we addressed the major issue of class imbalance in the Behavioral Risk Factor Surveillance System (BRFSS) 2021 heart disease dataset, including personal lifestyle factors, by exploring several resampling techniques, such as the Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), SMOTE-Tomek, and SMOTE-Edited Nearest Neighbor (SMOTE-ENN). Subsequently, we trained, tested, and evaluated multiple classifiers, including logistic regression (LR), decision trees (DTs), random forest (RF), gradient boosting (GB), XGBoost (XGB), CatBoost, and artificial neural networks (ANNs), comparing their performance with a primary focus on maximizing sensitivity for CVD risk prediction. Based on our findings, the hybrid resampling techniques outperformed the alternative sampling techniques, and our proposed implementation couples SMOTE-ENN with CatBoost optimized through Optuna, achieving a recall of 88% and an area under the receiver operating characteristic (ROC) curve (AUC) of 82%.
Full article
(This article belongs to the Collection Feature Papers in Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems)
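The resampling pipeline this abstract describes (SMOTE oversampling followed by ENN cleaning) can be sketched in plain NumPy. The toy data, neighbour counts, and function names below are illustrative assumptions, not the authors' implementation, which pairs the resampled data with an Optuna-tuned CatBoost classifier:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples by interpolating between
    each sample and one of its k nearest minority-class neighbours."""
    rng = rng or np.random.default_rng(0)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]          # skip the sample itself
        j = rng.choice(neighbors)
        gap = rng.random()                          # interpolation fraction
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synth)

def enn(X, y, k=3):
    """Edited Nearest Neighbours: drop samples whose label disagrees with
    the majority label of their k nearest neighbours (boundary cleaning)."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        votes = np.bincount(y[nn], minlength=2)
        if votes.argmax() == y[i]:
            keep.append(i)
    return X[keep], y[keep]

# Toy imbalanced data: 40 majority (class 0) vs 8 minority (class 1).
rng = np.random.default_rng(42)
X0 = rng.normal(0.0, 1.0, size=(40, 2))
X1 = rng.normal(3.0, 1.0, size=(8, 2))
X_min_new = smote(X1, n_new=32, rng=rng)            # balance the classes
X = np.vstack([X0, X1, X_min_new])
y = np.array([0] * 40 + [1] * (8 + 32))
X_clean, y_clean = enn(X, y)                        # remove boundary noise
print(len(X), "->", len(X_clean))
```

The cleaned, balanced set would then be fed to the classifier; in practice a library such as imbalanced-learn provides a tested `SMOTEENN` implementation.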
Open Access Article
Mission Planning of UAVs and UGV for Building Inspection in Rural Area
by Xiao Chen, Yu Wu and Shuting Xu
Algorithms 2024, 17(5), 177; https://doi.org/10.3390/a17050177 - 26 Apr 2024
Abstract
Unmanned aerial vehicles (UAVs) have become increasingly popular in the civil field, and building inspection is one of the most promising applications. In a rural area, the UAVs are assigned to inspect the surface of buildings, and an unmanned ground vehicle (UGV) is introduced to carry the UAVs to the rural area and also serve as a charging station. This paper focuses on the mission planning problem for UAV and UGV systems, with the goal of realizing an efficient inspection of buildings in a specific rural area. Firstly, the mission planning problem (MPP) involving UGVs and UAVs is described, and an optimization model is established with the objective of minimizing the total UAV operation time, fully considering the impact of UAV operation time and cruising capability. Subsequently, the locations of parking points are determined based on the information about task points. Finally, a hybrid ant colony optimization-genetic algorithm (ACO-GA) is designed to solve the problem. The update mechanism of the ACO is incorporated into the selection operation of the GA, and the GA is improved so that the defects of the GA easily falling into local optima and of the ACO having insufficient search ability are overcome. Simulation results demonstrate that the ACO-GA algorithm can obtain reasonable solutions for the MPP, and the search capability of the algorithm is enhanced, presenting significant advantages over the original GA and ACO.
Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
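The hybrid described above folds the ACO pheromone update into the GA's selection step. A minimal sketch of one way such a coupling can work, on a toy single-vehicle routing instance (the problem size, parameter values, and operators are illustrative assumptions, not the paper's model):

```python
import random

import numpy as np

rng = random.Random(1)
N = 8                                   # task points to visit
pts = [(rng.random(), rng.random()) for _ in range(N)]

def length(tour):
    """Total length of a closed tour over the task points."""
    return sum(np.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])
               for a, b in zip(tour, tour[1:] + tour[:1]))

tau = np.ones((N, N))                   # pheromone on each directed edge

def pheromone_score(tour):
    return sum(tau[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def select(pop):
    # ACO-flavoured selection: weight each tour by pheromone as well as fitness.
    w = [pheromone_score(t) / length(t) for t in pop]
    return pop[rng.choices(range(len(pop)), weights=w)[0]]

def crossover(p1, p2):
    # Order crossover (OX): copy a slice of p1, fill the rest in p2's order.
    i, j = sorted(rng.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(tour, p=0.2):
    if rng.random() < p:                # occasional swap keeps diversity
        i, j = rng.sample(range(N), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

pop = [rng.sample(range(N), N) for _ in range(30)]
for gen in range(60):
    best = min(pop, key=length)
    tau *= 0.9                          # pheromone evaporation
    for a, b in zip(best, best[1:] + best[:1]):
        tau[a][b] += 1.0 / length(best) # deposit along the best tour
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(30)]
    pop.append(best)                    # elitism
print(round(length(min(pop, key=length)), 3))
```

The pheromone-weighted `select` is where the two metaheuristics meet: good edges reinforce selection pressure without the GA losing its population-based search.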
Open Access Review
A Survey of the Applications of Text Mining for the Food Domain
by Shufeng Xiong, Wenjie Tian, Haiping Si, Guipei Zhang and Lei Shi
Algorithms 2024, 17(5), 176; https://doi.org/10.3390/a17050176 - 25 Apr 2024
Abstract
In the food domain, text mining techniques are extensively employed to derive valuable insights from large volumes of text data, facilitating applications such as aiding food recalls, offering personalized recipes, and reinforcing food safety regulation. To provide researchers and practitioners with a comprehensive understanding of the latest technology and application scenarios of text mining in the food domain, the pertinent literature is reviewed and analyzed. Initially, the fundamental concepts, principles, and primary tasks of text mining, encompassing text categorization, sentiment analysis, and entity recognition, are elucidated. Subsequently, an analysis of diverse types of data sources within the food domain and the characteristics of text data mining is conducted, spanning social media, reviews, recipe websites, and food safety reports. Furthermore, the applications of text mining in the food domain are scrutinized from the perspective of various scenarios, including leveraging consumer food reviews and feedback to enhance product quality, providing personalized recipe recommendations based on user preferences and dietary requirements, and employing text mining for food safety and fraud monitoring. Lastly, the opportunities and challenges associated with the adoption of text mining techniques in the food domain are summarized and evaluated. In conclusion, text mining holds considerable potential for application in the food domain, thereby propelling the advancement of the food industry and upholding food safety standards.
Full article
(This article belongs to the Special Issue Machine Learning Algorithms and Optimization in the Digital Transition)
Open Access Article
Cross-Project Defect Prediction Based on Domain Adaptation and LSTM Optimization
by Khadija Javed, Ren Shengbing, Muhammad Asim and Mudasir Ahmad Wani
Algorithms 2024, 17(5), 175; https://doi.org/10.3390/a17050175 - 24 Apr 2024
Abstract
Cross-project defect prediction (CPDP) aims to predict software defects in a target project domain by leveraging information from different source project domains, allowing testers to identify defective modules quickly. However, CPDP models often underperform due to different data distributions between source and target domains, class imbalances, and the presence of noisy and irrelevant instances in both source and target projects. Additionally, standard features often fail to capture sufficient semantic and contextual information from the source project, leading to poor prediction performance in the target project. To address these challenges, this research proposes Smote Correlation and Attention Gated recurrent unit based Long Short-Term Memory optimization (SCAG-LSTM), which first employs a novel hybrid technique that extends the synthetic minority over-sampling technique (SMOTE) with edited nearest neighbors (ENN) to rebalance class distributions and mitigate the issues caused by noisy and irrelevant instances in both source and target domains. Furthermore, correlation-based feature selection (CFS) with best-first search (BFS) is utilized to identify and select the most important features, aiming to reduce the differences in data distribution among projects. Additionally, SCAG-LSTM integrates bidirectional gated recurrent unit (Bi-GRU) and bidirectional long short-term memory (Bi-LSTM) networks to enhance the effectiveness of the long short-term memory (LSTM) model. These components efficiently capture semantic and contextual information as well as dependencies within the data, leading to more accurate predictions. Moreover, an attention mechanism is incorporated into the model to focus on key features, further improving prediction performance. 
Experiments are conducted on the apache_lucene, equinox, eclipse_jdt_core, eclipse_pde_ui, and mylyn (AEEEM) and predictor models in software engineering (PROMISE) datasets, and compared with the active learning-based method (ALTRA), the multi-source-based cross-project defect prediction method (MSCPDP), and the two-phase feature importance amplification method (TFIA) on AEEEM, and with the two-phase transfer learning method (TPTL), the domain adaptive kernel twin support vector machines method (DA-KTSVMO), and the generative adversarial long-short term memory neural networks method (GB-CPDP) on PROMISE. The results demonstrate that the proposed SCAG-LSTM model enhances the baseline models by 33.03%, 29.15% and 1.48% in terms of F1-measure and by 16.32%, 34.41% and 3.59% in terms of Area Under the Curve (AUC) on the AEEEM dataset, while on the PROMISE dataset it enhances the baseline models’ F1-measure by 42.60%, 32.00% and 25.10% and AUC by 34.90%, 27.80% and 12.96%. These findings suggest that the proposed model exhibits strong predictive performance.
Full article
(This article belongs to the Special Issue Algorithms in Software Engineering)
Open Access Article
An Oracle Bone Inscriptions Detection Algorithm Based on Improved YOLOv8
by Qianqian Zhen, Liang Wu and Guoying Liu
Algorithms 2024, 17(5), 174; https://doi.org/10.3390/a17050174 - 24 Apr 2024
Abstract
Ancient Chinese characters known as oracle bone inscriptions (OBIs) were inscribed on turtle shells and animal bones, and they boast a rich history dating back over 3600 years. The detection of OBIs is one of the most basic tasks in OBI research. The current research aimed to determine the precise location of OBIs within rubbing images. Given the low clarity, severe noise, and cracks in oracle bone inscriptions, mainstream deep learning networks achieve low detection accuracy on the OBI detection dataset. To address this issue, this study analyzed the significant research progress in oracle bone script detection both domestically and internationally. Then, based on the YOLOv8 algorithm and the characteristics of OBI rubbing images, the algorithm was improved accordingly: a small target detection head was added, the loss function was modified, and a convolutional block attention module (CBAM) was embedded. The results show that the improved model achieves an F-measure of 84.3%, surpassing the baseline model by approximately 1.8%.
Full article
Open Access Review
An Overview of Demand Analysis and Forecasting Algorithms for the Flow of Checked Baggage among Departing Passengers
by Bo Jiang, Guofu Ding, Jianlin Fu, Jian Zhang and Yong Zhang
Algorithms 2024, 17(5), 173; https://doi.org/10.3390/a17050173 - 23 Apr 2024
Abstract
The research on baggage flow plays a pivotal role in achieving the efficient and intelligent allocation and scheduling of airport service resources, as well as serving as a fundamental element in determining the design, development, and process optimization of airport baggage handling systems. This paper examines baggage checked in by departing passengers at airports. The current state of the research on baggage flow demand is first reviewed and analyzed. Then, using examples of objective data, it is concluded that while there is a significant correlation between airport passenger flow and baggage flow, an increase in passenger flow does not necessarily result in a proportional increase in baggage flow. According to the existing research results on the influencing factors of baggage flow sorting and classification, the main influencing factors of baggage flow are divided into two categories: macro-influencing factors and micro-influencing factors. When studying the relationship between the economy and baggage flow, it is recommended to use a comprehensive analysis that includes multiple economic indicators, rather than relying solely on GDP. This paper provides a brief overview of prevalent transportation flow prediction methods, categorizing algorithmic models into three groups: mathematical and statistical models, intelligent algorithm-based models, and combined models utilizing artificial neural networks. The structures, strengths, and weaknesses of various transportation flow prediction algorithms are analyzed, as well as their application scenarios. The potential advantages of using artificial neural network-based combined prediction models for baggage flow forecasting are explained. The paper concludes with an outlook on research regarding the demand for baggage flow. This review may provide further research assistance to scholars in airport management and baggage handling system development.
Full article
Open Access Article
Improved Brain Storm Optimization Algorithm Based on Flock Decision Mutation Strategy
by Yanchi Zhao, Jianhua Cheng and Jing Cai
Algorithms 2024, 17(5), 172; https://doi.org/10.3390/a17050172 - 23 Apr 2024
Abstract
To tackle the problem of the brain storm optimization (BSO) algorithm’s suboptimal capability for avoiding local optima, which contributes to its inadequate optimization precision, we developed a flock decision mutation approach that substantially enhances the efficacy of the BSO algorithm. Furthermore, to solve the problem of insufficient BSO algorithm population diversity, we introduced a strategy that utilizes the good point set to enhance the initial population’s quality. Simultaneously, we substituted the K-means clustering approach with spectral clustering to improve the clustering accuracy of the algorithm. This work thus introduces an enhanced version of the brain storm optimization algorithm founded on a flock decision mutation strategy (FDIBSO). The improved algorithm was compared against contemporary leading algorithms on the CEC2018 benchmark suite. The experimental section additionally employs the AUV intelligence evaluation as an application case, addressing the combined weight model under various dimensional settings to further substantiate the efficacy of the FDIBSO algorithm. The findings indicate that FDIBSO surpasses BSO and other enhanced algorithms for addressing intricate optimization challenges.
Full article
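The good-point-set initialization mentioned in this abstract is usually built from the fractional parts of multiples of a generating vector. The sketch below uses the common variant r_j = 2cos(2πj/p) for a prime p ≥ 2·dim + 3, which is an illustrative assumption rather than necessarily the paper's exact construction:

```python
import numpy as np

def good_point_set(n_points, dim, lb, ub):
    """Good-point-set initialisation: deterministic low-discrepancy points
    mapped into the search box [lb, ub]^dim, giving a more evenly spread
    initial population than uniform random sampling."""
    # Smallest prime p with p >= 2*dim + 3 (a common choice for this method).
    p = 2 * dim + 3
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)  # generating vector
    k = np.arange(1, n_points + 1).reshape(-1, 1)
    unit = np.mod(k * r, 1.0)            # fractional parts, all in [0, 1)
    return lb + unit * (ub - lb)

pop = good_point_set(n_points=50, dim=10, lb=-5.0, ub=5.0)
print(pop.shape)
```

Because the construction is deterministic, repeated runs start from the same well-spread population, which is one reason the technique is popular for swarm initialisation.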
Open Access Article
Pediatric Ischemic Stroke: Clinical and Paraclinical Manifestations—Algorithms for Diagnosis and Treatment
by Niels Wessel, Mariana Sprincean, Ludmila Sidorenko, Ninel Revenco and Svetlana Hadjiu
Algorithms 2024, 17(4), 171; https://doi.org/10.3390/a17040171 - 22 Apr 2024
Abstract
Childhood stroke can lead to lifelong disability. Developing algorithms for timely recognition of clinical and paraclinical signs is crucial to ensure prompt stroke diagnosis and minimize decision-making time. This study aimed to characterize clinical and paraclinical symptoms of childhood and neonatal stroke as relevant diagnostic criteria encountered in clinical practice, in order to develop algorithms for prompt stroke diagnosis. The analysis included data from 402 pediatric case histories from 2010 to 2016 and 108 prospective stroke cases from 2017 to 2020. Stroke cases were predominantly diagnosed in newborns, with 362 (71%, 95% CI 68.99–73.01) cases occurring within the first 28 days of birth, and 148 (29%, 95% CI 26.99–31.01) cases occurring after 28 days. The findings of the study enable the development of algorithms for timely stroke recognition, facilitating the selection of optimal treatment options for newborns and children of various age groups. Logistic regression serves as the basis for deriving these algorithms, aiming to initiate early treatment and reduce lifelong morbidity and mortality in children. The study outcomes include the formulation of algorithms for timely recognition of newborn stroke, with plans to adopt these algorithms and train a fuzzy classifier-based diagnostic model using machine learning techniques for efficient stroke recognition.
Full article
(This article belongs to the Collection Feature Papers in Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems)
Open Access Article
A Multi-Stage Method for Logo Detection in Scanned Official Documents Based on Image Processing
by María Guijarro, Juan Bayon, Daniel Martín-Carabias and Joaquín Recas
Algorithms 2024, 17(4), 170; https://doi.org/10.3390/a17040170 - 22 Apr 2024
Abstract
A logotype is a rectangular region defined by a set of characteristics, coming from the pixel information and region shape, that differ from those of the text. In this paper, a new method for automatic logo detection is proposed and tested using the public Tobacco800 database. Our method outputs a set of regions from an official document with a high probability of containing a logo, using a new approach based on a variation of the feature rectangles method available in the literature. Candidate regions were computed using the longest increasing run algorithm over the indices of the document's blank lines. Those regions were further refined by using a feature-rectangle-expansion method with forward checking, where the rectangle expansion can occur in parallel in each region. Finally, a C4.5 decision tree was trained and tested against a set of 1291 official documents to evaluate its performance. The strategic combination of the three previous steps offers a precision and recall for logo detection of 98.9% and 89.9%, respectively, and is also resistant to noise and low-quality documents. The method is also able to reduce the processing area of the document while maintaining a low percentage of false negatives.
Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
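The candidate-region step above scans the indices of blank lines for their longest run. Assuming "longest increasing run" means the longest stretch of consecutive blank-line indices (the widest blank band separating a candidate logo region from text), a minimal sketch with an illustrative helper name and toy input:

```python
def longest_consecutive_run(indices):
    """Return (start, end) of the longest run of consecutive values in a
    sorted list of blank-line indices."""
    best = (indices[0], indices[0])
    start = indices[0]
    for prev, cur in zip(indices, indices[1:]):
        if cur != prev + 1:              # run broken: restart at cur
            start = cur
        if cur - start > best[1] - best[0]:
            best = (start, cur)
    return best

# Blank rows of a toy scanned page: two bands, rows 12-14 and 40-45.
blank_rows = [12, 13, 14, 40, 41, 42, 43, 44, 45]
print(longest_consecutive_run(blank_rows))  # -> (40, 45)
```

In the paper's pipeline the regions delimited by such bands would then be grown by the feature-rectangle expansion and classified by the decision tree.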
Open Access Article
Security and Ownership in User-Defined Data Meshes
by Michalis Pingos, Panayiotis Christodoulou and Andreas S. Andreou
Algorithms 2024, 17(4), 169; https://doi.org/10.3390/a17040169 - 22 Apr 2024
Abstract
The data mesh is an approach to data architecture and organization that treats data as a product and focuses on decentralizing data ownership and access. It has recently emerged as a field that presents quite a few challenges related to data ownership, governance, security, monitoring, and observability. To address these challenges, this paper introduces an innovative algorithmic framework leveraging data blueprints to enable the dynamic creation of data meshes and data products in response to user requests, ensuring that stakeholders have access to specific portions of the data mesh as needed. Ownership and governance concerns are addressed through a unique mechanism involving Blockchain and Non-Fungible Tokens (NFTs). This facilitates the secure and transparent transfer of data ownership, with the ability to mint time-based NFTs. By combining these advancements with the fundamental tenets of data meshes, this research offers a comprehensive solution to the challenges surrounding data ownership and governance. It empowers stakeholders to navigate the complexities of data management within a decentralized architecture, ensuring a secure, efficient, and user-centric approach to data utilization. The proposed framework is demonstrated using real-world data from a poultry meat production factory.
Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
Open Access Article
CCFNet: Collaborative Cross-Fusion Network for Medical Image Segmentation
by Jialu Chen and Baohua Yuan
Algorithms 2024, 17(4), 168; https://doi.org/10.3390/a17040168 - 21 Apr 2024
Abstract
The Transformer architecture has gained widespread acceptance in image segmentation. However, it sacrifices local feature details and necessitates extensive data for training, posing challenges to its integration into computer-aided medical image segmentation. To address the above challenges, we introduce CCFNet, a collaborative cross-fusion network, which continuously fuses a CNN and Transformer interactively to exploit context dependencies. In particular, when integrating CNN features into Transformer, the correlations between local and global tokens are adaptively fused through collaborative self-attention fusion to minimize the semantic disparity between these two types of features. When integrating Transformer features into the CNN, it uses the spatial feature injector to reduce the spatial information gap between features due to the asymmetry of the extracted features. In addition, CCFNet implements the parallel operation of Transformer and the CNN and independently encodes hierarchical global and local representations when effectively aggregating different features, which can preserve global representations and local features. The experimental findings from two public medical image segmentation datasets reveal that our approach exhibits competitive performance in comparison to current state-of-the-art methods.
Full article
Open Access Article
Evaluating Diffusion Models for the Automation of Ultrasonic Nondestructive Evaluation Data Analysis
by Nick Torenvliet and John Zelek
Algorithms 2024, 17(4), 167; https://doi.org/10.3390/a17040167 - 21 Apr 2024
Abstract
We develop decision support and automation for the task of ultrasonic non-destructive evaluation data analysis. First, we develop a probabilistic model for the task and then implement the model as a series of neural networks based on Conditional Score-Based Diffusion and Denoising Diffusion Probabilistic Model architectures. We use the neural networks to generate estimates for peak amplitude response time of flight and perform a series of tests probing their behavior, capacity, and characteristics in terms of the probabilistic model. We train the neural networks on a series of datasets constructed from ultrasonic non-destructive evaluation data acquired during an inspection at a nuclear power generation facility. We modulate the partition classifying nominal and anomalous data in the dataset and observe that the probabilistic model predicts trends in neural network model performance, thereby demonstrating a principled basis for explainability. We improve on previous related work as our methods are self-supervised and require no data annotation or pre-processing, and we train on a per-dataset basis, meaning we do not rely on out-of-distribution generalization. The capacity of the probabilistic model to predict trends in neural network performance, as well as the quality of the estimates sampled from the neural networks, support the development of a technical justification for usage of the method in safety-critical contexts such as nuclear applications. The method may provide a basis or template for extension into similar non-destructive evaluation tasks in other industrial contexts.
Full article
(This article belongs to the Collection Feature Papers on Artificial Intelligence Algorithms and Their Applications)
Open Access Article
Predicting the Aggregate Mobility of a Vehicle Fleet within a City Graph
by J. Fernando Sánchez-Rada, Raquel Vila-Rodríguez, Jesús Montes and Pedro J. Zufiria
Algorithms 2024, 17(4), 166; https://doi.org/10.3390/a17040166 - 19 Apr 2024
Abstract
Predicting vehicle mobility is crucial in domains such as ride-hailing, where the balance between offer and demand is paramount. Since city road networks can be easily represented as graphs, recent works have exploited graph neural networks (GNNs) to produce more accurate predictions on real traffic data. However, a better understanding of the characteristics and limitations of this approach is needed. In this work, we compare several GNN aggregated mobility prediction schemes to a selection of other approaches in a very restricted and controlled simulation scenario. The city graph employed represents roads as directed edges and road intersections as nodes. Individual vehicle mobility is modeled as transitions between nodes in the graph. A time series of aggregated mobility is computed by counting vehicles in each node at any given time. Three main approaches are employed to construct the aggregated mobility predictors. First, the behavior of the moving individuals is assumed to follow a Markov chain (MC) model whose transition matrix is inferred via a least squares estimation procedure; the recurrent application of this MC provides the aggregated mobility prediction values. Second, a multilayer perceptron (MLP) is trained so that—given the node occupation at a given time—it can recursively provide predictions for the next values of the time series. Third, we train a GNN (according to the city graph) with the time series data via a supervised learning formulation that computes—through an embedding construction for each node in the graph—the aggregated mobility predictions. Some mobility patterns are simulated in the city to generate different time series for testing purposes. The proposed schemes are comparatively assessed compared to different baseline prediction procedures. The comparison illustrates several limitations of the GNN approaches in the selected scenario and uncovers future lines of investigation.
Full article
(This article belongs to the Special Issue Algorithms for Network Analysis: Theory and Practice)
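The first predictor described in this abstract assumes the aggregated occupation vector evolves linearly, x_{t+1} ≈ x_t P, so the transition matrix P can be inferred by least squares from consecutive observations and then applied recurrently. A small NumPy sketch on a synthetic 3-node graph (the instance and the pooling of observation pairs are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth Markov chain on a toy 3-node city graph (rows sum to 1).
P_true = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4]])

# Aggregated mobility observations: node-occupation vectors at time t and
# t+1, pooled over many observation windows so the system is well conditioned.
X_t = rng.random((200, 3)) * 100.0
X_next = X_t @ P_true                  # expected counts one step later

# Least-squares estimate of the transition matrix: solve X_t @ P ≈ X_next.
P_hat, *_ = np.linalg.lstsq(X_t, X_next, rcond=None)

# Recurrent application of the estimated chain gives multi-step predictions.
def predict(x0, steps, P):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x @ P
    return x

forecast = predict([100.0, 50.0, 25.0], steps=5, P=P_hat)
print(np.round(forecast, 2))
```

Because the rows of a transition matrix sum to one, the total vehicle count is conserved under the recursion, which is a quick sanity check on the estimate.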
Open Access Article
Research on a Fast Image-Matching Algorithm Based on Nonlinear Filtering
by Chenglong Yin, Fei Zhang, Bin Hao, Zijian Fu and Xiaoyu Pang
Algorithms 2024, 17(4), 165; https://doi.org/10.3390/a17040165 - 19 Apr 2024
Abstract
Computer vision technology is being applied at an unprecedented speed in various fields such as 3D scene reconstruction, object detection and recognition, video content tracking, pose estimation, and motion estimation. To address the issues of low accuracy and high time complexity in traditional image feature point matching, a fast image-matching algorithm based on nonlinear filtering is proposed. By applying nonlinear diffusion filtering to scene images, details and edge information can be effectively extracted. The feature descriptors of the feature points are transformed into binary form, occupying less storage space and thus reducing matching time. The adaptive RANSAC algorithm is utilized to eliminate mismatched feature points, thereby improving matching accuracy. Our experiments on the Mikolajczyk image dataset, comparing the proposed algorithm with the SIFT algorithm and with SURF-, BRISK-, and ORB-based improvements of SIFT, conclude that the fast image-matching algorithm based on nonlinear filtering reduces matching time by three-quarters, with an overall average accuracy more than 7% higher than that of the other algorithms. These experiments demonstrate that the fast image-matching algorithm based on nonlinear filtering has better robustness and real-time performance.
(This article belongs to the Special Issue Meta-Heuristics and Machine Learning in Modelling, Developing and Optimising Complex Systems)
Open Access Article
Diabetic Retinopathy Lesion Segmentation Method Based on Multi-Scale Attention and Lesion Perception
by
Ye Bian, Chengyong Si and Lei Wang
Algorithms 2024, 17(4), 164; https://doi.org/10.3390/a17040164 - 19 Apr 2024
Abstract
The early diagnosis of diabetic retinopathy (DR) can effectively prevent irreversible vision loss and assist ophthalmologists in providing timely and accurate treatment plans. However, existing methods based on deep learning perceive information at different scales in retinal fundus images only weakly, and their ability to segment subtle lesions is also insufficient. This paper addresses these issues and proposes MLNet for DR lesion segmentation, which mainly consists of the Multi-Scale Attention Block (MSAB) and the Lesion Perception Block (LPB). The MSAB is designed to capture multi-scale lesion features in fundus images, while the LPB perceives subtle lesions in depth. In addition, a novel loss function with a tailored lesion weight is designed to reduce the influence of imbalanced datasets on the algorithm. The performance comparison between MLNet and other state-of-the-art methods is carried out on the DDR and DIARETDB1 datasets; MLNet achieves the best results of 51.81% mAUPR, 49.85% mDice, and 37.19% mIoU on DDR, and 67.16% mAUPR and 61.82% mDice on DIARETDB1. In a generalization experiment on the IDRiD dataset, MLNet achieves 59.54% mAUPR, the best among the compared methods. The results show that MLNet has outstanding DR lesion segmentation ability.
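The paper's exact loss is not reproduced here, but the idea of a lesion-weighted loss for imbalanced segmentation can be sketched with a generic class-weighted Dice loss, where rare lesion classes receive larger weights (all names and weights below are hypothetical):

```python
import numpy as np

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Class-weighted soft Dice loss.

    pred, target: (n_classes, H, W) soft masks in [0, 1];
    weights: per-class weights, larger for rarer lesion classes."""
    inter = (pred * target).sum(axis=(1, 2))
    denom = pred.sum(axis=(1, 2)) + target.sum(axis=(1, 2))
    dice = (2 * inter + eps) / (denom + eps)
    # Weighted average of per-class Dice losses (1 - dice).
    return float((weights * (1 - dice)).sum() / weights.sum())

pred = np.zeros((2, 4, 4)); target = np.zeros((2, 4, 4))
pred[0, :2] = 1.0; target[0, :2] = 1.0      # perfect on the common class
pred[1, 0, 0] = 1.0; target[1, 0, 1] = 1.0  # total miss on the rare lesion
loss = weighted_dice_loss(pred, target, weights=np.array([1.0, 4.0]))
```

With the rare class weighted 4:1, the missed lesion dominates the loss even though the common class is segmented perfectly.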
(This article belongs to the Special Issue AI Algorithms in Medical Imaging)
Open Access Article
Quantum Recurrent Neural Networks: Predicting the Dynamics of Oscillatory and Chaotic Systems
by
Yuan Chen and Abdul Khaliq
Algorithms 2024, 17(4), 163; https://doi.org/10.3390/a17040163 - 19 Apr 2024
Abstract
In this study, we investigate Quantum Long Short-Term Memory and Quantum Gated Recurrent Unit integrated with Variational Quantum Circuits in modeling complex dynamical systems, including the Van der Pol oscillator, coupled oscillators, and the Lorenz system. We implement these advanced quantum machine learning techniques and compare their performance with traditional Long Short-Term Memory and Gated Recurrent Unit models. The results of our study reveal that the quantum-based models deliver superior precision and more stable loss metrics throughout 100 epochs for both the Van der Pol oscillator and coupled harmonic oscillators, and 20 epochs for the Lorenz system. The Quantum Gated Recurrent Unit outperforms competing models, showcasing notable performance metrics. For the Van der Pol oscillator, it reports MAE 0.0902 and RMSE 0.1031 for variable x and MAE 0.1500 and RMSE 0.1943 for y; for the coupled oscillators, Oscillator 1 shows MAE 0.2411 and RMSE 0.2701, and Oscillator 2 shows MAE 0.0482 and RMSE 0.0602; and for the Lorenz system, the results are MAE 0.4864 and RMSE 0.4971 for x, MAE 0.4723 and RMSE 0.4846 for y, and MAE 0.4555 and RMSE 0.4745 for z. These outcomes mark a significant advancement in the field of quantum machine learning.
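The per-variable MAE and RMSE figures quoted above follow the standard definitions; as a minimal illustration (not the authors' code):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large residuals more heavily."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.5])
```

RMSE is always at least as large as MAE on the same residuals, which is consistent with every metric pair reported in the abstract.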
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)
Open Access Article
Not So Robust after All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks
by
Roman Garaev, Bader Rasheed and Adil Mehmood Khan
Algorithms 2024, 17(4), 162; https://doi.org/10.3390/a17040162 - 19 Apr 2024
Abstract
Deep neural networks (DNNs) have gained prominence in various applications, but remain vulnerable to adversarial attacks that manipulate data to mislead a DNN. This paper aims to challenge the efficacy and transferability of two contemporary defense mechanisms against adversarial attacks: (a) robust training and (b) adversarial training. The former suggests that training a DNN on a data set consisting solely of robust features should produce a model resistant to adversarial attacks. The latter creates an adversarially trained model that learns to minimise an expected training loss over a distribution of bounded adversarial perturbations. We reveal a significant lack of transferability in these defense mechanisms and provide insight into the potential dangers posed by -norm attacks previously underestimated by the research community. Such conclusions are based on extensive experiments involving (1) different model architectures, (2) the use of canonical correlation analysis, (3) visual and quantitative analysis of the neural network’s latent representations, (4) an analysis of networks’ decision boundaries and (5) the use of equivalence of and perturbation norm theories.
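A minimal example of the bounded adversarial perturbations discussed above is the fast gradient sign method (FGSM) applied to a linear logistic model, where the input-gradient of the loss is available in closed form. This is a generic illustration of the attack family, not the paper's experimental setup:

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """Shift x by eps in the sign of the input-gradient of the logistic loss,
    i.e. a bounded (max-norm) perturbation that increases the loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))  # model confidence for class 1
    grad_x = (p - y) * w                # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])                # clean input, logit = 1.0 (class 1)
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.6)
```

Each coordinate moves by at most eps, yet the perturbation pushes the logit toward the wrong class, which is exactly the behavior adversarial training tries to suppress.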
(This article belongs to the Topic Modeling and Practice for Trustworthy and Secure Systems)
Open Access Article
Advancing Pulmonary Nodule Diagnosis by Integrating Engineered and Deep Features Extracted from CT Scans
by
Wiem Safta and Ahmed Shaffie
Algorithms 2024, 17(4), 161; https://doi.org/10.3390/a17040161 - 18 Apr 2024
Abstract
Enhancing lung cancer diagnosis requires precise early detection methods. This study introduces an automated diagnostic system leveraging computed tomography (CT) scans for early lung cancer identification. The main approach is the integration of three distinct feature analyses: the novel 3D-Local Octal Pattern (LOP) descriptor for texture analysis, the 3D-Convolutional Neural Network (CNN) for extracting deep features, and geometric feature analysis to characterize pulmonary nodules. The 3D-LOP method innovatively captures nodule texture by analyzing the orientation and magnitude of voxel relationships, enabling the distinction of discriminative features. Simultaneously, the 3D-CNN extracts deep features from raw CT scans, providing comprehensive insights into nodule characteristics. Geometric feature analysis, which assesses nodule shape, further augments this analysis, offering a holistic view of potential malignancies. By amalgamating these analyses, our system employs a probability-based linear classifier to deliver a final diagnostic output. Validated on 822 Lung Image Database Consortium (LIDC) cases, the system’s performance was exceptional, with measures of , , , and for accuracy, sensitivity, specificity, and Area Under the ROC Curve (AUC), respectively. These results highlight the system’s potential as a significant advancement in clinical diagnostics, offering a reliable, non-invasive tool for lung cancer detection that promises to improve patient outcomes through early diagnosis.
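The final fusion stage can be pictured as a linear combination of the three per-stream probabilities squashed to a malignancy score. The weights, bias, and probabilities below are hypothetical, purely to illustrate the shape of a probability-based linear classifier:

```python
import numpy as np

def fuse(probs, weights, bias=0.0):
    """Linear combination of per-stream probabilities, squashed to [0, 1]."""
    z = np.dot(weights, probs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical per-stream malignancy probabilities.
p_texture, p_deep, p_geom = 0.8, 0.7, 0.6   # 3D-LOP, 3D-CNN, geometric
score = fuse(np.array([p_texture, p_deep, p_geom]),
             weights=np.array([1.5, 2.0, 1.0]), bias=-2.0)
```

In practice the weights would be learned from labeled cases rather than set by hand.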
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)
Open Access Article
A Communication-Efficient Federated Learning Framework for Sustainable Development Using Lemurs Optimizer
by
Mohammed Azmi Al-Betar, Ammar Kamal Abasi, Zaid Abdi Alkareem Alyasseri, Salam Fraihat and Raghad Falih Mohammed
Algorithms 2024, 17(4), 160; https://doi.org/10.3390/a17040160 - 15 Apr 2024
Abstract
The pressing need for sustainable development solutions necessitates innovative data-driven tools. Machine learning (ML) offers significant potential, but faces challenges in centralized approaches, particularly concerning data privacy and resource constraints in geographically dispersed settings. Federated learning (FL) emerges as a transformative paradigm for sustainable development by decentralizing ML training to edge devices. However, communication bottlenecks hinder its scalability and sustainability. This paper introduces an innovative FL framework that enhances communication efficiency. The proposed framework addresses the communication bottleneck by harnessing the power of the Lemurs optimizer (LO), a nature-inspired metaheuristic algorithm. Inspired by the cooperative foraging behavior of lemurs, the LO strategically selects the most relevant model updates for communication, significantly reducing communication overhead. The framework was rigorously evaluated on CIFAR-10, MNIST, rice leaf disease, and waste recycling plant datasets representing various areas of sustainable development. Experimental results demonstrate that the proposed framework reduces communication overhead by over 15% on average compared to baseline FL approaches, while maintaining high model accuracy. This breakthrough extends the applicability of FL to resource-constrained environments, paving the way for more scalable and sustainable solutions for real-world initiatives.
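The Lemurs optimizer itself is a population-based metaheuristic; as a much simpler stand-in, the sketch below illustrates the underlying communication-saving idea of transmitting only the most significant parts of each client update (top-k magnitude selection, a common baseline). All names are ours, not the paper's:

```python
import numpy as np

def sparsify_update(update, k):
    """Keep only the k largest-magnitude entries of a flat update vector,
    returning their indices and values (what the client would transmit)."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

def apply_sparse_update(global_weights, idx, values, lr=1.0):
    """Server side: apply the received sparse update to the global model."""
    out = global_weights.copy()
    out[idx] += lr * values
    return out

update = np.array([0.01, -0.9, 0.02, 0.5, -0.03])  # one client's update
idx, vals = sparsify_update(update, k=2)           # send 2 of 5 entries
new_w = apply_sparse_update(np.zeros(5), idx, vals)
```

Here only 40% of the update is communicated while its two dominant components are preserved; the LO replaces this greedy rule with a metaheuristic search over which updates to send.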
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open Access Article
Efficient Algorithm for Proportional Lumpability and Its Application to Selfish Mining in Public Blockchains
by
Carla Piazza, Sabina Rossi and Daria Smuseva
Algorithms 2024, 17(4), 159; https://doi.org/10.3390/a17040159 - 15 Apr 2024
Abstract
This paper explores the concept of proportional lumpability as an extension of the original definition of lumpability, addressing the challenges posed by the state space explosion problem in computing performance indices for large stochastic models. Lumpability traditionally relies on state aggregation techniques and is applicable to Markov chains demonstrating structural regularity. Proportional lumpability extends this idea, proposing that the transition rates of a Markov chain can be modified by certain factors, resulting in a lumpable new Markov chain. This concept facilitates the derivation of precise performance indices for the original process. This paper establishes the well-defined nature of the problem of computing the coarsest proportional lumpability that refines a given initial partition, ensuring a unique solution exists. Additionally, a polynomial time algorithm is introduced to solve this problem, offering valuable insights into both the concept of proportional lumpability and the broader realm of partition refinement techniques. The effectiveness of proportional lumpability is demonstrated through a case study that consists of designing a model to investigate selfish mining behaviors on public blockchains. This research contributes to a better understanding of efficient approaches for handling large stochastic models and highlights the practical applicability of proportional lumpability in deriving exact performance indices.
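A minimal sketch of the ordinary lumpability condition that proportional lumpability generalizes: a partition of the state space is lumpable when every state in a block has the same aggregate transition rate into each other block. The code and the small CTMC rate matrix below are illustrative, not the paper's algorithm (which computes the coarsest such partition in polynomial time):

```python
import numpy as np

def is_lumpable(Q, partition):
    """Check ordinary lumpability of a CTMC.

    Q: (n, n) rate matrix (rows sum to zero);
    partition: list of lists of state indices covering 0..n-1."""
    for block in partition:
        for other in partition:
            if other is block:
                continue
            # Aggregate rate from each state of `block` into `other`
            # must be identical across the block.
            rates = [Q[s, other].sum() for s in block]
            if not np.allclose(rates, rates[0]):
                return False
    return True

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 1.0,  0.5, -1.5]])
```

For this Q, merging states 1 and 2 satisfies the condition, while merging 0 and 1 does not; proportional lumpability relaxes the check by allowing each state's rates to be rescaled by a factor first.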
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
Topics
Topic in
Algorithms, Computation, Entropy, Fractal Fract, MCA
Analytical and Numerical Methods for Stochastic Biological Systems
Topic Editors: Mehmet Yavuz, Necati Ozdemir, Mouhcine Tilioua, Yassine Sabbar
Deadline: 10 May 2024
Topic in
Algorithms, Diagnostics, Entropy, Information, J. Imaging
Application of Machine Learning in Molecular Imaging
Topic Editors: Allegra Conti, Nicola Toschi, Marianna Inglese, Andrea Duggento, Matthew Grech-Sollars, Serena Monti, Giancarlo Sportelli, Pietro Carra
Deadline: 31 May 2024
Topic in
Algorithms, Axioms, Fractal Fract, Mathematics, Symmetry
Fractal and Design of Multipoint Iterative Methods for Nonlinear Problems
Topic Editors: Xiaofeng Wang, Fazlollah Soleymani
Deadline: 30 June 2024
Topic in
Algorithms, Computation, Information, Mathematics
Complex Networks and Social Networks
Topic Editors: Jie Meng, Xiaowei Huang, Minghui Qian, Zhixuan Xu
Deadline: 31 July 2024
Special Issues
Special Issue in
Algorithms
Hybrid Intelligent Algorithms
Guest Editors: Grigorios Beligiannis, Efstratios F. Georgopoulos, Spiridon D. Likothanassis, Isidoros Perikos, Ioannis X. Tassopoulos
Deadline: 30 April 2024
Special Issue in
Algorithms
Bio-Inspired Algorithms
Guest Editors: Sándor Szénási, Gábor Kertész
Deadline: 20 May 2024
Special Issue in
Algorithms
Algorithms for Smart Cities
Guest Editors: Gloria Cerasela Crisan, Elena Nechita
Deadline: 31 May 2024
Special Issue in
Algorithms
Algorithms for Games AI
Guest Editors: Wenxin Li, Haifeng Zhang
Deadline: 20 June 2024
Topical Collections
Topical Collection in
Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner
Topical Collection in
Algorithms
Featured Reviews of Algorithms
Collection Editors: Arun Kumar Sangaiah, Xingjuan Cai