Search Results (3,480)

Search Parameters:
Keywords = k-Nearest Neighbor

15 pages, 3920 KB  
Article
Identification of Rubber Belt Damages Using Machine Learning Algorithms
by Miroslaw Rucki, Arturas Kilikevicius, Damian Bzinkowski and Tomasz Ryba
Appl. Sci. 2025, 15(19), 10449; https://doi.org/10.3390/app151910449 - 26 Sep 2025
Abstract
This paper presents the experimental results of a Machine Learning application for the health monitoring of a conveyor belt. Real-time analysis of the rubber belt condition is crucial for achieving safety and avoiding critical failures and the related expenses. A measuring system based on strain gauges was applied to identify the actual state of the belt. Using the Classification Learner application from the MATLAB platform, 22 algorithms were tested, and the analysis was performed using the Diagnostic Feature Designer application. Three of the tested ML algorithms were able to classify the states of the conveyor belt with preset damages correctly, exhibiting 100% prediction accuracy. The k-nearest neighbors (KNN) classifiers and neural networks failed to achieve that level of accuracy.
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
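
As a rough, hypothetical analogue of the screening workflow described above (not the authors' MATLAB code), the sketch below cross-validates a few scikit-learn classifiers on placeholder strain-gauge features:

```python
# Hypothetical multi-classifier screening, scikit-learn standing in for
# MATLAB's Classification Learner. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # placeholder strain-gauge features per revolution
y = rng.integers(0, 3, size=200)   # placeholder belt states (healthy + preset damages)

candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "svm_rbf": make_pipeline(StandardScaler(), SVC()),
    "knn_5": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```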

21 pages, 2807 KB  
Article
Discrimination of Multiple Foliar Diseases in Wheat Using Novel Feature Selection and Machine Learning
by Sen Zhuang, Yujuan Huang, Jie Zhu, Qingluo Yang, Wei Li, Yangyang Gu, Tongjie Li, Hengbiao Zheng, Chongya Jiang, Tao Cheng, Yongchao Tian, Yan Zhu, Weixing Cao and Xia Yao
Remote Sens. 2025, 17(19), 3304; https://doi.org/10.3390/rs17193304 - 26 Sep 2025
Abstract
Wheat, a globally vital food crop, faces severe threats from numerous foliar diseases, which often infect agricultural fields and significantly compromise yield and quality. Rapid and accurate identification of the specific disease is crucial for ensuring food security. Although progress has been made in wheat foliar disease detection using RGB imaging and spectroscopy, most prior studies have focused on identifying the presence of a single disease; operational use, however, requires differentiating between multiple diseases. In this study, we systematically investigate the differentiation of three wheat foliar diseases (powdery mildew, stripe rust, and leaf rust) and evaluate feature selection strategies and machine learning models for disease identification. Based on field experiments conducted from 2017 to 2024 employing artificial inoculation, we established a standardized hyperspectral database of wheat foliar diseases classified by disease severity. Four feature selection methods were employed to extract spectral features prior to classification: the continuous wavelet projection algorithm (CWPA), continuous wavelet analysis (CWA), the successive projections algorithm (SPA), and Relief-F. The features selected by each method were then used as predictors for three disease-identification machine learning models: random forest (RF), k-nearest neighbors (KNN), and naïve Bayes (BAYES). Results showed that CWPA outperformed the other feature selection methods. The combination of CWPA and KNN, discriminating diseased (powdery mildew, stripe rust, leaf rust) and healthy leaves using only two key features (668 nm at wavelet scale 5 and 894 nm at wavelet scale 7), achieved an overall accuracy (OA) of 77% and a map-level image classification efficacy (MICE) of 0.63. This combination of feature selection and machine learning model provides an efficient and precise procedure for discriminating between multiple foliar diseases in agricultural fields, thus offering technical support for precision agriculture.
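
Once feature selection has reduced each spectrum to two wavelet features, the final step is an ordinary two-feature KNN. A minimal sketch with synthetic data (the CWPA selection itself is not reproduced here):

```python
# Two-feature KNN classification sketch; the two columns stand in for the
# selected wavelet features (668 nm at scale 5, 894 nm at scale 7).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))      # two selected spectral features (synthetic)
y = rng.integers(0, 4, size=400)   # healthy + three disease classes (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, knn.predict(X_te)):.2f}")
```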

8 pages, 641 KB  
Proceeding Paper
Prediction of Asthma Disease Using Machine Learning Algorithm
by Zahab, Manzoor Hussain and Lusiana Sani Parwati
Eng. Proc. 2025, 107(1), 115; https://doi.org/10.3390/engproc2025107115 - 26 Sep 2025
Abstract
Millions of people worldwide suffer from asthma, and early diagnosis and efficient treatment are frequently needed to enhance patient outcomes. Through an analysis of clinical and environmental characteristics, this study investigates machine learning algorithms for predicting asthma using decision trees, K-Nearest Neighbors, random forests, and the naïve Bayes method. A dataset related to asthma disease is divided into two parts, with around 70% used for training and the remaining 30% for testing. Because the dataset is unbalanced, SMOTE is applied to balance it before the split. Among the four algorithms, the decision tree attained the best accuracy: its predictions were 97.65% correct in detection, against 97.50% for K-NN (K-Nearest Neighbor), 97.35% for the random forest, and 69.99% for naïve Bayes. These algorithms can be applied to related predictive healthcare tasks.
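
A hedged sketch of the preprocessing the abstract outlines, SMOTE balancing followed by a 70/30 split, using imbalanced-learn and a synthetic stand-in for the asthma dataset:

```python
# SMOTE is applied before the split, as the abstract describes; the dataset
# is a synthetic imbalanced substitute. Requires the imbalanced-learn package.
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=2)
print("before SMOTE:", Counter(y))
X_bal, y_bal = SMOTE(random_state=2).fit_resample(X, y)
print("after SMOTE:", Counter(y_bal))

X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.3, random_state=2)
clf = DecisionTreeClassifier(random_state=2).fit(X_tr, y_tr)
print(f"decision-tree test accuracy: {clf.score(X_te, y_te):.3f}")
```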

12 pages, 622 KB  
Article
Combined Infrared Thermography and Agitated Behavior in Sows Improve Estrus Detection When Applied to Supervised Machine Learning Algorithms
by Leila Cristina Salles Moura, Janaina Palermo Mendes, Yann Malini Ferreira, Rayna Sousa Vieira Amaral, Diana Assis Oliveira, Fabiana Ribeiro Caldara, Bianca Thais Baumann, Jansller Luiz Genova, Charles Kiefer, Luciano Hauschild and Luan Sousa Santos
Animals 2025, 15(19), 2798; https://doi.org/10.3390/ani15192798 - 25 Sep 2025
Abstract
The identification of estrus at the right moment allows for a higher success of fecundity with artificial insemination. Evaluating changes in the body surface temperature of sows during the estrus period using an infrared thermography camera (ITC) can provide an accurate model to predict these changes. This pilot study comprised nine crossbred Large White × Landrace sows, providing 59 data records for analysis. Observed changes in the behavior and physiological signs of the sows signaled the identification of estrus. Images of the ocular area, ear tips, breast, back, vulva, and perianal area were collected with the ITC and analyzed using the FLIR Thermal Studio Starter software. Mean infrared temperatures were reported and compared using ANOVA and Tukey–Kramer tests (p < 0.05). Supervised machine learning models were tested using random forest (RF), conditional inference trees (Ctree), partial least squares (PLS), and K-nearest neighbors (KNN), and performance was measured using a confusion matrix. The orbital region showed significant differences between estrus and non-estrus states in sows. In the confusion matrix, the algorithm predicted estrus with 87% accuracy in the test set, which contained 40% of the data, when agitated behavior was combined with orbital area temperature. These findings suggest the potential for integrating behavioral and physiological observations with orbital thermography and machine learning to accurately detect estrus in sows under field conditions.
(This article belongs to the Section Pigs)
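
For illustration only, a sketch of the evaluation step: a random forest on two placeholder predictors (orbital temperature and an agitated-behavior flag) scored with a confusion matrix on a 40% test set, matching the split in the abstract; all values are synthetic:

```python
# Confusion-matrix evaluation of a binary estrus classifier on invented data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
orbital_temp = rng.normal(35.5, 0.8, size=59)   # deg C, placeholder values
agitated = rng.integers(0, 2, size=59)          # behavioral flag, placeholder
X = np.column_stack([orbital_temp, agitated])
y = rng.integers(0, 2, size=59)                 # estrus / non-estrus, placeholder

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=3)
rf = RandomForestClassifier(random_state=3).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(confusion_matrix(y_te, pred))
print(f"accuracy: {accuracy_score(y_te, pred):.2f}")
```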

27 pages, 4687 KB  
Article
Comparative Study of Vibration-Based Machine Learning Algorithms for Crack Identification and Location in Operating Wind Turbine Blades
by Adolfo Salgado-Ancona, Perla Yazmín Sevilla-Camacho, José Billerman Robles-Ocampo, Juvenal Rodríguez-Reséndiz, Sergio De la Cruz-Arreola and Edwin Neptalí Hernández-Estrada
AI 2025, 6(10), 242; https://doi.org/10.3390/ai6100242 - 25 Sep 2025
Abstract
The growing energy demand has increased the number of wind turbines, raising the need to monitor blade health. Since blades are prone to damage that can cause severe failures, early detection is crucial. Machine learning-based monitoring systems can identify and locate cracks without interrupting energy production, enabling timely maintenance. This study provides a comparative analysis of the application and effectiveness of different vibration-based machine learning algorithms to detect the presence of cracks, identify the cracked blade, and locate the zone where the crack occurs in the rotating blades of a small wind turbine. The datasets comprise root vibration signals derived from healthy and cracked blades of a wind turbine under operational conditions. In this study, the blades are not considered identical. The sampling set dimension and the number of features were variables considered during the development and assessment of different models based on decision tree (DT), support vector machine (SVM), k-nearest neighbors (KNN), and multilayer perceptron (MLP) algorithms. Overall, the KNN models are the clear winners in terms of training efficiency, even as the sample size increases. DT is the most efficient algorithm in terms of test speed, followed by SVM, MLP, and KNN.
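
A rough sketch of how such a training/test efficiency comparison can be timed, with synthetic features standing in for the real root-vibration signals:

```python
# Timing fit() and predict() for the four algorithm families compared above.
import time

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X_tr, y_tr = rng.normal(size=(2000, 16)), rng.integers(0, 4, size=2000)
X_te = rng.normal(size=(500, 16))

models = {
    "DT": DecisionTreeClassifier(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}
for name, m in models.items():
    t0 = time.perf_counter(); m.fit(X_tr, y_tr); t_fit = time.perf_counter() - t0
    t0 = time.perf_counter(); m.predict(X_te); t_pred = time.perf_counter() - t0
    print(f"{name}: fit {t_fit:.3f}s, predict {t_pred:.3f}s")
```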

11 pages, 849 KB  
Proceeding Paper
Real-Time Phishing URL Detection Using Machine Learning
by Atta Ur Rehman, Irsa Imtiaz, Sabeen Javaid and Muhamad Muslih
Eng. Proc. 2025, 107(1), 108; https://doi.org/10.3390/engproc2025107108 - 25 Sep 2025
Abstract
The study investigates the use of powerful machine learning approaches for the real-time detection of phishing URLs, addressing a critical cybersecurity concern. The dataset utilized in this research was collected from the University of California Irvine (UCI) Machine Learning Repository. It has 235,795 instances with fifty-four distinct parameters. The label class is binomial, with only two target classes. We used a range of algorithms, including k-nearest neighbor, naive Bayes, decision trees, random forests, and random tree, to assess the discriminative characteristics retrieved from URLs. The random forest classifier beat the other classifiers, reaching the greatest accuracy of 99.99%. The study demonstrates that these models achieve superior accuracy in identifying phishing attempts, significantly outperforming traditional detection methodologies. The findings underscore the potential of machine learning to provide a scalable, efficient, and robust solution for real-time phishing detection. Integrating these models into existing security solutions will play a critical role in sustaining the protective line against continuously evolving and persistent phishing schemes.
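
Purely illustrative: a few lexical URL features of the kind such detectors use (hypothetical stand-ins, not the fifty-four parameters of the UCI dataset) scored with a random forest:

```python
# Toy lexical URL features -> random forest; labels and URLs are invented.
from urllib.parse import urlparse

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    parsed = urlparse(url)
    return [
        float(len(url)),                                  # overall length
        float(url.count(".")),                            # dot count
        float(any(c.isdigit() for c in parsed.netloc)),   # digits in host
        float(parsed.scheme == "https"),                  # TLS hint
    ]

urls = ["https://example.com/login", "http://192.168.0.1/paypa1-verify"]
labels = [0, 1]   # 0 = legitimate, 1 = phishing (toy)
X = np.array([url_features(u) for u in urls])
clf = RandomForestClassifier(random_state=5).fit(X, labels)
print(clf.predict(X))
```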

25 pages, 3707 KB  
Article
Evaluating the Effectiveness of Large Language Models (LLMs) Versus Machine Learning (ML) in Identifying and Detecting Phishing Email Attempts
by Saed Tarapiah, Linda Abbas, Oula Mardawi, Shadi Atalla, Yassine Himeur and Wathiq Mansoor
Algorithms 2025, 18(10), 599; https://doi.org/10.3390/a18100599 - 25 Sep 2025
Abstract
Phishing emails remain a significant concern and a growing cybersecurity threat in online communication. They often bypass traditional filters due to their increasing sophistication. This study presents a comparative evaluation of machine learning (ML) models and transformer-based large language models (LLMs) for phishing email detection, with embedded URL analysis. It assessed ML training and LLM fine-tuning on both balanced and imbalanced datasets. We evaluated multiple ML models, including Random Forest, Logistic Regression, Support Vector Machine, Naïve Bayes, Gradient Boosting, Decision Tree, and K-Nearest Neighbors, alongside the transformer-based LLMs DistilBERT, ALBERT, BERT-Tiny, ELECTRA, MiniLM, and RoBERTa. To further enhance realism, phishing emails generated by LLMs were included in the evaluation. Across all configurations, both the ML models and the fine-tuned LLMs demonstrated robust performance. Random Forest achieved over 98% accuracy in both email detection and URL classification, and DistilBERT scored almost as high on both emails and URLs. Balancing the dataset led to slight accuracy gains in the ML models but minor decreases in the LLMs, likely due to their sensitivity to majority-class reductions during training. Overall, LLMs are highly effective at capturing complex language patterns, while traditional ML models remain efficient and require low computational resources. Combining both approaches through a hybrid or ensemble method could enhance phishing detection effectiveness.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
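
A minimal sketch of the classical-ML arm of such a comparison: TF-IDF features on toy email texts fed to two of the evaluated classifiers (the LLM fine-tuning arm is not reproduced):

```python
# TF-IDF text features -> Random Forest and Logistic Regression, on toy emails.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Your invoice for last month is attached.",
    "URGENT: verify your account now or it will be suspended!",
]
labels = [0, 1]   # 0 = legitimate, 1 = phishing (toy)

X = TfidfVectorizer().fit_transform(emails)
for clf in (RandomForestClassifier(random_state=6), LogisticRegression()):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X))
```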

17 pages, 2255 KB  
Article
Electromyography-Based Sign Language Recognition: A Low-Channel Approach for Classifying Fruit Name Gestures
by Kudratjon Zohirov, Mirjakhon Temirov, Sardor Boykobilov, Golib Berdiev, Feruz Ruziboev, Khojiakbar Egamberdiev, Mamadiyor Sattorov, Gulmira Pardayeva and Kuvonch Madatov
Signals 2025, 6(4), 50; https://doi.org/10.3390/signals6040050 - 25 Sep 2025
Abstract
This paper presents a method for recognizing sign language gestures corresponding to fruit names using electromyography (EMG) signals. The proposed system focuses on classification using a limited number of EMG channels, aiming to reduce the complexity of the classification process while maintaining high recognition accuracy. The dataset (DS) contains EMG signal data from 46 hearing-impaired people and descriptions of fruit names, including apple, pear, apricot, nut, cherry, and raspberry, in sign language (SL). Based on the presented DS, gesture movements were classified using five different classification algorithms: Random Forest (RF), k-Nearest Neighbors, Logistic Regression, Support Vector Machine, and neural networks. The algorithm giving the best result for gesture movements was then determined; the best classification result was obtained for the word cherry with the RF algorithm, achieving 97% accuracy.
(This article belongs to the Special Issue Advances in Signal Detecting and Processing)
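
To illustrate the low-channel idea, a sketch of per-window EMG feature extraction (root mean square and mean absolute value, two common descriptors; the paper's exact feature set is not stated in the abstract) on synthetic two-channel signals:

```python
# Windowed RMS and MAV features from a (n_samples, n_channels) EMG array.
import numpy as np

def window_features(emg, win=200):
    """Return (n_windows, 2 * n_channels) features from non-overlapping windows."""
    n_win = emg.shape[0] // win
    feats = []
    for i in range(n_win):
        seg = emg[i * win:(i + 1) * win]
        rms = np.sqrt((seg ** 2).mean(axis=0))   # root mean square per channel
        mav = np.abs(seg).mean(axis=0)           # mean absolute value per channel
        feats.append(np.concatenate([rms, mav]))
    return np.array(feats)

emg = np.random.default_rng(7).normal(size=(2000, 2))  # two-channel placeholder
print(window_features(emg).shape)                      # -> (10, 4)
```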

23 pages, 3914 KB  
Article
Machine Learning-Driven Early Productivity Forecasting for Post-Fracturing Multilayered Wells
by Ruibin Zhu, Ning Li, Guohua Liu, Fengjiao Qu, Changjun Long, Xin Wang, Shuzhi Xiu, Fei Ling, Qinzhuo Liao and Gensheng Li
Water 2025, 17(19), 2804; https://doi.org/10.3390/w17192804 - 24 Sep 2025
Abstract
Hydraulic fracturing technology significantly enhances reservoir conductivity by creating artificial fractures, serving as a crucial means for the economically viable development of low-permeability reservoirs. Accurate prediction of post-fracturing productivity is essential for optimizing fracturing parameter design and establishing scientific production strategies. However, current limitations in understanding post-fracturing production dynamics and the lack of efficient prediction methods severely constrain the evaluation of fracturing effectiveness and the adjustment of development plans. This study proposes a machine learning-based method for predicting post-fracturing productivity in multi-layer commingled production wells and validates its effectiveness on a key block from the PetroChina North China Huabei Oilfield Company. During data preprocessing, the three-sigma rule, median absolute deviation, and density-based spatial clustering of applications with noise were employed to detect outliers, while missing values were imputed using the K-nearest neighbors method. Feature selection was performed using the Pearson correlation coefficient and the variance inflation factor, resulting in the identification of twelve key parameters as input features. The coefficient of determination served as the evaluation metric, and model hyperparameters were optimized using grid search combined with cross-validation. To address the multi-layer commingled production challenge, seven distinct datasets incorporating production parameters were constructed based on four geological parameter partitioning methods: thickness ratio, porosity–thickness product ratio, permeability–thickness product ratio, and porosity–permeability–thickness product ratio. Twelve machine learning models were then trained. Through comparative analysis, the most suitable productivity prediction model for the block was selected, and the block's productivity patterns were revealed. The results show that, after training with block-partitioned data, the accuracy of all models improved, and further stratigraphic subdivision on top of block partitioning led the models to peak performance. However, data volume is a critical limiting factor: for blocks with insufficient data, stratigraphic subdivision instead results in a decline in prediction performance.
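
A sketch of two of the named preprocessing steps, K-nearest neighbors imputation and grid search with cross-validation scored by the coefficient of determination, on placeholder data:

```python
# KNN imputation of injected missing values, then hyperparameter grid search.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import KNNImputer
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(8)
X = rng.normal(size=(120, 12))            # twelve input features (placeholder)
X[rng.random(X.shape) < 0.05] = np.nan    # inject missing values
y = rng.normal(size=120)                  # productivity target (placeholder)

X_imp = KNNImputer(n_neighbors=5).fit_transform(X)
search = GridSearchCV(
    RandomForestRegressor(random_state=8),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=5,
    scoring="r2",   # coefficient of determination, as in the abstract
)
search.fit(X_imp, y)
print(search.best_params_, f"CV R^2: {search.best_score_:.3f}")
```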

21 pages, 2310 KB  
Article
Development of a Model for Detecting Spectrum Sensing Data Falsification Attack in Mobile Cognitive Radio Networks Integrating Artificial Intelligence Techniques
by Lina María Yara Cifuentes, Ernesto Cadena Muñoz and Rafael Cubillos Sánchez
Algorithms 2025, 18(10), 596; https://doi.org/10.3390/a18100596 - 24 Sep 2025
Abstract
Mobile Cognitive Radio Networks (MCRNs) have emerged as a promising solution to address spectrum scarcity by enabling dynamic access to underutilized frequency bands assigned to Primary or Licensed Users (PUs). These networks rely on Cooperative Spectrum Sensing (CSS) to identify available spectrum, but this collaborative approach also introduces vulnerabilities to security threats, most notably Spectrum Sensing Data Falsification (SSDF) attacks. In such attacks, malicious nodes deliberately report false sensing information, undermining the reliability and performance of the network. This paper investigates the application of machine learning techniques to detect and mitigate SSDF attacks in MCRNs, particularly considering the additional challenges introduced by node mobility. We propose a hybrid detection framework that integrates a reputation-based weighting mechanism with Support Vector Machine (SVM) and K-Nearest Neighbors (KNN) classifiers to improve detection accuracy and reduce the influence of falsified data. Experimental results on software-defined radio (SDR) demonstrate that the proposed method significantly enhances the system's ability to identify malicious behavior: it achieves high detection accuracy, reduces the rate of data falsification by approximately 5–20%, increases the probability of attack detection, and supports the dynamic creation of a blacklist to isolate malicious nodes. These results underscore the potential of combining machine learning with trust-based mechanisms to strengthen the security and reliability of mobile cognitive radio networks.
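
A loose, invented-numbers sketch of the reputation-weighting idea: each node's report is weighted by a trust score before fusion, and persistently disagreeing nodes drift toward a blacklist (the SVM/KNN classifiers are omitted):

```python
# Reputation-weighted fusion of cooperative sensing reports; all values are toy.
import numpy as np

reports = np.array([1, 1, 0, 1, 0])                  # 1 = "channel busy" per node
reputation = np.array([0.9, 0.8, 0.2, 0.85, 0.15])   # learned trust scores (toy)

fused = np.average(reports, weights=reputation)      # weighted soft decision
decision = int(fused >= 0.5)
print("fused score:", round(fused, 2), "-> decision:", decision)

# Nodes whose reports disagree with the fused decision lose reputation;
# below a threshold they become blacklist candidates.
reputation *= np.where(reports == decision, 1.0, 0.9)
print("blacklist candidates:", np.where(reputation < 0.2)[0])
```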

22 pages, 3646 KB  
Article
Machine Learning in the Classification of RGB Images of Maize (Zea mays L.) Using Texture Attributes and Different Doses of Nitrogen
by Thiago Lima da Silva, Fernanda de Fátima da Silva Devechio, Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Liliane Maria Romualdo Altão, Gabriel Pagin, Adriano Rogério Bruno Tech and Murilo Mesquita Baesso
AgriEngineering 2025, 7(10), 317; https://doi.org/10.3390/agriengineering7100317 - 23 Sep 2025
Abstract
Nitrogen fertilization is decisive for maize productivity, fertilizer use efficiency, and sustainability, which calls for fast and nondestructive nutritional diagnosis. This study evaluated the classification of maize plant nutritional status from red, green, and blue (RGB) leaf images using texture attributes. A greenhouse experiment was conducted under a completely randomized factorial design with four nitrogen doses, one maize hybrid (Pioneer 30F35), and four replicates, at two sampling times corresponding to distinct phenological stages, totaling thirty-two experimental units. Images were processed with the gray-level co-occurrence matrix computed at three distances (1, 3, and 5 pixels) and four orientations (0°, 45°, 90°, and 135°), yielding eight texture descriptors that served as inputs to five supervised classifiers: an artificial neural network, a support vector machine, k-nearest neighbors, a decision tree, and naive Bayes. The results indicated that the texture descriptors discriminated nitrogen doses with good performance and moderate computational cost, and that homogeneity, dissimilarity, and contrast were the most informative attributes. The artificial neural network showed the most stable performance at both stages, followed by the support vector machine and k-nearest neighbors, whereas the decision tree and naive Bayes were less suitable. Confusion matrices and receiver operating characteristic curves indicated greater separability for the omission and excess classes, with D1 standing out, and the patterns were consistent with the chemical analysis. Future work should include field validation, multiple seasons and genotypes, integration with spectral indices and multisensor data, application of model explainability techniques, and assessment of latency and scalability in operational scenarios.
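
The texture step maps directly onto scikit-image's GLCM utilities; a sketch at the stated distances and orientations, run on a synthetic leaf image:

```python
# GLCM texture descriptors at distances 1, 3, 5 px and four orientations.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(9).integers(0, 256, size=(64, 64), dtype=np.uint8)
glcm = graycomatrix(
    img,
    distances=[1, 3, 5],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=256,
    symmetric=True,
    normed=True,
)
for prop in ("homogeneity", "dissimilarity", "contrast"):
    print(prop, graycoprops(glcm, prop).mean())   # mean over distances/angles
```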

27 pages, 44538 KB  
Article
Short-Term Load Forecasting in the Greek Power Distribution System: A Comparative Study of Gradient Boosting and Deep Learning Models
by Md Fazle Hasan Shiblee and Paraskevas Koukaras
Energies 2025, 18(19), 5060; https://doi.org/10.3390/en18195060 - 23 Sep 2025
Abstract
Accurate short-term electricity load forecasting is essential for efficient energy management, grid reliability, and cost optimization. This study presents a comprehensive comparison of five supervised learning models using multivariate data from the Greek electricity market between 2015 and 2024: a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), a Gated Recurrent Unit (GRU), a hybrid CNN-LSTM architecture, and the Light Gradient Boosting Machine (LightGBM). The dataset incorporates hourly load, temperature, humidity, and holiday indicators. Extensive preprocessing was applied, including K-Nearest Neighbor (KNN) imputation, time-based feature extraction, and normalization. Models were trained using a 70:20:10 train–validation–test split and evaluated with standard performance metrics: MAE, MSE, RMSE, NRMSE, MAPE, and R2. The experimental findings show that LightGBM beat the deep learning (DL) models on all evaluation metrics, with the best MAE (69.12 MW), RMSE (101.67 MW), and MAPE (1.20%) and the highest R2 (0.9942) on the test set. It also outperformed models in the literature and ENTSO-E's real-world operational forecasts. Though LSTM performed well, particularly in capturing long-term dependencies, it performed slightly worse in high-variance periods. CNN, GRU, and the hybrid model demonstrated moderate results but tended to underfit or overfit in some circumstances. These findings highlight the efficacy of LightGBM in structured time-series forecasting tasks, offering a scalable and interpretable alternative to DL models. This study supports its potential for real-world deployment in smart/distribution grid applications and provides valuable insights into the trade-offs between accuracy, complexity, and generalization in load forecasting models.
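
A compact sketch of the LightGBM arm: calendar features from an hourly index feeding a gradient-boosted regressor, with synthetic load and temperature values (requires the lightgbm package):

```python
# Hourly calendar + weather features -> LightGBM regressor; data are synthetic.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
df = pd.DataFrame({
    "hour": idx.hour,
    "dayofweek": idx.dayofweek,
    "temp": np.random.default_rng(10).normal(12, 6, size=len(idx)),
}, index=idx)
df["load"] = 4000 + 500 * np.sin(2 * np.pi * df["hour"] / 24) + 20 * df["temp"]

train, test = df.iloc[:-168], df.iloc[-168:]   # hold out the final week
features = ["hour", "dayofweek", "temp"]
model = LGBMRegressor(random_state=10).fit(train[features], train["load"])
pred = model.predict(test[features])
print(f"MAE: {np.abs(pred - test['load']).mean():.1f} MW")
```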

16 pages, 3708 KB  
Article
Myoelectric and Inertial Data Fusion Through a Novel Attention-Based Spatiotemporal Feature Extraction for Transhumeral Prosthetic Control: An Offline Analysis
by Andrea Tigrini, Alessandro Mengarelli, Ali H. Al-Timemy, Rami N. Khushaba, Rami Mobarak, Mara Scattolini, Gaith K. Sharba, Federica Verdini, Ennio Gambi and Laura Burattini
Sensors 2025, 25(18), 5920; https://doi.org/10.3390/s25185920 - 22 Sep 2025
Abstract
This study proposes a feature extraction scheme that fuses accelerometric (ACC) and electromyographic (EMG) data to improve shoulder movement identification in individuals with transhumeral amputation, a group in which the clinical need for intuitive control strategies enabling reliable activation of full-arm prostheses is underinvestigated. A novel spatiotemporal warping feature extraction architecture was employed to realize EMG and ACC information fusion at the feature level. EMG and ACC data were collected from six participants with intact limbs and four participants with transhumeral amputation using an NI USB-6009 device at 1000 Hz to support the proposed feature extraction scheme. For each participant, a leave-one-trial-out (LOTO) approach was used for training and testing pattern recognition models for both the intact-limb (IL) and amputee (AMP) groups. The analysis revealed that the introduction of ACC information has a positive impact at window lengths (WLs) below 150 ms. A linear discriminant analysis (LDA) classifier exceeded 90% accuracy in every WL condition and for each group. Similar results were observed for an extreme learning machine (ELM), whereas k-nearest neighbors (kNN) and an autonomous learning multi-model classifier showed a mean accuracy below 87% for both the IL and AMP groups at different WLs. This indicates applicability across a large set of shallow pattern-recognition models usable in real scenarios. The present work lays the groundwork for future studies involving real-time validation of the proposed methodology on a larger population, acknowledging the current limitation of offline analysis.
(This article belongs to the Special Issue Advanced Sensors and AI Integration for Human–Robot Teaming)
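
A sketch of fusion at the feature level as the abstract describes it: EMG and ACC window descriptors are concatenated before classification with LDA. The features here are random placeholders, not the spatiotemporal warping descriptors:

```python
# Concatenated EMG + ACC window features -> LDA, cross-validated accuracy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
emg_feats = rng.normal(size=(300, 8))   # per-window EMG descriptors (placeholder)
acc_feats = rng.normal(size=(300, 4))   # per-window ACC descriptors (placeholder)
X = np.hstack([emg_feats, acc_feats])   # feature-level fusion
y = rng.integers(0, 5, size=300)        # shoulder-movement classes (placeholder)

print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
```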

17 pages, 1039 KB  
Article
A Federated Intrusion Detection System for Edge Environments Using Multi-Index Hashing and Attention-Based KNN
by Ying Liu, Xing Liu, Hao Yu, Bowen Guo and Xiao Liu
Symmetry 2025, 17(9), 1580; https://doi.org/10.3390/sym17091580 - 22 Sep 2025
Abstract
Edge computing offers low-latency and distributed processing for IoT applications but poses new security challenges due to limited resources and decentralized data. Intrusion detection systems (IDSs) are essential for real-time threat monitoring, yet traditional IDS frameworks often struggle in edge environments, failing to meet efficiency requirements. This paper presents an efficient intrusion detection framework that integrates spatiotemporal hashing, federated learning, and fast K-nearest neighbor (KNN) retrieval. A hashing neural network encodes network traffic into compact binary codes, enabling low-overhead similarity comparison via Hamming distance. To support scalable retrieval, multi-index hashing is applied for sublinear KNN search. Additionally, we propose an attention-guided federated aggregation strategy that dynamically adjusts client contributions, reducing communication costs. Our experiments on benchmark datasets demonstrate that the method achieves competitive detection accuracy with significantly lower computational, memory, and communication overhead, making it well suited for edge-based deployment.
(This article belongs to the Section Computer)
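
A sketch of the retrieval core: KNN over compact binary codes compared by Hamming distance. It is computed directly on 0/1 arrays here; a deployment would pack bits and use XOR with popcount, and multi-index hashing to avoid the linear scan:

```python
# Brute-force Hamming-distance KNN over binary codes; codes are random stand-ins
# for the outputs of the hashing network.
import numpy as np

rng = np.random.default_rng(12)
codes = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)   # stored traffic codes
query = rng.integers(0, 2, size=64, dtype=np.uint8)           # encoded query flow

hamming = (codes != query).sum(axis=1)   # Hamming distance to every stored code
k = 5
nearest = np.argsort(hamming)[:k]
print("top-5 ids:", nearest, "distances:", hamming[nearest])
```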

17 pages, 3940 KB  
Article
Research on the Prediction of Liquid Injection Volume and Leaching Rate for In Situ Leaching Uranium Mining Using the CNN–LSTM–LightGBM Model
by Zhifeng Liu, Zirong Jin, Yipeng Zhou, Zhenhua Wei and Huanyu Zhang
Processes 2025, 13(9), 3013; https://doi.org/10.3390/pr13093013 - 21 Sep 2025
Abstract
In traditional in situ leaching (ISL) uranium mining, the injection volume depends on technicians' on-site experience. Applying artificial intelligence technologies such as machine learning to analyze the relationship between injection volume and leaching rate in ISL uranium mining, thereby reducing interference from human factors, therefore holds significant guiding importance for production process control. This study proposes a novel uranium leaching rate prediction method based on a CNN–LSTM–LightGBM fusion model integrated with an attention mechanism. Ablation experiments demonstrate that the proposed fusion model outperforms its component models across three key metrics: Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE). Furthermore, comparative experiments reveal that the fusion model achieves superior performance on these metrics compared to six widely used machine learning methods, including the Multi-Layer Perceptron, Support Vector Regression, and K-Nearest Neighbors. Specifically, the model achieves an MAE of 0.085%, a MAPE of 0.833%, and an RMSE of 0.201%. This attention-enhanced fusion model provides technical support for production control in ISL uranium mining and offers valuable references for research on informatization and intelligentization in uranium mining operations.
(This article belongs to the Section AI-Enabled Process Engineering)
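
For reference, a worked computation of the three reported metrics on toy leaching-rate values:

```python
# MAE, MAPE, and RMSE on invented predictions; formulas only, not the model.
import numpy as np

y_true = np.array([1.20, 1.35, 1.10, 1.50])   # toy leaching rates (%)
y_pred = np.array([1.18, 1.40, 1.05, 1.48])

mae = np.abs(y_pred - y_true).mean()
mape = np.abs((y_pred - y_true) / y_true).mean() * 100
rmse = np.sqrt(((y_pred - y_true) ** 2).mean())
print(f"MAE {mae:.3f}  MAPE {mape:.2f}%  RMSE {rmse:.3f}")
```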
