Computers, Volume 13, Issue 12 (December 2024) – 43 articles

Cover Story: Catheter ablation therapy for atrial fibrillation (AF) has a higher recurrence rate as the duration of AF increases. In this study, we used contrast-enhanced computed tomography (CT) to classify AF into paroxysmal AF (PAF) and long-standing persistent AF (LSAF), which have different recurrence rates after catheter ablation. CT images of 30 PAF and 30 LSAF patients were input into six pretrained convolutional neural networks (CNNs) for binary classification, and the classification was visualized using saliency maps based on score-weighted class activation mapping (Score-CAM). The proposed method achieved a classification accuracy of 81.7%. The results suggest that the method classifies AF more accurately than physicians while focusing on the shape of the left atrium, consistent with physicians' judgment criteria.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 1070 KiB  
Article
Multi-Modal MR Image Segmentation Strategy for Brain Tumors Based on Domain Adaptation
by Qihong Yang, Ruijun Jing and Jiliang Mu
Computers 2024, 13(12), 347; https://doi.org/10.3390/computers13120347 - 19 Dec 2024
Viewed by 85
Abstract
In multimodal brain tumor MR image segmentation, large differences between image distributions invalidate the usual assumptions that the conditional distributions are similar when the marginal distributions of the source and target domains are similar, and that the marginal distributions are similar when the conditional distributions are similar. In addition, the network is usually trained on single-domain data, which biases its image representations toward the source domain when the target domain is unlabeled. For these reasons, a new multimodal brain tumor MR segmentation strategy based on domain adaptation is proposed in this study. First, the source domain targets for each modality are derived through clustering in the pre-training stage to select the target domain images with the strongest complementarity to the source domain, which are then used to produce pseudo labels. Second, feature adapters are proposed to improve feature alignment, and a network sensitive to both source and target domain images is designed to comprehensively leverage the multimodal image information. These measures mitigate the domain shift problem and improve the generalization ability of the model, enhancing the accuracy of multimodal brain tumor MR image segmentation. Full article
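
The pseudo-labelling step relies on clustering in the pre-training stage. As a rough illustration of the idea only (the authors cluster multimodal image features; this 1-D k-means on plain floats is an invented stand-in), the grouping can be sketched in plain Python:

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: returns final centers and, for each input
    value, the index of the cluster it was assigned to."""
    # Spread the initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Recompute each center as its cluster mean (keep old if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: abs(v - centers[c]))
              for v in values]
    return centers, labels

centers, labels = kmeans_1d([0.1, 0.2, 0.15, 5.0, 5.2, 4.9], k=2)
```

In the paper's setting the cluster assignments would then seed pseudo labels for the unlabeled target-domain images.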

24 pages, 2642 KiB  
Article
Identification of Scientific Texts Generated by Large Language Models Using Machine Learning
by David Soto-Osorio, Grigori Sidorov, Liliana Chanona-Hernández and Blanca Cecilia López-Ramírez
Computers 2024, 13(12), 346; https://doi.org/10.3390/computers13120346 - 19 Dec 2024
Viewed by 172
Abstract
Large language models (LLMs) are tools that help us in a variety of activities, from creating well-structured texts to quickly consulting information. However, because these new technologies are so easily accessible, many people use them for their own benefit without properly citing the original author. The educational sector can also be heavily compromised, because students may opt for a quick answer over understanding a specific topic in depth, considerably reducing their basic writing, editing, and reading comprehension skills. We therefore propose a model to identify texts produced by LLMs. To do so, we use natural language processing (NLP) and machine-learning algorithms to recognize texts that mask LLM misuse through different types of adversarial attacks, such as paraphrasing or translation from one language to another. The main contribution of this work is the identification of texts generated by large language models; to this end, several experiments were conducted in search of the best results, evaluated with the F1, accuracy, recall, and precision metrics, together with PCA and t-SNE diagrams to visualize the classification of each of the texts. Full article
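
The evaluation metrics named above all derive from a binary confusion matrix. The sketch below is a generic illustration, not the paper's code, with label 1 standing for "LLM-generated":

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for a binary task
    (1 = LLM-generated, 0 = human-written)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Six texts, one false negative and one false positive:
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
```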

35 pages, 534 KiB  
Article
Blockchain-Enabled Pension System Innovations: A Hungarian Case Study on Business Process Management Integration
by Dániel Kovács, Bálint Molnár and Viktor Weininger
Computers 2024, 13(12), 345; https://doi.org/10.3390/computers13120345 - 18 Dec 2024
Viewed by 237
Abstract
This paper explores the integration of Business Process Management (BPM) with blockchain technology to enhance pension systems, using Hungary as a case study. Specifically, it addresses scientific challenges related to data access management, regulatory compliance, and system scalability within blockchain-based pension frameworks. This study investigates how BPM can improve the transparency, efficiency, and security of blockchain applications in pension administration by optimizing workflows and automating compliance with regulations such as GDPR. By analyzing operational flow diagrams and implementing architectural models, this paper presents an innovative approach to pension management, demonstrating significant improvements in service quality and operational efficiency. Findings from this research provide empirical evidence of the benefits of BPM-enhanced blockchain systems, offering insights applicable to pension systems beyond the Hungarian context, including examples from other countries. Full article

18 pages, 1302 KiB  
Article
Applying Classification Techniques in Machine Learning to Predict Job Satisfaction of University Professors: A Sociodemographic and Occupational Perspective
by Carlos Alberto Espinosa-Pinos, Paúl Bladimir Acosta-Pérez and Camila Alessandra Valarezo-Calero
Computers 2024, 13(12), 344; https://doi.org/10.3390/computers13120344 - 17 Dec 2024
Viewed by 227
Abstract
This article investigates the factors that affect the job satisfaction of university teachers. A stratified sample of 400 teachers from 4 institutions (public and private) in Ecuador yielded a total of 1600 data points collected through online forms. The research followed a cross-sectional, quantitative design and used machine-learning classification and prediction techniques to analyze variables such as ethnic identity, field of knowledge, gender, number of children, job burnout, perceived stress, and occupational risk. The results indicate that the best classification model is a neural network, with a precision of 0.7304. The most significant variables for predicting the job satisfaction of university teachers are the number of children, the perceived stress, occupational risk, and burnout scores, and the province and city in which the surveyed teacher works; marital status, by contrast, does not contribute to the prediction. These findings highlight the need for inclusive policies and effective strategies to improve teacher well-being in the university academic environment. Full article

29 pages, 4651 KiB  
Article
Hybrid Vision Transformer and Convolutional Neural Network for Multi-Class and Multi-Label Classification of Tuberculosis Anomalies on Chest X-Ray
by Rizka Yulvina, Stefanus Andika Putra, Mia Rizkinia, Arierta Pujitresnani, Eric Daniel Tenda, Reyhan Eddy Yunus, Dean Handimulya Djumaryo, Prasandhya Astagiri Yusuf and Vanya Valindria
Computers 2024, 13(12), 343; https://doi.org/10.3390/computers13120343 - 17 Dec 2024
Viewed by 278
Abstract
Tuberculosis (TB), caused by Mycobacterium tuberculosis, remains a leading cause of global mortality. While TB detection can be performed through chest X-ray (CXR) analysis, numerous studies have leveraged AI to automate and enhance the diagnostic process. However, existing approaches often focus on partial or incomplete lesion detection, lacking comprehensive multi-class and multi-label solutions for the full range of TB-related anomalies. To address this, we present a hybrid AI model combining vision transformer (ViT) and convolutional neural network (CNN) architectures for efficient multi-class and multi-label classification of 14 TB-related anomalies. Using 133 CXR images from Dr. Cipto Mangunkusumo National Central General Hospital and 214 images from the NIH datasets, we tackled data imbalance with augmentation, class weighting, and focal loss. The model achieved an accuracy of 0.911, a loss of 0.285, and an AUC of 0.510. Given the complexity of handling not only multi-class but also multi-label data with imbalanced and limited samples, the AUC score reflects the challenging nature of the task rather than any shortcoming of the model itself. By classifying the most distinct TB-related labels in a single AI study, this research highlights the potential of AI to enhance both the accuracy and efficiency of detecting TB-related anomalies, offering valuable advancements in combating this global health burden. Full article
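
The focal loss used above to counter class imbalance down-weights well-classified examples so that rare, hard labels dominate training. The sketch below is a generic binary/multi-label formulation; the alpha and gamma defaults and function names are illustrative, not the paper's settings:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one label.
    p: predicted probability of the positive class, y: 0 or 1.
    With gamma=0 and alpha=1 this reduces to cross-entropy."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

def multi_label_focal_loss(probs, labels, alpha=0.25, gamma=2.0):
    """Mean focal loss over the anomaly labels of one CXR image."""
    return sum(focal_loss(p, y, alpha, gamma)
               for p, y in zip(probs, labels)) / len(labels)

# A confidently correct positive (p=0.9) contributes far less loss
# than a hard one (p=0.1):
easy = focal_loss(0.9, 1, alpha=1.0)
hard = focal_loss(0.1, 1, alpha=1.0)
```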

24 pages, 7294 KiB  
Article
Augmented Reality for Event Promotion
by Tiago Lameirão, Miguel Melo and Filipe Pinto
Computers 2024, 13(12), 342; https://doi.org/10.3390/computers13120342 - 16 Dec 2024
Viewed by 326
Abstract
This article presents the development of an augmented reality (AR) application aimed at promoting events in urban environments. The main goal of the project was to create an immersive experience that enhances user interaction with their surroundings, leveraging AR technology. The application was built using Django Rest Framework (DRF) for backend services and Unity for the AR functionalities and frontend. Key features include user registration and authentication, event viewing, interaction with virtual characters, and feedback on attended events, providing an engaging platform to promote urban events. The development process involved several stages, from requirements analysis and system architecture design to implementation and testing. A series of tests were performed, confirming that the application meets its objectives. These tests highlighted the system’s ability to enhance user interaction with urban environments and demonstrated its potential for commercialization. The results suggest that the AR application contributes to innovation in smart cities, offering a new avenue for promoting events and engaging local communities. Future work will focus on refining the user experience and expanding the app’s functionality to support more complex event scenarios. Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

18 pages, 6340 KiB  
Article
Identifying Bias in Deep Neural Networks Using Image Transforms
by Sai Teja Erukude, Akhil Joshi and Lior Shamir
Computers 2024, 13(12), 341; https://doi.org/10.3390/computers13120341 - 15 Dec 2024
Viewed by 355
Abstract
CNNs have become one of the most commonly used computational tools in the past two decades. One of the primary downsides of CNNs is that they work as a “black box”, where the user cannot necessarily know how the image data are analyzed, and therefore needs to rely on empirical evaluation to test the efficacy of a trained CNN. This can lead to hidden biases that affect the performance evaluation of neural networks, but are difficult to identify. Here we discuss examples of such hidden biases in common and widely used benchmark datasets, and propose techniques for identifying dataset biases that can affect the standard performance evaluation metrics. One effective approach to identify dataset bias is to perform image classification by using merely blank background parts of the original images. However, in some situations, a blank background in the images is not available, making it more difficult to separate foreground or contextual information from the bias. To overcome this, we propose a method to identify dataset bias without the need to crop background information from the images. The method is based on applying several image transforms to the original images, including Fourier transform, wavelet transforms, median filter, and their combinations. These transforms are applied to recover background bias information that CNNs use to classify images. These transformations affect the contextual visual information in a different manner than they affect the systemic background bias. Therefore, the method can distinguish between contextual information and the bias, and can reveal the presence of background bias even without the need to separate sub-image parts from the blank background of the original images. The code used in the experiments is publicly available. Full article
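
As an illustration of one of the transforms listed above, a 3×3 median filter suppresses isolated pixel-level artifacts while leaving smooth context untouched, which is exactly the asymmetry the method exploits. This is a stand-alone sketch in plain Python, not the authors' implementation:

```python
def median_filter3(img):
    """3x3 median filter on a 2D grayscale image (list of lists),
    with edge replication at the borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), h - 1)
                    jj = min(max(j + dj, 0), w - 1)
                    window.append(img[ii][jj])
            window.sort()
            out[i][j] = window[4]  # median of the 9 window values
    return out

# A single bright pixel (a possible systemic artifact) is removed,
# while the constant background survives unchanged:
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
flat = median_filter3(img)
```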
(This article belongs to the Special Issue Feature Papers in Computers 2024)

16 pages, 240 KiB  
Article
A Comparative Study of Sentiment Analysis on Customer Reviews Using Machine Learning and Deep Learning
by Logan Ashbaugh and Yan Zhang
Computers 2024, 13(12), 340; https://doi.org/10.3390/computers13120340 - 15 Dec 2024
Viewed by 388
Abstract
Sentiment analysis is a key technique in natural language processing that enables computers to understand human emotions expressed in text. It is widely used in applications such as customer feedback analysis, social media monitoring, and product reviews. However, sentiment analysis of customer reviews presents unique challenges, including the need for large datasets and the difficulty in accurately capturing subtle emotional nuances in text. In this paper, we present a comparative study of sentiment analysis on customer reviews using both deep learning and traditional machine learning techniques. The deep learning models include Convolutional Neural Network (CNN) and Recursive Neural Network (RNN), while the machine learning methods consist of Logistic Regression, Random Forest, and Naive Bayes. Our dataset is composed of Amazon product reviews, where we utilize the star rating as a proxy for the sentiment expressed in each review. Through comprehensive experiments, we assess the performance of each model in terms of accuracy and effectiveness in detecting sentiment. This study provides valuable insights into the strengths and limitations of both deep learning and traditional machine learning approaches for sentiment analysis. Full article
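
One of the traditional baselines above, Naive Bayes, is simple enough to sketch end-to-end in plain Python with multinomial counts and Laplace smoothing. The tiny review set and whitespace tokenization are illustrative only, not the Amazon dataset or the paper's pipeline:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns the pieces of a
    multinomial Naive Bayes model with add-one smoothing."""
    labels = Counter(lbl for _, lbl in docs)
    word_counts = {lbl: Counter() for lbl in labels}
    for tokens, lbl in docs:
        word_counts[lbl].update(tokens)
    vocab = {w for c in word_counts.values() for w in c}
    return labels, word_counts, vocab, len(docs)

def predict_nb(model, tokens):
    labels, word_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for lbl, cnt in labels.items():
        lp = math.log(cnt / n)  # log prior
        total = sum(word_counts[lbl].values())
        for w in tokens:
            if w in vocab:  # ignore words never seen in training
                lp += math.log((word_counts[lbl][w] + 1)
                               / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best

reviews = [("great product loved it".split(), "pos"),
           ("terrible broke after a day".split(), "neg"),
           ("loved the quality great value".split(), "pos"),
           ("awful terrible waste".split(), "neg")]
model = train_nb(reviews)
label = predict_nb(model, "great quality".split())
```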
23 pages, 8899 KiB  
Article
Loading Frequency Classification in Shape Memory Alloys: A Machine Learning Approach
by Dmytro Tymoshchuk, Oleh Yasniy, Pavlo Maruschak, Volodymyr Iasnii and Iryna Didych
Computers 2024, 13(12), 339; https://doi.org/10.3390/computers13120339 - 14 Dec 2024
Viewed by 369
Abstract
This paper investigates the use of machine learning methods to predict the loading frequency of shape memory alloys (SMAs) based on experimental data. SMAs, in particular nickel-titanium (NiTi) alloys, have unique properties that allow them to restore their original shape after significant deformation. The frequency of loading significantly affects the functional characteristics of SMAs. Experimental data were obtained from cyclic tensile tests of a 1.5 mm diameter Ni55.8Ti44.2 wire at different loading frequencies (0.1, 0.5, 1.0, and 5.0 Hz). Various machine learning methods were used to predict the loading frequency f (Hz) based on input parameters such as stress σ (MPa), number of cycles N, strain ε (%), and loading–unloading stage: boosted trees, random forest, support vector machines, k-nearest neighbors, and artificial neural networks of the MLP type. Experimental data of 100–140 load–unload cycles for four load frequencies were used for training. The dataset contained 13,365 elements. The results showed that the MLP neural network model demonstrated the highest accuracy in load frequency classification. The boosted trees and random forest models also performed well, although slightly below MLP. The SVM method also performed quite well. The KNN method showed the worst results among all models. Additional testing of the MLP model on cycles that were not included in the training data (200th, 300th, and 1035th cycles) showed that the model retains high efficiency in predicting load frequency, although the accuracy gradually decreases on later cycles due to the accumulation of structural changes in the material. Full article
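
The k-nearest-neighbors baseline compared above can be sketched in a few lines: classify a new (stress, strain) observation by a majority vote among its closest training points. The training values below are invented toy numbers, not the paper's dataset:

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """k-nearest-neighbours vote. train: list of (features, label)
    pairs with tuple-of-float features; plain Euclidean distance."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda fl: dist(fl[0], x))[:k]
    votes = Counter(lbl for _, lbl in nearest)
    return votes.most_common(1)[0][0]

# Toy (stress MPa, strain %) points labelled with loading frequency (Hz):
train = [((400.0, 4.0), 0.1), ((410.0, 4.1), 0.1),
         ((430.0, 3.0), 1.0), ((445.0, 2.9), 1.0),
         ((480.0, 2.0), 5.0), ((475.0, 2.1), 5.0)]
freq = knn_predict(train, (408.0, 4.05), k=3)
```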

18 pages, 5030 KiB  
Article
Design and Development of a Low-Cost Educational Platform for Investigating Human-Centric Lighting (HCL) Settings
by George K. Adam and Aris Tsangrassoulis
Computers 2024, 13(12), 338; https://doi.org/10.3390/computers13120338 - 14 Dec 2024
Viewed by 279
Abstract
The design of reliable and accurate indoor lighting control systems for LEDs' (light-emitting diodes) color temperature and brightness, in an effort to affect human circadian rhythms, has attracted growing attention in the last few years. However, this is quite challenging since parameters, such as the melanopic equivalent daylight illuminance (mEDI), have to be evaluated in real time, using illuminance values and the spectrum of incident light. In this work, to address these issues, a prototype platform has been built based on the low-cost and low-power Arduino UNO R4 Wi-Fi BLE (Bluetooth Low Energy) board, which facilitates experiments with a new control approach for LEDs' correlated color temperature (CCT). Together with the aforementioned platform, the methodology for mEDI calculation using an 11-channel multi-spectral sensor is presented. With proper calibration of the sensor, the visible spectrum can be reconstructed with a resolution of 1 nm, making the estimation of mEDI more accurate. Full article
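
The mEDI calculation can be sketched as a weighted sum over the reconstructed 1 nm spectrum. This is a hedged illustration, not the paper's procedure: the melanopic action spectrum values are assumed inputs (the tabulated CIE S 026 curve is not reproduced here), and the 1.3262 mW/m² per melanopic lux divisor is the CIE S 026 conversion constant for the D65 reference illuminant, stated here as an assumption:

```python
def melanopic_edi(spd, s_mel, d_lambda=1.0):
    """Melanopic equivalent daylight illuminance (lux).
    spd: spectral irradiance in W/(m^2 nm) at d_lambda-nm steps,
    s_mel: melanopic sensitivity (0..1) at the same wavelengths."""
    # Melanopic irradiance: spectrum weighted by the action spectrum
    e_mel = sum(e * s for e, s in zip(spd, s_mel)) * d_lambda  # W/m^2
    # Divide by the D65 melanopic efficacy constant (CIE S 026)
    return e_mel / 1.3262e-3

# Toy three-sample spectrum with sensitivity peaking in the middle:
medi = melanopic_edi([0.001, 0.001, 0.001], [0.0, 1.0, 0.0])
```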

13 pages, 615 KiB  
Article
Wearable Sensor-Based Behavioral User Authentication Using a Hybrid Deep Learning Approach with Squeeze-and-Excitation Mechanism
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Computers 2024, 13(12), 337; https://doi.org/10.3390/computers13120337 - 14 Dec 2024
Viewed by 303
Abstract
Behavior-based user authentication has arisen as a viable method for strengthening cybersecurity in an age of pervasive wearable and mobile technologies. This research introduces an innovative approach for ongoing user authentication via behavioral biometrics obtained from wearable sensors. We present a hybrid deep learning network called SE-DeepConvNet, which integrates a squeeze-and-excitation (SE) method to proficiently model and authenticate user behavior characteristics. Our methodology utilizes data collected by wearable sensors, such as accelerometers, gyroscopes, and magnetometers, to obtain a thorough behavioral profile. The suggested network design integrates convolutional neural networks for spatial feature extraction, while the SE blocks improve feature identification by flexibly recalibrating channel-wise feature responses. Experiments performed on two datasets, HMOG and USC-HAD, indicate the efficacy of our technique across different tasks. In the HMOG dataset, SE-DeepConvNet attains a minimal equal error rate (EER) of 0.38% and a maximum accuracy of 99.78% for the Read_Walk activity. Our model achieves outstanding authentication (0% EER, 100% accuracy) for various walking activities in the USC-HAD dataset, encompassing intricate situations such as ascending and descending stairs. These findings markedly exceed existing deep learning techniques, demonstrating the promise of our technology for secure and inconspicuous continuous authentication in wearable devices. The suggested approach demonstrates the potential for use in individual device security, access management, and ongoing uniqueness verification in sensitive settings. Full article
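
The squeeze-and-excitation recalibration at the heart of SE-DeepConvNet can be sketched framework-free: global-average-pool each channel ("squeeze"), pass the result through a small bottleneck with ReLU and sigmoid ("excitation"), and rescale each channel. The weight shapes and values below are illustrative, not trained parameters:

```python
import math

def se_recalibrate(channels, w1, w2):
    """Squeeze-and-excitation over per-channel feature maps.
    channels: list of C feature maps (flat lists of floats).
    w1: C x C/r reduction weights, w2: C/r x C expansion weights."""
    # Squeeze: global average pooling per channel
    z = [sum(c) / len(c) for c in channels]
    # Excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    scale = [1.0 / (1.0 + math.exp(-sum(hidden[j] * w2[j][k]
                                        for j in range(len(hidden)))))
             for k in range(len(w2[0]))]
    # Recalibrate: scale each channel by its learned gate
    return [[v * scale[k] for v in channels[k]]
            for k in range(len(channels))]

out = se_recalibrate([[1.0, 1.0], [2.0, 2.0]],
                     [[1.0], [1.0]],    # w1: 2 channels -> 1 (reduction)
                     [[0.0, 10.0]])     # w2: 1 -> 2 channels (expansion)
```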
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))

39 pages, 4608 KiB  
Review
The YOLO Framework: A Comprehensive Review of Evolution, Applications, and Benchmarks in Object Detection
by Momina Liaqat Ali and Zhou Zhang
Computers 2024, 13(12), 336; https://doi.org/10.3390/computers13120336 - 14 Dec 2024
Viewed by 432
Abstract
This paper provides a comprehensive review of the YOLO (You Only Look Once) framework up to its latest version, YOLO 11. As a state-of-the-art model for object detection, YOLO has revolutionized the field by achieving an optimal balance between speed and accuracy. The review traces the evolution of YOLO variants, highlighting key architectural improvements, performance benchmarks, and applications in domains such as healthcare, autonomous vehicles, and robotics. It also evaluates the framework’s strengths and limitations in practical scenarios, addressing challenges like small object detection, environmental variability, and computational constraints. By synthesizing findings from recent research, this work identifies critical gaps in the literature and outlines future directions to enhance YOLO’s adaptability, robustness, and integration into emerging technologies. This review provides researchers and practitioners with valuable insights to drive innovation in object detection and related applications. Full article
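
A quantity underlying most of the detection benchmarks discussed in the review is the intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch (generic, not tied to any particular YOLO release):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height, clamped at zero for disjoint boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

Detection metrics such as mAP@0.5 count a prediction as correct when its IoU with a ground-truth box exceeds the threshold.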
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

18 pages, 2633 KiB  
Article
Software Reliability Prediction Based on Recurrent Neural Network and Ensemble Method
by Wafa Alshehri, Salma Kammoun Jarraya and Arwa Allinjawi
Computers 2024, 13(12), 335; https://doi.org/10.3390/computers13120335 - 13 Dec 2024
Viewed by 301
Abstract
Software reliability is a crucial factor in determining software quality quantitatively. It is also used to estimate the software testing duration. In software reliability testing, traditional parametric software reliability growth models (SRGMs) are effectively used. Nevertheless, a single parametric model cannot provide accurate predictions in all cases. Moreover, non-parametric models have proven to be efficient for predicting software reliability as alternatives to parametric models. In this paper, we adopt a deep learning method for software reliability testing in computer vision systems, focusing on critical computer vision applications that need high reliability. We propose a new deep learning-based model, built on an ensemble method, to improve the performance of software reliability testing. The experimental results show that the new model architecture offers fairly accurate predictive capability compared with existing single neural network (NN)-based models. Full article
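
A simple averaging ensemble, one common way to combine base predictors as described above, can be sketched as follows; the lambda base models are toy stand-ins for individually trained networks, not the paper's architecture:

```python
def ensemble_predict(models, x):
    """Average the outputs of several base predictors.
    models: list of callables mapping an input to a float prediction."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy base models predicting cumulative failures at test time t;
# averaging cancels out their individual biases:
m1 = lambda t: 10.0 + 2.0 * t   # over-estimates
m2 = lambda t: 8.0 + 2.0 * t    # under-estimates
m3 = lambda t: 9.0 + 2.0 * t
cum_failures = ensemble_predict([m1, m2, m3], 5.0)
```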

16 pages, 785 KiB  
Article
Information and Computing Ecosystem’s Architecture for Monitoring and Forecasting Natural Disasters
by Valeria Gribova and Dmitry Kharitonov
Computers 2024, 13(12), 334; https://doi.org/10.3390/computers13120334 - 13 Dec 2024
Viewed by 359
Abstract
Monitoring natural phenomena using a variety of methods to predict disasters is a trend that is growing over time. However, there is a great disunity among methods and means of data analysis, formats and interfaces of storing and providing data, and software and information systems for data processing. As part of a large project to create a planetary observatory that combines data from spatially distributed geosphere monitoring systems, the efforts of leading institutes of the Russian Academy of Sciences are also aimed at creating an information and computing ecosystem to unite researchers processing and analyzing the data obtained. This article provides a brief overview of the current state of publications on information ecosystems in various applied fields, and it also proposes a concept for an ecosystem on a multiagent basis with unique technical features. The concept of the ecosystem includes the following: the ability to function in a heterogeneous environment on federal principles, the parallelization of data processing between agents using Petri nets as a mechanism ensuring the correct execution of data processing scenarios, the concept of georeferenced alarm events requiring ecosystem reactions and possible notification of responsible persons, and multilevel information protection allowing data owners to control access at each stage of information processing. Full article
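
The Petri-net mechanism mentioned in the concept can be sketched minimally: a transition fires only when every one of its input places holds enough tokens, which is what enforces the correct ordering of data-processing steps between agents. Place and transition names below are illustrative, not from the proposed ecosystem:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds at least
    the required number of tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs.
    Returns a new marking; the original is left untouched."""
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# An agent may only start analysis once raw data has been ingested:
ingest = {"in": {"raw": 1}, "out": {"staged": 1}}
analyze = {"in": {"staged": 1}, "out": {"report": 1}}
m0 = {"raw": 1}
m1 = fire(m0, ingest)    # raw -> staged
m2 = fire(m1, analyze)   # staged -> report
```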
(This article belongs to the Section Cloud Continuum and Enabled Applications)

34 pages, 1717 KiB  
Review
Unveiling the Dynamic Landscape of Digital Forensics: The Endless Pursuit
by Muhammad Sharjeel Zareen, Baber Aslam, Shahzaib Tahir, Imran Rasheed and Fawad Khan
Computers 2024, 13(12), 333; https://doi.org/10.3390/computers13120333 - 11 Dec 2024
Viewed by 474
Abstract
The invention of transistors in the 1940s marked the beginning of a technological revolution that has impacted every aspect of our lives. However, along with the positive advancements, the malicious use of computing technologies has become a serious concern. The international community has been actively collaborating to develop digital forensics techniques to combat the unlawful use of these technologies. However, the evolution of digital forensics has often lagged behind the rapid developments in computing technologies. In addition to their harmful use, computing devices are increasingly involved in crime scenes and accidents, necessitating digital forensics to reconstruct events. This paper provides a comprehensive review of the development of computing technologies from the 1940s to the present, highlighting the trends in their malicious use and the corresponding advancements in digital forensics. The paper also discusses various institutes, laboratories, organizations, and training setups established at national and international levels for digital forensics purposes. Furthermore, it explores the initial legislations related to computer-related crimes and the standards associated with digital forensics. These reviews and discussions conclude by identifying the shortfalls in digital forensics and propose an all-inclusive digital forensics process model that addresses these shortfalls while complying with international standards and meeting the regulatory and legal requirements of digital forensics. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)

13 pages, 2613 KiB  
Article
Player Performance Analysis in Table Tennis Through Human Action Recognition
by Kangnan Dong and Wei Qi Yan
Computers 2024, 13(12), 332; https://doi.org/10.3390/computers13120332 - 11 Dec 2024
Viewed by 398
Abstract
This paper aims to enhance the effectiveness of table tennis coaching and player performance analysis through human action recognition by using deep learning. In the field of video analysis, human action recognition has emerged as a highly researched area. Beyond post-session analysis, it has the potential for real-time applications, such as providing instant feedback or comparing ideal motions with actual player movements. However, the complexity of human actions presents significant challenges. To address these issues, in this paper, we combine the latest computer vision and deep learning algorithms to accurately identify and classify a few table tennis strokes in human action recognition. Through an in-depth review of existing methods, we develop a high-precision offline method for player action recognition. Our experimental results show that the proposed method achieves an average accuracy of 99.85% in recognizing six distinct table tennis actions based on our own dataset. Full article
24 pages, 3859 KiB  
Article
A Coordination Approach to Support Crowdsourced Software-Design Process
by Ohoud Alhagbani and Sultan Alyahya
Computers 2024, 13(12), 331; https://doi.org/10.3390/computers13120331 - 7 Dec 2024
Viewed by 496
Abstract
Crowdsourcing software design (CSD) is the completion of specific software-design tasks on behalf of a client by a large, unspecified group of external individuals who have the specialized knowledge required by an open call. Although current CSD platforms have provided features to improve coordination in the CSD process (such as email notifications, chat, and announcements), these features are insufficient to solve the coordination limitations. A lack of appropriate coordination support in CSD activities may cause delays and missed opportunities for participants, and thus the best quality of design contest results may not be guaranteed. This research aims to support the effective management of the CSD process through identifying the key activity dependencies among participants in CSD platforms and designing a set of process models to provide coordination support through managing these activities. In order to do this, a five-stage approach was used: First, the current CSD process was investigated by reviewing 13 CSD platforms. Second, the review resulted in the identification of 17 possible suggestions to improve CSD. These suggestions were evaluated in stage 3 through distributing a survey to 41 participants who had experience in using platforms in the field of CSD. In stage 4, we designed ten process models that could meet the requirements of the suggestions, while in stage 5, we evaluated these process models through interviews with domain experts. The results show that coordination support in the activities of the CSD can make valuable contributions to the development of CSD platforms. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
4 pages, 156 KiB  
Editorial
Editorial “Blockchain Technology—A Breakthrough Innovation for Modern Industries”
by Nino Adamashvili, Caterina Tricase, Otar Zumburidze, Radu State and Roberto Tonelli
Computers 2024, 13(12), 330; https://doi.org/10.3390/computers13120330 - 7 Dec 2024
Viewed by 330
Abstract
In June 2022, the Italian national project PRIN (Research Projects of National Relevance), W [...] Full article
28 pages, 1185 KiB  
Review
Integrating Blockchains with the IoT: A Review of Architectures and Marine Use Cases
by Andreas Polyvios Delladetsimas, Stamatis Papangelou, Elias Iosif and George Giaglis
Computers 2024, 13(12), 329; https://doi.org/10.3390/computers13120329 - 6 Dec 2024
Viewed by 575
Abstract
This review examines the integration of blockchain technology with the IoT in the Marine Internet of Things (MIoT) and Internet of Underwater Things (IoUT), with applications in areas such as oceanographic monitoring and naval defense. These environments present distinct challenges, including a limited communication bandwidth, energy constraints, and secure data handling needs. Enhancing blockchain-based IoT (BIoT) systems requires a strategic selection of computing paradigms, such as edge and fog computing, and lightweight nodes to reduce latency and improve data processing in resource-limited settings. While a blockchain can improve data integrity and security, it can also introduce complexities, including interoperability issues, high energy consumption, standardization challenges, and costly transitions from legacy systems. The solutions reviewed here include lightweight consensus mechanisms to reduce computational demands. They also utilize established platforms, such as Ethereum and Hyperledger, or custom blockchains designed to meet marine-specific requirements. Additional approaches incorporate technologies such as fog and edge layers, software-defined networking (SDN), the InterPlanetary File System (IPFS) for decentralized storage, and AI-enhanced security measures, all adapted to each application’s needs. Future research will need to prioritize scalability, energy efficiency, and interoperability for effective BIoT deployment. Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
19 pages, 2545 KiB  
Article
Distinguishing Human Journalists from Artificial Storytellers Through Stylistic Fingerprints
by Van Hieu Tran, Yakub Sebastian, Asif Karim and Sami Azam
Computers 2024, 13(12), 328; https://doi.org/10.3390/computers13120328 - 5 Dec 2024
Viewed by 773
Abstract
Background: Artificial intelligence poses a critical challenge to the authenticity of journalistic documents. Objectives: This research proposes a method to automatically identify AI-generated news articles based on various stylistic features. Methods/Approach: We used machine learning algorithms and trained five classifiers to distinguish journalistic news articles from their AI-generated counterparts based on various lexical, syntactic, and readability features. BERTopic was used to extract salient keywords from these articles, which were then used to prompt Google’s Gemini to generate new artificial articles on the same topic. Results: The Random Forest classifier performed the best on the task (accuracy = 98.3%, precision = 0.984, recall = 0.983, and F1-score = 0.983). Random Forest feature importance, Analysis of Variance (ANOVA), Mutual Information, and Recursive Feature Elimination revealed the top five important features: sentence length range, paragraph length coefficient of variation, verb ratio, sentence complex tags, and paragraph length range. Conclusions: This research introduces an innovative approach to prompt engineering using the BERTopic modelling technique and identifies key stylistic features to distinguish AI-generated content from human-generated content. Therefore, it contributes to the ongoing efforts to combat disinformation, enhancing the credibility of content in various industries, such as academic research, education, and journalism. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
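Surface features such as "sentence length range" and "paragraph length coefficient of variation" are cheap to compute from raw text. The sketch below is a rough illustration of that kind of feature extraction, with a naive regex sentence splitter; the paper's exact tokenization and feature definitions are not specified here, so treat the details as assumptions.

```python
import re
import statistics

def stylistic_features(text):
    """A few surface features of the kind the study ranks highly
    (naive splitting; the paper's exact tokenization is not given here)."""
    # Split sentences after ., ! or ? followed by whitespace (naive).
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    # Treat blank-line-separated chunks as paragraphs.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sent_lens = [len(s.split()) for s in sentences]
    para_lens = [len(p.split()) for p in paragraphs]
    feats = {
        "sentence_length_range": max(sent_lens) - min(sent_lens),
        "paragraph_length_range": max(para_lens) - min(para_lens),
    }
    if len(para_lens) > 1:  # coefficient of variation needs >= 2 paragraphs
        feats["paragraph_length_cv"] = (statistics.stdev(para_lens)
                                        / statistics.mean(para_lens))
    return feats

sample = ("Short one. This sentence is quite a bit longer than the first.\n\n"
          "Second paragraph here.")
print(stylistic_features(sample)["sentence_length_range"])   # → 8
```

Feature vectors like this would then feed a classifier such as the Random Forest reported in the abstract.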
26 pages, 677 KiB  
Review
Exploring Data Analysis Methods in Generative Models: From Fine-Tuning to RAG Implementation
by Bogdan Mihai Guțu and Nirvana Popescu
Computers 2024, 13(12), 327; https://doi.org/10.3390/computers13120327 - 5 Dec 2024
Viewed by 563
Abstract
The exponential growth in data from technological advancements has created opportunities across fields like healthcare, finance, and social media, but sensitive data raise security and privacy challenges. Generative models offer solutions by modeling complex data and generating synthetic data, making them useful for the analysis of large private datasets. This article is a review of data analysis techniques based on generative models, with a focus on large language models (LLMs). It covers the strengths, limitations, and applications of methods like the fine-tuning of LLMs and retrieval-augmented generation (RAG). This study consolidates, analyzes, and interprets the findings from the literature to provide a coherent overview of the current research landscape on this topic, aiming to guide effective, privacy-conscious data analysis and exploring future improvements, especially for low-resource languages. Full article
17 pages, 1195 KiB  
Review
Exploring the Design for Wearability of Wearable Devices: A Scoping Review
by Yeo Weon Seo, Valentina La Marca, Animesh Tandon, Jung-Chih Chiao and Colin K. Drummond
Computers 2024, 13(12), 326; https://doi.org/10.3390/computers13120326 - 5 Dec 2024
Viewed by 589
Abstract
Wearable smart devices have become ubiquitous in modern society, extensively researched for their health monitoring capabilities and convenience features. However, the “wearability” of these devices remains a relatively understudied area, particularly in terms of design informed by clinical trials. Wearable devices possess significant potential to enhance daily life, yet their success depends on understanding and validating the design factors that influence comfort, usability, and seamless integration into everyday routines. This review aimed to evaluate the “wearability” of smart devices through a mixed-methods scoping literature review. By analyzing studies on comfort, usability, and daily integration, it sought to identify design improvements and research gaps to enhance user experience and system design. From an initial pool of 130 publications (1998–2024), 19 studies met the inclusion criteria. The review identified three significant outcomes: (1) a lack of standardized assessment methods, (2) the predominance of qualitative over quantitative assessments, and (3) limited utility of findings for informing design. Although qualitative studies provide valuable insights, the absence of quantitative research hampers the development of validated, generalizable design criteria. This underscores the urgent need for future studies to adopt robust quantitative methodologies to better assess wearability and inform evidence-based design strategies. Full article
(This article belongs to the Special Issue Wearable Computing and Activity Recognition)
25 pages, 1610 KiB  
Article
A Novel End-to-End Provenance System for Predictive Maintenance: A Case Study for Industrial Machinery Predictive Maintenance
by Emrullah Gultekin and Mehmet S. Aktas
Computers 2024, 13(12), 325; https://doi.org/10.3390/computers13120325 - 4 Dec 2024
Viewed by 458
Abstract
In this study, we address the critical gap in predictive maintenance systems regarding the absence of a robust provenance system and specification. To tackle this issue, we propose a provenance system based on the PROV-O schema, designed to enhance explainability, accountability, and transparency in predictive maintenance processes. Our framework facilitates the collection, processing, recording, and visualization of provenance data, integrating them seamlessly into these systems. We developed a prototype to evaluate the effectiveness of our approach and conducted comprehensive user studies to assess the system’s usability. Participants found the extended PROV-O structure valuable, with improved task completion times. Furthermore, performance tests demonstrated that our system manages high workloads efficiently, with minimal overhead. The contributions of this study include the design of a provenance system tailored for predictive maintenance and a specification that ensures scalability and efficiency. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2024 (ICCSA 2024))
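PROV-O, the schema the abstract builds on, models provenance as Entities, Activities, and Agents linked by relations such as prov:used and prov:wasGeneratedBy. The toy maintenance-flavoured record below is loosely PROV-JSON-shaped for illustration only; the `ex:` identifiers are invented, and this is not the paper's extended structure.

```python
import json

# Toy provenance record, loosely PROV-JSON-shaped: a sensor reading (entity)
# is used by a prediction run (activity), which generates a maintenance alert;
# the predictive model is recorded as a software agent.
record = {
    "entity": {
        "ex:vibration_reading_42": {"prov:type": "sensor_sample"},
        "ex:maintenance_alert_9": {"prov:type": "alert"},
    },
    "activity": {
        "ex:failure_prediction_7": {"prov:startTime": "2024-11-05T10:00:00Z"},
    },
    "agent": {
        "ex:rul_model_v3": {"prov:type": "prov:SoftwareAgent"},
    },
    "used": {
        "_:u1": {"prov:activity": "ex:failure_prediction_7",
                 "prov:entity": "ex:vibration_reading_42"},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:entity": "ex:maintenance_alert_9",
                 "prov:activity": "ex:failure_prediction_7"},
    },
}
print(json.dumps(record, indent=2))
```

Records in this shape can be queried to answer the accountability questions the abstract raises, e.g. "which readings fed the prediction behind this alert?".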
14 pages, 270 KiB  
Article
The Effects of Adaptive Gamification in Science Learning: A Comparison Between Traditional Inquiry-Based Learning and Gender Differences
by Alkinoos-Ioannis Zourmpakis, Michail Kalogiannakis and Stamatios Papadakis
Computers 2024, 13(12), 324; https://doi.org/10.3390/computers13120324 - 4 Dec 2024
Viewed by 543
Abstract
Gamification has become a topic of interest for researchers and educators, particularly in science education, in the last few years. Students of all educational levels have consistently faced challenges when grasping scientific concepts. However, the effectiveness of gamification, especially in terms of academic performance, has shown mixed results. This has led researchers to explore a new alternative approach, adaptive gamification. Our study compared the effects of adaptive gamification with traditional inquiry-based learning. Two classes of 9-year-old students participated, with the experimental group using adaptive gamification and the control group following a more conventional teaching approach using inquiry-based lessons and experiments. Both groups were tested before and after the lessons, and their results were analyzed using SPSS. The findings revealed that while both groups showed a significant difference after the lessons, the experimental group had significantly higher scores than the control group. Particularly significant results were observed regarding learning improvements based on students’ gender, with female and male students in the experimental group demonstrating significant improvement. In contrast, in the control group, only the male students displayed significant learning improvement. This research contributes significantly to the relatively new field of adaptive gamification in science education and the improvement of students’ science learning, particularly in the context of gender differences. Full article
15 pages, 1050 KiB  
Article
Siamese Network-Based Lightweight Framework for Tomato Leaf Disease Recognition
by Selvarajah Thuseethan, Palanisamy Vigneshwaran, Joseph Charles and Chathrie Wimalasooriya
Computers 2024, 13(12), 323; https://doi.org/10.3390/computers13120323 - 4 Dec 2024
Viewed by 372
Abstract
In this paper, a novel Siamese network-based lightweight framework is proposed for automatic tomato leaf disease recognition. This framework achieves the highest accuracy of 96.97% on the tomato subset obtained from the PlantVillage dataset and 95.48% on the Taiwan tomato leaf disease dataset. Experimental results further confirm that the proposed framework is effective with imbalanced and small data. The backbone network integrated with this framework is lightweight with approximately 2.9629 million trainable parameters, which is second to SqueezeNet and significantly lower than other lightweight deep networks. Automatic tomato disease recognition from leaf images is vital to avoid crop losses by applying control measures on time. Even though recent deep learning-based tomato disease recognition methods with classical training procedures showed promising recognition results, they demand large labeled data and involve expensive training. The traditional deep learning models proposed for tomato disease recognition also consume high memory and storage because of a high number of parameters. While lightweight networks overcome some of these issues to a certain extent, they continue to show low performance and struggle to handle imbalanced data. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
15 pages, 976 KiB  
Article
Computer-Supported Strategic Decision Making for Ecosystems Creation
by Patricia Rodriguez-Garcia, Patricia Carracedo, David Lopez-Lopez, Angel A. Juan and Jon A. Martin
Computers 2024, 13(12), 322; https://doi.org/10.3390/computers13120322 - 4 Dec 2024
Viewed by 390
Abstract
In the corporate strategy arena, the concept of ecosystems has emerged as a transformative approach to promote competitive advantage, growth, and innovation. Corporate ecosystems enable companies to benefit from interconnections among diverse partners, products, and services to deliver enhanced value to customers. However, the process of ecosystem creation represents a significant challenge for CEOs, as they must analyze a wide range of alternative sectors, partners, business cases, and other critical elements. In particular, as a strategic decision, it lies beyond the traditional risk-return approach by incorporating other factors, e.g., the feasibility, desirability, and sustainability of each alternative. This paper investigates how computer-supported optimization algorithms can help to solve the complex problem faced by CEOs when weighing these factors to create a successful and sustainable ecosystem. The paper shows how a CEO can make informed strategic decisions by identifying the best projects to include in the ecosystem portfolio, balancing financial risk and return with technical feasibility, customer appeal, and technical considerations. Full article
25 pages, 2551 KiB  
Article
Optimizing Scheduled Virtual Machine Requests Placement in Cloud Environments: A Tabu Search Approach
by Mohamed Koubàa, Abdullah S. Karar and Faouzi Bahloul
Computers 2024, 13(12), 321; https://doi.org/10.3390/computers13120321 - 2 Dec 2024
Viewed by 497
Abstract
This paper introduces a novel model for virtual machine (VM) requests with predefined start and end times, referred to as scheduled virtual machine demands (SVMs). In cloud computing environments, SVMs represent anticipated resource requirements derived from historical data, usage trends, and predictive analytics, allowing cloud providers to optimize resource allocation for maximum efficiency. Unlike traditional VMs, SVMs are not active concurrently. This allows providers to reuse physical resources such as CPU, RAM, and storage for time-disjoint requests, opening new avenues for optimizing resource distribution in data centers. To leverage this opportunity, we propose an advanced VM placement algorithm designed to maximize the number of hosted SVMs in cloud data centers. We formulate the SVM placement problem (SVMPP) as a combinatorial optimization challenge and introduce a tailored Tabu Search (TS) meta-heuristic to provide an effective solution. Our algorithm demonstrates significant improvements over existing placement methods, achieving up to a 15% increase in resource efficiency compared to baseline approaches. This advancement highlights the TS algorithm’s potential to deliver substantial scalability and optimization benefits, particularly for high-demand scenarios, albeit with a necessary consideration for computational cost. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
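The reuse opportunity the abstract describes rests on a simple observation: SVMs whose [start, end) intervals never overlap can occupy the same physical capacity. A minimal feasibility check along these lines (the `SVM`/`fits` names are illustrative, not from the paper) sweeps the demand timeline and compares peak concurrent demand against host capacity:

```python
from dataclasses import dataclass

@dataclass
class SVM:
    start: int  # scheduled activation time
    end: int    # scheduled release time
    cpu: int    # cores requested

def fits(host_cpu, placed, candidate):
    """Can `candidate` join `placed` on a host with `host_cpu` cores?
    Sweep start/end events; peak concurrent demand must stay within capacity."""
    events = []
    for s in placed + [candidate]:
        events.append((s.start, s.cpu))   # demand rises at activation
        events.append((s.end, -s.cpu))    # demand falls at release
    load = peak = 0
    for _, delta in sorted(events):       # at ties, releases (-cpu) sort first,
        load += delta                     # so back-to-back requests can share
        peak = max(peak, load)
    return peak <= host_cpu

a = SVM(0, 10, 8)
print(fits(8, [a], SVM(10, 20, 8)))   # True: time-disjoint, capacity reused
print(fits(8, [a], SVM(5, 15, 8)))    # False: overlap would need 16 cores
```

A Tabu Search over placements, as in the paper, would use a check like this inside its neighborhood moves; the search strategy itself is beyond this sketch.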
17 pages, 3922 KiB  
Article
Hybrid Population-Based Hill Climbing Algorithm for Generating Highly Nonlinear S-boxes
by Oleksandr Kuznetsov, Nikolay Poluyanenko, Kateryna Kuznetsova, Emanuele Frontoni and Marco Arnesano
Computers 2024, 13(12), 320; https://doi.org/10.3390/computers13120320 - 2 Dec 2024
Viewed by 365
Abstract
This paper introduces the hybrid population-based hill-climbing (HPHC) algorithm, a novel approach for generating cryptographically strong S-boxes that combines the efficiency of hill climbing with the exploration capabilities of population-based methods. The algorithm achieves consistent generation of 8-bit S-boxes with a nonlinearity of 104, a critical threshold for cryptographic applications. Our approach demonstrates remarkable efficiency, requiring only 49,277 evaluations on average to generate such S-boxes, representing a 600-fold improvement over traditional simulated annealing methods and a 15-fold improvement over recent genetic algorithm variants. We present comprehensive experimental results from extensive parameter space exploration, revealing that minimal populations (often single-individual) combined with moderate mutation rates achieve optimal performance. This paper provides detailed analysis of algorithm behavior, parameter sensitivity, and performance characteristics, supported by rigorous statistical evaluation. We demonstrate that population size should approximate available thread count for optimal parallel execution despite smaller populations being theoretically more efficient. The HPHC algorithm maintains high reliability across diverse parameter settings while requiring minimal computational resources, making it particularly suitable for practical cryptographic applications. Full article
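The fitness function behind such searches is the S-box nonlinearity, computed from the Walsh spectrum. As a rough illustration of the kind of search the abstract describes (not the authors' HPHC implementation, and on a 4-bit toy rather than the paper's 8-bit case), one might write:

```python
import random

# PRESENT cipher's 4-bit S-box (nonlinearity 4, the optimum for 4 bits),
# used here as a small stand-in for the paper's 8-bit targets.
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def nonlinearity(sbox, n):
    """NL(S) = 2^(n-1) - max|W|/2, where W ranges over the Walsh spectrum
    of every nonzero component function b.S(x) XOR a.x."""
    size = 1 << n
    max_walsh = 0
    for b in range(1, size):            # nonzero output mask
        for a in range(size):           # input mask
            w = 0
            for x in range(size):
                parity = (bin(b & sbox[x]).count("1")
                          + bin(a & x).count("1")) & 1
                w += 1 - 2 * parity     # (-1)^parity
            max_walsh = max(max_walsh, abs(w))
    return (size >> 1) - max_walsh // 2

def hill_climb(sbox, n, iters=300, seed=1):
    """Bare-bones hill climbing: swap two entries (preserving bijectivity),
    keep the swap if nonlinearity does not drop."""
    rng = random.Random(seed)
    cur, cur_nl = list(sbox), nonlinearity(sbox, n)
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)
        cur[i], cur[j] = cur[j], cur[i]
        nl = nonlinearity(cur, n)
        if nl >= cur_nl:
            cur_nl = nl
        else:
            cur[i], cur[j] = cur[j], cur[i]   # revert
    return cur, cur_nl

print(nonlinearity(PRESENT, 4))   # → 4
```

The paper's contribution lies in wrapping moves like this in a small population with mutation, which this single-trajectory sketch omits.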
21 pages, 1831 KiB  
Article
Accurate Range-Free Localization Using Cuckoo Search Optimization in IoT and Wireless Sensor Networks
by Abdelali Hadir and Naima Kaabouch
Computers 2024, 13(12), 319; https://doi.org/10.3390/computers13120319 - 2 Dec 2024
Viewed by 511
Abstract
Precise positioning of sensors is critical for the performance of various applications in the Internet of Things and wireless sensor networks. The efficiency of these networks heavily depends on the precision of sensor node locations. Among various localization approaches, DV-Hop is highly recommended for its simplicity and robustness. However, despite its popularity, DV-Hop suffers from significant accuracy issues, primarily due to its reliance on average hop size for distance estimation. This limitation often results in substantial localization errors, compromising the overall network effectiveness. To address this gap, we developed an enhanced DV-Hop approach that integrates the cuckoo search algorithm (CS). Our solution improves the accuracy of node localization by introducing a normalized average hop size calculation and leveraging the optimization capabilities of CS. This hybrid approach refines the distance estimation process, significantly reducing the errors inherent in traditional DV-Hop. Findings from simulations reveal that the developed approach surpasses the accuracy of both the original DV-Hop and multiple other current localization methods, providing a more precise and reliable localization method for IoT and WSN applications. Full article
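Classic DV-Hop, the baseline this paper improves on, estimates a node's distance to each anchor as hop count × an average hop size derived from anchor-to-anchor geometry, then solves for position. A stripped-down sketch of those two steps (without the cuckoo-search refinement; helper names are invented for illustration):

```python
import math

def dvhop_hop_size(anchor_xy, hop_counts):
    """Average hop size as seen from anchor 0: known anchor-to-anchor
    distances divided by the corresponding hop counts."""
    x0, y0 = anchor_xy[0]
    dist_sum = hop_sum = 0.0
    for (x, y), h in zip(anchor_xy[1:], hop_counts[1:]):
        dist_sum += math.hypot(x - x0, y - y0)
        hop_sum += h
    return dist_sum / hop_sum

def trilaterate(anchors, dists):
    """Position from three estimated anchor distances: subtract the third
    circle equation from the first two and solve the 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x1 - x3), 2 * (y1 - y3)
    a21, a22 = 2 * (x2 - x3), 2 * (y2 - y3)
    b1 = d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2
    b2 = d3**2 - d2**2 + x2**2 - x3**2 + y2**2 - y3**2
    det = a11 * a22 - a12 * a21                       # Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Anchors 10 units apart with 5 hops between them → 2.0 units per hop;
# an unknown node then multiplies its hop counts by this factor.
anchors = [(0, 0), (10, 0), (0, 10)]
print(dvhop_hop_size(anchors, [0, 5, 5]))   # → 2.0
```

Because the hop-size factor is only an average, the estimated distances carry the error the abstract describes; the paper's normalized hop size and cuckoo-search step refine exactly this stage.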
31 pages, 3335 KiB  
Article
Unified Ecosystem for Data Sharing and AI-Driven Predictive Maintenance in Aviation
by Igor Kabashkin and Vitaly Susanin
Computers 2024, 13(12), 318; https://doi.org/10.3390/computers13120318 - 28 Nov 2024
Viewed by 767
Abstract
The aviation industry faces considerable challenges in maintenance management due to the complexities of data standardization, data sharing, and predictive maintenance capabilities. This paper introduces a unified ecosystem for data sharing and AI-driven predictive maintenance designed to address these challenges by integrating real-time and historical data from diverse sources, including aircraft sensors, maintenance logs, and operational records. The proposed ecosystem enables predictive analytics and anomaly detection, enhancing decision-making processes for airlines, maintenance, repair, and overhaul providers, and regulatory bodies. Key elements of the ecosystem include a modular design with feedback loops, scalable AI models for predictive maintenance, and robust data-sharing frameworks. This paper outlines the architecture of a unified aviation maintenance ecosystem built around multiple data sources, including aircraft sensors, maintenance logs, flight data, weather data, and manufacturer specifications. By integrating various components and stakeholders, the system achieves its full potential through several key use cases: monitoring aircraft component health, predicting component failures, receiving maintenance alerts, performing preventive maintenance, and generating compliance reports. Each use case is described in detail and supported by illustrative dataflow diagrams. The findings underscore the transformative impact of such an ecosystem on aviation maintenance practices, marking a significant step toward safer, more efficient, and sustainable aviation operations. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)