Appl. Syst. Innov., Volume 8, Issue 4 (August 2025) – 32 articles

Cover Story: This study introduces a novel cloud-based framework to address the dual challenges of enhancing operational efficiency and promoting environmental sustainability in manufacturing. Historically, these critical goals have been pursued independently, leading to suboptimal results. Our integrated approach combines real-time process mining for workflow analysis, dynamic life cycle assessment with live sensor data, and multi-objective optimization to find balanced solutions. Validated with over 390,000 events from a tube manufacturing facility, the framework simultaneously increased operational efficiency by 5.1% and reduced carbon emissions by 12.4%. This research demonstrates that economic and environmental performance can be optimized in tandem, providing a powerful tool for sustainable industrial transformation.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
25 pages, 1729 KB  
Article
Tailoring the Systems Engineering Design Process for the Attitude and Orbit Control System of a Formation-Flying Small-Satellite Constellation
by Iván Felipe Rodríguez, Geilson Loureiro, Danny Stevens Traslaviña and Cristian Lozano Tafur
Appl. Syst. Innov. 2025, 8(4), 117; https://doi.org/10.3390/asi8040117 - 21 Aug 2025
Viewed by 342
Abstract
This research proposes a tailored Systems Engineering (SE) design process for the development of Attitude and Orbit Control Systems (AOCS) for small satellites operating in formation. These missions, known as Distributed Spacecraft Missions (DSMs), involve groups of satellites—commonly referred to as satellite constellations—whose primary objective is to maintain controlled relative positioning in three dimensions. In these configurations, each satellite may serve a specific role. For instance, one may act as a navigation reference, while another functions as a communication relay. These roles support synchronized control and ensure mission cohesion. To achieve precise relative positioning, the system must integrate specialized sensors and maintain continuous inter-satellite communication. This capability enables precise navigation across both the space and ground segments, while ensuring high control accuracy. As such, the development of AOCS must be approached as a complex systems challenge, involving the coordinated behavior of multiple autonomous elements working toward a shared mission objective. This study tailors the SE process using the ISO/IEC 15288 standard and incorporates a Model-Based Systems Engineering (MBSE) approach to enhance traceability, consistency, and architectural coherence throughout the system lifecycle. As a result, it proposes a customized SE process for AOCS development that begins in the mission’s conceptual phase and addresses the specific functional and operational demands of formation flying. A conceptual example illustrates the proposed process. It focuses on subsystem coordination, communication needs, and the architecture required to support an AOCS for autonomous satellite formations. Full article

28 pages, 2697 KB  
Review
Classification and Comparative Analysis of Acoustic Agglomeration Systems for Fine Particle Removal
by Vladyslav Shybetsky, Igor Korobiichuk, Myroslava Kalinina, Michał Nowicki, Zlata Shopova and Daryna Khyzhna
Appl. Syst. Innov. 2025, 8(4), 116; https://doi.org/10.3390/asi8040116 - 20 Aug 2025
Viewed by 248
Abstract
This study presents a systematic classification of acoustic agglomeration systems, developed on the basis of an extensive review of experimental and numerical studies, specifically addressing fine particles. The classification framework encompasses wave type, geometric orientation, level of functional integration, chamber composition, and auxiliary enhancement mechanisms. By organizing the diverse configurations into consistent categories, this study enables a comparative analysis of system performance and suitability for practical applications. This review highlights typical design features, operational ranges, and implementation contexts, while identifying key advantages and limitations of each system type. Strengths such as scalability, compatibility with filtration units, and enhancement of particle capture are contrasted with challenges including acoustic intensity requirements, resonance sensitivity, and integration constraints. The proposed classification serves as a practical tool for guiding future design, optimization, and application of acoustic agglomeration technologies in air pollution control. Full article

20 pages, 1063 KB  
Article
A Tri-Level Distributionally Robust Defender–Attacker–Defender Model for Grid Resilience Enhancement Under Repair Time Uncertainty
by Ze Zhang, Xucheng Huang and Tao Zhang
Appl. Syst. Innov. 2025, 8(4), 115; https://doi.org/10.3390/asi8040115 - 20 Aug 2025
Viewed by 243
Abstract
Extreme damage events pose a serious challenge to the safe operation of power grids, and optimizing the allocation of defense resources to improve disaster resistance is a central concern for power system operators. In this paper, a distributionally robust optimal defense resource allocation method based on the defender–attacker–defender model is proposed to improve the disaster resilience of power grids. The method accounts for the uncertainty in restoration time caused by different damage intensities and improves the efficiency of repair resource scheduling during the restoration process. In addition, a set covering-based column-and-constraint generation (SC-C&CG) algorithm is proposed for the case in which the mixed-integer model does not satisfy the Karush–Kuhn–Tucker (KKT) conditions. A case study on the IEEE 24-bus system verifies that the proposed method minimizes the system's load shedding under the repair time uncertainty considered. Full article

41 pages, 4171 KB  
Article
Development of a System for Recognising and Classifying Motor Activity to Control an Upper-Limb Exoskeleton
by Artem Obukhov, Mikhail Krasnyansky, Yaroslav Merkuryev and Maxim Rybachok
Appl. Syst. Innov. 2025, 8(4), 114; https://doi.org/10.3390/asi8040114 - 19 Aug 2025
Viewed by 361
Abstract
This paper addresses the problem of recognising and classifying hand movements to control an upper-limb exoskeleton. To solve this problem, a multisensory system based on the fusion of data from electromyography (EMG) sensors, inertial measurement units (IMUs), and virtual reality (VR) trackers is proposed, which provides highly accurate detection of users' movements. Signal preprocessing (noise filtering, segmentation, normalisation) and feature extraction were performed to generate input data for regression and classification models. Various machine learning algorithms are used to recognise motor activity, ranging from classical algorithms (logistic regression, k-nearest neighbours, decision trees) and ensemble methods (random forest, AdaBoost, eXtreme Gradient Boosting, stacking, voting) to deep neural networks, including convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformers. The algorithm for integrating the machine learning models into the exoskeleton control system is also considered. In experiments aimed at replacing proprietary tracking systems (VR trackers), absolute position regression was performed using data from the IMU sensors with 14 regression algorithms; the random forest ensemble provided the best accuracy (mean absolute error = 0.0022 m). The task of classifying nine categories of motor activity was then considered. Ablation analysis showed that the IMU and VR trackers provide a sufficient informative minimum, while adding EMG introduces noise that degrades the performance of simpler models but is successfully compensated for by deep networks. In the classification task using all signals, the best result (99.2%) was obtained with the Transformer; the fully connected neural network performed slightly worse (98.4%). When using only IMU data, the fully connected, Transformer, and CNN–GRU networks achieved 100% accuracy. Experimental results confirm the effectiveness of the proposed architectures for motor activity classification, as well as the value of a multi-sensor approach that compensates for the limitations of individual sensor types. These results enable further research towards control systems for upper-limb exoskeletons, including those used in rehabilitation and virtual simulation systems. Full article

22 pages, 1775 KB  
Article
Comprehensive Assessment Approach for the Design of Automatic Control Systems in Gas Field Stations
by Zhixiang Dai, Jun Zhou, Wei Zhang, Jinrui Zhong, Feng Wang, Li Xu, Taiwu Xia, Qinghua Feng, Minhao Wang and Xi Chen
Appl. Syst. Innov. 2025, 8(4), 113; https://doi.org/10.3390/asi8040113 - 14 Aug 2025
Viewed by 305
Abstract
The design of automatic control systems is critical for ensuring safety in gas field surface engineering production. However, over-reliance on standardized design approaches can compromise system flexibility and neglect station-specific cost-effectiveness considerations. By comparing the advantages and disadvantages of common evaluation techniques, this paper identifies comprehensive evaluation as the preferred approach for assessing station control systems. We propose an integrated semi-quantitative and quantitative evaluation method designed to comprehensively and accurately assess the effectiveness of station automatic control systems. For the semi-quantitative framework, we first establish a dedicated indicator system for the control system and employ the Analytic Hierarchy Process (AHP) to determine indicator weights tailored to different station types, achieving a scientific quantification of the evaluation criteria. We then apply quantitative calculation methods, specifically reliability and availability analyses, to evaluate the station's automatic control system. The evaluation is customized to the distinct process characteristics of different gas field stations, and differentiated design calculations and analyses were performed for a single station, improving the economy and adaptability of the automatic control system design. The proposed comprehensive evaluation method supports the safe and stable operation of control system designs and offers a new approach for the automation and intelligent transformation of gas field surface engineering. Full article
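The AHP weighting step described above can be sketched briefly. A minimal illustration, assuming a hypothetical 3×3 pairwise comparison of indicators (the paper's actual indicator system and expert judgments are not reproduced here), using the common geometric-mean approximation of the priority vector:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the geometric-mean (row) method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(pairwise, weights):
    """CR = CI / RI; a value below 0.1 is conventionally acceptable."""
    n = len(pairwise)
    # lambda_max estimated by averaging (A w)_i / w_i over rows
    lam = sum(
        sum(pairwise[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random consistency indices
    return ci / ri

# Hypothetical comparison of three indicators: safety vs reliability vs cost
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(A)
print([round(x, 3) for x in w])       # priority weights, summing to 1
print(consistency_ratio(A, w) < 0.1)  # judgments are near-consistent
```

A consistency check like this is what makes the AHP weighting "scientific": inconsistent expert judgments (CR ≥ 0.1) would be sent back for revision before the weights are used.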

15 pages, 65226 KB  
Article
Optimization of Water Tank Shape in Terms of Firefighting Vehicle Stability
by Jaroslav Matej and Michaela Hnilicová
Appl. Syst. Innov. 2025, 8(4), 112; https://doi.org/10.3390/asi8040112 - 11 Aug 2025
Viewed by 227
Abstract
In this work we present the shape optimization of a 2000 L water tank placed behind the rear axle of a forestry skidder, with the static stability of the vehicle as the main criterion. The purpose of the research is to reduce the tank's impact on vehicle stability. Stability is quantified as the distances between the gravity vector and the edges of the stability triangle. The tank is divided into small elements whose impact on stability is evaluated independently; the gravity vector, placed at the combined center of gravity of the vehicle and tank, then aggregates the vehicle's weight with that of as many tank elements as the desired volume requires. The Python 3.13 programming language is used to implement the solution. The results for various tank shapes are displayed as heatmaps, using a slope angle of 20 degrees for the analysis. The results show that longitudinal or lateral stability can be improved by modifying the shape of the tank. The most interesting output is a final tank shape that improves the terrain accessibility of the vehicle. The optimization method is universal and can also be applied to different vehicles, tank placements, and auxiliary devices added in general positions. Full article
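The distance-to-tipping-axis criterion can be illustrated with a small sketch. The wheel-contact geometry, CG position, and the simple lateral-projection model below are purely hypothetical assumptions, not the paper's vehicle data or its full vector formulation:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from 2-D point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.hypot(
        bx - ax, by - ay)

def tilted_cg(cg_xyz, slope_deg):
    """Project the centre of gravity onto the ground plane for a lateral
    tilt of slope_deg: the CG shifts sideways by height * tan(slope)."""
    x, y, h = cg_xyz
    return (x, y - h * math.tan(math.radians(slope_deg)))

# Hypothetical stability triangle: front axle pivot and two rear wheel contacts (m)
front_pivot, rear_left, rear_right = (3.0, 0.0), (0.0, -0.9), (0.0, 0.9)
cg = tilted_cg((1.2, 0.0, 1.1), slope_deg=20)  # illustrative CG, 20-degree slope

# Stability margin = smallest distance from the tilted CG to any tipping axis
margin = min(
    point_line_distance(cg, front_pivot, rear_left),
    point_line_distance(cg, front_pivot, rear_right),
    point_line_distance(cg, rear_left, rear_right),
)
print(round(margin, 3))  # metres; larger means farther from tipping
```

Evaluating this margin once per candidate tank element, as the abstract describes, is what lets each element's stability contribution be scored independently before the tank shape is assembled.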

43 pages, 5258 KB  
Article
Twin Self-Supervised Learning Framework for Glaucoma Diagnosis Using Fundus Images
by Suguna Gnanaprakasam and Rolant Gini John Barnabas
Appl. Syst. Innov. 2025, 8(4), 111; https://doi.org/10.3390/asi8040111 - 11 Aug 2025
Viewed by 275
Abstract
Glaucoma is a serious eye condition that damages the optic nerve and disrupts the transmission of visual information to the brain; it is the second leading cause of blindness worldwide. With deep learning, CAD systems have shown promising results in diagnosing glaucoma but mostly rely on small labeled datasets. Annotated fundus image datasets improve deep learning predictions by aiding pattern identification but require extensive curation, whereas unlabeled fundus images are far more accessible. The proposed method employs a semi-supervised learning approach to use both labeled and unlabeled data effectively. It follows traditional supervised training with the generation of pseudo-labels for unlabeled data and incorporates self-supervised techniques that eliminate the need for manual annotation. It uses a twin self-supervised learning approach to improve glaucoma diagnosis by integrating pseudo-labels from one model into another self-supervised model for effective detection. A self-supervised patch-based exemplar CNN generates pseudo-labels in the first stage. These pseudo-labeled data, combined with labeled data, train a convolutional auto-encoder classification model in the second stage to identify glaucoma features. A support vector machine classifier performs the final glaucoma classification, achieving 98% accuracy and a 0.98 AUC on the internal, same-source combined fundus image datasets. The model also maintains reasonably good generalization to external (fully unseen) data, achieving an AUC of 0.91 on the CRFO dataset and 0.87 on the Papilla dataset. These results demonstrate the method's effectiveness, robustness, and adaptability in addressing the scarcity of labeled fundus data and can aid improved eye health outcomes. Full article

27 pages, 19279 KB  
Article
Smart Hydroponic Cultivation System for Lettuce (Lactuca sativa L.) Growth Under Different Nutrient Solution Concentrations in a Controlled Environment
by Raul Herrera-Arroyo, Juan Martínez-Nolasco, Enrique Botello-Álvarez, Víctor Sámano-Ortega, Coral Martínez-Nolasco and Cristal Moreno-Aguilera
Appl. Syst. Innov. 2025, 8(4), 110; https://doi.org/10.3390/asi8040110 - 7 Aug 2025
Viewed by 1572
Abstract
The inclusion of the Internet of Things (IoT) in indoor agricultural systems has become a fundamental tool for improving cultivation systems by providing key information for decision-making in pursuit of better performance. This article presents the design and implementation of an IoT-based agricultural system installed in a plant growth chamber for hydroponic cultivation under controlled conditions. The growth chamber is equipped with sensors for air temperature, relative humidity (RH), carbon dioxide (CO2) and photosynthetically active photon flux, as well as control mechanisms such as humidifiers, full-spectrum Light Emitting Diode (LED) lamps, mini split air conditioner, pumps, a Wi-Fi surveillance camera, remote monitoring via a web application and three Nutrient Film Technique (NFT) hydroponic systems with a capacity of ten plants each. An ATmega2560 microcontroller manages the smart system using the MODBUS RS-485 communication protocol. To validate the proper functionality of the proposed system, a case study was conducted using lettuce crops, in which the impact of different nutrient solution concentrations (50%, 75% and 100%) on the phenotypic development and nutritional content of the plants was evaluated. The results obtained from the cultivation experiment, analyzed through analysis of variance (ANOVA), show that the treatment with 75% nutrient concentration provides an appropriate balance between resource use and nutritional quality, without affecting the chlorophyll content. This system represents a scalable and replicable alternative for protected agriculture. Full article
(This article belongs to the Special Issue Smart Sensors and Devices: Recent Advances and Applications Volume II)

20 pages, 1971 KB  
Article
FFG-YOLO: Improved YOLOv8 for Target Detection of Lightweight Unmanned Aerial Vehicles
by Tongxu Wang, Sizhe Yang, Ming Wan and Yanqiu Liu
Appl. Syst. Innov. 2025, 8(4), 109; https://doi.org/10.3390/asi8040109 - 4 Aug 2025
Viewed by 762
Abstract
Target detection is essential in intelligent transportation and the autonomous control of unmanned aerial vehicles (UAVs), with single-stage detection algorithms widely used due to their speed. However, these algorithms face limitations in detecting small targets, especially in UAV aerial photography, where small targets are often occluded, multi-scale semantic information is easily lost, and real-time processing must be traded off against computational resources. Existing algorithms struggle to effectively extract multi-dimensional features and deep semantic information from images and to balance detection accuracy with model complexity. To address these limitations, we developed FFG-YOLO, a lightweight small-target detection method for UAVs based on YOLOv8. FFG-YOLO incorporates three modules: a feature enhancement block (FEB), a feature concat block (FCB), and a global context awareness block (GCAB). These modules strengthen feature extraction from small targets, resolve semantic bias in multi-scale feature fusion, and help differentiate small targets from complex backgrounds. We also improved the positioning accuracy for small targets using a Wasserstein distance loss function. Experiments showed that FFG-YOLO outperformed other algorithms, including YOLOv8n, in small-target detection thanks to its lightweight design, meeting the stringent real-time performance and deployment requirements of UAVs. Full article
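A Wasserstein-style localization loss is typically computed in closed form by modelling each bounding box as a 2-D Gaussian; the sketch below follows that common formulation from the tiny-object-detection literature. The constant C and the exact loss shape used by FFG-YOLO are assumptions, not taken from the paper:

```python
import math

def wasserstein_bbox(b1, b2):
    """2-Wasserstein distance between boxes (cx, cy, w, h), each modelled as
    an axis-aligned 2-D Gaussian N((cx, cy), diag((w/2)^2, (h/2)^2))."""
    cx1, cy1, w1, h1 = b1
    cx2, cy2, w2, h2 = b2
    return math.sqrt((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
                     + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)

def nwd_loss(pred, target, c=12.8):
    """Loss = 1 - exp(-W2 / C); C is a dataset-dependent scale constant
    (the value here is illustrative only)."""
    return 1.0 - math.exp(-wasserstein_bbox(pred, target) / c)

# Two 8x8-pixel boxes offset by 4 px: IoU would be low and noisy,
# but the Wasserstein-based loss degrades smoothly with the offset
print(nwd_loss((10, 10, 8, 8), (14, 10, 8, 8)))
```

This smoothness is the usual motivation for replacing IoU-based losses on tiny targets: a few pixels of offset changes IoU drastically for an 8-pixel box, while the Gaussian distance varies gradually.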

25 pages, 10870 KB  
Article
XTTS-Based Data Augmentation for Profanity Keyword Recognition in Low-Resource Speech Scenarios
by Shin-Chi Lai, Yi-Chang Zhu, Szu-Ting Wang, Yen-Ching Chang, Ying-Hsiu Hung, Jhen-Kai Tang and Wen-Kai Tsai
Appl. Syst. Innov. 2025, 8(4), 108; https://doi.org/10.3390/asi8040108 - 31 Jul 2025
Viewed by 392
Abstract
As voice cloning technology rapidly advances, the risk of personal voices being misused by malicious actors for fraud or other illegal activities has significantly increased, making the collection of speech data increasingly challenging. To address this issue, this study proposes a data augmentation method based on XText-to-Speech (XTTS) synthesis to tackle the challenges of small-sample, multi-class speech recognition, using profanity as a case study to achieve high-accuracy keyword recognition. Two models were therefore evaluated: a CNN model (Proposed-I) and a CNN-Transformer hybrid model (Proposed-II). Proposed-I leverages local feature extraction, improving accuracy on a real human speech (RHS) test set from 55.35% without augmentation to 80.36% with XTTS-enhanced data. Proposed-II integrates CNN’s local feature extraction with Transformer’s long-range dependency modeling, further boosting test set accuracy to 88.90% while reducing the parameter count by approximately 41%, significantly enhancing computational efficiency. Compared to a previously proposed incremental architecture, the Proposed-II model achieves an 8.49% higher accuracy while reducing parameters by about 98.81% and MACs by about 98.97%, demonstrating exceptional resource efficiency. By utilizing XTTS and public corpora to generate a novel keyword speech dataset, this study enhances sample diversity and reduces reliance on large-scale original speech data. Experimental analysis reveals that an optimal synthetic-to-real speech ratio of 1:5 significantly improves the overall system accuracy, effectively addressing data scarcity. Additionally, the Proposed-I and Proposed-II models achieve accuracies of 97.54% and 98.66%, respectively, in distinguishing real from synthetic speech, demonstrating their strong potential for speech security and anti-spoofing applications. Full article
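The reported optimal 1:5 synthetic-to-real ratio amounts to a simple dataset-assembly step; the file names and sampling scheme below are illustrative assumptions, not the authors' pipeline:

```python
import random

def mix_synthetic(real, synthetic, synth_per_real=1 / 5, seed=0):
    """Build a training list with a fixed synthetic-to-real ratio."""
    k = round(len(real) * synth_per_real)  # 1 synthetic clip per 5 real clips
    rng = random.Random(seed)              # fixed seed for reproducibility
    return real + rng.sample(synthetic, k)

real = [f"real_{i}.wav" for i in range(100)]    # placeholder file names
synth = [f"xtts_{i}.wav" for i in range(500)]   # XTTS-generated pool
train = mix_synthetic(real, synth)
print(len(train))   # 120 clips: 100 real + 20 synthetic
```

Keeping the synthetic share capped this way reflects the paper's finding that synthetic speech helps most as a supplement to, not a replacement for, real recordings.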
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)

20 pages, 1426 KB  
Article
Hybrid CNN-NLP Model for Detecting LSB Steganography in Digital Images
by Karen Angulo, Danilo Gil, Andrés Yáñez and Helbert Espitia
Appl. Syst. Innov. 2025, 8(4), 107; https://doi.org/10.3390/asi8040107 - 30 Jul 2025
Viewed by 540
Abstract
This paper proposes a hybrid model that combines convolutional neural networks with natural language processing techniques for least significant bit-based steganography detection in grayscale digital images. The proposed approach identifies hidden messages by analyzing subtle alterations in the least significant bits and validates the linguistic coherence of the extracted content using a semantic filter implemented with spaCy. The system is trained and evaluated on datasets ranging from 5000 to 12,500 images per class, consistently using an 80% training and 20% validation partition. As a result, the model achieves a maximum accuracy and precision of 99.96%, outperforming recognized architectures such as Xu-Net, Yedroudj-Net, and SRNet. Unlike traditional methods, the model reduces false positives by discarding statistically suspicious but semantically incoherent outputs, which is essential in forensic contexts. Full article
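The least-significant-bit embedding that such detectors target can be shown in a few lines. A minimal sketch on a toy grayscale byte array (a semantic filter such as the paper's spaCy step would then test whether extracted bytes decode to coherent text):

```python
def embed_lsb(cover, message):
    """Hide message bytes in the least significant bits of grayscale pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover image too small"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the pixel LSB
    return bytes(stego)

def extract_lsb(stego, n_bytes):
    """Recover n_bytes from the pixel LSBs (MSB-first within each byte)."""
    out = bytearray()
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[8 * k + i] & 1)
        out.append(byte)
    return bytes(out)

cover = bytes(range(256)) * 4            # toy 32x32 "image"
stego = embed_lsb(cover, b"hidden")
print(extract_lsb(stego, 6))             # b'hidden'
```

Since each pixel changes by at most ±1, the alterations are visually invisible, which is exactly why detection must rely on subtle statistical cues rather than inspection.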

36 pages, 1411 KB  
Review
A Critical Analysis and Roadmap for the Development of Industry 4-Oriented Facilities for Education, Training, and Research in Academia
by Ziyue Jin, Romeo M. Marian and Javaan S. Chahl
Appl. Syst. Innov. 2025, 8(4), 106; https://doi.org/10.3390/asi8040106 - 29 Jul 2025
Viewed by 823
Abstract
The development of Industry 4-oriented facilities in academia for training and research purposes is playing a significant role in pushing forward the Fourth Industrial Revolution. This study can serve academic staff who are intending to build their Industry 4 facilities, to better understand the key features, constraints, and opportunities. This paper presents a systematic literature review of 145 peer-reviewed studies published between 2011 and 2023, which are identified across Scopus, SpringerLink, and Web of Science. As a result, we emphasise the significance of developing Industry 4 learning facilities in academia and outline the main design principles of the Industry 4 ecosystems. We also investigate and discuss the key Industry 4-related technologies that have been extensively used and represented in the reviewed literature, and summarise the challenges and roadblocks that current participants are facing. From these insights, we identify research gaps, outline technology mapping and maturity level, and propose a strategic roadmap for future implementation of Industry 4 facilities. The results of the research are expected to support current and future participants in increasing their awareness of the significance of the development, clarifying the research scope and objectives, and preparing them to deal with inherent complexity and skills issues. Full article

18 pages, 1072 KB  
Article
Complexity of Supply Chains Using Shannon Entropy: Strategic Relationship with Competitive Priorities
by Miguel Afonso Sellitto, Ismael Cristofer Baierle and Marta Rinaldi
Appl. Syst. Innov. 2025, 8(4), 105; https://doi.org/10.3390/asi8040105 - 29 Jul 2025
Viewed by 465
Abstract
Entropy is a foundational concept across scientific domains, playing a role in understanding disorder, randomness, and uncertainty within systems. This study applies Shannon's entropy from information theory to evaluate and manage complexity in industrial supply chain (SC) management. The purpose of the study is to propose a quantitative modeling method that employs Shannon's entropy as a proxy for the complexity of SCs. The research method is quantitative modeling, applied to four focal companies from the agrifood and metalworking industries in Southern Brazil. The results showed that companies prioritizing cost and quality exhibit lower complexity than those emphasizing flexibility and dependability. Additionally, information flows related to specially engineered products and deliveries show significant differences in average entropies, indicating that organizational complexity varies with competitive priorities. This suggests that a focus on cost and quality in SC management may lead to lower complexity than a focus on flexibility and dependability, influencing strategic decision making in industrial contexts. This research introduces a novel application of information entropy to assess and control complexity within industrial SCs; future studies can explore and validate these insights, contributing to the evolving field of supply chain management. Full article
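The entropy proxy is straightforward to compute from logged message categories. A minimal sketch with invented event logs (the paper's actual information flows are not reproduced here):

```python
import math
from collections import Counter

def shannon_entropy(events):
    """H = -sum p_i * log2(p_i) over observed message categories, in bits."""
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical information flows logged at two focal companies
cost_focused = ["order", "order", "order", "order", "invoice", "invoice",
                "order", "order"]                      # few message types
flex_focused = ["order", "redesign", "expedite", "invoice", "change",
                "redesign", "expedite", "spec"]        # many message types

print(round(shannon_entropy(cost_focused), 3))
print(round(shannon_entropy(flex_focused), 3))  # higher entropy -> higher complexity
```

The comparison mirrors the study's core finding: a company whose information flow is dominated by a few routine message types carries less entropy, and hence less organizational complexity, than one juggling many exception-driven messages.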

23 pages, 3847 KB  
Article
Optimizing Sentiment Analysis in Multilingual Balanced Datasets: A New Comparative Approach to Enhancing Feature Extraction Performance with ML and DL Classifiers
by Hamza Jakha, Souad El Houssaini, Mohammed-Alamine El Houssaini, Souad Ajjaj and Abdelali Hadir
Appl. Syst. Innov. 2025, 8(4), 104; https://doi.org/10.3390/asi8040104 - 28 Jul 2025
Viewed by 607
Abstract
Social network platforms strongly influence the development of companies by shaping clients' behaviors and sentiments, which directly affect corporate reputations. Analyzing this feedback has become an essential component of business intelligence, supporting the improvement of long-term marketing strategies at scale. Implementing powerful sentiment analysis models requires a comprehensive, in-depth examination of each stage of the process. In this study, we present a new comparative approach for several feature extraction techniques, including TF-IDF, Word2Vec, FastText, and BERT embeddings. These methods are applied to three multilingual datasets collected from hotel review platforms in the tourism sector, in English, French, and Arabic. The datasets were preprocessed through cleaning, normalization, labeling, and balancing before training various machine learning and deep learning algorithms. The effectiveness of each feature extraction method was evaluated using accuracy, F1-score, precision, recall, the ROC AUC curve, and a new metric that measures the execution time needed to generate word representations. Our extensive experiments show strong results, with accuracy rates of approximately 99% for the English dataset, 94% for the Arabic dataset, and 89% for the French dataset. These findings confirm the significant impact of vectorization techniques on the performance of sentiment analysis models and highlight the close relationship between balanced datasets, effective feature extraction methods, and the choice of classification algorithm. This study thus aims to simplify the selection of feature extraction methods and appropriate classifiers for each language, contributing to advancements in sentiment analysis. Full article
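Of the compared feature extractors, the TF-IDF baseline can be sketched in a few lines of plain Python. The toy reviews below are invented for illustration, not drawn from the paper's corpora:

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: tf = count / len(doc), idf = log(N / df)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

# Toy tokenised hotel reviews (real pipelines would tokenise per language first)
reviews = [["great", "room", "great", "service"],
           ["poor", "service", "noisy", "room"],
           ["great", "location"]]
vecs = tfidf(reviews)
print(round(vecs[0]["great"], 3))  # repeated in the review -> higher tf, higher weight
print(round(vecs[0]["room"], 3))   # same idf but lower tf -> lower weight
```

Library implementations add smoothing and normalization on top of this scheme, but the core weighting, and why it is cheap to compute compared with neural embeddings, is captured here.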
(This article belongs to the Topic Social Sciences and Intelligence Management, 2nd Volume)
26 pages, 984 KB  
Article
Assessing and Prioritizing Service Innovation Challenges in UAE Government Entities: A Network-Based Approach for Effective Decision-Making
by Abeer Abuzanjal and Hamdi Bashir
Appl. Syst. Innov. 2025, 8(4), 103; https://doi.org/10.3390/asi8040103 - 28 Jul 2025
Viewed by 640
Abstract
Public service innovation research often focuses on the private or general public sectors, leaving the distinct challenges government entities face unexplored. An empirical study was carried out to bridge this gap using survey results from United Arab Emirates (UAE) government entities. This study built on that research by further analyzing the relationships among these challenges through a social network approach, visualizing and analyzing the connections between them using betweenness centrality and eigenvector centrality as key metrics. Based on this analysis, the challenges were classified into categories; 8 of the 22 challenges were identified as critical due to their high values in both metrics. Addressing these critical challenges is expected to create a cascading impact, helping to resolve many others. Targeted strategies are proposed, and leveraging open innovation is highlighted as an effective and versatile way to mitigate these challenges. This study is one of the few to adopt a social network analysis perspective to visualize and analyze the relationships among challenges, enabling the identification of critical ones. The research offers novel insights and actionable strategies that could assist decision-makers in UAE government entities, and in countries with similar contexts, in advancing public service innovation. Full article
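The centrality-based screening described above can be sketched as follows: compute a centrality score on a challenge-interaction network and flag high-scoring nodes as critical. The sketch uses eigenvector centrality via power iteration on a small hypothetical network (the node names and the 0.5 threshold are invented; betweenness centrality, which the study also uses, would be computed analogously, e.g., with Brandes' algorithm):

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration on an undirected graph given as a dict of neighbor sets."""
    nodes = list(adj)
    c = {v: 1.0 for v in nodes}
    for _ in range(iters):
        new = {v: sum(c[u] for u in adj[v]) for v in nodes}
        norm = max(new.values()) or 1.0
        c = {v: new[v] / norm for v in nodes}   # rescale so the top node is 1.0
    return c

# Hypothetical challenge-interaction network (symmetric adjacency)
adj = {
    "funding":    {"skills", "culture", "leadership"},
    "skills":     {"funding", "culture"},
    "culture":    {"funding", "skills", "leadership"},
    "leadership": {"funding", "culture", "regulation"},
    "regulation": {"leadership"},
}

cent = eigenvector_centrality(adj)
# Illustrative cut-off: well-connected challenges count as "critical"
critical = {v for v, s in cent.items() if s >= 0.5}
```

Challenges embedded in a densely connected cluster ("funding", "culture") score high, while peripheral ones ("regulation") score low, mirroring the study's critical/non-critical classification.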
18 pages, 500 KB  
Article
Hybrid Model-Based Traffic Network Control Using Population Games
by Sindy Paola Amaya, Pablo Andrés Ñañez, David Alejandro Martínez Vásquez, Juan Manuel Calderón Chávez and Armando Mateus Rojas
Appl. Syst. Innov. 2025, 8(4), 102; https://doi.org/10.3390/asi8040102 - 25 Jul 2025
Viewed by 403
Abstract
Modern traffic management requires sophisticated approaches to address the complexities of urban road networks, which continue to grow due to increasing urbanization and vehicle usage. Traditional methods often fall short in mitigating congestion and optimizing traffic flow, prompting the exploration of innovative traffic control strategies based on advanced theoretical frameworks. In this work, we explore different game theory-based control strategies in an eight-intersection traffic network modeled by means of hybrid systems and graph theory, using a software simulator that couples the multi-modal traffic simulation software VISSIM with MATLAB to integrate traffic network parameters and population game criteria. Across five distinct network scenarios with varying saturation conditions, we explore a fixed-time signaling scheme by means of fictitious play as well as adaptive schemes using dynamics such as Smith, replicator, Logit, and Brown–Von Neumann–Nash (BNN). Results show better performance for the Smith and replicator dynamics in terms of traffic parameters for both fixed and variable signaling times, with fictitious play performing surprisingly well relative to BNN and Logit. Full article
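The replicator dynamic named above can be sketched with a discrete-time Euler update that reallocates green-time shares across the approaches of one hypothetical intersection. Here an approach's "fitness" is proxied by a static queue length, which is a strong simplification: in the paper's setting, fitness would come from the evolving VISSIM network state, not a constant:

```python
def replicator_step(x, fitness, dt=0.01):
    """One Euler step of the replicator dynamic: dx_i = x_i * (f_i - f_bar)."""
    fbar = sum(xi * fi for xi, fi in zip(x, fitness))
    return [xi + dt * xi * (fi - fbar) for xi, fi in zip(x, fitness)]

# Hypothetical intersection: four approaches, queue lengths in vehicles
queues = [30.0, 10.0, 5.0, 5.0]
shares = [0.25, 0.25, 0.25, 0.25]   # initial green-time shares (sum to 1)
for _ in range(200):
    shares = replicator_step(shares, queues)
# With constant fitness, the share of the most congested approach grows toward 1.
```

The update preserves the simplex (shares stay positive and sum to 1), which is what makes population dynamics a natural fit for splitting a fixed cycle time among competing signal phases.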
22 pages, 2652 KB  
Article
Niching-Driven Divide-and-Conquer Hill Exploration
by Junchen Wang, Changhe Li and Yiya Diao
Appl. Syst. Innov. 2025, 8(4), 101; https://doi.org/10.3390/asi8040101 - 22 Jul 2025
Viewed by 489
Abstract
Optimization problems often feature local optima with significant differences in the basin of attraction (BoA), making evolutionary computation methods prone to discarding solutions located in less-attractive BoAs and thereby posing challenges to the search for optima in these BoAs. To enhance the ability to find these optima, various niching methods have been proposed that restrict the competition scope of individuals to their specific neighborhoods. However, these methods can only promote redundant searches in more-attractive BoAs and necessary searches in less-attractive BoAs simultaneously; they cannot encourage one without the other. To address this issue, we propose a general framework for niching methods named niching-driven divide-and-conquer hill exploration (NDDCHE). By gradually learning BoAs from the search results of a niching method and dividing the problem into subproblems with far fewer optima, NDDCHE aims to achieve a more balanced distribution of searches across the BoAs of the optima found so far, and thus to enhance the niching method's ability to find optima in less-attractive BoAs. Experiments in which niching methods with different categories of niching techniques are integrated with NDDCHE and tested on problems with significant differences in BoA size demonstrate the effectiveness and generalization ability of NDDCHE. Full article
25 pages, 1344 KB  
Article
Cloud-Based Data-Driven Framework for Optimizing Operational Efficiency and Sustainability in Tube Manufacturing
by Michael Maiko Matonya and István Budai
Appl. Syst. Innov. 2025, 8(4), 100; https://doi.org/10.3390/asi8040100 - 22 Jul 2025
Viewed by 596
Abstract
Modern manufacturing strives for peak efficiency while facing pressing demands for environmental sustainability. Balancing these often-conflicting objectives represents a fundamental trade-off in modern manufacturing, as traditional methods typically address them in isolation, leading to suboptimal outcomes. Process mining offers operational insights but often lacks dynamic environmental indicators, while standard Life Cycle Assessment (LCA) provides environmental evaluation but uses static data unsuitable for real-time optimization. Frameworks integrating real-time data for dynamic multi-objective optimization are scarce. This study proposes a comprehensive, data-driven, cloud-based framework that overcomes these limitations. It uniquely combines three key components: (1) real-time Process Mining for actual workflows and operational KPIs; (2) dynamic LCA using live sensor data for instance-level environmental impacts (energy, emissions, waste) and (3) Multi-Objective Optimization (NSGA-II) to identify Pareto-optimal solutions balancing efficiency and sustainability. TOPSIS assists decision-making by ranking these solutions. Validated using extensive real-world data from a tube manufacturing facility processing over 390,000 events, the framework demonstrated significant, quantifiable improvements. The optimization yielded a Pareto front of solutions that surpassed baseline performance (87% efficiency; 2007.5 kg CO2/day). The optimal balanced solution identified by TOPSIS simultaneously increased operational efficiency by 5.1% and reduced carbon emissions by 12.4%. Further analysis quantified the efficiency-sustainability trade-offs and confirmed the framework’s adaptability to varying strategic priorities through sensitivity analysis. 
This research offers a validated, data-driven framework that enables manufacturers to improve operational efficiency and environmental sustainability in a unified manner, moving beyond the limitations of disconnected tools, and is recommended for industrial applications seeking continuous improvement in both economic and environmental performance. Full article
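The TOPSIS step described above, which ranks the Pareto-optimal solutions, admits a compact sketch. The candidate solutions below are illustrative (efficiency %, kg CO2/day) pairs loosely echoing the reported baseline and trade-off, and the equal weights are an assumption, not the study's configuration:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal; benefit[j] is True if higher is better."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[row[j] / norms[j] * weights[j] for j in range(n)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in v:
        dp = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        dm = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(dm / (dp + dm))   # closeness coefficient in [0, 1]
    return scores

# Hypothetical Pareto solutions: efficiency is a benefit criterion,
# emissions a cost criterion; equal weights for illustration.
pareto = [[87.0, 2007.5], [91.4, 1758.6], [93.0, 1900.0]]
scores = topsis(pareto, [0.5, 0.5], [True, False])
best = scores.index(max(scores))
```

The second solution wins here because it improves on the baseline in both criteria, which is exactly the kind of balanced compromise TOPSIS is used to surface from an NSGA-II Pareto front.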
32 pages, 1156 KB  
Article
A Study of the Response Surface Methodology Model with Regression Analysis in Three Fields of Engineering
by Hsuan-Yu Chen and Chiachung Chen
Appl. Syst. Innov. 2025, 8(4), 99; https://doi.org/10.3390/asi8040099 - 21 Jul 2025
Viewed by 782
Abstract
Researchers conduct experiments to discover the factors influencing their experimental subjects, so experimental design is essential. Response surface methodology (RSM) is a special experimental design used to evaluate factors that significantly affect a process and to determine the optimal conditions for different factors. The relationship between response values and influencing factors is mainly established using regression analysis techniques. The resulting equations are then used to generate contour and surface response plots that provide researchers with further insights. However, the impact of regression techniques on RSM model building has not been studied in detail. This study uses complete regression techniques to analyze sixteen datasets from the literature on semiconductor manufacturing, steel materials, and nanomaterials. Whether each variable significantly affected the response value was assessed using backward elimination and a t-test. The complete regression techniques used in this study included identifying the model's significant influencing variables, testing for normality and constant variance, applying predictive performance criteria, and examining influential data points. The results revealed several problems with model building in RSM studies from these three engineering fields: direct use of complete equations without statistical testing; deletion of variables with p-values above a preset threshold without further examination; non-normality and non-constant variance in the datasets left untested; and influential data points left unexamined. Researchers should strengthen their training in regression techniques to improve the RSM model-building process. Full article
25 pages, 4186 KB  
Review
Total Productive Maintenance and Industry 4.0: A Literature-Based Path Toward a Proposed Standardized Framework
by Zineb Mouhib, Maryam Gallab, Safae Merzouk, Aziz Soulhi and Mario Di Nardo
Appl. Syst. Innov. 2025, 8(4), 98; https://doi.org/10.3390/asi8040098 - 21 Jul 2025
Viewed by 1084
Abstract
In the context of Industry 4.0, Total Productive Maintenance (TPM) is undergoing a major shift driven by digital technologies such as the IoT, AI, cloud computing, and Cyber–Physical systems. This study explores how these technologies reshape traditional TPM pillars and practices through a two-phase methodology: bibliometric analysis, which reveals global research trends, key contributors, and emerging themes, and a systematic review, which discusses how core TPM practices are being transformed by advanced technologies. It also identifies key challenges of this transition, including data aggregation, a lack of skills, and resistance. However, despite the growing body of research on digital TPM, a major gap persists: the lack of a standardized model applicable across industries. Existing approaches are often fragmented or too context-specific, limiting scalability. Addressing this gap requires a structured approach that aligns technological advancements with TPM’s foundational principles. Taking a cue from these findings, this article formulates a systematic and scalable framework for TPM 4.0 deployment. The framework is based on four pillars: modular technological architecture, phased deployment, workforce integration, and standardized performance indicators. The ultimate goal is to provide a basis for a universal digital TPM standard that enhances the efficiency, resilience, and efficacy of smart maintenance systems. Full article
(This article belongs to the Section Industrial and Manufacturing Engineering)
27 pages, 2527 KB  
Review
A Systematic Review of Responsible Artificial Intelligence Principles and Practice
by Lakshitha Gunasekara, Nicole El-Haber, Swati Nagpal, Harsha Moraliyage, Zafar Issadeen, Milos Manic and Daswin De Silva
Appl. Syst. Innov. 2025, 8(4), 97; https://doi.org/10.3390/asi8040097 - 21 Jul 2025
Viewed by 1793
Abstract
The accelerated development of Artificial Intelligence (AI) capabilities and systems is driving a paradigm shift in productivity, innovation and growth. Despite this generational opportunity, AI is fraught with significant challenges and risks. To address these challenges, responsible AI has emerged as a modus operandi that ensures protections while not stifling innovations. Responsible AI minimizes risks to people, society, and the environment. However, responsible AI principles and practice are impacted by ‘principle proliferation’ as they are diverse and distributed across the applications, stakeholders, risks, and downstream impact of AI systems. This article presents a systematic review of responsible AI principles and practice with the objectives of discovering the current state, the foundations and the need for responsible AI, followed by the principles of responsible AI, and translation of these principles into the responsible practice of AI. Starting with 22,711 relevant peer-reviewed articles from comprehensive bibliographic databases, the review filters through to 9700 at de-duplication, 5205 at abstract screening, 1230 at semantic screening and 553 at final full-text screening. The analysis of this final corpus is presented as six findings that contribute towards the increased understanding and informed implementation of responsible AI. Full article
26 pages, 3622 KB  
Article
Shear Strength Prediction for RCDBs Utilizing Data-Driven Machine Learning Approach: Enhanced CatBoost with SHAP and PDPs Analyses
by Imad Shakir Abbood, Noorhazlinda Abd Rahman and Badorul Hisham Abu Bakar
Appl. Syst. Innov. 2025, 8(4), 96; https://doi.org/10.3390/asi8040096 - 10 Jul 2025
Viewed by 660
Abstract
Reinforced concrete deep beams (RCDBs) provide significant strength and serviceability for building structures. However, a simple, general, and universally accepted procedure for predicting their shear strength (SS) has yet to be established. This study proposes a novel data-driven approach to predicting the SS of RCDBs using an enhanced CatBoost (CB) model. For this purpose, a comprehensive new database of RCDBs with shear failure, comprising 950 experimental specimens, was established and adopted. The model was developed through a customized procedure including feature selection, data preprocessing, hyperparameter tuning, and model evaluation. The CB model was further evaluated against three data-driven models (Random Forest, Extra Trees, and AdaBoost) as well as three prominent mechanics-driven models (ACI 318, CSA A23.3, and EU2). Finally, the SHAP algorithm was employed for interpretation to increase the model's reliability. The results revealed that the CB model yielded superior accuracy and outperformed all other models. In addition, the interpretation results showed similar trends between the CB model and the mechanics-driven models. The geometric dimensions and concrete properties are the most influential input features on the SS, followed by the reinforcement properties; the SS can be significantly improved by increasing beam width and concrete strength and by reducing the shear span-to-depth ratio. Thus, the proposed interpretable data-driven model has high potential as an alternative approach for design practice in structural engineering. Full article
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)
20 pages, 4572 KB  
Article
Nonlinear Output Feedback Control for Parrot Mambo UAV: Robust Complex Structure Design and Experimental Validation
by Asmaa Taame, Ibtissam Lachkar, Abdelmajid Abouloifa, Ismail Mouchrif and Abdelali El Aroudi
Appl. Syst. Innov. 2025, 8(4), 95; https://doi.org/10.3390/asi8040095 - 7 Jul 2025
Viewed by 646
Abstract
This paper addresses the problem of controlling quadcopters operating in environments characterized by unpredictable disturbances such as wind gusts. From a control point of view, this is a nonstandard, highly challenging problem. Fundamentally, quadcopters are high-order dynamical systems characterized by an under-actuated, highly nonlinear model with coupling between several state variables. The main objective of this work is to achieve a trajectory by tracking desired altitude and attitude. The problem was tackled using a robust control approach with a multi-loop nonlinear controller combined with extended Kalman filtering (EKF). Specifically, the flight control system consists of two regulation loops. The first is an outer loop based on the backstepping approach that controls the elevation and yaw of the quadcopter, while the second is an inner loop that maintains the desired attitude by adjusting the roll and pitch, whose references are generated by the outer loop through a standard PID controller to keep the 2D trajectory on the desired path. The investigation integrates the EKF technique for sensor signal processing to increase measurement accuracy, thereby improving flight robustness. The proposed control system was formally developed and experimentally validated through indoor tests using the well-known Parrot Mambo unmanned aerial vehicle (UAV). The results show that the proposed flight control system is efficient and robust, making it suitable for advanced UAV navigation in dynamic scenarios with disturbances. Full article
(This article belongs to the Section Control and Systems Engineering)
20 pages, 2918 KB  
Article
Randomized Feature and Bootstrapped Naive Bayes Classification
by Bharameeporn Phatcharathada and Patchanok Srisuradetchai
Appl. Syst. Innov. 2025, 8(4), 94; https://doi.org/10.3390/asi8040094 - 2 Jul 2025
Viewed by 868
Abstract
Naive Bayes (NB) classifiers are widely used for their simplicity, computational efficiency, and interpretability. However, their predictive performance can degrade significantly in real-world settings where the conditional independence assumption is often violated. More complex NB variants address this issue but typically introduce structural complexity or require explicit dependency modeling, limiting their scalability and transparency. This study proposes two lightweight ensemble-based extensions—randomized feature naive Bayes (RF-NB) and randomized feature bootstrapped naive Bayes (RFB-NB)—designed to enhance robustness and predictive stability without altering the underlying NB model. By integrating randomized feature selection and bootstrap resampling, these methods implicitly reduce feature dependence and noise-induced variance. Evaluation across twenty real-world datasets spanning medical, financial, and industrial domains demonstrates that RFB-NB consistently outperformed classical NB, RF-NB, and k-nearest neighbor in several cases. Although random forest achieved higher average accuracy overall, RFB-NB demonstrated comparable accuracy with notably lower variance and improved predictive stability specifically in datasets characterized by high noise levels, large dimensionality, or significant class imbalance. These findings underscore the practical and complementary advantages of RFB-NB in challenging classification scenarios. Full article
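The RFB-NB idea described above (each ensemble member trains on a bootstrap resample restricted to a random feature subset, and members vote) can be sketched with a from-scratch Gaussian naive Bayes. The toy data, subset size, and number of estimators below are illustrative assumptions, not the paper's configuration:

```python
import math
import random
from collections import defaultdict

def fit_gnb(X, y, feats):
    """Gaussian NB statistics (prior, per-feature mean/variance) on a feature subset."""
    stats = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        params = []
        for f in feats:
            vals = [r[f] for r in rows]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9  # avoid zero variance
            params.append((mu, var))
        stats[c] = (len(rows) / len(X), params)
    return stats

def predict_gnb(stats, feats, x):
    """Class with the highest Gaussian log-likelihood plus log-prior."""
    best, best_lp = None, -float("inf")
    for c, (prior, params) in stats.items():
        lp = math.log(prior)
        for f, (mu, var) in zip(feats, params):
            lp += -0.5 * math.log(2 * math.pi * var) - (x[f] - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

def rfb_nb(X, y, n_estimators=15, seed=0):
    """Bootstrap resampling + random feature subsets; members vote by majority."""
    rng = random.Random(seed)
    d = len(X[0])
    k = max(1, int(math.sqrt(d)))       # illustrative subset size
    members = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap sample
        feats = rng.sample(range(d), k)                        # random feature subset
        members.append((fit_gnb([X[i] for i in idx], [y[i] for i in idx], feats), feats))
    def predict(x):
        votes = defaultdict(int)
        for stats, feats in members:
            votes[predict_gnb(stats, feats, x)] += 1
        return max(votes, key=votes.get)
    return predict

# Two well-separated toy classes in 2D
X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [3.0, 3.0], [3.2, 2.9], [2.9, 3.1]]
y = [0, 0, 0, 1, 1, 1]
predict = rfb_nb(X, y)
```

Both randomization steps weaken the effect of correlated or noisy features on any single member, which is the variance-reduction mechanism the abstract credits for RFB-NB's stability.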
(This article belongs to the Special Issue Recent Developments in Data Science and Knowledge Discovery)
16 pages, 3059 KB  
Article
OFF-The-Hook: A Tool to Detect Zero-Font and Traditional Phishing Attacks in Real Time
by Nazar Abbas Saqib, Zahrah Ali AlMuraihel, Reema Zaki AlMustafa, Farah Amer AlRuwaili, Jana Mohammed AlQahtani, Amal Aodah Alahmadi, Deemah Alqahtani, Saad Abdulrahman Alharthi, Sghaier Chabani and Duaa Ali AL Kubaisy
Appl. Syst. Innov. 2025, 8(4), 93; https://doi.org/10.3390/asi8040093 - 30 Jun 2025
Viewed by 821
Abstract
Phishing attacks continue to pose serious challenges to cybersecurity, with attackers constantly refining their methods to bypass detection systems. One particularly evasive technique is Zero-Font phishing, which inserts invisible or zero-sized characters into email content to deceive both users and traditional email filters. Because these characters are invisible to human readers yet still processed by email systems, they obscure malicious intent in ways that bypass basic content inspection. This study introduces a proactive phishing detection tool capable of identifying both traditional and Zero-Font phishing attempts. The proposed tool leverages a multi-layered security framework, combining structural inspection with machine learning-based classification. At its core, the system incorporates an advanced machine learning model trained on a well-established dataset comprising both phishing and legitimate emails. The model alone achieves an accuracy of up to 98.8%, contributing significantly to the overall effectiveness of the tool. This hybrid approach enhances the system's robustness and detection accuracy across diverse phishing scenarios. The findings underscore the importance of multi-faceted detection mechanisms and contribute to the development of more resilient defenses in the ever-evolving landscape of cybersecurity threats. Full article
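The structural-inspection side of the Zero-Font technique described above can be made concrete with a small heuristic checker. The CSS pattern and zero-width character set below are common indicators chosen for illustration; they are not the paper's actual detection rules or model:

```python
import re

# Common ways to hide text in email HTML: zero/near-zero font sizes in inline
# CSS, and zero-width Unicode characters spliced into visible words.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}
ZERO_FONT_RE = re.compile(
    r"font-size\s*:\s*0+(?:\.0+)?\s*(?:px|pt|em|rem|%)?(?=[;\"'\s>]|$)",
    re.IGNORECASE,
)

def zero_font_indicators(html):
    """Return simple Zero-Font phishing indicators found in an email body."""
    return {
        "zero_font_css": bool(ZERO_FONT_RE.search(html)),
        "zero_width_chars": sum(html.count(ch) for ch in ZERO_WIDTH),
    }

# Hypothetical email body: a hidden token plus a zero-width space inside "verify"
email = ('Dear user, <span style="font-size:0px">hidden-token</span> '
         'please veri\u200bfy your account')
flags = zero_font_indicators(email)
```

The lookahead in the regex keeps a legitimate `font-size:0.5em` from matching; in a real pipeline such flags would feed the classifier alongside other structural features rather than act as a verdict on their own.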
(This article belongs to the Special Issue The Intrusion Detection and Intrusion Prevention Systems)
30 pages, 3461 KB  
Article
A Privacy-Preserving Record Linkage Method Based on Secret Sharing and Blockchain
by Shumin Han, Zikang Wang, Qiang Zhao, Derong Shen, Chuang Wang and Yangyang Xue
Appl. Syst. Innov. 2025, 8(4), 92; https://doi.org/10.3390/asi8040092 - 28 Jun 2025
Viewed by 686
Abstract
Privacy-preserving record linkage (PPRL) aims to link records from different data sources while ensuring sensitive information is not disclosed. Utilizing blockchain as a trusted third party is an effective strategy for enhancing transparency and auditability in PPRL. However, to ensure data privacy during computation, such approaches often require computationally intensive cryptographic techniques. This can introduce significant computational overhead, limiting the method’s efficiency and scalability. To address this performance bottleneck, we combine blockchain with the distributed computation of secret sharing to propose a PPRL method based on blockchain-coordinated distributed computation. At its core, the approach utilizes Bloom filters to encode data and employs Boolean and arithmetic secret sharing to decompose the data into secret shares, which are uploaded to the InterPlanetary File System (IPFS). Combined with masking and random permutation mechanisms, it enhances privacy protection. Computing nodes perform similarity calculations locally, interacting with IPFS only a limited number of times, effectively reducing communication overhead. Furthermore, blockchain manages the entire computation process through smart contracts, ensuring transparency and correctness of the computation, achieving efficient and secure record linkage. Experimental results demonstrate that this method effectively safeguards data privacy while exhibiting high linkage quality and scalability. Full article
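The encoding-and-sharing pipeline described above can be sketched in a few lines: a Bloom filter encodes an identifier, and XOR-based Boolean secret sharing splits it into shares whose recombination restores the original. The parameters m, k, and n are illustrative, and the IPFS upload, masking, and permutation steps are omitted:

```python
import hashlib
import secrets

def bloom_encode(value, m=64, k=3):
    """Encode a string into an m-bit Bloom filter using k salted hash functions."""
    bits = 0
    for i in range(k):
        h = int(hashlib.sha256(f"{i}:{value}".encode()).hexdigest(), 16)
        bits |= 1 << (h % m)
    return bits

def xor_shares(bits, n=3, m=64):
    """Split a bit vector into n Boolean (XOR) secret shares."""
    shares = [secrets.randbits(m) for _ in range(n - 1)]   # n-1 random masks
    last = bits
    for s in shares:
        last ^= s                                          # final share completes the XOR
    return shares + [last]

# Hypothetical record identifier (e.g., concatenated quasi-identifiers)
bf = bloom_encode("alice smith 1980-01-01")
shares = xor_shares(bf)
recombined = 0
for s in shares:
    recombined ^= s    # XOR of all shares recovers the Bloom filter
```

Any subset of fewer than n shares is statistically independent of the encoded filter, which is what lets the computing nodes hold and process shares without learning the underlying record.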
26 pages, 1806 KB  
Article
From Transactions to Transformations: A Bibliometric Study on Technology Convergence in E-Payments
by Priyanka C. Bhatt, Yu-Chun Hsu, Kuei-Kuei Lai and Vinayak A. Drave
Appl. Syst. Innov. 2025, 8(4), 91; https://doi.org/10.3390/asi8040091 - 28 Jun 2025
Viewed by 849
Abstract
This study investigates the convergence of blockchain, artificial intelligence (AI), near-field communication (NFC), and mobile technologies in electronic payment (e-payment) systems, proposing an innovative integrative framework to deconstruct the systemic innovations and transformative impacts driven by such technological synergy. Unlike prior research, which often focuses on single-technology adoption, this study uniquely adopts a cross-technology convergence perspective. To our knowledge, this is the first study to empirically map the multi-technology convergence landscape in e-payment using scientometric techniques. By employing bibliometric and thematic network analysis methods, the research maps the intellectual evolution and key research themes of technology convergence in e-payment systems. Findings reveal that while the integration of these technologies holds significant promise, improving transparency, scalability, and responsiveness, it also presents challenges, including interoperability barriers, privacy concerns, and regulatory complexity. Furthermore, this study highlights the potential for convergent technologies to unintentionally deepen the digital divide if not inclusively designed. The novelty of this study is threefold: (1) theoretical contribution—this study expands existing frameworks of technology adoption and digital governance by introducing an integrated perspective on cross-technology adoption and regulatory responsiveness; (2) practical relevance—it offers actionable, stakeholder-specific recommendations for policymakers, financial institutions, developers, and end-users; (3) methodological innovation—it leverages scientometric and topic modeling techniques to capture the macro-level trajectory of technology convergence, complementing traditional qualitative insights. 
In conclusion, this study advances the theoretical foundations of digital finance and provides forward-looking policy and managerial implications, paving the way for a more secure, inclusive, and innovation-driven digital payment ecosystem. Full article
(This article belongs to the Topic Social Sciences and Intelligence Management, 2nd Volume)
23 pages, 3736 KB  
Article
Performance Analysis of a Hybrid Complex-Valued CNN-TCN Model for Automatic Modulation Recognition in Wireless Communication Systems
by Hamza Ouamna, Anass Kharbouche, Noureddine El-Haryqy, Zhour Madini and Younes Zouine
Appl. Syst. Innov. 2025, 8(4), 90; https://doi.org/10.3390/asi8040090 - 28 Jun 2025
Viewed by 823
Abstract
This paper presents a novel deep learning-based automatic modulation recognition (AMR) model, designed to classify ten modulation types from complex I/Q signal data. The proposed architecture, named CV-CNN-TCN, integrates Complex-Valued Convolutional Neural Networks (CV-CNNs) with Temporal Convolutional Networks (TCNs) to jointly extract spatial and temporal features while preserving the inherent phase information of the signal. An enhanced variant, CV-CNN-TCN-DCC, incorporates dilated causal convolutions to further strengthen temporal representation. The models are trained and evaluated on the benchmark RadioML2016.10b dataset. At SNR = −10 dB, the CV-CNN-TCN achieves a classification accuracy of 37%, while the CV-CNN-TCN-DCC improves to 40%. In comparison, ResNet reaches 33%, and other models such as CLDNN (convolutional LSTM dense neural network) and SCRNN (Sequential Convolutional Recurrent Neural Network) remain below 30%. At 0 dB SNR, the CV-CNN-TCN-DCC achieves a Jaccard index of 0.58 and an MCC of 0.67, outperforming ResNet (0.55, 0.64) and CNN (0.53, 0.61). Furthermore, the CV-CNN-TCN-DCC achieves 75% accuracy at SNR = 10 dB and maintains over 90% classification accuracy for SNRs above 2 dB. These results demonstrate that the proposed architectures, particularly with dilated causal convolutional enhancements, significantly improve robustness and generalization under low-SNR conditions, outperforming state-of-the-art models in both accuracy and reliability. Full article
(This article belongs to the Section Artificial Intelligence)
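The dilated causal convolution that distinguishes the CV-CNN-TCN-DCC variant can be illustrated with a minimal NumPy sketch (not the authors' implementation; the kernel and I/Q samples below are invented for illustration). Left-padding by (k−1)·d keeps the filter causal, so each output depends only on current and past samples, while the dilation d widens the receptive field without adding parameters:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: y[t] = sum_i w[i] * x[t - i*dilation].
    Left zero-padding ensures no output depends on future samples."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad, dtype=x.dtype), x])
    return np.array([np.sum(w * xp[t + pad - np.arange(k) * dilation])
                     for t in range(len(x))])

# Works directly on complex I/Q samples, preserving phase information:
iq = np.array([1 + 1j, 2 - 1j, -1 + 0.5j, 0.5 + 2j])
y = dilated_causal_conv1d(iq, np.array([0.5, 0.5]), dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... gives the exponentially growing receptive field that makes TCNs effective on long signal sequences.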
26 pages, 8949 KB  
Article
Real-Time Detection of Hole-Type Defects on Industrial Components Using Raspberry Pi 5
by Mehmet Deniz, Ismail Bogrekci and Pinar Demircioglu
Appl. Syst. Innov. 2025, 8(4), 89; https://doi.org/10.3390/asi8040089 - 27 Jun 2025
Viewed by 933
Abstract
In modern manufacturing, ensuring quality control for geometric features is critical, yet detecting anomalies in circular components remains underexplored. This study proposes a real-time defect detection framework for metal parts with holes, optimized for deployment on a Raspberry Pi 5 edge device. We fine-tuned and evaluated three deep learning models (ResNet50, EfficientNet-B3, and MobileNetV3-Large) on a grayscale image dataset (43,482 samples) containing various hole defects and class imbalances. Through extensive data augmentation and class weighting, the models achieved near-perfect binary classification of defective vs. non-defective parts. Notably, ResNet50 attained 99.98% accuracy (precision 0.9994, recall 1.0000), correctly identifying all defects with only one false alarm. MobileNetV3-Large and EfficientNet-B3 likewise exceeded 99.9% accuracy with slightly more false positives, but offered advantages in model size or interpretability. Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations confirmed that each network focuses on meaningful geometric features (misaligned or irregular holes) when predicting defects, enhancing explainability. These results demonstrate that lightweight CNNs can reliably detect geometric deviations (e.g., mispositioned or missing holes) in real time. The proposed system significantly improves inline quality assurance by enabling timely, accurate, and interpretable defect detection on low-cost hardware, paving the way for smarter manufacturing inspection. Full article
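Grad-CAM, used in this study to verify that the networks attend to hole geometry, reduces to a small computation once a convolutional layer's feature maps and the class-score gradients are available. The NumPy sketch below shows that core step generically (array shapes and values are illustrative, not taken from the paper):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations A (C, H, W) and the
    gradients of the class score w.r.t. those activations (C, H, W):
    weight each channel by its spatially averaged gradient (alpha_c),
    sum the weighted maps, and clip negatives (ReLU)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_c, shape (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_c alpha_c * A_c
    return np.maximum(cam, 0)

# Toy example: 2 channels of 2x2 activations and gradients.
fm = np.stack([2 * np.ones((2, 2)), np.ones((2, 2))])
gr = np.stack([np.ones((2, 2)), np.zeros((2, 2))])
heatmap = grad_cam(fm, gr)  # only channel 0 contributes here
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the image, which is how the hole-focused visualizations in such studies are produced.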
20 pages, 1652 KB  
Article
Analysis of Spatiotemporal Characteristics of Intercity Travelers Within Urban Agglomeration Based on Trip Chain and K-Prototypes Algorithm
by Shuai Yu, Yuqing Liu and Song Hu
Appl. Syst. Innov. 2025, 8(4), 88; https://doi.org/10.3390/asi8040088 - 26 Jun 2025
Viewed by 635
Abstract
In the rapid process of urbanization, urban agglomerations have become a key driving force for regional development and spatial reorganization. The formation and development of urban agglomerations rely on communication between cities. However, the spatiotemporal characteristics of intercity travelers across the entire trip chain are not yet fully understood. This study proposes a spatiotemporal analysis method for intercity travel in urban agglomerations by constructing origin-to-destination (OD) trip chains from smartphone data, with the Beijing–Tianjin–Hebei urban agglomeration as a case study. The study employed Cramer's V and Spearman correlation coefficients for multivariate feature selection, identifying 12 key variables from an initial set of 20. The optimal cluster configuration was then determined via silhouette analysis. Finally, the K-prototypes algorithm was applied to cluster 161,797 intercity trip chains across six transportation corridors in 2019 and 2021, enabling a comparative spatiotemporal analysis of travel patterns. Results show the following: (1) intercity travelers are predominantly males aged 19–35, with significantly higher weekday volumes; (2) modal split exhibits significant spatial heterogeneity: the metro predominates in Beijing, while road transport prevails elsewhere; (3) waiting times at departure hubs increased significantly in 2021 relative to 2019 baselines; (4) increased metro mileage correlates positively with longer intra-city travel distances. These findings contribute substantially to transportation planning, particularly in optimizing multimodal hub operations and allocating infrastructure investment. Full article
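K-prototypes handles the mixed numeric/categorical trip-chain variables by combining a squared Euclidean term for numeric attributes with a weighted mismatch count for categorical ones. A minimal sketch of that dissimilarity measure follows (the attribute values and weight gamma are illustrative, not taken from the study):

```python
import numpy as np

def kproto_distance(x_num, x_cat, proto_num, proto_cat, gamma=1.0):
    """K-prototypes dissimilarity between a record and a cluster prototype:
    squared Euclidean distance on numeric attributes plus gamma times the
    number of mismatched categorical attributes."""
    num_part = np.sum((np.asarray(x_num) - np.asarray(proto_num)) ** 2)
    cat_part = sum(a != b for a, b in zip(x_cat, proto_cat))
    return num_part + gamma * cat_part

# Illustrative trip-chain record vs. a cluster prototype:
d = kproto_distance([12.5, 3.0], ["metro", "weekday"],
                    [10.0, 2.0], ["road", "weekday"], gamma=0.5)
```

The weight gamma balances the two attribute types; each record is assigned to its nearest prototype under this measure, and prototypes are updated with numeric means and categorical modes.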