Journal Description
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.2 days after submission; acceptance to publication takes 3.9 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023); 5-Year Impact Factor: 2.4 (2023)
Latest Articles
A Novel Data Analytics Methodology for Discovering Behavioral Risk Profiles: The Case of Diners During a Pandemic
Computers 2024, 13(10), 272; https://doi.org/10.3390/computers13100272 - 19 Oct 2024
Abstract
Understanding tourist profiles and behaviors during health pandemics is key to better preparedness for unforeseen future outbreaks, particularly for tourism and hospitality businesses. This study develops and applies a novel data analytics methodology to gain insights into the health risk reduction behavior of restaurant diners/patrons during their dining out experiences in a pandemic. The methodology builds on data relating to four constructs (question categories) and measurements (questions and attributes), with the constructs being worry, health risk prevention behavior, health risk reduction behavior, and demographic characteristics. As a unique contribution, the methodology generates a behavioral typology by identifying risk profiles, which are expressed as one- and two-level decision rules. For example, the results highlighted the significance of restaurants’ adherence to cautionary measures and diners’ perception of seclusion. These and other factors enable a multifaceted analysis, typology, and understanding of diners’ risk profiles, offering valuable guidance for developing managerial strategies and skill development programs to promote safer dining experiences during pandemics. Besides yielding novel types of insights through rules, another practical contribution of the research is the development of a public web-based analytics dashboard for interactive insight discovery and decision support.
Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024)
Open Access Systematic Review
Context-Aware Embedding Techniques for Addressing Meaning Conflation Deficiency in Morphologically Rich Languages Word Embedding: A Systematic Review and Meta-Analysis
by
Mosima Anna Masethe, Hlaudi Daniel Masethe and Sunday O. Ojo
Computers 2024, 13(10), 271; https://doi.org/10.3390/computers13100271 - 17 Oct 2024
Abstract
This systematic literature review aims to evaluate and synthesize the effectiveness of various embedding techniques—word embeddings, contextual word embeddings, and context-aware embeddings—in addressing Meaning Conflation Deficiency (MCD). Using the PRISMA framework, this study assesses the current state of research and provides insights into the impact of these techniques on resolving meaning conflation issues. After a thorough literature search, 403 articles on the subject were found. A thorough screening and selection process resulted in the inclusion of 25 studies in the meta-analysis. The evaluation adhered to the PRISMA principles, guaranteeing a methodical and transparent process. To estimate effect sizes and to evaluate heterogeneity and publication bias among the chosen papers, meta-analytic statistics were used: tau-squared (τ²), the between-study variance parameter of the random-effects model; H-squared (H²), a measure of heterogeneity; and I-squared (I²), the percentage of total variation across studies attributable to heterogeneity. The meta-analysis demonstrated a high degree of variation in effect sizes among the studies, with a τ² value of 8.8724. The significant degree of heterogeneity was further emphasized by the H² score of 8.10 and the I² value of 87.65%. To account for publication bias, a trim and fill analysis was performed, yielding a beta value of 5.95, a standard error of 4.767, a Z-score of 1.25, and a p-value of 0.2. The results point to a sizable effect size, but the estimates are highly uncertain, as evidenced by the large standard error and non-significant p-value.
The review concludes that although context-aware embeddings show promise in treating Meaning Conflation Deficiency, there is a great deal of variability and uncertainty in the available data. The varied findings among studies are highlighted by the large τ², I², and H² values, and the trim and fill analysis shows that adjusting for publication bias does not alter the non-significance of the effect size. To generate more trustworthy insights, future research should concentrate on enhancing methodological consistency, investigating other embedding strategies, and extending analysis across various languages and contexts. Even though the results demonstrate a substantial effect size in addressing MCD through sophisticated word embedding techniques, such as context-aware embeddings, there is still a great deal of variability and uncertainty owing to various factors, including the different languages studied, the sizes of the corpora, and the embedding techniques used. These differences show why future research methods must be standardized to guarantee that study results can be compared with one another. The results emphasize how crucial it is to extend the linguistic scope to more morphologically rich and low-resource languages, where MCD is especially difficult. The creation of language-specific models for low-resource languages is one practical way to increase performance and consistency across Natural Language Processing (NLP) applications. By taking these steps, we can advance our understanding of MCD more thoroughly, which will ultimately improve the performance of NLP systems across a variety of linguistic circumstances.
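The H² and I² heterogeneity statistics quoted in this abstract follow standard meta-analytic formulas built on Cochran's Q. As a minimal illustration (the effect sizes and variances below are made-up example data, not figures from the review):

```python
# Sketch of Cochran's Q and the derived H^2 and I^2 heterogeneity
# statistics; the per-study effect sizes and variances are invented.

def heterogeneity(effects, variances):
    """Return (Q, H2, I2) for per-study effect sizes and variances."""
    weights = [1.0 / v for v in variances]            # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    h2 = q / df                                       # H^2: Q vs. its expectation
    i2 = max(0.0, (q - df) / q) * 100.0               # I^2: % variation beyond chance
    return q, h2, i2

q, h2, i2 = heterogeneity([0.2, 0.5, 1.4, 2.1], [0.04, 0.05, 0.06, 0.05])
```

Values of I² near 100% signal strong between-study heterogeneity, as with the review's reported I² of 87.65%.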
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Open Access Review
Intelligent Tutoring Systems in Mathematics Education: A Systematic Literature Review Using the Substitution, Augmentation, Modification, Redefinition Model
by
Taekwon Son
Computers 2024, 13(10), 270; https://doi.org/10.3390/computers13100270 - 15 Oct 2024
Abstract
Scholars have claimed that artificial intelligence can be used in education to transform learning. However, there is insufficient evidence on whether intelligent tutoring systems (ITSs), a representative form of artificial intelligence in education, have transformed the teaching and learning of mathematics. To fill this gap, this systematic review was conducted to examine empirical studies from 2003 to 2023 that used ITSs in mathematics education. Technology integration was coded using the substitution, augmentation, modification, redefinition (SAMR) model, which was extended to suit ITSs in a mathematics education context. The review also examined how different contexts and teacher roles are intertwined with SAMR levels. The results show that while ITSs in mathematics education primarily augmented existing learning, recent ITS studies have transformed students’ learning experiences. ITSs were most commonly applied at the elementary school level, and most ITS studies focused on the areas of number and arithmetic, algebra, and geometry. The level of SAMR varied depending on the research purpose, and ITS studies in mathematics education were mainly conducted in a way that minimized teacher intervention. The results of this study suggest that the affordance of an ITS, the educational context, and the teacher’s role should be considered simultaneously to demonstrate the transformative power of ITSs in mathematics education.
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Open Access Article
An Efficient Detection Mechanism of Network Intrusions in IoT Environments Using Autoencoder and Data Partitioning
by
Yiran Xiao, Yaokai Feng and Kouichi Sakurai
Computers 2024, 13(10), 269; https://doi.org/10.3390/computers13100269 - 14 Oct 2024
Abstract
In recent years, with the development of the Internet of Things and distributed computing, the “server-edge device” architecture has been widely deployed. This study focuses on leveraging autoencoder technology to address the binary classification problem in network intrusion detection, aiming to develop a lightweight model suitable for edge devices. Traditional intrusion detection models face two main challenges when directly ported to edge devices: inadequate computational resources to support large-scale models and the need to improve the accuracy of simpler models. To tackle these issues, this research utilizes the Extreme Learning Machine for its efficient training speed and compact model size to implement autoencoders. Two improvements over the latest related work are proposed: First, to improve data purity and ultimately enhance detection performance, the data are partitioned into multiple regions based on the prediction results of these autoencoders. Second, autoencoder characteristics are leveraged to further investigate the data within each region. We used the public dataset NSL-KDD to test the behavior of the proposed mechanism. The experimental results show that when dealing with multi-class attacks, the model’s performance improved significantly, with gains of 3.5% in accuracy and 2.9% in F1-score, respectively, while maintaining its lightweight nature.
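The partition-by-reconstruction-error idea can be sketched in miniature. This is not the paper's ELM-based implementation: a linear autoencoder built from a truncated SVD stands in for the Extreme Learning Machine autoencoder, and the "traffic" data is synthetic:

```python
# Toy sketch: score samples by autoencoder reconstruction error and
# partition them into normal-like vs. suspicious regions. A 2-component
# linear autoencoder (truncated SVD) replaces the paper's ELM autoencoder.
import numpy as np

rng = np.random.default_rng(42)
B = rng.normal(size=(2, 8))                 # hidden 2-D structure of benign traffic
train = rng.normal(size=(200, 2)) @ B       # "benign" training samples (rank 2)
attacks = rng.normal(size=(20, 8)) * 3.0    # off-subspace "attack" samples

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]                         # encoder/decoder weights

def recon_error(x):
    z = (x - mean) @ components.T           # encode to 2-D latent space
    x_hat = z @ components + mean           # decode back to 8-D
    return np.linalg.norm(x - x_hat, axis=1)

threshold = np.quantile(recon_error(train), 0.99)   # partition boundary
is_anomaly = recon_error(attacks) > threshold
```

Samples falling in the high-error region would then be examined further, mirroring the paper's per-region investigation step.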
Full article
(This article belongs to the Special Issue Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities)
Open Access Article
Employing Different Algorithms of Lightweight Convolutional Neural Network Models in Image Distortion Classification
by
Ismail Taha Ahmed, Falah Amer Abdulazeez and Baraa Tareq Hammad
Computers 2024, 13(10), 268; https://doi.org/10.3390/computers13100268 - 12 Oct 2024
Abstract
The majority of applications use automatic image recognition technologies to carry out a range of tasks. Therefore, it is crucial to identify and classify image distortions to improve image quality. Despite efforts in this area, there are still many challenges in accurately and reliably classifying distorted images. In this paper, we offer a comprehensive analysis of models of both non-lightweight and lightweight deep convolutional neural networks (CNNs) for the classification of distorted images. Subsequently, an effective method is proposed to enhance the overall performance of distortion image classification. This method involves selecting features from the pretrained models’ capabilities and using a strong classifier. The experiments utilized the kadid10k dataset to assess the effectiveness of the results. The K-nearest neighbor (KNN) classifier showed better performance than the naïve classifier in terms of accuracy, precision, error rate, recall and F1 score. Additionally, SqueezeNet outperformed other deep CNN models, both lightweight and non-lightweight, across every evaluation metric. The experimental results demonstrate that combining SqueezeNet with KNN can effectively and accurately classify distorted images into the correct categories. The proposed SqueezeNet-KNN method achieved an accuracy rate of 89%. As detailed in the results section, the proposed method outperforms state-of-the-art methods in accuracy, precision, error, recall, and F1 score measures.
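The classification stage pairs features from a pretrained network with a KNN vote. A minimal sketch of the vote itself, where hypothetical 2-D toy features stand in for SqueezeNet embeddings:

```python
# K-nearest neighbor majority vote over feature vectors; the features and
# distortion labels below are invented stand-ins for CNN-extracted features.
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Majority label among the k training features closest to `query`."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(f, query)), lbl)
        for f, lbl in zip(train_feats, train_labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

feats  = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.15, 0.15)]
labels = ["blur", "blur", "noise", "noise", "blur"]
pred = knn_predict(feats, labels, (0.85, 0.85))   # lands near the "noise" cluster
```

In the paper's setting the feature vectors would be far higher-dimensional, but the decision rule is the same.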
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Internet of Things-Driven Precision in Fish Farming: A Deep Dive into Automated Temperature, Oxygen, and pH Regulation
by
Md. Naymul Islam Nayoun, Syed Akhter Hossain, Karim Mohammed Rezaul, Kazy Noor e Alam Siddiquee, Md. Shabiul Islam and Tajnuva Jannat
Computers 2024, 13(10), 267; https://doi.org/10.3390/computers13100267 - 12 Oct 2024
Abstract
The research introduces a revolutionary Internet of Things (IoT)-based system for fish farming, designed to significantly enhance efficiency and cost-effectiveness. By integrating the NodeMcu12E ESP8266 microcontroller, this system automates the management of critical water quality parameters such as pH, temperature, and oxygen levels, essential for fostering optimal fish growth conditions and minimizing mortality rates. The core of this innovation lies in its intelligent monitoring and control mechanism, which not only supports accelerated fish development but also ensures the robustness of the farming process through automated adjustments whenever the monitored parameters deviate from desired thresholds. This smart fish farming solution features an Arduino IoT cloud-based framework, offering a user-friendly web interface that enables fish farmers to remotely monitor and manage their operations from any global location. This aspect of the system emphasizes the importance of efficient information management and the transformation of sensor data into actionable insights, thereby reducing the need for constant human oversight and significantly increasing operational reliability. The autonomous functionality of the system is a key highlight, designed to persist in adjusting the environmental conditions within the fish farm until the optimal parameters are restored. This capability greatly diminishes the risks associated with manual monitoring and adjustments, allowing even those with limited expertise in aquaculture to achieve high levels of production efficiency and sustainability. By leveraging data-driven technologies and IoT innovations, this study not only addresses the immediate needs of the fish farming industry but also contributes to solving the broader global challenge of protein production. 
It presents a scalable and accessible approach to modern aquaculture, empowering stakeholders to maximize output and minimize risks associated with fish farming, thereby paving the way for a more sustainable and efficient future in the global food supply.
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
Open Access Article
Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development
by
Tahir Rashid, Inam Illahi, Qasim Umer, Muhammad Arfan Jaffar, Waheed Yousuf Ramay and Hanadi Hakami
Computers 2024, 13(10), 266; https://doi.org/10.3390/computers13100266 - 12 Oct 2024
Abstract
Crowdsourcing Software Development (CSD) platforms, such as TopCoder, function as intermediaries connecting clients with developers. Despite employing systematic methodologies, these platforms frequently encounter high task abandonment rates, with approximately 19% of projects failing to meet satisfactory outcomes. Although existing research has focused on task scheduling, developer recommendations, and reward mechanisms, there has been insufficient attention to the support of platform moderators, or copilots, who are essential to project success. A critical responsibility of copilots is estimating project duration; however, manual predictions often lead to inconsistencies and delays. This paper introduces an innovative machine learning approach designed to automate the prediction of project duration on CSD platforms. Utilizing historical data from TopCoder, the proposed method extracts pertinent project attributes and preprocesses textual data through Natural Language Processing (NLP). Bidirectional Encoder Representations from Transformers (BERT) are employed to convert textual information into vectors, which are then analyzed using various machine learning algorithms. Zero-shot learning algorithms exhibit superior performance, with an average accuracy of 92.76%, precision of 92.76%, recall of 99.33%, and an f-measure of 95.93%. The implementation of the proposed automated duration prediction model is crucial for enhancing the success rate of crowdsourcing projects, optimizing resource allocation, managing budgets effectively, and improving stakeholder satisfaction.
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
Area–Time-Efficient High-Radix Modular Inversion Algorithm and Hardware Implementation for ECC over Prime Fields
by
Yamin Li
Computers 2024, 13(10), 265; https://doi.org/10.3390/computers13100265 - 12 Oct 2024
Abstract
Elliptic curve cryptography (ECC) is widely used for secure communications, because it can provide the same level of security as RSA with a much smaller key size. In constrained environments, it is important to consider efficiency, in terms of execution time and hardware costs. Modular inversion is a key time-consuming calculation used in ECC. Its hardware implementation requires extensive hardware resources, such as lookup tables and registers. We investigate the state-of-the-art modular inversion algorithms, and evaluate the performance and cost of the algorithms and their hardware implementations. We then propose a high-radix modular inversion algorithm aimed at reducing the execution time and hardware costs. We present a detailed radix-8 hardware implementation based on 256-bit primes in Verilog HDL and compare its cost performance to other implementations. Our implementation on the Altera Cyclone V FPGA chip used 1227 ALMs (adaptive logic modules) and 1037 registers. The modular inversion calculation took 3.67 ms. The AT (area–time) factor was 8.30, outperforming the other implementations. We also present an implementation of ECC using the proposed radix-8 modular inversion algorithm. The implementation results also showed that our modular inversion algorithm was more efficient in area–time than the other algorithms.
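For context, the quantity such a circuit computes is the modular inverse over a prime field. A plain-Python reference via the extended Euclidean algorithm is sketched below; this is not the paper's high-radix hardware algorithm, and the choice of the NIST P-256 prime as the 256-bit modulus is an assumption for illustration:

```python
# Reference modular inversion over a prime field using the extended
# Euclidean algorithm (software sketch, not the radix-8 hardware design).

def mod_inverse(a, p):
    """Return x such that (a * x) % p == 1, for prime p and gcd(a, p) == 1."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r        # gcd steps
        old_s, s = s, old_s - q * s        # track the Bezout coefficient of a
    return old_s % p                       # old_r == 1 for prime p

# Example 256-bit prime: the NIST P-256 field prime (an assumed choice here)
P256 = 2**256 - 2**224 + 2**192 + 2**96 - 1
x = mod_inverse(123456789, P256)
```

The hardware trade-off discussed in the article is precisely about performing this calculation in fewer, wider (radix-8) iterations with modest logic cost.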
Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))
Open Access Article
AI-Generated Spam Review Detection Framework with Deep Learning Algorithms and Natural Language Processing
by
Mudasir Ahmad Wani, Mohammed ElAffendi and Kashish Ara Shakil
Computers 2024, 13(10), 264; https://doi.org/10.3390/computers13100264 - 12 Oct 2024
Abstract
Spam reviews pose a significant challenge to the integrity of online platforms, misleading consumers and undermining the credibility of genuine feedback. This paper introduces an innovative AI-generated spam review detection framework that leverages Deep Learning algorithms and Natural Language Processing (NLP) techniques to identify and mitigate spam reviews effectively. Our framework utilizes multiple Deep Learning models, including Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, Gated Recurrent Unit (GRU), and Bidirectional LSTM (BiLSTM), to capture intricate patterns in textual data. The system processes and analyzes large volumes of review content to detect deceptive patterns by utilizing advanced NLP and text embedding techniques such as One-Hot Encoding, Word2Vec, and Term Frequency-Inverse Document Frequency (TF-IDF). By combining three embedding techniques with four Deep Learning algorithms, a total of twelve exhaustive experiments were conducted to detect AI-generated spam reviews. The experimental results demonstrate that our approach outperforms the traditional machine learning models, offering a robust solution for ensuring the authenticity of online reviews. Among the models evaluated, those employing Word2Vec embeddings, particularly the BiLSTM_Word2Vec model, exhibited the strongest performance. The BiLSTM model with Word2Vec achieved the highest performance, with an exceptional accuracy of 98.46%, a precision of 0.98, a recall of 0.97, and an F1-score of 0.98, reflecting a near-perfect balance between precision and recall. Its high F2-score (0.9810) and F0.5-score (0.9857) further highlight its effectiveness in accurately detecting AI-generated spam while minimizing false positives, making it the most reliable option for this task. Similarly, the Word2Vec-based LSTM model also performed exceptionally well, with an accuracy of 97.58%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. 
The CNN model with Word2Vec similarly delivered strong results, achieving an accuracy of 97.61%, a precision of 0.97, a recall of 0.96, and an F1-score of 0.97. This study is unique in its focus on detecting spam reviews specifically generated by AI-based tools rather than solely detecting spam reviews or AI-generated text. This research contributes to the field of spam detection by offering a scalable, efficient, and accurate framework that can be integrated into various online platforms, enhancing user trust and the decision-making processes.
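The F2 and F0.5 scores quoted above are instances of the general F-beta measure, which trades off recall against precision. A quick sketch of the formula, with made-up precision and recall values rather than the paper's results:

```python
# F-beta score: weighted harmonic mean of precision and recall.
# beta > 1 favors recall (F2); beta < 1 favors precision (F0.5).

def f_beta(precision, recall, beta):
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.8, 0.5   # invented values: precision-heavy classifier
f1  = f_beta(p, r, 1.0)
f2  = f_beta(p, r, 2.0)
f05 = f_beta(p, r, 0.5)
```

With precision above recall, F0.5 exceeds F1, which exceeds F2, so reporting all three (as the abstract does) shows how a model balances false positives against false negatives.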
Full article
(This article belongs to the Special Issue When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions)
Open Access Article
access@tour by Action: A Platform for Improving Accessible Tourism Conditions
by
Pedro Teixeira, Celeste Eusébio and Leonor Teixeira
Computers 2024, 13(10), 263; https://doi.org/10.3390/computers13100263 - 12 Oct 2024
Abstract
Accessible tourism has become relevant, generating significant economic and social impacts. Even though the accessible tourism market is growing and presents an excellent business opportunity, it is largely ignored, as it is challenging to stimulate the flow of accessibility information. Accessible technologies, such as tourism information systems, can be a potential solution, increasing accessibility through communication. However, such solutions remain scarce and often fail to integrate users into the development process. This research aims to present a technological platform to improve accessibility in the tourism industry. The name of this accessible and adaptable technological solution is access@tour by action, and it was created following a user-centered design methodology. This development involved a requirements engineering process based on three crucial stakeholders in accessible tourism: educational institutions, supply agents, and demand agents. The design phase was supported by a conceptual model based on the Unified Modeling Language (UML). The initial prototype of the solution, created in Adobe XD, implements a wide range of informational and accessibility requirements. Selected access@tour by action interfaces illustrate the design, content, and primary functionalities. By linking technological development, tourism, and social inclusion components, this study highlights the relevance and interdisciplinarity of the processes involved in developing accessible information systems.
Full article
Open Access Review
A Bibliometric Analysis Exploring the Acceptance of Virtual Reality among Older Adults: A Review
by
Pei-Gang Wang, Nazlena Mohamad Ali and Mahidur R. Sarker
Computers 2024, 13(10), 262; https://doi.org/10.3390/computers13100262 - 12 Oct 2024
Abstract
In recent years, there has been a widespread integration of virtual reality (VR) technology across various sectors including healthcare, education, and entertainment, marking a significant rise in its societal importance. However, with the ongoing trend of population ageing, understanding the elderly’s acceptance of such new technologies has become a focal point in both academic and industrial discourse. Despite the attention it garners, there exists a gap in understanding the attitudes of older adults towards VR adoption, along with evident needs and barriers within this demographic. Hence, gaining an in-depth comprehension of the factors influencing the acceptance of VR technology among older adults becomes imperative to enhance its utility and efficacy within this group. This study employs renowned databases such as WoS and Scopus to scrutinize and analyze the utilization of VR among the elderly population. Utilizing VOSviewer software (version 1.6.20), statistical analysis is conducted on the pertinent literature to delve into research lacunae, obstacles, and recommendations in this domain. The findings unveil a notable surge in literature studies concerning VR usage among older adults, particularly evident since 2019. This study documents significant journals, authors, citations, countries, and research domains contributing to this area. Furthermore, it highlights pertinent issues and challenges surrounding the adoption of VR by older users, aiming to identify prevailing constraints, research voids, and future technological trajectories. Simultaneously, this study furnishes guidelines and suggestions tailored towards enhancing VR acceptance among the elderly, thereby fostering a more inclusive technological milieu. Ultimately, this research aspires to establish an encompassing technological ecosystem empowering older adults to harness VR technology for enriched engagement, learning, and social interactions.
Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)
Open Access Article
Enhancing Information Exchange in Ship Maintenance through Digital Twins and IoT: A Comprehensive Framework
by
Andrii Golovan, Vasyl Mateichyk, Igor Gritsuk, Alexander Lavrov, Miroslaw Smieszek, Iryna Honcharuk and Olena Volska
Computers 2024, 13(10), 261; https://doi.org/10.3390/computers13100261 - 11 Oct 2024
Abstract
This article proposes a comprehensive framework for enhancing information exchange in ship maintenance through the integration of Digital Twins (DTs) and the Internet of Things (IoT). The maritime industry faces significant challenges in maintaining ships due to issues like data silos, delayed information flow, and insufficient real-time updates. By leveraging advanced technologies such as DTs and IoT, this framework aims to optimize maintenance processes, improve decision-making, and increase the operational efficiency of maritime vessels. Digital Twins create virtual replicas of physical assets, allowing for continuous monitoring, simulation, and prediction of ship conditions. Meanwhile, IoT devices enable real-time data collection and transmission from various ship components, facilitating a seamless flow of information. This integrated approach enhances predictive maintenance capabilities, reduces downtime, and improves resource allocation. The article delves into the architecture of the proposed framework, implementation steps, and potential challenges, supported by case studies that demonstrate its practical application and benefits. By addressing these aspects, the framework aims to provide a robust solution for modernizing ship maintenance operations and ensuring the longevity and reliability of maritime assets.
Full article
(This article belongs to the Special Issue Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities)
Open Access Article
Enhanced Neonatal Brain Tissue Analysis via Minimum Spanning Tree Segmentation and the Brier Score Coupled Classifier
by
Tushar Hrishikesh Jaware, Chittaranjan Nayak, Priyadarsan Parida, Nawaf Ali, Yogesh Sharma and Wael Hadi
Computers 2024, 13(10), 260; https://doi.org/10.3390/computers13100260 - 11 Oct 2024
Abstract
Automatic assessment of brain regions in an MR image has emerged as a pivotal tool in advancing diagnosis and continual monitoring of neurological disorders through different phases of life. Nevertheless, current solutions often exhibit specificity to particular age groups, thereby constraining their utility in observing brain development from infancy to late adulthood. In our research, we introduce a novel approach for segmenting and classifying neonatal brain images. Our methodology capitalizes on minimum spanning tree (MST) segmentation employing the Manhattan distance, complemented by a shrunken centroid classifier empowered by the Brier score. This fusion enhances the accuracy of tissue classification, effectively addressing the complexities inherent in age-specific segmentation. Moreover, we propose a novel threshold estimation method utilizing the Brier score, further refining the classification process. The proposed approach yields a competitive Dice similarity index of 0.88 and a Jaccard index of 0.95. This approach marks a significant step toward neonatal brain tissue segmentation, showcasing the efficacy of our proposed methodology in comparison to the latest cutting-edge methods.
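The MST segmentation step named in the abstract can be illustrated on a toy example: connect neighboring pixels with the Manhattan (L1) distance between their intensities as edge weights, compute the minimum spanning tree, and cut heavy edges so the remaining connected components form the segments. A minimal sketch only (not the authors' implementation, which also couples a Brier-score shrunken centroid classifier), using a 1-D intensity profile for brevity; a 2-D image would add edges to all 4-neighbors:

```python
# Minimal sketch: MST-based segmentation of a toy 1-D intensity profile,
# with edge weights given by the Manhattan (L1) distance between
# neighboring intensities.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

intensities = np.array([10, 11, 12, 50, 51, 52, 90, 91])  # toy "image"
n = len(intensities)

# Connect neighboring pixels; edge weight = |I_i - I_j| (L1 distance).
rows, cols, weights = [], [], []
for i in range(n - 1):
    rows.append(i)
    cols.append(i + 1)
    weights.append(abs(int(intensities[i]) - int(intensities[i + 1])))
graph = csr_matrix((weights, (rows, cols)), shape=(n, n))

mst = minimum_spanning_tree(graph).toarray()

# Cut MST edges whose weight exceeds a threshold; the surviving edges
# define the segments (connected components).
threshold = 20
mst[mst > threshold] = 0
n_segments, labels = connected_components(csr_matrix(mst), directed=False)
print(n_segments, labels)  # three segments of similar intensity
```

The threshold of 20 here is an arbitrary illustrative value; the paper instead derives its threshold from the Brier score.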
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
Open Access Article
Specialized Genetic Operators for the Planning of Passive Optical Networks
by
Oeber Izidoro Pereira, Edgar Manuel Carreño-Franco, Jesús M. López-Lezama and Nicolás Muñoz-Galeano
Computers 2024, 13(10), 259; https://doi.org/10.3390/computers13100259 - 10 Oct 2024
Abstract
Passive Optical Networks (PONs) are telecommunication technologies that use fiber-optic cables to deliver high-speed internet and other communication services to end users. PONs split optical signals from a single fiber into multiple fibers, serving multiple homes or businesses without requiring active electronic components. PON planning involves designing and optimizing the infrastructure for delivering fiber-optic communications to end users. The main contribution of this paper is the introduction of tailored operators within a genetic algorithm (GA) optimization approach for PON planning. A three-vector and an aggregator vector are devised to account, respectively, for the physical and logical connections of the network, facilitating the execution of the GA operators. This codification and these operators are versatile and can be applied to any population-based algorithm, not just GAs. Furthermore, the proposed operators are specifically designed to exploit the unique characteristics of PONs, thereby minimizing the occurrence of unfeasible solutions and accelerating convergence towards an optimal network design. By incorporating these specialized operators, this research aims to enhance the efficiency of PON planning, ultimately leading to reduced costs and improved network performance.
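For readers unfamiliar with GA operators: the paper's specialized operators act on its three-vector/aggregator-vector codification, which is not reproduced here. As a generic point of reference only, the standard crossover and mutation operators that such work specializes look like this on a hypothetical bit-string chromosome:

```python
# Generic illustration only: standard one-point crossover and bit-flip
# mutation, the kind of GA operators the authors tailor to PON topologies.
import random

random.seed(7)  # fixed seed for reproducibility of this toy run

def one_point_crossover(parent_a, parent_b):
    """Swap tails of two equal-length chromosomes at a random cut point."""
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

def mutate(chromosome, rate=0.1):
    """Flip each gene independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in chromosome]

a = [0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1]
child1, child2 = one_point_crossover(a, b)
print(child1, child2, mutate(child1))
```

Specialized operators like the paper's differ from this generic form precisely in that they respect domain constraints (here, PON topology feasibility) so that offspring remain valid networks.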
Full article
(This article belongs to the Special Issue Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities)
Open Access Article
Enhancement of Named Entity Recognition in Low-Resource Languages with Data Augmentation and BERT Models: A Case Study on Urdu
by
Fida Ullah, Alexander Gelbukh, Muhammad Tayyab Zamir, Edgardo Manuel Felipe Riverόn and Grigori Sidorov
Computers 2024, 13(10), 258; https://doi.org/10.3390/computers13100258 - 10 Oct 2024
Abstract
Identifying and categorizing proper nouns in text, known as named entity recognition (NER), is crucial for various natural language processing tasks. However, developing effective NER techniques for low-resource languages like Urdu poses challenges due to limited training data, particularly in the Nastaliq script. To address this, our study introduces a novel data augmentation method, “contextual word embeddings augmentation” (CWEA), for Urdu, aiming to enrich existing datasets. The extended dataset, comprising 160,132 tokens and 114,912 labeled entities, significantly enhances the coverage of named entities compared to previous datasets. We evaluated several transformer models on this augmented dataset, including BERT-multilingual, RoBERTa-Urdu-small, BERT-base-cased, and BERT-large-cased. Notably, the BERT-multilingual model outperformed the others, achieving the highest macro F1 score of 0.982. This surpassed the macro F1 scores of the RoBERTa-Urdu-small (0.884), BERT-large-cased (0.916), and BERT-base-cased (0.908) models. Additionally, our neural network model achieved a micro F1 score of 96%, while the RNN model achieved 97% and the BiLSTM model achieved a macro F1 score of 96% on augmented data. Our findings underscore the efficacy of data augmentation techniques in enhancing NER performance for low-resource languages like Urdu.
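As a point of reference for the metrics quoted above: macro F1 is the unweighted mean of per-class F1 scores (so rare entity classes count equally), while micro F1 pools true/false positive counts over all classes. A toy illustration with hypothetical NER tags, not the paper's data:

```python
# Toy illustration: macro vs. micro F1 for a three-class NER-style
# labeling, computed directly from the definitions.
from collections import Counter

y_true = ["PER", "PER", "LOC", "LOC", "ORG", "ORG"]
y_pred = ["PER", "PER", "LOC", "ORG", "ORG", "ORG"]

classes = sorted(set(y_true) | set(y_pred))
tp, fp, fn = Counter(), Counter(), Counter()
for t, p in zip(y_true, y_pred):
    if t == p:
        tp[t] += 1
    else:
        fp[p] += 1
        fn[t] += 1

def f1(c):
    prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
    rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Macro F1: unweighted mean of per-class F1 (a number in [0, 1]).
macro_f1 = sum(f1(c) for c in classes) / len(classes)
# Micro F1: F1 over pooled counts of TP, FP, FN.
TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
micro_f1 = 2 * TP / (2 * TP + FP + FN)
print(round(macro_f1, 3), round(micro_f1, 3))  # 0.822 0.833
```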
Full article
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)
Open Access Article
Assessing Large Language Models Used for Extracting Table Information from Annual Financial Reports
by
David Balsiger, Hans-Rudolf Dimmler, Samuel Egger-Horstmann and Thomas Hanne
Computers 2024, 13(10), 257; https://doi.org/10.3390/computers13100257 - 9 Oct 2024
Abstract
The extraction of data from tables in PDF documents has been a longstanding challenge in the field of data processing and analysis. While traditional methods have been explored in depth, the rise of Large Language Models (LLMs) offers new possibilities. This article addresses the knowledge gaps regarding LLMs, specifically ChatGPT-4 and BARD, for extracting and interpreting data from financial tables in PDF format. This research is motivated by the real-world need to efficiently gather and analyze corporate financial information. The hypothesis is that LLMs—in this case, ChatGPT-4 and BARD—can accurately extract key financial data, such as balance sheets and income statements. The methodology involves selecting representative pages from 46 annual reports of large Swiss corporations listed in the SMI Expanded Index from 2022 and copy–pasting text from these into the LLMs. Eight analytical questions were posed to the LLMs, and their responses were assessed for accuracy and for identifying potential error sources in data extraction. The findings revealed significant variance between the performance of ChatGPT-4 and BARD, with ChatGPT-4 generally exhibiting superior accuracy. This research contributes to understanding the capabilities and limitations of LLMs in processing and interpreting complex financial data from corporate documents.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
Open Access Article
Audio Deep Fake Detection with Sonic Sleuth Model
by
Anfal Alshehri, Danah Almalki, Eaman Alharbi and Somayah Albaradei
Computers 2024, 13(10), 256; https://doi.org/10.3390/computers13100256 - 8 Oct 2024
Abstract
Information dissemination and preservation are crucial for societal progress, especially in the technological age. While technology fosters knowledge sharing, it also risks spreading misinformation. Audio deepfakes—convincingly fabricated audio created using artificial intelligence (AI)—exacerbate this issue. We present Sonic Sleuth, a novel AI model designed specifically for detecting audio deepfakes. Our approach utilizes advanced deep learning (DL) techniques, including a custom CNN model, to enhance detection accuracy in audio misinformation, with practical applications in journalism and social media. Through meticulous data preprocessing and rigorous experimentation, we achieved a remarkable 98.27% accuracy and a 0.016 equal error rate (EER) on a substantial dataset of real and synthetic audio. Additionally, Sonic Sleuth demonstrated 84.92% accuracy and a 0.085 EER on an external dataset. The novelty of this research lies in its integration of datasets that closely simulate real-world conditions, including noise and linguistic diversity, enabling the model to generalize across a wide array of audio inputs. These results underscore Sonic Sleuth’s potential as a powerful tool for combating misinformation and enhancing integrity in digital communications.
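The equal error rate (EER) reported above is the detector operating point at which the false-acceptance rate (real audio flagged as fake) equals the false-rejection rate (fake audio missed). A minimal sketch of how an EER is estimated by sweeping the decision threshold, using made-up detector scores rather than the authors' model or data:

```python
# Minimal sketch: estimate the equal error rate (EER) of a deepfake
# detector by sweeping the score threshold until FAR and FRR cross.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]  # higher = more "fake"
labels = [0, 0, 0, 1, 0, 1, 1, 1]                   # 1 = fake audio

real = [s for s, l in zip(scores, labels) if l == 0]
fake = [s for s, l in zip(scores, labels) if l == 1]

best = None
for t in sorted(scores):
    far = sum(s >= t for s in real) / len(real)  # real clips flagged fake
    frr = sum(s < t for s in fake) / len(fake)   # fake clips missed
    if best is None or abs(far - frr) < abs(best[1] - best[2]):
        best = (t, far, frr)

threshold, far, frr = best
eer = (far + frr) / 2
print(threshold, eer)  # threshold where FAR and FRR cross
```

In practice the threshold sweep is done over a fine grid or via the ROC curve, but the principle is the same: the EER summarizes the trade-off in a single number (0.016 in the paper's in-domain evaluation).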
Full article
(This article belongs to the Special Issue Current Issue and Future Directions in Multimedia Hiding and Signal Processing)
Open Access Article
Spatiotemporal Bayesian Machine Learning for Estimation of an Empirical Lower Bound for Probability of Detection with Applications to Stationary Wildlife Photography
by
Mohamed Jaber, Robert D. Breininger, Farag Hamad and Nezamoddin N. Kachouie
Computers 2024, 13(10), 255; https://doi.org/10.3390/computers13100255 - 8 Oct 2024
Abstract
An important parameter in monitoring and surveillance systems is the probability of detection. Advanced wildlife monitoring systems rely on camera traps for stationary wildlife photography and have been broadly used for estimation of population size and density. Camera encounters are collected for estimation and management of a growing population size using spatial capture models. The accuracy of the estimated population size relies on the detection probability of the individual animals, which in turn depends on the observed frequency of animal encounters with the camera traps. Therefore, optimal coverage by the camera grid is essential for reliable estimation of the population size and density. The goal of this research is implementing a spatiotemporal Bayesian machine learning model to estimate a lower bound for the probability of detection of a monitoring system. To obtain an accurate estimate of population size in this study, an empirical lower bound for the probability of detection is realized considering the sensitivity of the model to the augmented sample size. The monitoring system must attain a probability of detection greater than the established empirical lower bound to achieve a pertinent estimation accuracy. It was found that for stationary wildlife photography, a camera grid with a detection probability of at least 0.3 is required for accurate estimation of the population size. A notable outcome is that a moderate probability of detection or better is required to obtain a reliable estimate of the population size using spatiotemporal machine learning. As a result, the required probability of detection is recommended when designing an automated monitoring system. The number and location of cameras in the camera grid determine the camera coverage. Consequently, camera coverage and the individual home range determine the probability of detection.
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Technical Innovations and Social Implications: Mapping Global Research Focus in AI, Blockchain, Cybersecurity, and Privacy
by
Emanuela Bran, Răzvan Rughiniș, Dinu Țurcanu and Gheorghe Nadoleanu
Computers 2024, 13(10), 254; https://doi.org/10.3390/computers13100254 - 8 Oct 2024
Abstract
This study examines the balance between technical and social focus in artificial intelligence, blockchain, cybersecurity, and privacy publications in Web of Science across countries, exploring the social factors that influence these research priorities. We use regression analysis to identify predictors of research focus and cluster analysis to reveal patterns across countries, combining these methods to provide a broader view of global research priorities. Our findings reveal that liberal democracy index, life expectancy, and happiness are significant predictors of research focus, while traditional indicators like education and income show weaker relationships. This unexpected result challenges conventional assumptions about the drivers of research priorities in digital technologies. The study identifies distinct clusters of countries with similar patterns of research focus across the four technologies, revealing previously unrecognized global typologies. Notably, more democratic societies tend to emphasize social implications of technologies, while some rapidly developing countries focus more on technical aspects. These findings suggest that political and social factors may play a larger role in shaping research agendas than previously thought, necessitating a re-evaluation of how we understand and predict research focus in rapidly evolving technological fields. The study provides valuable information for policymakers and researchers, informing strategies for technological development and international collaboration in an increasingly digital world.
Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
Open Access Article
Development of a Children’s Educational Dictionary for a Low-Resource Language Using AI Tools
by
Diana Rakhimova, Aidana Karibayeva, Vladislav Karyukin, Assem Turarbek, Zhansaya Duisenbekkyzy and Rashid Aliyev
Computers 2024, 13(10), 253; https://doi.org/10.3390/computers13100253 - 2 Oct 2024
Abstract
Today, various interactive tools and partially available artificial intelligence applications are actively used in educational processes to solve multiple problems for resource-rich languages, such as English, Spanish, and French. Unfortunately, the situation is different and more complex for low-resource languages, like Kazakh, Uzbek, Mongolian, and others, due to the lack of qualitative and accessible resources, the morphological complexity, and the semantics of agglutinative languages. This article presents research on early childhood learning resources for the low-resource Kazakh language. Generally, a dictionary for children differs from classical educational dictionaries: dictionaries for children and adults differ in their purpose and methods of presenting information. A themed dictionary makes learning and remembering new words easier for children because the words are presented in a specific context. This article discusses developing an approach to creating a thematic children’s dictionary of the low-resource Kazakh language using artificial intelligence. The proposed approach is based on several important stages: the initial formation of a list of English words with the use of ChatGPT; identification of their semantic weights; generation of phrases and sentences with the use of the list of semantically related words; translation of the obtained phrases and sentences from English to Kazakh, dividing them into bigrams and trigrams; and processing with Kazakh-language POS pattern tag templates to adapt them for children. When the dictionary was formed, the semantic proximity of words and phrases to the given theme and age restrictions for children were taken into account. The formed dictionary phrases were evaluated using the cosine similarity, Euclidean similarity, and Manhattan distance metrics. Moreover, the dictionary was extended with visual and audio data by implementing image generation models such as DALL-E 3, Midjourney, and Stable Diffusion to illustrate the dictionary entries, and TTS (Text-to-Speech) technology for the Kazakh language for voice synthesis. The developed thematic dictionary approach was tested, and a SUS (System Usability Scale) assessment of the application was conducted. The experimental results demonstrate the proposed approach’s high efficiency and its potential for wide use for educational purposes.
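The three evaluation metrics named above can be illustrated on a pair of toy embedding vectors (hypothetical values, not the authors' phrase embeddings):

```python
# Toy illustration: cosine similarity, Euclidean distance, and Manhattan
# distance between two small embedding vectors.
import math

a = [1.0, 2.0, 3.0]
b = [2.0, 2.0, 4.0]

dot = sum(x * y for x, y in zip(a, b))
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))

cosine_sim = dot / (norm_a * norm_b)                       # angle-based, in [-1, 1]
euclidean_dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))  # L2
manhattan_dist = sum(abs(x - y) for x, y in zip(a, b))     # L1
print(round(cosine_sim, 3), round(euclidean_dist, 3), manhattan_dist)
```

Cosine similarity rewards vectors pointing in the same direction regardless of magnitude, whereas the Euclidean and Manhattan distances are magnitude-sensitive, which is why using all three gives a more rounded picture of phrase proximity.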
Full article
(This article belongs to the Special Issue Smart Learning Environments)
Topics
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies including Selected Papers from ICGHIT
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 October 2024
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic
Deadline: 10 December 2024
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Topic in
Energies, Applied Sciences, Mathematics, Entropy, Computers
Numerical Methods and Computer Simulations in Energy Analysis, 2nd Edition
Topic Editors: Marcin Kamiński, Mateus Mendes
Deadline: 20 January 2025
Special Issues
Special Issue in
Computers
Applications of Machine Learning and Artificial Intelligence for Healthcare
Guest Editor: Elias Dritsas
Deadline: 31 October 2024
Special Issue in
Computers
Software Engineering Methodologies and Languages for Event Driven and Large-Scale Management Systems (SLEMS)
Guest Editors: Mehmet Aksit, Valter Vieira de Camargo
Deadline: 31 October 2024
Special Issue in
Computers
Smart Learning Environments
Guest Editor: Ananda Maiti
Deadline: 31 October 2024
Special Issue in
Computers
Computational Science and Its Applications 2024 (ICCSA 2024)
Guest Editor: Osvaldo Gervasi
Deadline: 15 November 2024