Journal Description
Computers
is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.7 days after submission; acceptance to publication takes 3.7 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023)
5-Year Impact Factor: 2.4 (2023)
Latest Articles
Modeling Autonomous Vehicle Responses to Novel Observations Using Hierarchical Cognitive Representations Inspired Active Inference
Computers 2024, 13(7), 161; https://doi.org/10.3390/computers13070161 - 28 Jun 2024
Abstract
Equipping autonomous agents for dynamic interaction and navigation is a significant challenge in intelligent transportation systems. This study aims to address this by implementing a brain-inspired model for decision making in autonomous vehicles. We employ active inference, a Bayesian approach that models decision-making processes similar to the human brain, focusing on the agent’s preferences and the principle of free energy. This approach is combined with imitation learning to enhance the vehicle’s ability to adapt to new observations and make human-like decisions. The research involved developing a multi-modal self-awareness architecture for autonomous driving systems and testing this model in driving scenarios, including abnormal observations. The results demonstrated the model’s effectiveness in enabling the vehicle to make safe decisions, particularly in unobserved or dynamic environments. The study concludes that the integration of active inference with imitation learning significantly improves the performance of autonomous vehicles, offering a promising direction for future developments in intelligent transportation systems.
Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)
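The free-energy account above admits a compact numerical illustration. The two-state scenario, transition matrices, and preferences below are invented for the sketch and are not the paper's model; only the action-selection principle (pick the action with the lowest expected free energy) is carried over:

```python
import numpy as np

def expected_free_energy(q_s, B, A, log_c):
    """Expected free energy of one action: risk (divergence between the
    predicted outcome distribution and the preferred one) plus ambiguity
    (expected entropy of the observation likelihood)."""
    q_s_next = B @ q_s                      # predicted next-state belief
    q_o = A @ q_s_next                      # predicted outcome distribution
    risk = float(np.sum(q_o * (np.log(q_o + 1e-16) - log_c)))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)  # outcome entropy per state
    ambiguity = float(H_A @ q_s_next)
    return risk + ambiguity

# Toy scenario (hypothetical): two road states and two candidate maneuvers.
q_s = np.array([0.5, 0.5])                  # belief: safe lane vs. obstacle lane
A = np.array([[0.9, 0.1],                   # P(observe "clear" | state)
              [0.1, 0.9]])                  # P(observe "obstacle" | state)
log_c = np.log(np.array([0.95, 0.05]))      # preference for observing "clear"
B_stay = np.eye(2)                          # "stay": state unchanged
B_swerve = np.array([[0.9, 0.9],            # "swerve": likely ends in safe lane
                     [0.1, 0.1]])

# The agent picks the action with the lower expected free energy.
G_stay = expected_free_energy(q_s, B_stay, A, log_c)
G_swerve = expected_free_energy(q_s, B_swerve, A, log_c)
```

Here "swerve" wins because it both resolves uncertainty and steers the predicted observation toward the preferred "clear" outcome.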
Open Access Article
An NLP-Based Exploration of Variance in Student Writing and Syntax: Implications for Automated Writing Evaluation
by Maria Goldshtein, Amin G. Alhashim and Rod D. Roscoe
Computers 2024, 13(7), 160; https://doi.org/10.3390/computers13070160 - 25 Jun 2024
Abstract
In writing assessment, expert human evaluators ideally judge individual essays with attention to variance among writers’ syntactic patterns. There are many ways to compose text successfully or less successfully. For automated writing evaluation (AWE) systems to provide accurate assessment and relevant feedback, they must be able to consider similar kinds of variance. The current study employed natural language processing (NLP) to explore variance in syntactic complexity and sophistication across clusters characterized in a large corpus (n = 36,207) of middle school and high school argumentative essays. Using NLP tools, k-means clustering, and discriminant function analysis (DFA), we observed that student writers employed four distinct syntactic patterns: (1) familiar and descriptive language, (2) consistently simple noun phrases, (3) variably complex noun phrases, and (4) moderate complexity with less familiar language. Importantly, each pattern spanned the full range of writing quality; there were no syntactic patterns consistently evaluated as “good” or “bad”. These findings support the need for nuanced approaches in automated writing assessment while informing ways that AWE can participate in that process. Future AWE research can and should explore similar variability across other detectable elements of writing (e.g., vocabulary, cohesion, discursive cues, and sentiment) via diverse modeling methods.
Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
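The clustering pipeline described in the abstract (NLP-derived syntactic indices followed by k-means) can be sketched as follows. The three feature names and the synthetic essay vectors are placeholder assumptions, not the study's corpus or toolchain:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-essay syntactic indices (stand-ins for NLP-tool output):
# mean noun-phrase length, clausal density, word familiarity score.
essays = np.vstack([
    rng.normal(center, 0.3, size=(50, 3))
    for center in ([2, 1, 4], [5, 2, 3], [3, 4, 1], [6, 5, 5])
])

# Standardize the indices so no single feature dominates the distance metric,
# then partition the essays into four syntactic profiles.
X = StandardScaler().fit_transform(essays)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
```

Inspecting per-cluster feature means would then yield profile descriptions analogous to the four patterns the study reports.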
Open Access Article
Enhanced Security Access Control Using Statistical-Based Legitimate or Counterfeit Identification System
by Aisha Edrah and Abdelkader Ouda
Computers 2024, 13(7), 159; https://doi.org/10.3390/computers13070159 - 22 Jun 2024
Abstract
With our increasing reliance on technology, there is a growing demand for efficient and seamless access control systems. Smartphone-centric biometric methods offer a diverse range of potential solutions capable of verifying users and providing an additional layer of security to prevent unauthorized access. To ensure the security and accuracy of smartphone-centric biometric identification, it is crucial that the phone reliably identifies its legitimate owner. Once the legitimate holder has been successfully determined, the phone can effortlessly provide real-time identity verification for various applications. To achieve this, we introduce a novel smartphone-integrated detection and control system called Identification: Legitimate or Counterfeit (ILC), which utilizes gait cycle analysis. The ILC system employs the smartphone’s accelerometer sensor, along with advanced statistical methods, to detect the user’s gait pattern, enabling real-time identification of the smartphone owner. This approach relies on statistical analysis of measurements obtained from the accelerometer sensor, specifically, peaks extracted from the X-axis data. Subsequently, the derived feature’s probability distribution function (PDF) is computed and compared to the known user’s PDF. The calculated probability verifies the similarity between the distributions, and a decision is made with 92.18% accuracy based on a predetermined verification threshold.
Full article
(This article belongs to the Special Issue Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities)
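A rough sketch of the ILC idea follows: extract peaks from the accelerometer X-axis, then make a distribution-similarity decision against the enrolled user's PDF. The synthetic gait signals, the histogram-intersection similarity, and the 0.8 threshold are illustrative assumptions, not the paper's statistical procedure:

```python
import numpy as np

def peak_heights(x):
    """Heights of the local maxima of a 1-D signal (simple peak detector)."""
    mid = x[1:-1]
    return mid[(mid > x[:-2]) & (mid > x[2:])]

def pdf_overlap(a, b, bins):
    """Histogram-intersection similarity between two empirical PDFs
    (1.0 = identical distributions, 0.0 = disjoint)."""
    pa, _ = np.histogram(a, bins=bins)
    pb, _ = np.histogram(b, bins=bins)
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return float(np.minimum(pa, pb).sum())

rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 6000)      # 60 s of accelerometer X-axis samples

def walk(amplitude, step_hz):
    """Synthetic stand-in for a gait signal: a step rhythm plus sensor noise."""
    return amplitude * np.sin(2 * np.pi * step_hz * t) + 0.1 * rng.standard_normal(t.size)

owner_enrolled = peak_heights(walk(1.0, 1.8))   # enrollment gait
owner_now = peak_heights(walk(1.0, 1.8))        # same gait, new session
impostor = peak_heights(walk(0.6, 2.4))         # different gait

bins = np.linspace(-1.5, 1.5, 31)
THRESHOLD = 0.8                                  # hypothetical verification threshold
accepts_owner = pdf_overlap(owner_enrolled, owner_now, bins) >= THRESHOLD
rejects_impostor = pdf_overlap(owner_enrolled, impostor, bins) < THRESHOLD
```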
Open Access Article
Personalized Classifier Selection for EEG-Based BCIs
by Javad Rahimipour Anaraki, Antonina Kolokolova and Tom Chau
Computers 2024, 13(7), 158; https://doi.org/10.3390/computers13070158 - 21 Jun 2024
Abstract
The most important component of an Electroencephalogram (EEG) Brain–Computer Interface (BCI) is its classifier, which translates EEG signals in real time into meaningful commands. The accuracy and speed of the classifier determine the utility of the BCI. However, there is significant intra- and inter-subject variability in EEG data, complicating the choice of the best classifier for different individuals over time. There is a keen need for an automatic approach to selecting a personalized classifier suited to an individual’s current needs. To this end, we have developed a systematic methodology for individual classifier selection, wherein the structural characteristics of an EEG dataset are used to predict a classifier that will perform with high accuracy. The method was evaluated using motor imagery EEG data from Physionet. We confirmed that our approach could consistently predict a classifier whose performance was no worse than the single-best-performing classifier across the participants. Furthermore, Kullback–Leibler divergences between reference distributions and signal amplitude and class label distributions emerged as the most important characteristics for classifier prediction, suggesting that classifier choice depends heavily on the morphology of signal amplitude densities and the degree of class imbalance in an EEG dataset.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
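The role of Kullback–Leibler divergence as a dataset-shape descriptor can be illustrated in a few lines. The Gaussian reference and the two synthetic amplitude distributions are assumptions for the sketch; the paper's actual reference distributions and classifier-prediction model are not reproduced:

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D(p || q) between two matching histograms."""
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + 1e-12))))

def amplitude_kl(signal, bins=30):
    """KL divergence between a signal's amplitude histogram and a Gaussian
    reference over the same bins: a morphology descriptor of the dataset
    that could feed a classifier-selection meta-model."""
    edges = np.linspace(signal.min(), signal.max(), bins + 1)
    hist, _ = np.histogram(signal, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ref = np.exp(-0.5 * ((centers - signal.mean()) / signal.std()) ** 2)
    return kl_divergence(hist.astype(float), ref)

rng = np.random.default_rng(2)
gaussian_like = rng.standard_normal(5000)          # near-Gaussian amplitudes
heavy_tailed = rng.standard_t(df=1.5, size=5000)   # outlier-heavy amplitudes
```

A meta-model would consume descriptors like these (together with class-imbalance measures) to predict which classifier to deploy for a given recording.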
Open Access Article
Advancing Skin Cancer Prediction Using Ensemble Models
by Priya Natha and Pothuraju RajaRajeswari
Computers 2024, 13(7), 157; https://doi.org/10.3390/computers13070157 - 21 Jun 2024
Abstract
There are many different kinds of skin cancer, and an early and precise diagnosis is crucial because skin cancer is both frequent and deadly. The key to effective treatment is accurately classifying the various skin cancers, which have unique traits. Dermoscopy and other advanced imaging techniques have enhanced early detection by providing detailed images of lesions. However, accurately interpreting these images to distinguish between benign and malignant tumors remains a difficult task. Improved predictive modeling techniques are necessary due to the frequent occurrence of erroneous and inconsistent outcomes in the present diagnostic processes. Machine learning (ML) models have become essential in the field of dermatology for the automated identification and categorization of skin cancer lesions using image data. The aim of this work is to develop improved skin cancer predictions by using ensemble models, which combine numerous machine learning approaches to maximize their combined strengths and reduce their individual shortcomings. This paper proposes a novel approach to ensemble model optimization for skin cancer classification: the Max Voting method. We trained and assessed five different ensemble models using the ISIC 2018 and HAM10000 datasets: AdaBoost, CatBoost, Random Forest, Gradient Boosting, and Extra Trees. Their combined predictions enhance the overall performance with the Max Voting method. Moreover, the ensemble models were fed with feature vectors that were optimally generated from the image data by a genetic algorithm (GA). We show that, with an accuracy of 95.80%, the Max Voting approach significantly improves the predictive performance when compared to the five ensemble models individually. Obtaining the best results for F1-measure, recall, and precision, the Max Voting method turned out to be the most dependable and robust. The novel aspect of this work is that the Max Voting technique classifies skin cancer lesions more robustly and reliably by combining the benefits of several pre-trained machine learning models.
Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
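Hard ("max") voting can be sketched with scikit-learn's VotingClassifier, which lets each model cast one vote per sample and takes the majority label. The synthetic data and model settings are illustrative; CatBoost (a third-party library) and the GA feature-selection step are omitted:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted lesion feature vectors.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# voting="hard" implements max voting: the predicted class is the one
# that receives the most votes across the member ensembles.
voter = VotingClassifier(
    estimators=[("ada", AdaBoostClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=100, random_state=0))],
    voting="hard")
voter.fit(X_tr, y_tr)
acc = voter.score(X_te, y_te)
```

Majority voting tends to cancel out the uncorrelated mistakes of individual members, which is the effect the abstract attributes to the Max Voting method.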
Open Access Article
Chef Dalle: Transforming Cooking with Multi-Model Multimodal AI
by Brendan Hannon, Yulia Kumar, J. Jenny Li and Patricia Morreale
Computers 2024, 13(7), 156; https://doi.org/10.3390/computers13070156 - 21 Jun 2024
Abstract
In an era where dietary habits significantly impact health, technological interventions can offer personalized and accessible food choices. This paper introduces Chef Dalle, a recipe recommendation system that leverages multi-model and multimodal human-computer interaction (HCI) techniques to provide personalized cooking guidance. The application integrates voice-to-text conversion via Whisper and ingredient image recognition through GPT-Vision. It employs an advanced recipe filtering system that utilizes user-provided ingredients to fetch recipes, which are then evaluated by multiple AI models through integrations of the OpenAI, Google Gemini, and Anthropic Claude APIs to deliver highly personalized recommendations. These methods enable users to interact with the system using voice, text, or images, accommodating various dietary restrictions and preferences. Furthermore, the utilization of DALL-E 3 for generating recipe images enhances user engagement. User feedback mechanisms allow for the refinement of future recommendations, demonstrating the system’s adaptability. Chef Dalle showcases potential applications ranging from home kitchens to grocery stores and restaurant menu customization, addressing accessibility and promoting healthier eating habits. This paper underscores the significance of multimodal HCI in enhancing culinary experiences, setting a precedent for future developments in the field.
Full article
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)
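The ingredient-driven recipe filtering stage admits a small self-contained sketch. The recipe data, the overlap scoring, and the exclusion handling below are illustrative stand-ins; Chef Dalle's actual pipeline delegates ranking to external AI APIs:

```python
def recommend(pantry, recipes, exclude=()):
    """Rank recipes by the fraction of required ingredients already in the
    user's pantry, skipping recipes with excluded (dietary) ingredients."""
    have = {item.lower() for item in pantry}
    banned = {item.lower() for item in exclude}
    ranked = []
    for name, ingredients in recipes.items():
        needed = {item.lower() for item in ingredients}
        if needed & banned:
            continue                      # violates a dietary restriction
        ranked.append((len(have & needed) / len(needed), name))
    # Best-covered recipes first; drop recipes with no matching ingredient.
    return [name for score, name in sorted(ranked, reverse=True) if score > 0]

recipes = {
    "tomato pasta": ["pasta", "tomato", "garlic", "basil"],
    "shrimp fried rice": ["rice", "shrimp", "egg", "soy sauce"],
    "caprese salad": ["tomato", "mozzarella", "basil"],
}
matches = recommend(["tomato", "basil", "pasta"], recipes, exclude={"shrimp"})
```

In the full system, a shortlist like `matches` would be handed to the language models for personalization before presentation.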
Open Access Article
A Contextual Model for Visual Information Processing
by Illia Khurtin and Mukesh Prasad
Computers 2024, 13(6), 155; https://doi.org/10.3390/computers13060155 - 20 Jun 2024
Abstract
Despite significant achievements in the artificial narrow intelligence sphere, the mechanisms of human-like (general) intelligence are still undeveloped. There is a theory stating that the human brain extracts the meaning of information rather than recognizes the features of a phenomenon. Extracting the meaning is finding a set of transformation rules (context) and applying them to the incoming information, producing an interpretation. Then, the interpretation is compared to something already seen and is stored in memory. Information can have different meanings in different contexts. A mathematical model of a context processor and a differential contextual space which can perform the interpretation is discussed and developed in this paper. This study examines whether the basic principles of differential contextual spaces work in practice. The model is developed with Rust programming language and trained on black and white images which are rotated and shifted both horizontally and vertically according to the saccades and torsion movements of a human eye. Then, a picture that has never been seen in the particular transformation, but has been seen in another one, is exposed to the model. The model considers the image in all known contexts and extracts the meaning. The results show that the program can successfully process black and white images which are transformed by shifts and rotations. This research prepares the grounding for further investigations of the contextual model principles with which general intelligence might operate.
Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)
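The interpretation loop (try each known context on the incoming image and keep the best match against memory) can be sketched in a few lines. The 5x5 pattern and the three contexts below are toy assumptions; the paper's Rust implementation and its saccade-derived transformations are not reproduced:

```python
import numpy as np

def interpret(probe, memory, contexts):
    """Apply every known context (transformation rule) to the probe and
    return the context whose interpretation best matches stored memory."""
    best = (None, -1.0)
    for name, transform in contexts.items():
        interpretation = transform(probe)
        score = float((interpretation == memory).mean())
        if score > best[1]:
            best = (name, score)
    return best

# A 5x5 black-and-white "L" pattern stored in memory.
memory = np.zeros((5, 5), dtype=int)
memory[1:4, 1] = 1
memory[3, 1:4] = 1

contexts = {
    "identity": lambda im: im,
    "rot90": lambda im: np.rot90(im),
    "shift_right": lambda im: np.roll(im, 1, axis=1),
}

probe = np.rot90(memory, k=-1)   # the pattern observed in a rotated context
name, score = interpret(probe, memory, contexts)
```

The probe matches nothing directly, but under the "rot90" context its interpretation coincides exactly with the stored pattern, which is the sense in which the model "extracts the meaning" of the input.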
Open Access Article
Deep Convolutional Generative Adversarial Networks in Image-Based Android Malware Detection
by Francesco Mercaldo, Fabio Martinelli and Antonella Santone
Computers 2024, 13(6), 154; https://doi.org/10.3390/computers13060154 - 19 Jun 2024
Abstract
The recent advancements in generative adversarial networks have showcased their remarkable ability to create images that are indistinguishable from real ones. This has prompted both the academic and industrial communities to tackle the challenge of distinguishing fake images from genuine ones. We introduce a method to assess whether images generated by generative adversarial networks, using a dataset of real-world Android malware applications, can be distinguished from actual images. Our experiments involved two types of deep convolutional generative adversarial networks, and utilized images derived from both static analysis (which does not require running the application) and dynamic analysis (which does require running the application). After generating the images, we trained several supervised machine learning models to determine if these classifiers can differentiate between real and generated malicious applications. Our results indicate that, despite being visually indistinguishable to the human eye, the generated images were correctly identified by a classifier with an F-measure of approximately 0.8. While most generated images were accurately recognized as fake, some were not, leading them to be considered as images produced by real applications.
Full article
(This article belongs to the Special Issue Current Issue and Future Directions in Multimedia Hiding and Signal Processing)
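Image-based malware analysis typically begins by rendering an application's raw bytes as grayscale pixels, one byte per pixel. A minimal sketch of that encoding follows; the row width and padding rule are assumptions for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def bytes_to_image(payload, width=16):
    """Map a raw byte stream to a 2-D grayscale image (one byte = one
    pixel in 0-255), zero-padding the tail so every row is complete."""
    data = np.frombuffer(payload, dtype=np.uint8)
    pad = (-len(data)) % width           # bytes needed to fill the last row
    data = np.concatenate([data, np.zeros(pad, dtype=np.uint8)])
    return data.reshape(-1, width)

# 40 synthetic bytes -> a 3x16 image (8 padding pixels in the last row).
img = bytes_to_image(bytes(range(40)), width=16)
```

Images produced this way from real applications are what the GANs in the study imitate, and what the downstream classifiers attempt to tell apart from generated ones.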
Open Access Article
Empowering Communication: A Deep Learning Framework for Arabic Sign Language Recognition with an Attention Mechanism
by R. S. Abdul Ameer, M. A. Ahmed, Z. T. Al-Qaysi, M. M. Salih and Moceheb Lazam Shuwandy
Computers 2024, 13(6), 153; https://doi.org/10.3390/computers13060153 - 19 Jun 2024
Abstract
This article emphasises the urgent need for appropriate communication tools for communities of people who are deaf or hard-of-hearing, with a specific emphasis on Arabic Sign Language (ArSL). In this study, we use long short-term memory (LSTM) models in conjunction with MediaPipe to reduce the barriers to effective communication and social integration for deaf communities. The model design incorporates LSTM units and an attention mechanism to handle the input sequences of extracted keypoints from recorded gestures. The attention layer selectively directs its focus toward relevant segments of the input sequence, whereas the LSTM layer handles temporal relationships and encodes the sequential data. A comprehensive dataset comprising fifty frequently used words and numbers in ArSL was collected for developing the recognition model. This dataset comprises many instances of gestures recorded by five volunteers. The results of the experiment support the effectiveness of the proposed approach, as the model achieved accuracies of more than 85% (individual volunteers) and 83% (combined data). The high level of precision emphasises the potential of artificial intelligence-powered translation software to improve effective communication for people with hearing impairments and to enable them to interact with the larger community more easily.
Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
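The attention layer's job, scoring each frame of the keypoint sequence and pooling a weighted summary for classification, can be sketched with additive attention in NumPy. The dimensions and random weights below are placeholders, not the trained ArSL model:

```python
import numpy as np

def attention_pool(seq, W, v):
    """Additive attention over a sequence of frame encodings: score each
    timestep, softmax the scores, and return the weighted sum."""
    scores = np.tanh(seq @ W) @ v            # one relevance score per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ seq, weights            # context vector, attention weights

rng = np.random.default_rng(3)
T, d = 30, 8                     # 30 frames of 8-dim keypoint encodings
seq = rng.standard_normal((T, d))
W = rng.standard_normal((d, d))  # learned in the real model; random here
v = rng.standard_normal(d)
context, weights = attention_pool(seq, W, v)
```

In the full architecture the LSTM produces the frame encodings, the attention weights emphasize the gesture-bearing frames, and the pooled context vector feeds the word classifier.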
Open Access Review
Hybrid Architectures Used in the Protection of Large Healthcare Records Based on Cloud and Blockchain Integration: A Review
by Leonardo Juan Ramirez Lopez, David Millan Mayorga, Luis Hernando Martinez Poveda, Andres Felipe Carbonell Amaya and Wilson Rojas Reales
Computers 2024, 13(6), 152; https://doi.org/10.3390/computers13060152 - 12 Jun 2024
Abstract
The management of large medical files poses a critical challenge in the health sector, with conventional systems facing deficiencies in security, scalability, and efficiency. Blockchain ensures the immutability and traceability of medical records, while the cloud allows scalable and efficient storage. Together, they can transform the data management of electronic health record applications. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used to choose the relevant studies that contribute to this research, with special emphasis placed on maintaining the integrity and security of the blockchain while tackling the potential and efficiency of cloud infrastructures. The study’s focus is to provide a comprehensive and insightful examination of the modern landscape concerning the integration of blockchain and cloud advances, highlighting the current challenges and building a solid foundation for future development. Furthermore, it is very important to increase the integration of blockchain security with the dynamic potential of cloud computing while guaranteeing that information integrity and security remain uncompromised. In conclusion, this paper serves as an important resource for analysts, specialists, and partners looking to delve into and develop the integration of blockchain and cloud innovations.
Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
Open Access Article
InfoSTGCAN: An Information-Maximizing Spatial-Temporal Graph Convolutional Attention Network for Heterogeneous Human Trajectory Prediction
by Kangrui Ruan and Xuan Di
Computers 2024, 13(6), 151; https://doi.org/10.3390/computers13060151 - 11 Jun 2024
Abstract
Predicting the future trajectories of multiple interacting pedestrians within a scene has increasingly gained importance in various fields, e.g., autonomous driving, human–robot interaction, and so on. The complexity of this problem is heightened due to the social dynamics among different pedestrians and their heterogeneous implicit preferences. In this paper, we present Information Maximizing Spatial-Temporal Graph Convolutional Attention Network (InfoSTGCAN), which takes into account both pedestrian interactions and heterogeneous behavior choice modeling. To effectively capture the complex interactions among pedestrians, we integrate spatial-temporal graph convolution and spatial-temporal graph attention. For grasping the heterogeneity in pedestrians’ behavior choices, our model goes a step further by learning to predict an individual-level latent code for each pedestrian. Each latent code represents a distinct pattern of movement choice. Finally, based on the observed historical trajectory and the learned latent code, the proposed method is trained to cover the ground-truth future trajectory of each pedestrian with a bivariate Gaussian distribution. We evaluate the proposed method through a comprehensive list of experiments and demonstrate that our method outperforms all baseline methods on the commonly used metrics, Average Displacement Error and Final Displacement Error. Notably, visualizations of the generated trajectories reveal our method’s capacity to handle different scenarios.
Full article
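Covering the ground-truth future position with a bivariate Gaussian implies a likelihood-based training loss. Below is a minimal sketch of that density's negative log-likelihood, assuming the common five-parameter output head (two means, two standard deviations, one correlation); the paper's exact loss formulation is not reproduced:

```python
import numpy as np

def bivariate_gaussian_nll(params, point):
    """Negative log-likelihood of a 2-D point under a bivariate Gaussian
    parameterized as (mu_x, mu_y, sigma_x, sigma_y, rho)."""
    mx, my, sx, sy, rho = params
    x, y = point
    zx, zy = (x - mx) / sx, (y - my) / sy
    z = zx**2 - 2 * rho * zx * zy + zy**2          # Mahalanobis-style term
    denom = 2 * np.pi * sx * sy * np.sqrt(1 - rho**2)
    return z / (2 * (1 - rho**2)) + np.log(denom)

# A predicted distribution centered on the true point scores a lower
# (better) NLL than the same distribution evaluated far from it.
on_target = bivariate_gaussian_nll((0, 0, 1, 1, 0.0), (0, 0))
off_target = bivariate_gaussian_nll((0, 0, 1, 1, 0.0), (3, 3))
```

Minimizing this quantity over observed futures is what trains the network to place and shape its predicted Gaussian around each pedestrian's true next positions.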
Open Access Article
On Predicting Exam Performance Using Version Control Systems’ Features
by Lorenzo Canale, Luca Cagliero, Laura Farinetti and Marco Torchiano
Computers 2024, 13(6), 150; https://doi.org/10.3390/computers13060150 - 9 Jun 2024
Abstract
The advent of Version Control Systems (VCS) in computer science education has significantly improved the learning experience. The Learning Analytics community has started to analyze the interactions between students and VCSs to evaluate the behavioral and cognitive aspects of the learning process. Within the aforesaid scope, a promising research direction is the use of Artificial Intelligence (AI) to predict students’ exam outcomes early based on VCS usage data. Previous AI-based solutions have two main drawbacks: (i) they rely on static models, which disregard temporal changes in the student–VCS interactions, and (ii) AI reasoning is not transparent to end-users. This paper proposes a time-dependent approach to early predict student performance from VCS data. It applies and compares different classification models trained at various course stages. To gain insights into exam performance predictions, it combines classification with explainable AI techniques. It visualizes the explanations of the time-varying performance predictors. The results of a real case study show that the effect of VCS-based features on the exam success rate is relevant much earlier than the end of the course, whereas the timely submission of the first lab assignment is a reliable predictor of the exam grade.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
Open Access Article
A Clustering and PL/SQL-Based Method for Assessing MLP-Kmeans Modeling
by Victor Hugo Silva-Blancas, Hugo Jiménez-Hernández, Ana Marcela Herrera-Navarro, José M. Álvarez-Alvarado, Diana Margarita Córdova-Esparza and Juvenal Rodríguez-Reséndiz
Computers 2024, 13(6), 149; https://doi.org/10.3390/computers13060149 - 9 Jun 2024
Abstract
With new high-performance server technology in data centers and bunkers, optimizing search engines to process time and resource consumption efficiently is necessary. The database query system, upheld by the standard SQL language, has maintained the same functional design since the advent of PL/SQL. This situation is caused by recent research focused on computer resource management, encryption, and security rather than improving data mining based on AI tools, machine learning (ML), and artificial neural networks (ANNs). This work presents a projected methodology integrating a multilayer perceptron (MLP) with Kmeans. This methodology is compared with traditional PL/SQL tools and aims to improve the database response time while outlining future advantages for ML and Kmeans in data processing. We propose a new corollary, executed on application software querying data collections with more than 306 thousand records. This study produced a comparative table between PL/SQL and MLP-Kmeans based on three hypotheses: line query, group query, and total query. The results show that line query increased to 9 ms, group query increased from 88 to 2460 ms, and total query from 13 to 279 ms. Testing one methodology against the other not only shows the incremental fatigue and time consumption that training brings to database querying, but also that a neural network can produce more precise results than the simple use of PL/SQL instructions, which will become more important in the future for domain-specific problems.
Full article
Open Access Article
ARPocketLab—A Mobile Augmented Reality System for Pedagogic Applications
by Miguel Nunes, Telmo Adão, Somayeh Shahrabadi, António Capela, Diana Carneiro, Pedro Branco, Luís Magalhães, Raul Morais and Emanuel Peres
Computers 2024, 13(6), 148; https://doi.org/10.3390/computers13060148 - 8 Jun 2024
Abstract
The widespread adoption of digital technologies in educational systems has been globally reflecting a shift in pedagogic content delivery that seems to fit modern generations of students while tackling relevant challenges faced by the current scholar context, e.g., progress traceability, fair access to pedagogic content with intuitive visual representativeness, mobility issue mitigation, and sustainability in crisis situations. Among these technologies, augmented reality (AR) emerges as a particularly promising approach, allowing the visualization of computer-generated interactive data on top of real-world elements, thus enhancing comprehension and intuition regarding educational content, often in mobile settings. While the application of AR to education has been widely addressed, prior work commonly focuses on interaction and cognitive performance, with lesser attention paid to the limitations associated with setup complexity, mostly related to the tools used to configure experiences, or with contextual range, i.e., versatility in targeting technical/scientific domains. Therefore, this paper introduces ARPocketLab, a digital, mobile, flexible, and scalable solution designed for the dynamic needs of modern tutorship. With a dual-interface system, it allows both educators and students to interactively design and engage with AR content directly tied to educational outcomes. Moreover, ARPocketLab’s design, aimed at handheld operationalization using a minimal set of physical resources, is particularly relevant in environments where educational materials are scarce or in situations where remote learning becomes necessary. Its versatility stems from the fact that it only requires a marker or a surface (e.g., a table) to function at full capacity. To evaluate the solution, tests were conducted with 8th-grade Portuguese students within the context of the Physics and Chemistry subject.
Results demonstrate the application’s effectiveness in providing didactic assistance, with positive feedback not only in terms of usability but also regarding learning performance. The participants also reported openness for the adoption of AR in pedagogic contexts.
Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)
Open Access Review
Integrating Machine Learning with Non-Fungible Tokens
by
Elias Iosif and Leonidas Katelaris
Computers 2024, 13(6), 147; https://doi.org/10.3390/computers13060147 - 7 Jun 2024
Abstract
In this paper, we undertake a thorough comparative examination of data resources pertinent to Non-Fungible Tokens (NFTs) within the framework of Machine Learning (ML). The core research question of the present work is how the integration of ML techniques and NFTs manifests across various domains. Our primary contribution lies in proposing a structured perspective for this analysis, encompassing a comprehensive array of criteria that collectively span the entire spectrum of NFT-related data. To demonstrate the application of the proposed perspective, we systematically survey a selection of indicative research works, drawing insights from diverse sources. By evaluating these data resources against established criteria, we aim to provide a nuanced understanding of their respective strengths, limitations, and potential applications within the intersection of NFTs and ML.
Full article
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)
Open Access Article
Unlocking Blockchain UTXO Transactional Patterns and Their Effect on Storage and Throughput Trade-Offs
by
David Melo, Saúl Eduardo Pomares-Hernández, Lil María Xibai Rodríguez-Henríquez and Julio César Pérez-Sansalvador
Computers 2024, 13(6), 146; https://doi.org/10.3390/computers13060146 - 7 Jun 2024
Abstract
Blockchain technology ensures record-keeping by redundantly storing and verifying transactions on a distributed network of nodes. Permissionless blockchains have pushed the development of decentralized applications (DApps) characterized by distributed business logic, resilience to centralized failures, and data immutability. However, storage scalability without sacrificing throughput is one of the remaining open challenges in permissionless blockchains. Enhancing throughput often compromises storage, as seen in projects such as Elastico, OmniLedger, and RapidChain. On the other hand, solutions seeking to save storage, such as CUB, Jidar, SASLedger, and SE-Chain, reduce the transactional throughput. To our knowledge, no analysis has been performed that relates storage growth to transactional throughput. In this article, we delve into the execution of the Bitcoin and Ethereum transactional models, unlocking patterns that represent any transaction on the blockchain. We reveal the trade-off between transactional throughput and storage. To achieve this, we introduce the spent-by relation, a new abstraction of the UTXO model that utilizes a directed acyclic graph (DAG) to reveal the patterns and allows for a graph with granular information. We then analyze the transactional patterns to identify the most storage-intensive ones and those that offer greater flexibility in the throughput/storage trade-off. Finally, we present an analytical study showing that the UTXO model is more storage-intensive than the account model but scales better in transactional throughput.
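The spent-by relation described in the abstract can be pictured as a directed acyclic graph over transaction outputs. The following is a minimal, hypothetical sketch of that idea (the `UTXOGraph` class and its method names are illustrative assumptions, not the authors' implementation):

```python
# Minimal sketch of a "spent-by" DAG over UTXO transactions.
# All names (UTXOGraph, add_tx, utxo_set) are illustrative, not from the paper.

class UTXOGraph:
    def __init__(self):
        self.outputs = {}   # (txid, index) -> spending txid, or None if unspent
        self.edges = []     # (producer_txid, spender_txid) "spent-by" edges

    def add_tx(self, txid, inputs, n_outputs):
        # inputs: list of (txid, index) pairs this transaction spends
        for prev in inputs:
            if self.outputs.get(prev, "missing") is not None:
                raise ValueError(f"{prev} is missing or already spent")
            self.outputs[prev] = txid
            self.edges.append((prev[0], txid))
        for i in range(n_outputs):
            self.outputs[(txid, i)] = None  # newly created, unspent

    def utxo_set(self):
        return {o for o, spender in self.outputs.items() if spender is None}

g = UTXOGraph()
g.add_tx("coinbase", [], 2)
g.add_tx("tx1", [("coinbase", 0)], 1)   # spends one coinbase output
print(sorted(g.utxo_set()))             # [('coinbase', 1), ('tx1', 0)]
```

Because every input must reference an output created earlier, the spent-by edges can never form a cycle, which is what makes the DAG abstraction well defined.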
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries)
Open Access Article
Quasi/Periodic Noise Reduction in Images Using Modified Multiresolution-Convolutional Neural Networks for 3D Object Reconstructions and Comparison with Other Convolutional Neural Network Models
by
Osmar Antonio Espinosa-Bernal, Jesús Carlos Pedraza-Ortega, Marco Antonio Aceves-Fernandez, Victor Manuel Martínez-Suárez, Saul Tovar-Arriaga, Juan Manuel Ramos-Arreguín and Efrén Gorrostieta-Hurtado
Computers 2024, 13(6), 145; https://doi.org/10.3390/computers13060145 - 7 Jun 2024
Abstract
Digitally modeling real objects is an area of high demand, driven by the need for systems able to reproduce 3D objects from real ones. To this end, several techniques have been proposed to model objects in a computer, with fringe profilometry being the most researched. However, this technique has the disadvantage of generating Moiré noise, which degrades the accuracy of the final reconstructed 3D object. In order to obtain 3D objects as close as possible to the original, different techniques have been developed to attenuate quasi/periodic noise, notably convolutional neural networks (CNNs), which have recently been applied to restore images and reduce or eliminate noise as a pre-processing step in the generation of 3D objects. To this end, this work attenuates the quasi/periodic noise in images acquired by the fringe profilometry technique using a modified CNN-Multiresolution network. The results obtained are compared with the original CNN-Multiresolution network, the UNet network, and the FCN32s network, and a quantitative comparison is made using the Image Mean Square Error (IMMSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and profile MSE metrics.
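Two of the quality metrics mentioned, MSE and PSNR, are straightforward to compute. A rough illustration for 8-bit grayscale images as NumPy arrays (the helper names and the toy 4×4 image are assumptions for demonstration, unrelated to the paper's data):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between two equally sized images."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    err = mse(ref, img)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy example: a flat 4x4 image with one corrupted pixel.
ref = np.full((4, 4), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110
print(round(psnr(ref, noisy), 2))   # 40.17
```

Higher PSNR means less residual noise relative to the reference; SSIM additionally accounts for structural similarity and is available, e.g., in scikit-image.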
Full article
Open Access Article
Searching Questions and Learning Problems in Large Problem Banks: Constructing Tests and Assignments on the Fly
by
Oleg Sychev
Computers 2024, 13(6), 144; https://doi.org/10.3390/computers13060144 - 5 Jun 2024
Abstract
Modern advances in creating shared banks of learning problems and in automatic question and problem generation have led to large question banks in which human teachers cannot view every question. These questions are classified according to the knowledge necessary to solve them and their difficulty. Constructing tests and assignments on the fly at the teacher's request eliminates the possibility of cheating by sharing solutions because each student receives a unique set of questions. However, randomly generating predictable and effective assignments from a set of problems is a non-trivial task. In this article, an algorithm for generating assignments based on teachers' requests for their content is proposed. The algorithm is evaluated on a bank of expression-evaluation questions containing more than 5000 questions. The evaluation shows that the proposed algorithm can guarantee the minimum expected number of target concepts (rules) in an exercise with any settings. The difficulty of the found questions is chiefly determined by the available bank and the exercise difficulty; it depends only weakly on the number of target concepts per exercise item, as teaching more rules is achieved by rotating them among the exercise items at lower difficulty settings. An ablation study shows that all the principal components of the algorithm contribute to its performance. The proposed algorithm can be used to reliably generate individual exercises from large, automatically generated question banks according to teachers' requests, which is important in massive open online courses.
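The core requirement, drawing items from a tagged bank so that every target concept is guaranteed a minimum number of appearances, can be sketched with a simple greedy random strategy. This is a hypothetical illustration of the idea, not the paper's algorithm; all names and the toy bank are assumptions:

```python
import random

def generate_exercise(bank, target_concepts, n_items, min_count, seed=0):
    """Pick n_items questions so each target concept appears >= min_count times
    (when the bank allows it). Greedy random sketch, not the paper's method."""
    rng = random.Random(seed)
    remaining = {c: min_count for c in target_concepts}
    chosen = []
    pool = list(bank)
    for _ in range(n_items):
        # Prefer questions covering concepts that still need exposure.
        needed = [q for q in pool
                  if any(remaining.get(c, 0) > 0 for c in q["concepts"])]
        q = rng.choice(needed or pool)
        pool.remove(q)
        chosen.append(q)
        for c in q["concepts"]:
            if remaining.get(c, 0) > 0:
                remaining[c] -= 1
    return chosen

# Toy bank of expression-evaluation questions tagged with concepts.
bank = [
    {"id": 1, "concepts": {"precedence"}},
    {"id": 2, "concepts": {"associativity"}},
    {"id": 3, "concepts": {"precedence", "associativity"}},
    {"id": 4, "concepts": {"parentheses"}},
]
ex = generate_exercise(bank, {"precedence", "associativity"}, n_items=3, min_count=1)
print([q["id"] for q in ex])
```

Once the coverage constraint is met, the remaining slots are free, which is where difficulty targeting and rule rotation could plug in.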
Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
Open Access Article
A GIS-Based Fuzzy Model to Detect Critical Polluted Urban Areas in Presence of Heatwave Scenarios
by
Barbara Cardone, Ferdinando Di Martino and Vittorio Miraglia
Computers 2024, 13(6), 143; https://doi.org/10.3390/computers13060143 - 5 Jun 2024
Abstract
This research presents a new method for detecting urban areas that are critical due to the presence of air pollutants during heatwaves. The proposed method combines a geospatial model based on the construction of Thiessen polygons with a fuzzy model that assesses, starting from air quality control unit measurement data, how concentrations of air pollutants are distributed across the urban study area during heatwave periods, and determines the most critical areas as hotspots. The proposed method represents an optimal trade-off between the accuracy of critical-area detection and computational speed; the use of fuzzy techniques for assessing the intensity of pollutant concentrations allows evaluators to model assessments of critical areas more naturally. The method is implemented in a GIS-based platform and has been tested in the city of Bologna, Italy. The resulting criticality maps of PM10, NO2, and PM2.5 pollutants during a heatwave that occurred from 10 to 14 July 2023 revealed hotspots with high pollutant concentrations in densely populated areas. This framework provides a portable and easily interpretable decision-support tool for evaluating which urban areas are most affected by air pollution during heatwaves, potentially posing health risks to the exposed population.
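The fuzzy-evaluation idea, mapping a measured pollutant concentration to a graded "criticality" rather than a hard threshold, can be sketched with a piecewise-linear membership function. The breakpoints below (50/100 µg/m³, loosely inspired by PM10 limit values) and all names are hypothetical, not taken from the paper:

```python
def critical_membership(conc, low=50.0, high=100.0):
    """Piecewise-linear membership in the 'critical' fuzzy set:
    0 at or below `low`, 1 at or above `high`, linear in between."""
    if conc <= low:
        return 0.0
    if conc >= high:
        return 1.0
    return (conc - low) / (high - low)

def is_hotspot(conc, alpha_cut=0.8):
    """Classify an area (e.g., a Thiessen polygon) as a hotspot
    when its membership degree exceeds a chosen alpha-cut."""
    return critical_membership(conc) >= alpha_cut

print(critical_membership(75.0))   # 0.5
print(is_hotspot(95.0))            # True
```

The graded membership is what lets evaluators express "moderately critical" areas instead of a binary exceeded/not-exceeded verdict.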
Full article
(This article belongs to the Special Issue Feature Papers in Computers 2024)
Open Access Article
Observer-Based Suboptimal Controller Design for Permanent Magnet Synchronous Motors: State-Dependent Riccati Equation Controller and Impulsive Observer Approaches
by
Nasrin Kalamian, Masoud Soltani, Fariba Bouzari Liavoli and Mona Faraji Niri
Computers 2024, 13(6), 142; https://doi.org/10.3390/computers13060142 - 4 Jun 2024
Abstract
Permanent Magnet Synchronous Motors (PMSMs), with high energy efficiency, reliable performance, and a relatively simple structure, are widely utilised in various applications. In this paper, a suboptimal controller is proposed for sensorless PMSMs based on the state-dependent Riccati equation (SDRE) technique combined with customised impulsive observers (IOs). Here, the SDRE technique facilitates a pseudo-linearised representation of the motor with state-dependent coefficients (SDCs) while preserving all its nonlinear features. Considering the risk of unavailable or unmeasurable motor states due to sensor and instrumentation costs, the SDRE is combined with IOs to estimate the PMSM speed and position states. The customised IOs are proven capable of obtaining high-quality, continuous estimates of the motor states despite the discrete format of the output signals. The simulation results in this work illustrate accurate state estimation and control of the PMSM speed in the presence of load torque disturbances and reference speed changes. It is clearly shown that the SDRE-IO design is superior to the most popular existing regulators in the literature for sensorless speed control.
Full article
(This article belongs to the Topic Numerical Methods and Computer Simulations in Energy Analysis, 2nd Volume)
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Applied Sciences, Computers, Digital, Electronics, Smart Cities
Artificial Intelligence Models, Tools and Applications
Topic Editors: Phivos Mylonas, Katia Lida Kermanidis, Manolis Maragoudakis
Deadline: 31 August 2024
Topic in
Biomedicines, Computers, Information, IJERPH, JPM
eHealth and mHealth: Challenges and Prospects, 2nd Volume
Topic Editors: Antonis Billis, Manuel Dominguez-Morales, Anton Civit
Deadline: 30 September 2024
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies including Selected Papers from ICGHIT 2024
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 October 2024
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Conferences
Special Issues
Special Issue in
Computers
Game-Based Learning, Gamification in Education and Serious Games 2023
Guest Editors: Carlos Vaz de Carvalho, Hariklia Tsalapatas, Ricardo Baptista
Deadline: 30 June 2024
Special Issue in
Computers
Extended or Mixed Reality (AR+VR) for Education 2024
Guest Editors: Veronica Rossano, Michele Fiorentino
Deadline: 1 August 2024
Special Issue in
Computers
Best Practices, Challenges and Opportunities in Software Engineering
Guest Editor: Yan Liu
Deadline: 31 August 2024
Special Issue in
Computers
Uncertainty-Aware Artificial Intelligence
Guest Editors: Hussain Mohammed Dipu Kabir, Syed Bahauddin Alam, Subrota Kumar Mondal, Jeremy Straub
Deadline: 30 September 2024