Journal Description
Informatics is an international, peer-reviewed, open access journal on information and communication technologies, human–computer interaction, and social informatics, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, and other databases.
- Journal Rank: CiteScore - Q1 (Communication)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 30.3 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.1 (2022)
5-Year Impact Factor: 2.7 (2022)
Latest Articles
Causes and Mitigation Practices of Requirement Volatility in Agile Software Development
Informatics 2024, 11(1), 12; https://doi.org/10.3390/informatics11010012 - 13 Mar 2024
Abstract
One of the main obstacles in software development projects is requirement volatility (RV), defined as uncertainty or changes in software requirements during the development process. This research seeks to understand the underlying factors behind RV and the best practices to reduce it. The methodology is based on qualitative research using interviews with 12 participants with experience in agile software development projects. The participants hailed from Austria, Nigeria, the USA, the Philippines, Armenia, Sri Lanka, Germany, Egypt, Canada, and Turkey and held roles such as project managers, software developers, Scrum Masters, testers, business analysts, and product owners. Our empirical findings revealed six primary factors that cause RV and three main agile practices that help to mitigate it. Theoretically, this study contributes to the body of knowledge on RV management. Practically, it is expected to aid software development teams in comprehending the reasons behind RV and the best practices to minimize it effectively.
Open Access Article
Exploring Multidimensional Embeddings for Decision Support Using Advanced Visualization Techniques
by Olga Kurasova, Arnoldas Budžys and Viktor Medvedev
Informatics 2024, 11(1), 11; https://doi.org/10.3390/informatics11010011 - 26 Feb 2024
Abstract
As artificial intelligence has evolved, deep learning models have become important in extracting and interpreting complex patterns from raw multidimensional data. These models produce multidimensional embeddings that, while information-rich, are often not directly understandable. Dimensionality reduction techniques play an important role in transforming multidimensional data into interpretable formats for decision support systems. To address this problem, the paper presents an analysis of dimensionality reduction and visualization techniques that handle complex data representations and yield interpretable input for decision systems. A novel framework is proposed, utilizing a Siamese neural network with a triplet loss function to analyze multidimensional data encoded into images, thus transforming these data into multidimensional embeddings. The approach then uses dimensionality reduction techniques to project these embeddings into a lower-dimensional space. This transformation not only improves interpretability but also maintains the integrity of the complex data structures. The efficacy of this approach is demonstrated using a keystroke dynamics dataset. The results support the integration of these visualization techniques into decision support systems. The visualization process not only simplifies the complexity of the data but also reveals deep patterns and relationships hidden in the embeddings. Thus, a comprehensive framework for visualizing and interpreting complex keystroke dynamics is described, making a significant contribution to the field of user authentication.
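The triplet loss at the heart of such a Siamese network can be stated in a few lines. The sketch below is illustrative only; the margin value and the toy 2-D vectors are assumptions, not the authors' implementation:

```python
# Minimal sketch of a hinge-style triplet loss, as used to train Siamese
# networks on (anchor, positive, negative) embedding triples.
from math import dist  # Euclidean distance between two points (Python 3.8+)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero when the positive is at least `margin` closer to the anchor
    than the negative; positive otherwise, driving training updates."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# A well-separated triplet incurs no loss; a violating one does.
print(triplet_loss([0.0, 0.0], [0.0, 1.0], [5.0, 5.0]))  # 0.0
print(triplet_loss([0.0, 0.0], [0.0, 2.0], [0.0, 1.0]))  # 2.0
```

Minimizing this loss pulls same-class embeddings together and pushes different-class embeddings apart, which is what makes the subsequent 2-D projection separable.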
(This article belongs to the Section Machine Learning)
Open Access Article
Unveiling Insights: A Bibliometric Analysis of Artificial Intelligence in Teaching
by Malinka Ivanova, Gabriela Grosseck and Carmen Holotescu
Informatics 2024, 11(1), 10; https://doi.org/10.3390/informatics11010010 - 25 Feb 2024
Abstract
The penetration of intelligent applications in education is rapidly increasing, posing a number of questions of a different nature to the educational community. This paper analyzes and outlines the influence of artificial intelligence (AI) on teaching practice, an essential problem considering its growing utilization and pervasion on a global scale. A bibliometric approach is applied to draw the “big picture” from bibliographic data gathered from the Scopus and Web of Science databases. Data on relevant publications matching the query “artificial intelligence and teaching” over the past 5 years were collected and processed through Biblioshiny in the R environment in order to establish a descriptive structure of the scientific production, determine the impact of scientific publications, trace collaboration patterns, and identify key research areas and emerging trends. The results show recent growth in scientific production, an indicator of increased interest in the investigated topic among researchers, who mainly work in collaborative teams that often span countries and institutions. The identified key research areas include techniques used in educational applications, such as artificial intelligence, machine learning, and deep learning. Additionally, there is a focus on applicable technologies like ChatGPT, learning analytics, and virtual reality. The research also explores the context of application for these techniques and technologies in various educational settings, including teaching, higher education, active learning, e-learning, and online learning. Based on our findings, the trending research topics can be encapsulated by terms such as ChatGPT, chatbots, AI, generative AI, machine learning, emotion recognition, large language models, convolutional neural networks, and decision theory. These findings offer valuable insights into the current landscape of research interests in the field.
Open Access Article
Genealogical Data-Driven Visits of Historical Cemeteries
by Angelica Lo Duca, Matteo Abrate, Andrea Marchetti and Manuela Moretti
Informatics 2024, 11(1), 9; https://doi.org/10.3390/informatics11010009 - 22 Feb 2024
Abstract
This paper describes the Integration of Archives and Cultural Places (IaCuP) project, which aims to integrate information about a historical cemetery, including its map and grave inventory, with genealogical and documentary knowledge extracted from relevant historical archives. The integrated data are accessible to cemetery visitors through an interactive mobile application, enabling them to navigate a graphical representation of the cemetery while exploring comprehensive visualizations of genealogical data. The basic idea stems from the desire to provide people with access to the rich context of cultural sites, which have often lost their original references over the centuries, making it challenging for individuals today to interpret the meanings embedded within them. The proposed approach leverages large language models (LLMs) to extract information from relevant documents and Web technologies to represent such information as interactive visualizations. As a practical case study, this paper focuses on the Jewish Cemetery in Pisa and the Historical Archives of the Jewish Community in Pisa, working on the genealogical tree of one of the most representative families resting in the cemetery.
(This article belongs to the Section Social Informatics and Digital Humanities)
Open Access Article
Topic Extraction: BERTopic’s Insight into the 117th Congress’s Twitterverse
by Margarida Mendonça and Álvaro Figueira
Informatics 2024, 11(1), 8; https://doi.org/10.3390/informatics11010008 - 17 Feb 2024
Abstract
As social media (SM) becomes increasingly prevalent, its impact on society is expected to grow accordingly. While SM has brought positive transformations, it has also amplified pre-existing issues such as misinformation, echo chambers, manipulation, and propaganda. A thorough comprehension of this impact, aided by state-of-the-art analytical tools and by an awareness of societal biases and complexities, enables us to anticipate and mitigate the potential negative effects. One such tool is BERTopic, a novel deep-learning algorithm developed for Topic Mining, which has been shown to offer significant advantages over traditional methods like Latent Dirichlet Allocation (LDA), particularly in terms of its high modularity, which allows for extensive personalization at each stage of the topic modeling process. In this study, we hypothesize that BERTopic, when optimized for Twitter data, can provide more coherent and stable topic models. We began by conducting a review of the literature on topic-mining approaches for short-text data. Using this knowledge, we explored the potential for optimizing BERTopic and analyzed its effectiveness. Our focus was on Twitter data spanning the two years of the 117th US Congress. We evaluated BERTopic’s performance using coherence, perplexity, diversity, and stability scores, finding significant improvements over traditional methods and the default parameters for this tool. We discovered that improvements are possible in BERTopic’s coherence and stability. We also identified the major topics of this Congress, which include abortion, student debt, and Judge Ketanji Brown Jackson. Additionally, we describe a simple application we developed for a better visualization of Congress topics.
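Of the evaluation scores mentioned, topic diversity has the simplest definition: the fraction of unique words among the top words of all topics. A minimal sketch (the toy topics below are hypothetical, not drawn from the Congress data):

```python
def topic_diversity(topics):
    """Topic diversity: unique top-words across all topics divided by the
    total count of top-words. 1.0 means no word is shared between topics."""
    all_words = [word for topic in topics for word in topic]
    return len(set(all_words)) / len(all_words)

# Two toy topics sharing one of four top words:
print(topic_diversity([["abortion", "court"], ["debt", "court"]]))  # 0.75
```

Higher diversity indicates less redundant topics, which is one axis on which a tuned BERTopic configuration can be compared against LDA or default parameters.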
Open Access Article
Uncovering the Limitations and Insights of Packet Status Prediction Models in IEEE 802.15.4-Based Wireless Networks and Insights from Data Science
by Mariana Ávalos-Arce, Heráclito Pérez-Díaz, Carolina Del-Valle-Soto and Ramon A. Briseño
Informatics 2024, 11(1), 7; https://doi.org/10.3390/informatics11010007 - 26 Jan 2024
Abstract
Wireless networks play a pivotal role in various domains, including industrial automation, autonomous vehicles, robotics, and mobile sensor networks. This research investigates the critical issue of packet loss in modern wireless networks and aims to identify the conditions within a network’s environment that lead to such losses. We propose a packet status prediction model for data packets that travel through a wireless network based on the IEEE 802.15.4 standard and are exposed to five different types of interference in a controlled experimentation environment. The proposed model focuses on the packetization process and its impact on network robustness. This study explores the challenges posed by packet loss, particularly in the context of interference, and puts forth the hypothesis that specific environmental conditions are linked to packet loss occurrences. The contribution of this work lies in advancing our understanding of the conditions leading to packet loss in wireless networks. Data are retrieved with a single CC2531 USB Dongle Packet Sniffer; the information it captures about each packet supplies the features on which the classifier model is trained, with the aim of predicting whether a packet will fail to arrive at its destination. We found that interference causes more packet loss than several devices simultaneously using the WiFi communication protocol does. In addition, we found that the most important predictors are network strength and packet size; low network strength tends to lead to more packet loss, especially for larger packets. This study contributes to the ongoing efforts to predict and mitigate packet loss, emphasizing the need for adaptive models in dynamic wireless environments.
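The reported finding (weak signal strength plus large packets tends to mean loss) can be caricatured as a one-rule classifier. The threshold values below are hypothetical illustrations, not figures from the study:

```python
def packet_likely_lost(rssi_dbm, packet_bytes,
                       rssi_threshold=-85, size_threshold=100):
    """Toy decision rule: flag a packet as likely lost when the link is
    weak AND the packet is large. Both thresholds are made-up examples;
    a trained classifier would learn such boundaries from sniffer data."""
    return rssi_dbm < rssi_threshold and packet_bytes > size_threshold

print(packet_likely_lost(-92, 120))  # True: weak link, large packet
print(packet_likely_lost(-60, 120))  # False: strong link
```

A real model would combine many such features, but the rule captures the interaction the authors identify between network strength and packet size.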
(This article belongs to the Special Issue Digital Society: Interdisciplinary Insights and Applications of Wireless Connectivity)
Open Access Article
Exploring the Relationship between Career Satisfaction and University Learning Using Data Science Models
by Sofía Ramos-Pulido, Neil Hernández-Gress and Gabriela Torres-Delgado
Informatics 2024, 11(1), 6; https://doi.org/10.3390/informatics11010006 - 24 Jan 2024
Abstract
Current research on the career satisfaction of graduates limits educational institutions in devising methods to attain high career satisfaction. Thus, this study aims to use data science models to understand and predict career satisfaction based on information collected from surveys of university alumni. Five machine learning (ML) algorithms were used for data analysis: decision tree, random forest, gradient boosting, support vector machine, and neural network models. To achieve optimal prediction performance, we utilized the Bayesian optimization method to fine-tune the parameters of the five ML algorithms. The five ML models were compared with logistic and ordinal regression. Then, to extract the most important features of the best predictive model, we employed SHapley Additive exPlanations (SHAP), a methodology for identifying the significant features in ML models. The results indicated that gradient boosting is a marginally superior predictive model, with 2–3% higher accuracy and area under the receiver operating characteristic curve (AUC) compared to logistic and ordinal regression. Interestingly, concerning low career satisfaction, those with the worst scores on the survey item “how frequently applied knowledge, skills, or technological tools from the academic training” were less satisfied with their careers. To summarize, career satisfaction is related to academic training, alumni satisfaction, employment status, published articles or books, and other factors.
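The AUC used to compare the models has a simple rank-based definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A self-contained sketch with invented labels and scores:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs in which the positive example receives the higher score
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three of the four positive-negative pairs are ranked correctly:
print(auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A "2–3% higher AUC" for gradient boosting thus means its scores rank satisfied versus unsatisfied alumni correctly in a few percent more of such pairs.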
Open Access Article
Application of Augmented Reality Technology for Chest ECG Electrode Placement Practice
by Charlee Kaewrat, Dollaporn Anopas, Si Thu Aung and Yunyong Punsawad
Informatics 2024, 11(1), 5; https://doi.org/10.3390/informatics11010005 - 15 Jan 2024
Abstract
This study presents an augmented reality (AR) application for training chest electrocardiography (ECG) electrode placement. AR applications featuring augmented object displays and interactions have been developed to facilitate learning and training of ECG chest lead placement via smartphones. The AR marker-based technique was used to track the objects. The proposed AR application can project virtual ECG electrode positions onto a mannequin’s chest and provide feedback to trainees. We designed experimental tasks using pre- and post-tests and practice sessions to verify the efficiency of the proposed AR application. The control group was assigned to learn chest ECG electrode placement using traditional methods, whereas the intervention group was introduced to the proposed AR application for ECG electrode placement. The results indicate that the proposed AR application can improve learning outcomes, such as chest lead ECG knowledge and skills. Moreover, using AR technology can enhance students’ learning experiences. In the future, we plan to apply the proposed AR technology to improve related courses in medical science education.
(This article belongs to the Section Human-Computer Interaction)
Open Access Article
Exploring the Relation between Contextual Social Determinants of Health and COVID-19 Occurrence and Hospitalization
by Aokun Chen, Yunpeng Zhao, Yi Zheng, Hui Hu, Xia Hu, Jennifer N. Fishe, William R. Hogan, Elizabeth A. Shenkman, Yi Guo and Jiang Bian
Informatics 2024, 11(1), 4; https://doi.org/10.3390/informatics11010004 - 15 Jan 2024
Abstract
It is prudent to take a unified approach to exploring how contextual social determinants of health (SDoH) relate to COVID-19 occurrence and outcomes. Poorly representative geographic data and the small number of contextual SDoH examined in most previous studies have left a knowledge gap in the relationships between contextual SDoH and COVID-19 outcomes. In this study, we linked 199 contextual SDoH factors covering 11 domains of social and built environments with electronic health records (EHRs) from a large clinical research network (CRN) in the National Patient-Centered Clinical Research Network (PCORnet) to explore the relation between contextual SDoH and COVID-19 occurrence and hospitalization. We identified 15,890 COVID-19 patients and 63,560 matched non-COVID-19 patients in Florida between January 2020 and May 2021. We adopted a two-phase multiple linear regression approach modified from that in the exposome-wide association (ExWAS) study. After removing the highly correlated SDoH variables, 86 contextual SDoH variables were included in the data analysis. Adjusting for race, ethnicity, and comorbidities, we found six contextual SDoH variables (i.e., hospital available beds and utilization, percent of vacant property, number of golf courses, and percent of minority) related to the occurrence of COVID-19, and three variables (i.e., farmers market, low access, and religion) related to the hospitalization of COVID-19. To the best of our knowledge, this is the first study to explore the relationship between contextual SDoH and COVID-19 occurrence and hospitalization using EHRs in a major PCORnet CRN. As an exploratory study, the causal effect of SDoH on COVID-19 outcomes will be evaluated in future studies.
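The pruning step (removing highly correlated SDoH variables before regression) can be sketched as a greedy filter on pairwise Pearson correlations. The variable names and the 0.9 cutoff below are illustrative assumptions, not the study's actual choices:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def drop_correlated(variables, threshold=0.9):
    """Greedily keep each variable only if its |r| with every
    already-kept variable stays below the threshold."""
    kept = {}
    for name, values in variables.items():
        if all(abs(pearson(values, v)) < threshold for v in kept.values()):
            kept[name] = values
    return list(kept)

# "pct_vacant_x2" duplicates "pct_vacant" (r = 1) and is pruned:
data = {"pct_vacant": [1, 2, 3, 4],
        "pct_vacant_x2": [2, 4, 6, 8],
        "golf_courses": [4, 1, 3, 2]}
print(drop_correlated(data))  # ['pct_vacant', 'golf_courses']
```

Some such filter is what reduces the 199 candidate SDoH factors to the 86 entering the regression, keeping the design matrix well-conditioned.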
Open Access Article
Integrating IOTA’s Tangle with the Internet of Things for Sustainable Agriculture: A Proof-of-Concept Study on Rice Cultivation
by Sandro Pullo, Remo Pareschi, Valentina Piantadosi, Francesco Salzano and Roberto Carlini
Informatics 2024, 11(1), 3; https://doi.org/10.3390/informatics11010003 - 28 Dec 2023
Abstract
Addressing the critical challenges of resource inefficiency and environmental impact in the agrifood sector, this study explores the integration of Internet of Things (IoT) technologies with IOTA’s Tangle, a Distributed Ledger Technology (DLT). This integration aims to enhance sustainable agricultural practices, using rice cultivation as a case study of high relevance and reapplicability given its importance in the food chain and the high irrigation requirement of its cultivation. The approach employs sensor-based intelligent irrigation systems to optimize water efficiency. These systems enable real-time monitoring of agricultural parameters through IoT sensors. Data management is facilitated by IOTA’s Tangle, providing secure and efficient data handling, and integrated with MongoDB, a Database Management System (DBMS), for effective data storage and retrieval. The combination of IoT and IOTA led to significant reductions in resource consumption. Implementing sustainable agricultural practices resulted in a 50% reduction in water usage, a 25% decrease in nitrogen consumption, and a 50% to 70% reduction in methane emissions. Additionally, the system contributed to lower electricity consumption for irrigation pumps and generated comprehensive historical water depth records, aiding future resource management decisions. This study concludes that the integration of IoT with IOTA’s Tangle presents a highly promising solution for advancing sustainable agriculture. This approach significantly contributes to environmental conservation and food security. Furthermore, it establishes that DLTs like IOTA are not only viable but also effective for real-time monitoring and implementation of sustainable agricultural practices.
Open Access Review
Cloud-Based Platforms for Health Monitoring: A Review
by Isaac Machorro-Cano, José Oscar Olmedo-Aguirre, Giner Alor-Hernández, Lisbeth Rodríguez-Mazahua, Laura Nely Sánchez-Morales and Nancy Pérez-Castro
Informatics 2024, 11(1), 2; https://doi.org/10.3390/informatics11010002 - 20 Dec 2023
Abstract
Cloud-based platforms have gained popularity over the years because they can be used for multiple purposes, from synchronizing contact information to storing and managing user fitness data. These platforms are still in constant development and, so far, most of the data they store is entered manually by users. However, more and better wearable devices are being developed that can synchronize with these platforms to feed in information automatically. Another link between wearable devices and cloud-based health platforms is the improved ability to store and synchronize users’ symptom and physical status information on health platforms in real time, 24 hours a day, which in turn enables synchronizing these platforms with specialized medical software to promptly detect important variations in user symptoms. This is opening opportunities to use these platforms as support for monitoring disease symptoms and, in general, for monitoring the health of users. In this work, the characteristics and possibilities of use of four popular platforms currently available in the market are explored: Apple Health, Google Fit, Samsung Health, and Fitbit.
(This article belongs to the Special Issue Novel Informatics Algorithms and Applications to Biomedicine and Healthcare)
Open Access Article
A Context-Based Multimedia Vocabulary Learning System for Mobile Users
by Andrew Vargo, Kohei Yamaguchi, Motoi Iwata and Koichi Kise
Informatics 2024, 11(1), 1; https://doi.org/10.3390/informatics11010001 - 19 Dec 2023
Abstract
Vocabulary acquisition and retention is an essential part of learning a foreign language, and many learners use flashcard applications to repetitively increase vocabulary retention. However, it can be difficult for learners to remember new words and phrases without any context. In this paper, we propose a system that allows users to acquire new vocabulary with media that give context to the words. Theoretically, this use of multimedia context should enable users to practice with interest and increased motivation, which has been shown to enhance the effects of contextual language learning. An experiment with 46 English-as-a-foreign-language learners showed better retention after two weeks with the proposed system as compared to ordinary flashcards. However, the impact was not universally beneficial to all learners. An analysis of participant attributes gathered through surveys and questionnaires shows a link between personality and learning traits and affinity for learning with this system. This result indicates that the proposed system provides a significant advantage in vocabulary retention for some users, while other users should stay with traditional flashcard applications. The implications of this study indicate the need for the development of more personalized learning applications.
(This article belongs to the Section Human-Computer Interaction)
Open Access Article
EndoNet: A Model for the Automatic Calculation of H-Score on Histological Slides
by Egor Ushakov, Anton Naumov, Vladislav Fomberg, Polina Vishnyakova, Aleksandra Asaturova, Alina Badlaeva, Anna Tregubova, Evgeny Karpulevich, Gennady Sukhikh and Timur Fatkhudinov
Informatics 2023, 10(4), 90; https://doi.org/10.3390/informatics10040090 - 12 Dec 2023
Abstract
H-score is a semi-quantitative method used to assess the presence and distribution of proteins in tissue samples by combining the intensity of staining and the percentage of stained nuclei. It is widely used but time-consuming and can be limited in terms of accuracy and precision. Computer-aided methods may help overcome these limitations and improve the efficiency of pathologists’ workflows. In this work, we developed EndoNet, a model for automatic H-score calculation on histological slides. Our proposed method uses neural networks and consists of two main parts. The first is a detection model that predicts the keypoints of the centers of nuclei. The second is an H-score module that calculates the value of the H-score using the mean pixel values of predicted keypoints. Our model was trained and validated on 1780 annotated tiles measuring 100 × 100 µm, and we achieved 0.77 mAP on a test dataset. Our H-score calculations proved superior to QuPath predictions. Moreover, the model can be adjusted to a specific specialist or whole laboratory to reproduce their manner of calculating the H-score. Thus, EndoNet is effective and robust in the analysis of histology slides, which can improve and significantly accelerate the work of pathologists.
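The conventional H-score that such a pipeline automates weights the percentage of nuclei at each staining intensity. A minimal sketch of the standard formula, with hypothetical percentages:

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Semi-quantitative H-score: 1 x (% weakly stained nuclei, 1+)
    + 2 x (% moderate, 2+) + 3 x (% strong, 3+); ranges from 0 to 300."""
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# 10% weak, 20% moderate, 30% strong staining (hypothetical sample):
print(h_score(10, 20, 30))  # 140
```

In a model such as the one described, the per-nucleus intensity categories would come from detected keypoints and their pixel values rather than a pathologist's manual count.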
(This article belongs to the Special Issue Novel Informatics Algorithms and Applications to Biomedicine and Healthcare)
Open Access Article
Knowledge-Based Intelligent Text Simplification for Biological Relation Extraction
by Jaskaran Gill, Madhu Chetty, Suryani Lim and Jennifer Hallinan
Informatics 2023, 10(4), 89; https://doi.org/10.3390/informatics10040089 - 11 Dec 2023
Abstract
Relation extraction from biological publications plays a pivotal role in accelerating scientific discovery and advancing medical research. While vast amounts of this knowledge are stored within the published literature, extracting it manually from this continually growing volume of documents is becoming increasingly arduous. Recently, attention has focused on automatically extracting such knowledge using pre-trained Large Language Models (LLMs) and deep-learning algorithms for automated relation extraction. However, the complex syntactic structure of biological sentences, with nested entities and domain-specific terminology, and insufficient annotated training corpora pose major challenges in accurately capturing entity relationships from the unstructured data. To address these issues, in this paper, we propose a Knowledge-based Intelligent Text Simplification (KITS) approach focused on the accurate extraction of biological relations. KITS precisely captures the relational context among the various binary relations within a sentence while preserving the meaning of the sentences it simplifies. The experiments show that the proposed technique, using well-known performance metrics, resulted in a 21% increase in precision, with only 25% of sentences simplified in the Learning Language in Logic (LLL) dataset. Combined with BioBERT, a popular pre-trained LLM, the proposed method was able to outperform other state-of-the-art methods.
Open Access Article
Unraveling Microblog Sentiment Dynamics: A Twitter Public Attitudes Analysis towards COVID-19 Cases and Deaths
by Paraskevas Koukaras, Dimitrios Rousidis and Christos Tjortjis
Informatics 2023, 10(4), 88; https://doi.org/10.3390/informatics10040088 - 7 Dec 2023
Abstract
The identification and analysis of sentiment polarity in microblog data have drawn increased attention. Researchers and practitioners attempt to extract knowledge by evaluating public sentiment in response to global events. This study aimed to evaluate public attitudes towards the spread of COVID-19 by performing sentiment analysis on over 2.1 million tweets in English. The implications include the generation of insights for timely disease outbreak prediction and assertions regarding worldwide events, which can help policymakers take suitable actions. We investigated whether there was a correlation between public sentiment and the number of cases and deaths attributed to COVID-19. The research design integrated text preprocessing (regular expression operations, (de)tokenization, stopword removal), sentiment polarity analysis via TextBlob, hypothesis formulation (null hypothesis testing), and statistical analysis (Pearson coefficient and p-value) to produce the results. The key findings highlight a correlation between sentiment polarity and deaths, starting 41 days before and extending up to 3 days after the counts were reported. Twitter users reacted to increased numbers of COVID-19-related deaths after four days by posting tweets with fading sentiment polarity. We also detected a strong correlation between the polarity of COVID-19 Twitter conversations and reported cases, and a weak correlation between polarity and reported deaths.
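The lagged-correlation step described above can be sketched in a few lines. This is an illustrative toy with invented daily series, not the study's pipeline (which used TextBlob for polarity and standard Pearson/p-value statistics); the `lagged_correlation` helper and the sample data are assumptions for demonstration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_correlation(polarity, deaths, lag):
    """Correlate daily sentiment polarity with deaths reported `lag` days later."""
    if lag > 0:
        return pearson(polarity[:-lag], deaths[lag:])
    return pearson(polarity, deaths)

# Invented toy series: polarity fades as deaths (reported 4 days later) rise.
polarity = [0.30, 0.28, 0.22, 0.18, 0.12, 0.10, 0.08, 0.05]
deaths   = [10, 12, 15, 20, 35, 50, 70, 90]
print(lagged_correlation(polarity, deaths, lag=4))
```

Sweeping `lag` over a window (e.g. -41 to +3 days) and recording where the correlation is strongest mirrors the kind of lead/lag finding the abstract reports.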
Full article
Open Access Article
ChatGPT in Education: Empowering Educators through Methods for Recognition and Assessment
by
Joost C. F. de Winter, Dimitra Dodou and Arno H. A. Stienen
Informatics 2023, 10(4), 87; https://doi.org/10.3390/informatics10040087 - 29 Nov 2023
Abstract
ChatGPT is widely used among students, a situation that challenges educators. The current paper presents two strategies that do not push educators into a defensive role but can instead empower them. Firstly, we show, based on statistical analysis, that ChatGPT use can be recognized from certain keywords, such as ‘delves’ and ‘crucial’. This insight allows educators to detect ChatGPT-assisted work more effectively. Secondly, we illustrate that ChatGPT can be used to assess texts written by students. The latter topic was presented in two interactive workshops for educators and educational specialists. The results of the workshops, where prompts were tested live, indicated that ChatGPT, provided a targeted prompt is used, is good at recognizing errors in texts but not consistent in grading. Ethical and copyright concerns were also raised in the workshops. In conclusion, the methods presented in this paper may help fortify the teaching methods of educators. The computer scripts that we used for live prompting are available and enable educators to give similar workshops.
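The keyword-based recognition idea can be illustrated with a small sketch. The marker words come from the abstract; the `marker_rate` helper, the threshold idea, and the sample texts are assumptions for demonstration, not the paper's statistical analysis:

```python
import re
from collections import Counter

# Marker keywords the paper associates with ChatGPT-generated prose.
MARKERS = {"delves", "crucial"}

def marker_rate(text, markers=MARKERS):
    """Occurrences of marker words per 1000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    hits = sum(counts[m] for m in markers)
    return 1000 * hits / max(len(words), 1)

human = "The essay argues that the policy failed because funding was cut."
llm = ("This essay delves into the crucial question of policy failure, "
       "highlighting the crucial role of funding.")

print(marker_rate(human), marker_rate(llm))
```

A real detector would compare such rates against baseline frequencies from pre-ChatGPT student writing before flagging anything; a single marker word proves nothing on its own.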
Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
Open Access Article
Automated Detection of Persuasive Content in Electronic News
by
Brian Rizqi Paradisiaca Darnoto, Daniel Siahaan and Diana Purwitasari
Informatics 2023, 10(4), 86; https://doi.org/10.3390/informatics10040086 - 21 Nov 2023
Abstract
Persuasive content in online news contains elements that aim to persuade readers and may not necessarily include factual information. Since a news article contains only a few sentences that indicate persuasiveness, it is quite challenging to differentiate between news with and without persuasive content. Recognizing persuasive sentences through a combined text summarization and classification approach is important for understanding persuasive messages effectively. Text summarization identifies arguments and key points, while classification separates persuasive sentences based on the linguistic and semantic features used. Our proposed architecture applies text summarization to filter out sentences without persuasive content and then uses classifier models to detect those with persuasive indications. In this paper, we compare the performance of two text summarization methods, latent semantic analysis (LSA) and TextRank, the latter of which outperformed the former in all trials, and two classifiers, a convolutional neural network (CNN) and a bidirectional long short-term memory (BiLSTM) network. We prepared a dataset (±1700 articles, manually labeled for persuasiveness) consisting of news articles written in the Indonesian language and collected from a nationwide electronic news portal. Comparative studies in our experimental results show that the TextRank–BERT–BiLSTM model achieved the highest accuracy of 95% in detecting persuasive news. The text summarization methods generated detailed and precise summaries of the news articles, and the deep learning models effectively differentiated between persuasive news and real news.
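TextRank, the summarization method that won in the paper's trials, ranks sentences by running a PageRank-style power iteration over a sentence-similarity graph. A minimal sketch of that core idea, using simple word-overlap similarity (the paper's actual feature pipeline, including BERT embeddings, is much richer):

```python
import re
from math import log

def textrank_top(sentences, d=0.85, iters=50):
    """Return the index of the highest-ranked sentence under TextRank.

    Similarity is normalized word overlap; scores come from a
    PageRank-style power iteration with damping factor d.
    """
    token_sets = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    n = len(sentences)

    def sim(i, j):
        overlap = len(token_sets[i] & token_sets[j])
        denom = log(len(token_sets[i]) + 1) + log(len(token_sets[j]) + 1)
        return overlap / denom if denom else 0.0

    w = [[sim(i, j) if i != j else 0.0 for j in range(n)] for i in range(n)]
    out = [sum(row) for row in w]  # out-weight of each sentence node
    scores = [1.0] * n
    for _ in range(iters):
        scores = [(1 - d) + d * sum(w[j][i] * scores[j] / out[j]
                                    for j in range(n) if out[j] > 0)
                  for i in range(n)]
    return max(range(n), key=lambda i: scores[i])

sents = [
    "The new policy improves public health funding.",
    "Public health funding was debated in parliament.",
    "Experts say funding improves health outcomes.",
    "The weather was sunny yesterday.",
]
print(textrank_top(sents))
```

The off-topic weather sentence shares few words with the rest and receives a low score, which is how TextRank surfaces the central sentences that a persuasiveness classifier then inspects.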
Full article
(This article belongs to the Section Machine Learning)
Open Access Article
Why Do People Use Telemedicine Apps in the Post-COVID-19 Era? Expanded TAM with E-Health Literacy and Social Influence
by
Moonkyoung Jang
Informatics 2023, 10(4), 85; https://doi.org/10.3390/informatics10040085 - 06 Nov 2023
Abstract
This study delves into the determinants influencing individuals’ intentions to adopt telemedicine apps during the COVID-19 pandemic. It aims to offer a comprehensive framework for understanding behavioral intentions by leveraging the Technology Acceptance Model (TAM), supplemented with e-health literacy and social influence variables. The study analyzes survey data from 364 adults using partial least squares structural equation modeling (PLS-SEM) to empirically examine the relationships within the model. The results indicate that e-health literacy, attitude, and social influence significantly impacted the intention to use telemedicine apps. Notably, e-health literacy positively influenced both perceived usefulness and perceived ease of use, extending its effect beyond mere usage intention. The study underscores the substantial role of social influence in predicting the intention to use telemedicine apps, challenging its traditional omission from the TAM framework. The findings will help researchers, practitioners, and governments understand how social influence and e-health literacy shape the adoption of telemedicine apps, and how strengthening both can promote their use.
Full article
Open Access Article
Classifying Crowdsourced Citizen Complaints through Data Mining: Accuracy Testing of k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost
by
Evaristus D. Madyatmadja, Corinthias P. M. Sianipar, Cristofer Wijaya and David J. M. Sembiring
Informatics 2023, 10(4), 84; https://doi.org/10.3390/informatics10040084 - 01 Nov 2023
Abstract
Crowdsourcing has gradually become an effective e-government process for gathering citizen complaints about the implementation of various public services. In practice, the collected complaints form a massive dataset, making it difficult for government officers to analyze the big data effectively. It is consequently vital to use data mining algorithms to classify the citizen complaint data for efficient follow-up actions. However, different classification algorithms produce varied classification accuracies. Thus, this study aimed to compare the accuracy of several classification algorithms on crowdsourced citizen complaint data. Taking the case of the LAKSA app in Tangerang City, Indonesia, this study assessed the accuracy of k-Nearest Neighbors, Random Forest, Support Vector Machine, and AdaBoost. The data were taken from crowdsourced citizen complaints submitted to the LAKSA app, including those aggregated from official social media channels, from May 2021 to April 2022. The results showed that SVM with a linear kernel was the most accurate of the assessed algorithms (89.2%), while AdaBoost (base learner: Decision Trees) produced the lowest accuracy. Still, the accuracy of each algorithm varied in parallel with the amount of training data available for the respective classification categories. Overall, the assessments indicated that the algorithms’ accuracies were not significantly different, with an overall variation of 4.3%. The AdaBoost-based classification, in particular, showed a large dependence on the choice of base learner. In terms of method and results, this study contributes to the e-government, data mining, and big data discourses. We recommend that governments continuously conduct supervised training of classification algorithms on their crowdsourced citizen complaints to achieve the highest accuracy possible, paving the way for smart and sustainable governance.
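One of the compared algorithms, k-Nearest Neighbors over bag-of-words text, is simple enough to sketch end to end. This is a toy with invented English complaints and a hand-rolled cosine similarity, not the study's pipeline or data:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(train, text, k=3):
    """Majority label among the k training complaints most similar to `text`."""
    vec = Counter(text.lower().split())
    neighbors = sorted(
        train,
        key=lambda pair: cosine(Counter(pair[0].lower().split()), vec),
        reverse=True,
    )[:k]
    labels = [label for _, label in neighbors]
    return Counter(labels).most_common(1)[0][0]

# Invented labeled complaints standing in for the LAKSA training data.
train = [
    ("streetlight broken on main road", "infrastructure"),
    ("pothole damaged my car on the road", "infrastructure"),
    ("garbage not collected this week", "sanitation"),
    ("trash bins overflowing near market", "sanitation"),
]
print(knn_predict(train, "big pothole on the main road", k=3))
```

The abstract's observation that accuracy tracks the amount of training data per category shows up directly here: categories with few labeled neighbors are easily outvoted.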
Full article
(This article belongs to the Special Issue Feature Papers in Big Data)
Open Access Article
Federated Secure Computing
by
Hendrik Ballhausen and Ludwig Christian Hinske
Informatics 2023, 10(4), 83; https://doi.org/10.3390/informatics10040083 - 31 Oct 2023
Abstract
Privacy-preserving computation (PPC) enables encrypted computation of private data. While advantageous in theory, the complex technology has steep barriers to entry in practice. Here, we derive design goals and principles for a middleware that encapsulates the demanding cryptography server side and provides a simple-to-use interface to client-side application developers. The resulting architecture, “Federated Secure Computing”, offloads computing-intensive tasks to the server and separates concerns of cryptography and business logic. It provides microservices through an OpenAPI 3.0 definition and hosts multiple protocols through self-discovered plugins. It requires only minimal DevSecOps capabilities and is straightforward and secure. Finally, it is small enough to work in the internet of things (IoT) and in propaedeutic settings on consumer hardware. We provide benchmarks for calculations with a secure multiparty computation (SMPC) protocol, for both vertically and horizontally partitioned data. Runtimes are in the range of seconds on both dedicated workstations and IoT devices such as Raspberry Pi or smartphones. A reference implementation is available as free and open source software under the MIT license.
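The textbook building block behind many SMPC protocols of the kind benchmarked here is additive secret sharing, which can be sketched in a few lines. This is a generic illustration of the technique, not code from the Federated Secure Computing reference implementation; the hospital scenario is invented:

```python
import secrets

P = 2**61 - 1  # a large prime modulus for additive secret sharing

def share(value, n_parties):
    """Split `value` into n additive shares; any subset short of all n reveals nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares into the original value (mod P)."""
    return sum(shares) % P

# Three hospitals each secret-share a private patient count with the others.
counts = [120, 85, 240]
all_shares = [share(c, 3) for c in counts]

# Party i locally adds the i-th share of every input...
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# ...and only the combined partial sums disclose the total, never any single input.
print(reconstruct(partial_sums))  # → 445
```

A middleware like the one described can hide exactly this share/exchange/recombine choreography behind a simple client API, which is the separation of concerns the abstract argues for.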
Full article
Topics
Topic in
AI, Algorithms, BDCC, Future Internet, Informatics, Information, Languages, Publications
AI Chatbots: Threat or Opportunity?
Topic Editors: Antony Bryant, Roberto Montemanni, Min Chen, Paolo Bellavista, Kenji Suzuki, Jeanine Treffers-Daller
Deadline: 30 April 2024
Topic in
Brain Sciences, Healthcare, Informatics, IJERPH
Applications of Virtual Reality Technology in Rehabilitation
Topic Editors: Jorge Oliveira, Pedro Gamito
Deadline: 30 June 2024
Topic in
Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2024
Topic in
Computers, Informatics, Information, Logistics, Mathematics, Algorithms
Decision Science Applications and Models (DSAM)
Topic Editors: Daniel Riera Terrén, Angel A. Juan, Majsa Ammuriova, Laura Calvet
Deadline: 31 December 2024
Conferences
Special Issues
Special Issue in
Informatics
New Advances in Semantic Recognition and Analysis
Guest Editors: Daniele Toti, Andrea Pozzi, Enrico Barbierato
Deadline: 31 March 2024
Special Issue in
Informatics
Health Informatics: Feature Review Papers
Guest Editors: Jiang Bian, Yi Guo
Deadline: 31 July 2024
Special Issue in
Informatics
Digital Society: Interdisciplinary Insights and Applications of Wireless Connectivity
Guest Editors: Carolina Del Valle Soto, Ramiro Velázquez
Deadline: 30 September 2024
Special Issue in
Informatics
The Smart Cities Continuum via Machine Learning and Artificial Intelligence
Guest Editors: Augusto Neto, Roger Immich
Deadline: 31 December 2024
Topical Collections
Topical Collection in
Informatics
Promotion of Computational Thinking and Informatics Education in Pre-University Studies
Collection Editor: Francisco José García-Peñalvo
Topical Collection in
Informatics
Uncertainty in Digital Humanities
Collection Editors: Roberto Theron, Eveline Wandl-Vogt, Jennifer Cizik Edmond, Cezary Mazurek