Information, Volume 14, Issue 9 (September 2023) – 48 articles

Cover Story: In this study, we compared four widely used open-source SDWN controllers (ONOS, Ryu, POX, and ODL) based on a multi-criteria scheme. Using Mininet-WiFi, the performance of each controller is evaluated in terms of throughput, latency, jitter, and packet loss. Because each performance factor exhibits its own behavior and trends, and there is no direct correlation among them, it is difficult to conclude which controller is best from a comparison of each metric separately; a comprehensive consideration of all metrics (universality) is needed. Thus, we propose a methodology that identifies the controller with the best overall behavior using a single indicator (GPI). The results reveal that the Ryu and POX controllers are far superior to the others in terms of scalability.
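The cover story's single-indicator idea can be sketched as a weighted sum of min-max-normalized metrics. This is only an illustration of the general approach: the controller figures, the weights, and the exact aggregation rule are assumptions here, not the paper's data or its actual GPI formula.

```python
# Hedged sketch of a GPI-style aggregation: all numbers below are invented.

def normalize(values, higher_is_better):
    """Min-max scale a metric so that 1.0 is always the best controller."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1.0 - s for s in scaled]

def gpi(metrics, weights):
    """metrics: name -> ({controller: value}, higher_is_better)."""
    controllers = list(next(iter(metrics.values()))[0])
    scores = dict.fromkeys(controllers, 0.0)
    for name, (per_ctrl, higher_better) in metrics.items():
        norm = normalize([per_ctrl[c] for c in controllers], higher_better)
        for c, s in zip(controllers, norm):
            scores[c] += weights[name] * s
    return scores

metrics = {
    "throughput_mbps": ({"ONOS": 850, "Ryu": 900, "POX": 870, "ODL": 800}, True),
    "latency_ms":      ({"ONOS": 12,  "Ryu": 8,   "POX": 9,   "ODL": 15},  False),
}
weights = {"throughput_mbps": 0.5, "latency_ms": 0.5}
scores = gpi(metrics, weights)
```

With these toy figures the aggregation ranks Ryu first, mirroring how a single indicator lets disagreeing per-metric rankings be compared at all.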
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 1679 KiB  
Article
SUCCEED: Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development
by Takuya Nakata, Sinan Chen, Sachio Saiki and Masahide Nakamura
Information 2023, 14(9), 518; https://doi.org/10.3390/info14090518 - 21 Sep 2023
Cited by 1 | Viewed by 1121
Abstract
Software upcycling, a form of software reuse, is a concept that efficiently generates novel, innovative, and value-added development projects by utilizing knowledge extracted from past projects. However, how to integrate the materials derived from these projects for upcycling remains uncertain. This study defines a systematic model for upcycling cases and develops the Sharing Upcycling Cases with Context and Evaluation for Efficient Software Development (SUCCEED) system to support the implementation of new upcycling initiatives by effectively sharing cases within the organization. To ascertain the efficacy of upcycling within our proposed model and system, we formulated three research questions and conducted two distinct experiments. Through surveys, we identified motivations and characteristics of shared upcycling-relevant development cases. Development tasks were divided into groups, those that employed the SUCCEED system and those that did not, in order to discern the enhancements brought about by upcycling. As a result of this research, we accomplished a comprehensive structuring of both technical and experiential knowledge beneficial for development, a feat previously unrealizable through conventional software reuse, and successfully realized reuse in a proactive and closed environment through construction of the wisdom of crowds for upcycling cases. Consequently, it becomes possible to systematically perform software upcycling by leveraging knowledge from existing projects for streamlining of software development. Full article
(This article belongs to the Topic Software Engineering and Applications)

18 pages, 3095 KiB  
Article
Machine Translation of Electrical Terminology Constraints
by Zepeng Wang, Yuan Chen and Juwei Zhang
Information 2023, 14(9), 517; https://doi.org/10.3390/info14090517 - 20 Sep 2023
Cited by 1 | Viewed by 1002
Abstract
In practical applications, the accuracy of domain terminology translation is an important criterion for the performance evaluation of domain machine translation models. To address the problem of phrase mismatch and improper translation caused by word-by-word translation of English terminology phrases, this paper constructs a dictionary of terminology phrases in the field of electrical engineering and proposes three schemes to integrate the dictionary knowledge into the translation model. Scheme 1 replaces the terminology phrases of the source language. Scheme 2 uses the residual connection at the encoder end after the terminology phrase is replaced. Scheme 3 uses a segmentation method that combines character segmentation and terminology segmentation for the target language and uses an additional loss module in the training process. The results show that all three schemes are superior to the baseline model in two aspects: BLEU value and correct translation rate of terminology words. In the test set, the highest accuracy of terminology words was 48.3% higher than that of the baseline model. The BLEU value is up to 3.6 points higher than that of the baseline model. These phenomena are also analyzed and discussed in this paper. Full article
(This article belongs to the Section Artificial Intelligence)
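As a rough illustration of Scheme 1 (replacing source-language terminology phrases before translation), the sketch below performs greedy longest-match dictionary replacement. The dictionary entries are invented examples, not the paper's electrical-engineering dictionary, and the real system's handling is presumably more elaborate.

```python
# Invented toy dictionary; longest phrases are replaced first so that
# multi-word terms win over their substrings.
TERM_DICT = {
    "circuit breaker": "断路器",
    "power transformer": "电力变压器",
}

def replace_terms(sentence, term_dict):
    """Replace every known terminology phrase in a source sentence."""
    for phrase in sorted(term_dict, key=len, reverse=True):
        sentence = sentence.replace(phrase, term_dict[phrase])
    return sentence
```

A phrase-level replacement like this is exactly what prevents "circuit breaker" from being translated word by word as two unrelated nouns.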

17 pages, 3708 KiB  
Article
Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
by Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman
Information 2023, 14(9), 516; https://doi.org/10.3390/info14090516 - 19 Sep 2023
Viewed by 1418
Abstract
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead the trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can seriously jeopardize the security and integrity of practical deep learning applications, the existing techniques use additional neural networks to detect the existence of these noises at the input image source. In this paper, we demonstrate an attack strategy that, when activated by rogue means (e.g., malware, trojan), can bypass these existing countermeasures by augmenting the adversarial noise at the AI hardware accelerator stage. We demonstrate the accelerator-level universal adversarial noise attack on several deep learning models using co-simulation of the software kernel of the Conv2D function and the Verilog RTL model of the hardware under the FuseSoC environment. Full article
(This article belongs to the Special Issue Hardware Security and Trust)
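The core property of a universal adversarial perturbation, one fixed, image-agnostic noise tensor added to any input, can be sketched in a few lines. Note the perturbation below is random stand-in noise rather than a trained UAP, and the epsilon budget is an assumed value; the paper's contribution is injecting such noise at the hardware accelerator stage, which is not modeled here.

```python
import numpy as np

# Stand-in for a trained universal perturbation (random noise for illustration).
rng = np.random.default_rng(0)
uap = rng.uniform(-0.03, 0.03, size=(32, 32, 3)).astype(np.float32)

def perturb(image, uap, eps=0.03):
    """Add one fixed perturbation to any image, keeping pixels valid."""
    noise = np.clip(uap, -eps, eps)           # bound the perturbation budget
    return np.clip(image + noise, 0.0, 1.0)   # keep pixels in [0, 1]
```

Because the same `uap` is reused for every image, a rogue component only needs to store and add one small tensor, which is what makes an accelerator-level injection plausible.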

27 pages, 2187 KiB  
Article
A Conceptual Consent Request Framework for Mobile Devices
by Olha Drozd and Sabrina Kirrane
Information 2023, 14(9), 515; https://doi.org/10.3390/info14090515 - 19 Sep 2023
Viewed by 1191
Abstract
The General Data Protection Regulation (GDPR) identifies consent as one of the legal bases for personal data processing and requires that it should be freely given, specific, informed, unambiguous, understandable, and easily revocable. Unfortunately, current technical mechanisms for obtaining consent often do not comply with these requirements. The conceptual consent request framework for mobile devices that is presented in this paper addresses this issue by following the GDPR requirements on consent and offering a unified user interface for mobile apps. The proposed conceptual framework is evaluated via the development of a City Explorer app with four consent request approaches (custom, functionality-based, app-based, and usage-based) integrated into it. The evaluation shows that the functionality-based consent, which was integrated into the City Explorer app, achieved the best evaluation results and the highest average system usability scale (SUS) score. The functionality-based consent also scored the highest number of SUS points among the four consent templates when evaluated separately from the app. Additionally, we discuss the framework’s reusability and its integration into other mobile apps of different contexts. Full article
(This article belongs to the Special Issue Addressing Privacy and Data Protection in New Technological Trends)
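The SUS scores used in the evaluation follow the standard System Usability Scale scoring rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 range. The sample responses below are illustrative, not the study's data.

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert answers, item 1 first."""
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, the most favorable possible answers (5 on odd items, 1 on even items) yield the maximum score of 100, while all-neutral answers yield 50.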

33 pages, 6659 KiB  
Article
Development of a Virtual Reality Escape Room Game for Emotion Elicitation
by Inês Oliveira, Vítor Carvalho, Filomena Soares, Paulo Novais, Eva Oliveira and Lisa Gomes
Information 2023, 14(9), 514; https://doi.org/10.3390/info14090514 - 19 Sep 2023
Viewed by 2140
Abstract
In recent years, the role of emotions in digital games has gained prominence. Studies confirm emotions’ substantial impact on gaming, influencing interactions, effectiveness, efficiency, and satisfaction. Combining gaming dynamics, Virtual Reality (VR) and the immersive Escape Room genre offers a potent avenue through which to evoke emotions and create a captivating player experience. The primary objective of this study is to explore VR game design specifically for the elicitation of emotions, in combination with the Escape Room genre. We also seek to understand how players perceive and respond to emotional stimuli within the game. Our study involved two distinct groups of participants: Nursing and Games. We employed a questionnaire to collect data on emotions experienced by participants, the game elements triggering these emotions, and their overall user experience. This study demonstrates the potential of VR technology and the Escape Room genre as a powerful means of eliciting emotions in players. “Escape VR: The Guilt” serves as a successful example of how immersive VR gaming can evoke emotions and captivate players. Full article

28 pages, 1304 KiB  
Review
Exploring the State of Machine Learning and Deep Learning in Medicine: A Survey of the Italian Research Community
by Alessio Bottrighi and Marzio Pennisi
Information 2023, 14(9), 513; https://doi.org/10.3390/info14090513 - 18 Sep 2023
Viewed by 1945
Abstract
Artificial intelligence (AI) is becoming increasingly important, especially in the medical field. While AI has been used in medicine for some time, its growth in the last decade is remarkable. Specifically, machine learning (ML) and deep learning (DL) techniques in medicine have been increasingly adopted due to the growing abundance of health-related data, the improved suitability of such techniques for managing large datasets, and more computational power. ML and DL methodologies are fostering the development of new “intelligent” tools and expert systems to process data, to automate human–machine interactions, and to deliver advanced predictive systems that are changing every aspect of scientific research, industry, and society. The Italian scientific community was instrumental in advancing this research area. This article aims to conduct a comprehensive investigation of the ML and DL methodologies and applications used in medicine by the Italian research community in the last five years. To this end, we selected all the papers published in the last five years with at least one of the authors affiliated with an Italian institution that present the terms “machine learning” or “deep learning” in the title, abstract, or keywords and reference a medical area. We focused our research on journal papers under the hypothesis that Italian researchers prefer to present novel but well-established research in scientific journals. We then analyzed the selected papers considering different dimensions, including the medical topic, the type of data, the pre-processing methods, the learning methods, and the evaluation methods. As a final outcome, a comprehensive overview of the Italian research landscape is given, highlighting how the community has increasingly worked on a very heterogeneous range of medical problems. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

22 pages, 17315 KiB  
Article
Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting
by Muzi Cui, Hao Jiang and Chaozhuo Li
Information 2023, 14(9), 512; https://doi.org/10.3390/info14090512 - 18 Sep 2023
Viewed by 1286
Abstract
Image inpainting aims to synthesize missing regions in images that are coherent with the existing visual content. Generative adversarial networks have made significant strides in the development of image inpainting. However, existing approaches heavily rely on the surrounding pixels while ignoring that the boundaries might be uninformative or noisy, leading to blurred images. As a complement, global visual features from the remote image contexts depict the overall structure and texture of the vanilla images, contributing to generating pixels that blend seamlessly with the existing visual elements. In this paper, we propose a novel model, PA-DeepFill, to repair high-resolution images. The generator network follows a novel progressive learning paradigm, starting with low-resolution images and gradually improving the resolutions by stacking more layers. A novel attention-based module, the gathered attention block, is further integrated into the generator to learn the importance of different distant visual components adaptively. In addition, we have designed a local discriminator that is better suited to image inpainting tasks: a multi-task guided mask-level local discriminator based on PatchGAN, which can guide the model to distinguish between regions from the original image and regions completed by the model at a finer granularity. This local discriminator can capture more detailed local information, thereby enhancing the model’s discriminative ability and resulting in more realistic and natural inpainted images. Our proposal is extensively evaluated on popular datasets, and the experimental results demonstrate its superiority. Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

14 pages, 259 KiB  
Article
Analysis of the Current Situation of Big Data MOOCs in the Intelligent Era Based on the Perspective of Improving the Mental Health of College Students
by Hongfeng Sang, Liyi Ma and Nan Ma
Information 2023, 14(9), 511; https://doi.org/10.3390/info14090511 - 18 Sep 2023
Viewed by 1457
Abstract
A three-dimensional MOOC analysis framework was developed, focusing on platform design, organizational mechanisms, and course construction. This framework aims to investigate the current situation of big data MOOCs in the intelligent era, particularly from the perspective of improving the mental health of college students; moreover, the framework summarizes the construction experience and areas for improvement. The construction of 525 big data courses on 16 MOOC platforms is compared and analyzed from three aspects: the platform (including platform construction, resource quantity, and resource quality), organizational mechanism (including the course opening unit, teacher team, and learning norms), and course construction (including course objectives, teaching design, course content, teaching organization, implementation, teaching management, and evaluation). Drawing from the successful practices of international big data MOOCs and excellent Chinese big data MOOCs, and considering the requirements of authoritative government documents, such as the no. 8 document (J.G. [2019]), no. 3 document (J.G. [2015]), no. 1 document (J.G. [2022]), as well as the “Educational Information Technology Standard CELTS-22—Online Course Evaluation Standard”, recommendations about the platform, organizational mechanism, and course construction are provided for the future development of big data MOOCs in China. Full article

13 pages, 397 KiB  
Article
Knowledge Graph Based Recommender for Automatic Playlist Continuation
by Aleksandar Ivanovski, Milos Jovanovik, Riste Stojanov and Dimitar Trajanov
Information 2023, 14(9), 510; https://doi.org/10.3390/info14090510 - 16 Sep 2023
Viewed by 1571
Abstract
In this work, we present a state-of-the-art solution for automatic playlist continuation through a knowledge graph-based recommender system. By integrating representational learning with graph neural networks and fusing multiple data streams, the system effectively models user behavior, leading to accurate and personalized recommendations. We provide a systematic and thorough comparison of our results with existing solutions and approaches, demonstrating the remarkable potential of graph-based representation in improving recommender systems. Our experiments reveal substantial enhancements over existing approaches, further validating the efficacy of this novel approach. Additionally, through comprehensive evaluation, we highlight the robustness of our solution in handling dynamic user interactions and streaming data scenarios, showcasing its practical viability and promising prospects for next-generation recommender systems. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)

19 pages, 1831 KiB  
Article
An Empirical Study of Deep Learning-Based SS7 Attack Detection
by Yuejun Guo, Orhan Ermis, Qiang Tang, Hoang Trang and Alexandre De Oliveira
Information 2023, 14(9), 509; https://doi.org/10.3390/info14090509 - 16 Sep 2023
Viewed by 1780
Abstract
Signalling protocols are responsible for fundamental tasks such as initiating and terminating communication and identifying the state of the communication in telecommunication core networks. Signalling System No. 7 (SS7), Diameter, and GPRS Tunneling Protocol (GTP) are the main protocols used in 2G to 4G, while 5G uses standard Internet protocols for its signalling. Despite their distinct features, and especially their security guarantees, they are most vulnerable to attacks in roaming scenarios: the attacks that target the location update function call for subscribers located in a visiting network. The literature tells us that rule-based detection mechanisms are ineffective against such attacks, while the hope lies in deep learning (DL)-based solutions. In this paper, we provide a large-scale empirical study of state-of-the-art DL models, including eight supervised and five semi-supervised, to detect attacks in the roaming scenario. Our experiments use a real-world dataset and a simulated dataset for SS7, and they can be straightforwardly carried out for other signalling protocols upon the availability of corresponding datasets. The results show that semi-supervised DL models generally outperform supervised ones since they leverage both labeled and unlabeled data for training. Nevertheless, the ensemble-based supervised model NODE outperforms others in its category and some in the semi-supervised category. Among all, the semi-supervised model PReNet performs the best regarding the Recall and F1 metrics when all unlabeled data are used for training, and it is also the most stable one. Our experiments also show that the performance of different semi-supervised models can differ considerably depending on the amount of unlabeled data used in training. Full article
(This article belongs to the Section Information Security and Privacy)

19 pages, 2438 KiB  
Article
A Deep Learning Approach for Predictive Healthcare Process Monitoring
by Ulises Manuel Ramirez-Alcocer, Edgar Tello-Leal, Gerardo Romero and Bárbara A. Macías-Hernández
Information 2023, 14(9), 508; https://doi.org/10.3390/info14090508 - 16 Sep 2023
Cited by 4 | Viewed by 1328
Abstract
In this paper, we propose a deep learning-based approach to predict the next event in hospital organizational process models following the guidance of predictive process mining. This method provides value for the planning and allocating of resources since each trace linked to a case shows the consecutive execution of events in a healthcare process. The predictive model is based on a long short-term memory (LSTM) neural network that achieves high accuracy in the training and testing stages. In addition, a framework to implement the LSTM neural network is proposed, comprising stages from the preprocessing of the raw data to selecting the best LSTM model. The effectiveness of the prediction method is evaluated through four real-life event logs that contain historical information on the execution of the processes of patient transfer orders between hospitals, sepsis care cases, billing of medical services, and patient care management. In the test stage, the LSTM model reached values of 0.98, 0.91, 0.85, and 0.81 in the accuracy metric, and in the evaluation of the prediction of the next event using the 10-fold cross-validation technique, values of 0.94, 0.88, 0.84, and 0.81 were obtained for the four previously mentioned event logs. In addition, the performance of the LSTM prediction model was evaluated with the precision, recall, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC) metrics, obtaining high scores very close to 1. The experimental results suggest that the proposed method achieves acceptable measures in predicting the next event regardless of whether an input event or a set of input events is used. Full article
(This article belongs to the Special Issue Information Systems in Healthcare)
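To make the next-event prediction task concrete, the following is a minimal non-neural baseline, a first-order Markov model over event traces, rather than the paper's LSTM; the event names are invented stand-ins for hospital process activities.

```python
from collections import Counter, defaultdict

def train(traces):
    """Count observed event-to-event transitions across all traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, event):
    """Predict the most frequent successor of an event, or None if unseen."""
    if not counts[event]:
        return None
    return counts[event].most_common(1)[0][0]

traces = [
    ["register", "triage", "treat", "discharge"],
    ["register", "triage", "admit", "treat", "discharge"],
    ["register", "triage", "treat", "discharge"],
]
model = train(traces)
```

An LSTM improves on this baseline by conditioning on the whole prefix of a case rather than only the last event, which matters when the same activity can be followed by different steps depending on history.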

17 pages, 5105 KiB  
Article
Investigation of a Hybrid LSTM + 1DCNN Approach to Predict In-Cylinder Pressure of Internal Combustion Engines
by Federico Ricci, Luca Petrucci, Francesco Mariani and Carlo Nazareno Grimaldi
Information 2023, 14(9), 507; https://doi.org/10.3390/info14090507 - 15 Sep 2023
Viewed by 1080
Abstract
The control of internal combustion engines is becoming increasingly challenging due to customers’ requirements for growing performance and ever-stringent emission regulations. Therefore, significant computational efforts are required to manage the large amount of data coming from the field for engine optimization, leading to increased operating times and costs. Machine-learning techniques are being increasingly used in the automotive field as virtual sensors, fault detection systems, and performance-optimization applications for their real-time and low-cost implementation. Among them, the combination of long short-term memory (LSTM) together with one-dimensional convolutional neural networks (1DCNN), i.e., LSTM + 1DCNN, has proved to be a promising tool for signal analysis. The architecture exploits the CNN characteristic to combine feature classification and extraction, creating a single adaptive learning body with the ability of LSTM to follow the sequential nature of sensor measurements over time. The current research focus is on evaluating the possibility of integrating virtual sensors into the on-board control system. Specifically, the primary objective is to assess and harness the potential of advanced machine-learning technologies to replace physical sensors. In realizing this goal, the present work establishes the first step by evaluating the forecasting performance of an LSTM + 1DCNN architecture. Experimental data coming from a three-cylinder spark-ignition engine under different operating conditions are used to predict the engine’s in-cylinder pressure traces. Since using in-cylinder pressure transducers in road cars is not economically viable, adopting advanced machine-learning technologies becomes crucial to avoid structural modifications while preserving engine integrity. The results show that LSTM + 1DCNN is particularly suited for the prediction of signals characterized by a higher variability. In particular, it consistently outperforms other architectures utilized for comparative purposes, achieving average error percentages below 2%. As cycle-to-cycle variability increases, LSTM + 1DCNN reaches average error percentages below 1.5%, demonstrating the architecture’s potential for replacing physical sensors. Full article
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)

24 pages, 2318 KiB  
Article
Evaluation of 60 GHz Wireless Connectivity for an Automated Warehouse Suitable for Industry 4.0
by Rahul Gulia, Abhishek Vashist, Amlan Ganguly, Clark Hochgraf and Michael E. Kuhl
Information 2023, 14(9), 506; https://doi.org/10.3390/info14090506 - 15 Sep 2023
Viewed by 1038
Abstract
The fourth industrial revolution focuses on the digitization and automation of supply chains resulting in a significant transformation of methods for goods production and delivery systems. To enable this, automated warehousing is demanding unprecedented vehicle-to-vehicle and vehicle-to-infrastructure communication rates and reliability. The 60 GHz frequency band can deliver multi-gigabit/second data rates to satisfy the increasing demands of network connectivity by smart warehouses. In this paper, we aim to investigate the network connectivity in the 60 GHz millimeter-wave band inside an automated warehouse. A key hindrance to robust and high-speed network connectivity, especially at mmWave frequencies, stems from numerous non-line-of-sight (nLOS) paths in the transmission medium due to various interacting objects such as metal shelves and storage boxes. The continual change in the warehouse storage configuration significantly affects the multipath reflected components and shadow fading effects, thus adding complexity to establishing stable, yet fast, network coverage. In this study, network connectivity in an automated warehouse is analyzed at 60 GHz using Network Simulator-3 (NS-3) channel simulations. We examine a simple warehouse model with several metallic shelves and storage materials of standard proportions. Our investigation indicates that the indoor warehouse network performance relies on the line-of-sight and nLOS propagation paths, the existence of reflective materials, and the autonomous material handling agents present around the access point (AP). In addition, we discuss the network performance under varied conditions including the AP height and storage materials on the warehouse shelves. We also analyze the network performance in each aisle of the warehouse, in addition to its SINR heatmap, to understand the 60 GHz network connectivity. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
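A first-order sanity check for such 60 GHz link budgets is the free-space path loss formula, which already shows why coverage at this band is challenging before any shelf reflections or shadowing (which the paper simulates in NS-3) are considered. This is a line-of-sight-only approximation, not the paper's channel model.

```python
import math

def fspl_db(distance_m, freq_ghz=60.0):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    d_km = distance_m / 1000.0
    return 20 * math.log10(d_km) + 20 * math.log10(freq_ghz) + 92.45
```

At 60 GHz the loss is already about 68 dB at 1 m, roughly 36 dB worse than at 900 MHz over the same distance, which is why highly directional antennas and careful AP placement matter in a warehouse.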

14 pages, 904 KiB  
Article
Enhancing Personalized Educational Content Recommendation through Cosine Similarity-Based Knowledge Graphs and Contextual Signals
by Christos Troussas, Akrivi Krouska, Panagiota Tselenti, Dimitrios K. Kardaras and Stavroula Barbounaki
Information 2023, 14(9), 505; https://doi.org/10.3390/info14090505 - 14 Sep 2023
Viewed by 1472
Abstract
The extensive pool of content within educational software platforms can often overwhelm learners, leaving them uncertain about what materials to engage with. In this context, recommender systems offer significant support by customizing the content delivered to learners, alleviating the confusion and enhancing the learning experience. To this end, this paper presents a novel approach for recommending adequate educational content to learners via the use of knowledge graphs. In our approach, the knowledge graph encompasses learners, educational entities, and relationships among them, creating an interconnected framework that drives personalized e-learning content recommendations. Moreover, the presented knowledge graph has been enriched with contextual signals referring to various learners’ characteristics, such as prior knowledge level, learning style, and current learning goals. To refine the recommendation process, the cosine similarity technique was employed to quantify the likeness between a learner’s preferences and the attributes of educational entities within the knowledge graph. The above methodology was incorporated in an intelligent tutoring system for learning the programming language Java to recommend content to learners. The software was evaluated with highly promising results. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
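The core matching step described in the abstract above — cosine similarity between a learner's profile vector and each educational entity's attribute vector — can be sketched as follows. The feature dimensions and item names are hypothetical illustrations, not taken from the paper:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_content(learner_vec, content_vecs):
    """Rank content items by similarity to the learner's profile, best first."""
    scores = {name: cosine_similarity(learner_vec, vec)
              for name, vec in content_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical contextual signals: [prior knowledge, visual-style preference, goal match]
learner = [0.9, 0.2, 0.8]
items = {
    "java_loops_video":  [0.8, 0.9, 0.1],
    "java_oop_tutorial": [0.9, 0.1, 0.9],
}
ranking = rank_content(learner, items)  # java_oop_tutorial matches this learner best
```

In a knowledge-graph setting, the attribute vectors would be derived from the entities and relationships in the graph rather than hand-coded as here.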

35 pages, 9764 KiB  
Article
Using ChatGPT and Persuasive Technology for Personalized Recommendation Messages in Hotel Upselling
by Manolis Remountakis, Konstantinos Kotis, Babis Kourtzis and George E. Tsekouras
Information 2023, 14(9), 504; https://doi.org/10.3390/info14090504 - 13 Sep 2023
Cited by 4 | Viewed by 3406
Abstract
Recommender systems have become indispensable tools in the hotel hospitality industry, enabling personalized and tailored experiences for guests. Recent advancements in large language models (LLMs), such as ChatGPT, and persuasive technologies have opened new avenues for enhancing the effectiveness of those systems. This paper explores the potential of integrating ChatGPT and persuasive technologies for automating and improving hotel hospitality recommender systems. First, we delve into the capabilities of ChatGPT, which can understand and generate human-like text, enabling more accurate and context-aware recommendations. We discuss the integration of ChatGPT into recommender systems, highlighting the ability to analyze user preferences, extract valuable insights from online reviews, and generate personalized recommendations based on guest profiles. Second, we investigate the role of persuasive technology in influencing user behavior and enhancing the persuasive impact of hotel recommendations. By incorporating persuasive techniques, such as social proof, scarcity, and personalization, recommender systems can effectively influence user decision making and encourage desired actions, such as booking a specific hotel or upgrading a room. To investigate the efficacy of ChatGPT and persuasive technologies, we present pilot experiments with a case study involving a hotel recommender system. Our in-house commercial hotel marketing platform, eXclusivi, was extended with a new software module working with ChatGPT prompts and persuasive ads created for its recommendations. In particular, we developed an intelligent advertisement (ad) copy generation tool for the hotel marketing platform. The proposed approach allows the hotel team to target all guests in their own language, leveraging the integration with the hotel’s reservation system. Overall, this paper contributes to the field of hotel hospitality by exploring the synergistic relationship between ChatGPT and persuasive technology in recommender systems, ultimately influencing guest satisfaction and hotel revenue. Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)

30 pages, 2345 KiB  
Systematic Review
Designing a Chatbot for Contemporary Education: A Systematic Literature Review
by Dimitrios Ramandanis and Stelios Xinogalos
Information 2023, 14(9), 503; https://doi.org/10.3390/info14090503 - 13 Sep 2023
Cited by 1 | Viewed by 2648
Abstract
A chatbot is a technological tool that can simulate a discussion between a human and a program application. This technology has developed rapidly over recent years, and its usage is increasing in many sectors, especially in education. For this purpose, a systematic literature review was conducted using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to analyze the development and evolution of this technology in the educational sector during the last 5 years. More precisely, the development methods, practices, and guidelines for building a conversational tutor are examined. The results of this study summarize the gathered knowledge to provide useful information to educators who would like to develop a conversational assistant for their course and to developers who would like to build chatbot systems in the educational domain. Full article
(This article belongs to the Section Information Applications)

21 pages, 1433 KiB  
Article
PDD-ET: Parkinson’s Disease Detection Using ML Ensemble Techniques and Customized Big Dataset
by Kalyan Chatterjee, Ramagiri Praveen Kumar, Anjan Bandyopadhyay, Sujata Swain, Saurav Mallik, Aimin Li and Kanad Ray
Information 2023, 14(9), 502; https://doi.org/10.3390/info14090502 - 13 Sep 2023
Cited by 2 | Viewed by 2137
Abstract
Parkinson’s disease (PD) is a neurological disorder affecting the nerve cells. PD gives rise to various neurological conditions, including gradual reduction in movement speed, tremors, limb stiffness, and alterations in walking patterns. Identifying Parkinson’s disease in its initial phases is crucial to preserving the well-being of those afflicted. However, accurately identifying PD in its early phases is intricate due to the aging population. Therefore, in this paper, we harnessed machine learning-based ensemble methodologies and focused on the premotor stage of PD to create a precise and reliable early-stage PD detection model named PDD-ET. We compiled a tailored, extensive dataset encompassing patient mobility, medication habits, prior medical history, rigidity, gender, and age group. The PDD-ET model amalgamates the outcomes of various ML techniques, resulting in an impressive 97.52% accuracy in early-stage PD detection. Furthermore, the PDD-ET model effectively distinguishes between multiple stages of PD and accurately categorizes the severity levels of patients affected by PD. The evaluation findings demonstrate that the PDD-ET model outperforms the SVR, CNN, Stacked LSTM, LSTM, GRU, AlexNet, [Decision Tree, RF, and SVR], Deep Neural Network, HOG, Quantum ReLU Activator, Improved KNN, Adaptive Boosting, RF, and Deep Learning Model techniques by approximate margins of 37%, 30%, 20%, 27%, 25%, 18%, 19%, 27%, 25%, 23%, 45%, 40%, 42%, and 16%, respectively. Full article
(This article belongs to the Special Issue Trends in Electronics and Health Informatics)

19 pages, 3807 KiB  
Article
BGP Dataset-Based Malicious User Activity Detection Using Machine Learning
by Hansol Park, Kookjin Kim, Dongil Shin and Dongkyoo Shin
Information 2023, 14(9), 501; https://doi.org/10.3390/info14090501 - 13 Sep 2023
Cited by 1 | Viewed by 1439
Abstract
Recent advances in the Internet and digital technology have brought a wide variety of activities into cyberspace, but they have also brought a surge in cyberattacks, making it more important than ever to detect and prevent them. In this study, a method is proposed to detect anomalies in cyberspace by consolidating BGP (Border Gateway Protocol) data into numerical data that can be trained by machine learning (ML) through a tokenizer. BGP data comprise a mix of numeric and textual data, making it challenging for ML models to learn. To convert the data into a numerical format, a tokenizer, a preprocessing technique from Natural Language Processing (NLP), was employed. This process goes beyond merely replacing letters with numbers; its objective is to preserve the patterns and characteristics of the data. The Synthetic Minority Over-sampling Technique (SMOTE) was subsequently applied to address the issue of imbalanced data. Anomaly detection experiments were conducted using various ML algorithms, such as One-Class Support Vector Machine (One-SVM), Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM), Random Forest (RF), and Autoencoder (AE), all of which demonstrated excellent detection performance. The AE model performed best, with an F1-score of 0.99. In terms of the Area Under the Receiver Operating Characteristic (AUROC) curve, all ML models achieved good performance, averaging over 90%. This research is expected to contribute to improved cybersecurity, as it enables the detection and monitoring of cyber anomalies from malicious users through BGP data. Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
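The two preprocessing steps described in the abstract above — a tokenizer that maps mixed numeric/textual BGP fields to integer ids, and SMOTE-style interpolation to rebalance the minority class — can be sketched minimally as follows. The record format and field names are toy assumptions, not the paper's actual BGP schema or tokenizer:

```python
import random

def build_vocab(records):
    """Assign an integer id to every distinct whitespace token (0 is reserved for padding)."""
    vocab = {}
    for rec in records:
        for tok in rec.split():
            vocab.setdefault(tok, len(vocab) + 1)
    return vocab

def encode(record, vocab, length):
    """Map a record to a fixed-width vector of token ids, padding/truncating to `length`."""
    ids = [vocab.get(tok, 0) for tok in record.split()]
    return (ids + [0] * length)[:length]

def smote_like(minority, n_new, rng):
    """SMOTE-style oversampling: interpolate between random pairs of minority samples."""
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)
        lam = rng.random()
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Toy records loosely shaped like BGP update fields (not the paper's real format).
records = ["UPDATE AS65001 prefix 10.0.0.0/8", "WITHDRAW AS65002 prefix 10.1.0.0/16"]
vocab = build_vocab(records)
encoded = [encode(r, vocab, length=5) for r in records]
```

Production code would use a library tokenizer and the imbalanced-learn `SMOTE` implementation; the point here is only the shape of the pipeline: text to ids, then minority-class interpolation before training.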

18 pages, 741 KiB  
Article
Time-Series Neural Network: A High-Accuracy Time-Series Forecasting Method Based on Kernel Filter and Time Attention
by Lexin Zhang, Ruihan Wang, Zhuoyuan Li, Jiaxun Li, Yichen Ge, Shiyun Wa, Sirui Huang and Chunli Lv
Information 2023, 14(9), 500; https://doi.org/10.3390/info14090500 - 13 Sep 2023
Cited by 6 | Viewed by 8522
Abstract
This research introduces a novel high-accuracy time-series forecasting method, namely the Time Neural Network (TNN), which is based on a kernel filter and time attention mechanism. Taking into account the complex characteristics of time-series data, such as non-linearity, high dimensionality, and long-term dependence, the TNN model is designed and implemented. The key innovations of the TNN model lie in the incorporation of the time attention mechanism and kernel filter, allowing the model to allocate different weights to features at each time point, and extract high-level features from the time-series data, thereby improving the model’s predictive accuracy. Additionally, an adaptive weight generator is integrated into the model, enabling the model to automatically adjust weights based on input features. Mainstream time-series forecasting models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTM) are employed as baseline models and comprehensive comparative experiments are conducted. The results indicate that the TNN model significantly outperforms the baseline models in both long-term and short-term prediction tasks. Specifically, the RMSE, MAE, and R2 reach 0.05, 0.23, and 0.95, respectively. Remarkably, even for complex time-series data that contain a large amount of noise, the TNN model still maintains a high prediction accuracy. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
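The time-attention idea in the abstract above — allocating a data-dependent weight to each time step before pooling — can be illustrated with a bare softmax weighting. This is a generic sketch, not the TNN's actual architecture, and the score vector would normally be produced by a learned layer rather than supplied by hand:

```python
from math import exp

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def time_attention_pool(series, scores):
    """Weight each time step by its attention score and sum into one feature vector.

    series: list of T feature vectors; scores: T relevance scores (learned in practice).
    """
    weights = softmax(scores)
    dim = len(series[0])
    pooled = [sum(w * step[d] for w, step in zip(weights, series))
              for d in range(dim)]
    return weights, pooled

# Three time steps of 2-dimensional features; the last step gets the largest score,
# so it dominates the pooled representation.
series = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
weights, pooled = time_attention_pool(series, scores=[0.1, 0.1, 2.0])
```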

26 pages, 1592 KiB  
Article
FinChain-BERT: A High-Accuracy Automatic Fraud Detection Model Based on NLP Methods for Financial Scenarios
by Xinze Yang, Chunkai Zhang, Yizhi Sun, Kairui Pang, Luru Jing, Shiyun Wa and Chunli Lv
Information 2023, 14(9), 499; https://doi.org/10.3390/info14090499 - 12 Sep 2023
Cited by 1 | Viewed by 3024
Abstract
This research primarily explores the application of Natural Language Processing (NLP) technology in precision financial fraud detection, with a particular focus on the implementation and optimization of the FinChain-BERT model. Firstly, the FinChain-BERT model has been successfully employed for financial fraud detection tasks, improving the capability of handling complex financial text information through deep learning techniques. Secondly, novel attempts have been made in the selection of loss functions, with a comparison conducted between negative log-likelihood function and Keywords Loss Function. The results indicated that the Keywords Loss Function outperforms the negative log-likelihood function when applied to the FinChain-BERT model. Experimental results validated the efficacy of the FinChain-BERT model and its optimization measures. Whether in the selection of loss functions or the application of lightweight technology, the FinChain-BERT model demonstrated superior performance. The utilization of Keywords Loss Function resulted in a model achieving 0.97 in terms of accuracy, recall, and precision. Simultaneously, the model size was successfully reduced to 43 MB through the application of integer distillation technology, which holds significant importance for environments with limited computational resources. In conclusion, this research makes a crucial contribution to the application of NLP in financial fraud detection and provides a useful reference for future studies. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)

15 pages, 3160 KiB  
Article
A Novel Gamification Application for High School Student Examination and Assessment to Assist Student Engagement and to Stimulate Interest
by Anna Maria Gianni and Nikolaos Antoniadis
Information 2023, 14(9), 498; https://doi.org/10.3390/info14090498 - 10 Sep 2023
Viewed by 1467
Abstract
Formal education in high school focuses primarily on knowledge acquisition via traditional classroom teaching. Younger generations of students tend to lose interest and to disengage from the process. Gamification, the use of gaming elements in the training process to stimulate interest, has been used lately to battle this phenomenon. The use of an interactive environment and the employment of tools familiar to today’s students aim to bring the student closer to the learning process. Even though there have been several attempts to integrate gaming elements in the teaching process, few applications in the student assessment procedure have been reported so far. In this article, a new approach to student assessment is implemented using a gamified quiz as opposed to standard exam formats, where students are asked to answer questions on the material already taught, using various gaming elements (leaderboards, rewards at different levels, etc.). The results show that students are much more interested in this interactive process and would like to see this kind of performance assessment more often in their everyday activity in school. The participants are also motivated to learn more about the subject of the course and are generally satisfied with this novel approach compared to standard forms of exams. Full article

15 pages, 393 KiB  
Review
An Analytical Review of the Source Code Models for Exploit Analysis
by Elena Fedorchenko, Evgenia Novikova, Andrey Fedorchenko and Sergei Verevkin
Information 2023, 14(9), 497; https://doi.org/10.3390/info14090497 - 08 Sep 2023
Cited by 1 | Viewed by 1392
Abstract
Currently, enhancing the efficiency of vulnerability detection and assessment remains relevant. We investigate a new approach for the detection of vulnerabilities that can be used in cyber attacks and assess their severity for further effective responses based on an analysis of exploit source codes and real-time detection of features of their implementation. The key element of this approach is an exploit source code model. In this paper, to specify the model, we systematically analyze existing source code models, approaches to source code analysis in general, and exploits in particular in order to examine their advantages, applications, and challenges. Finally, we provide an initial specification of the proposed source code model. Full article
(This article belongs to the Section Review)

22 pages, 612 KiB  
Article
A Study on Influential Features for Predicting Best Answers in Community Question-Answering Forums
by Valeria Zoratto, Daniela Godoy and Gabriela N. Aranda
Information 2023, 14(9), 496; https://doi.org/10.3390/info14090496 - 07 Sep 2023
Viewed by 902
Abstract
The knowledge provided by user communities in question-answering (QA) forums is a highly valuable source of information for satisfying user information needs. However, finding the best answer for a posted question can be challenging. User-generated content in forums can be of unequal quality given the free nature of natural language and the varied levels of user expertise. Answers to a question posted in a forum are compiled in a discussion thread, which also concentrates subsequent activity such as comments and votes. There are usually multiple reasons why an answer successfully fulfills a certain information need and gets accepted as the best answer among a (possibly) high number of answers. In this work, we study the influence that different aspects of answers have on the prediction of the best answers in a QA forum. We collected the discussion threads of a real-world forum concerning computer programming, and we evaluated different features for representing the answers and the context in which they appear in a thread. Multiple classification models were used to compare the performance of the different features, finding that readability is one of the most important factors for detecting the best answers. The goal of this study is to shed some light on the reasons why answers are more likely to receive more votes and be selected as the best answer for a posted question. Such knowledge enables users to enhance their answers, which leads, in turn, to an improvement in the overall quality of the content produced on the platform. Full article

25 pages, 1910 KiB  
Review
A Literature Survey on Word Sense Disambiguation for the Hindi Language
by Vinto Gujjar, Neeru Mago, Raj Kumari, Shrikant Patel, Nalini Chintalapudi and Gopi Battineni
Information 2023, 14(9), 495; https://doi.org/10.3390/info14090495 - 07 Sep 2023
Cited by 3 | Viewed by 1530
Abstract
Word sense disambiguation (WSD) is a process used to determine the most appropriate meaning of a word in a given contextual framework, particularly when the word is ambiguous. While WSD has been extensively studied for English, it remains a challenging problem for resource-scarce languages such as Hindi. Therefore, it is crucial to address ambiguity in Hindi to effectively and efficiently utilize it on the web for various applications such as machine translation, information retrieval, etc. The rich linguistic structure of Hindi, characterized by complex morphological variations and syntactic nuances, presents unique challenges in accurately determining the intended sense of a word within a given context. This review paper presents an overview of different approaches employed to resolve the ambiguity of Hindi words, including supervised, unsupervised, and knowledge-based methods. Additionally, the paper discusses applications, identifies open problems, presents conclusions, and suggests future research directions. Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)

12 pages, 782 KiB  
Article
Extreme Learning Machine-Enabled Coding Unit Partitioning Algorithm for Versatile Video Coding
by Xiantao Jiang, Mo Xiang, Jiayuan Jin and Tian Song
Information 2023, 14(9), 494; https://doi.org/10.3390/info14090494 - 07 Sep 2023
Viewed by 797
Abstract
The versatile video coding (VVC) standard offers improved coding efficiency compared to the high efficiency video coding (HEVC) standard in multimedia signal coding. However, this increased efficiency comes at the cost of increased coding complexity. This work proposes an efficient coding unit partitioning algorithm based on an extreme learning machine (ELM), which can reduce the coding complexity while ensuring coding efficiency. Firstly, the coding unit size decision is modeled as a classification problem. Secondly, an ELM classifier is trained to predict the coding unit size. In the experiment, the proposed approach is verified based on the VVC reference model. The results show that the proposed method can reduce coding complexity significantly, and good image quality can be obtained. Full article
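The ELM idea behind the classifier in the abstract above — a fixed random hidden layer followed by closed-form (pseudoinverse) output weights — can be sketched as follows. The block features and labels are made-up stand-ins for the real coding-unit statistics, which the abstract does not specify:

```python
import numpy as np

class ELMClassifier:
    """Extreme learning machine: random frozen hidden layer, least-squares output weights."""

    def __init__(self, n_hidden=20, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden-layer weights are drawn once and never trained.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random nonlinear projection
        self.beta = np.linalg.pinv(H) @ y  # closed-form least-squares solve
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)

# Hypothetical CU features, e.g. [texture variance, edge strength];
# label 1 = "split the coding unit", 0 = "do not split".
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
              [0.90, 0.80], [0.80, 0.95], [0.85, 0.70]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = ELMClassifier(n_hidden=16, seed=1).fit(X, y)
```

Because the only training step is a single matrix solve, an ELM trades a little accuracy for a very cheap decision, which is the point of using it to short-circuit the exhaustive CU partition search.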

12 pages, 818 KiB  
Article
Availability of Physical Activity Tracking Data from Wearable Devices for Glaucoma Patients
by Sonali B. Bhanvadia, Leo Meller, Kian Madjedi, Robert N. Weinreb and Sally L. Baxter
Information 2023, 14(9), 493; https://doi.org/10.3390/info14090493 - 07 Sep 2023
Viewed by 1087
Abstract
Physical activity has been found to potentially modulate glaucoma risk, but the evidence remains inconclusive. The increasing use of wearable physical activity trackers may provide longitudinal and granular data suitable to address this issue, but little is known regarding the characteristics and availability of these data sources. We performed a scoping review and query of data sources on the availability of wearable physical activity data for glaucoma patients. Literature databases (PubMed and MEDLINE) were reviewed with search terms consisting of those related to physical activity trackers and those related to glaucoma, and we evaluated results at the intersection of these two groups. Biomedical databases were also reviewed, for which we completed database queries. We identified eight data sources containing physical activity tracking data for glaucoma, with two being large national databases (UK BioBank and All of Us) and six from individual journal articles providing participant-level information. The number of glaucoma patients with physical activity tracking data available, types of glaucoma-related data, fitness devices utilized, and diversity of participants varied across all sources. Overall, there were limited analyses of these data, suggesting the need for additional research to further investigate how physical activity may alter glaucoma risk. Full article

26 pages, 2354 KiB  
Article
Effects of Generative Chatbots in Higher Education
by Galina Ilieva, Tania Yankova, Stanislava Klisarova-Belcheva, Angel Dimitrov, Marin Bratkov and Delian Angelov
Information 2023, 14(9), 492; https://doi.org/10.3390/info14090492 - 07 Sep 2023
Cited by 5 | Viewed by 8129
Abstract
Learning technologies often do not meet the university requirements for learner engagement via interactivity and real-time feedback. In addition to the challenge of providing personalized learning experiences for students, these technologies can increase the workload of instructors due to the maintenance and updates required to keep the courses up-to-date. Intelligent chatbots based on generative artificial intelligence (AI) technology can help overcome these disadvantages by transforming pedagogical activities and guiding both students and instructors interactively. In this study, we explore and compare the main characteristics of existing educational chatbots. Then, we propose a new theoretical framework for blended learning with intelligent chatbots integration enabling students to interact online and instructors to create and manage their courses using generative AI tools. The advantages of the proposed framework are as follows: (1) it provides a comprehensive understanding of the transformative potential of AI chatbots in education and facilitates their effective implementation; (2) it offers a holistic methodology to enhance the overall educational experience; and (3) it unifies the applications of intelligent chatbots in teaching–learning activities within universities. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)

19 pages, 1366 KiB  
Article
Fortified-Grid: Fortifying Smart Grids through the Integration of the Trusted Platform Module in Internet of Things Devices
by Giriraj Sharma, Amit M. Joshi and Saraju P. Mohanty
Information 2023, 14(9), 491; https://doi.org/10.3390/info14090491 - 06 Sep 2023
Viewed by 1645
Abstract
This paper presents a hardware-assisted security primitive that integrates the Trusted Platform Module (TPM) into IoT devices for authentication in smart grids. Data and device security plays a pivotal role in smart grids since they are vulnerable to various attacks that could risk grid failure. The proposed Fortified-Grid security primitive provides an innovative solution, leveraging the TPM for attestation coupled with standard X.509 certificates. This methodology serves a dual purpose, ensuring the authenticity of IoT devices and upholding software integrity, an indispensable foundation for any resilient smart grid security system. TPM is a hardware security module that can generate keys and store them with encryption so they cannot be compromised. Formal security verification has been performed using the real-or-random (ROR) oracle model and the widely accepted AVISPA simulation tool, while informal security verification uses the Dolev–Yao (DY) and Canetti–Krawczyk (CK) adversary models. Fortified-Grid helps to validate the attested state of IoT devices with a minimal network overhead of 1984 bits. Full article
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical System)

12 pages, 800 KiB  
Article
Effects of Contractual Governance on IT Project Performance under the Mediating Role of Project Management Risk: An Emerging Market Context
by Ayesha Saddiqa, Muhammad Usman Shehzad and Muhammad Mohiuddin
Information 2023, 14(9), 490; https://doi.org/10.3390/info14090490 - 05 Sep 2023
Viewed by 1171
Abstract
In this study, we explore the impact of contractual governance (CG) on project performance (PP) under the mediation of project management risk (PMR). Contractual governance favorably influences IT project performance in an emerging market context where the IT sector is growing. The principal–agent theory is used to build a research model that relates project governance and IT project risk management. Data were collected from 295 IT professionals, with a response rate of 73.75%. Smart PLS was employed to test the proposed relationships. The findings indicate a strong causal relationship between CG, PP, and PMR. Fundamental elements (FE), change elements (CE), and governance elements (GE) have a significant positive relationship with project management risk (PMR), and PMR positively affects PP. Additionally, PMR mediates the relationship of FE, CE, and GE with PP. Overall, the results of the study provide pragmatic insights for IT industry practitioners and experts, since unscheduled risk may bring enormous harm to the IT industry. Consequently, effective and well-structured governance, applied strategically, tends to improve project performance by monitoring and managing both project risk and quality. In addition, the study empirically supports the significant impacts of the project governance dimensions, i.e., fundamental elements, change elements, and governance elements, on project management risk and project performance. It also guides researchers and adds value to the project performance literature by filling this gap. Full article
(This article belongs to the Section Information and Communications Technology)

19 pages, 4732 KiB  
Article
Interpreting Disentangled Representations of Person-Specific Convolutional Variational Autoencoders of Spatially Preserving EEG Topographic Maps via Clustering and Visual Plausibility
by Taufique Ahmed and Luca Longo
Information 2023, 14(9), 489; https://doi.org/10.3390/info14090489 - 04 Sep 2023
Cited by 1 | Viewed by 1173
Abstract
Dimensionality reduction and producing simple representations of electroencephalography (EEG) signals are challenging problems. Variational autoencoders (VAEs) have been employed for EEG data creation, augmentation, and automatic feature extraction. In most of the studies, VAE latent space interpretation is used to detect only the out-of-distribution latent variable for anomaly detection. However, the interpretation and visualisation of all latent space components disclose information about how the model arrives at its conclusion. The main contribution of this study is interpreting the disentangled representation of the VAE by activating only one latent component at a time, whereas the values for the remaining components are set to zero because zero is the mean of the distribution. The results show that the CNN-VAE works well, as indicated by metrics such as SSIM, MSE, MAE, and MAPE, along with SNR and correlation coefficient values throughout the architecture’s input and output. Furthermore, visual plausibility and clustering demonstrate that each component contributes differently to capturing the generative factors in topographic maps. Our proposed pipeline adds to the body of knowledge by delivering a CNN-VAE-based latent space interpretation model. This helps us learn the model’s decision process and the importance of each component of the latent space responsible for activating parts of the brain. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence)
