Information, Volume 15, Issue 6 (June 2024) – 76 articles

Cover Story: Structured science summaries using properties beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manual curation, which is labor-intensive and inconsistent. We propose using Large Language Models (LLMs) to automatically suggest these properties. Our study compares ORKG’s manually curated properties with those generated by LLMs, evaluating performance from the following four perspectives: semantic alignment, property mapping accuracy, cosine similarity, and expert surveys. LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.
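One of the cover study's evaluation perspectives is cosine similarity between curated and LLM-generated properties. A minimal pure-Python sketch of that comparison, with hypothetical embedding vectors standing in for the output of a real text encoder:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embedding vectors for a manually curated ORKG property
# and an LLM-suggested property (a real pipeline would embed the
# property labels with a sentence encoder first).
curated = [0.12, 0.80, 0.35, 0.41]
suggested = [0.10, 0.75, 0.40, 0.38]

print(round(cosine(curated, suggested), 3))
```

A similarity close to 1.0 would indicate that the LLM suggestion is semantically near the human-curated property.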
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
34 pages, 3308 KiB  
Article
Strategic Management of Workforce Diversity: An Evolutionary Game Theory Approach as a Foundation for AI-Driven Systems
by Mirko Talajić, Ilko Vrankić and Mirjana Pejić Bach
Information 2024, 15(6), 366; https://doi.org/10.3390/info15060366 - 20 Jun 2024
Viewed by 290
Abstract
In the complex organisational landscape, managing workforce diversity effectively has become crucial due to rapid technological advancements and shifting societal values. This study explores strategic workforce management through a novel methodological framework that integrates evolutionary game theory and replicator dynamics with traditional game theory, addressing a notable gap in the literature and suggesting an evolutionarily stable workforce structure. Key findings indicate that targeted rewards for the most Enthusiastic employee type can reduce overall costs and enhance workforce efficiency, although managing a diverse team remains complex. The study reveals that while short-term incentives boost immediate productivity, long-term rewards facilitate favourable behavioural changes, which are crucial for sustaining organisational performance. Additionally, the role of artificial intelligence (AI) is highlighted, emphasising its potential to integrate with these theoretical models, thereby enhancing decision-making processes. The study underscores the importance of strategic leadership in navigating these dynamics, suggesting that leaders must tailor their approaches to balance short-term incentives and long-term rewards to maintain an optimal workforce structure. Full article
(This article belongs to the Special Issue New Information Communication Technologies in the Digital Era)
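The replicator-dynamics core of such a framework can be sketched in a few lines. The two-strategy payoff matrix below is purely illustrative (not the authors' calibrated model); with these payoffs the population settles at a mixed, evolutionarily stable workforce composition:

```python
# Replicator dynamics for a two-strategy workforce game. Payoffs are
# illustrative: A[i][j] is the payoff to strategy i against strategy j,
# with strategy 0 = "Enthusiastic" employees and 1 = "Other".
A = [[1.0, 3.0],   # Enthusiastic vs (Enthusiastic, Other)
     [2.0, 2.0]]   # Other vs (Enthusiastic, Other)

def step(x, dt=0.01):
    """One Euler step of dx/dt = x(1-x) * (f_E - f_O)."""
    f_e = A[0][0] * x + A[0][1] * (1 - x)   # fitness of Enthusiastic
    f_o = A[1][0] * x + A[1][1] * (1 - x)   # fitness of Other
    return x + dt * x * (1 - x) * (f_e - f_o)

x = 0.2                     # initial share of Enthusiastic employees
for _ in range(5000):
    x = step(x)
print(round(x, 3))          # share at the evolutionarily stable state
```

With this payoff matrix the fitness difference is 1 − 2x, so the share converges to 0.5 from any interior starting point: a stable mix of the two employee types.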
15 pages, 1403 KiB  
Article
BERTopic for Enhanced Idea Management and Topic Generation in Brainstorming Sessions
by Asma Cheddak, Tarek Ait Baha, Youssef Es-Saady, Mohamed El Hajji and Mohamed Baslam
Information 2024, 15(6), 365; https://doi.org/10.3390/info15060365 - 20 Jun 2024
Viewed by 383
Abstract
Brainstorming is an important part of the design thinking process since it encourages creativity and innovation by bringing together diverse viewpoints. However, traditional brainstorming practices face challenges such as the management of large volumes of ideas. To address this issue, this paper introduces a decision support system that employs the BERTopic model to automate the brainstorming process, which enhances the categorization of ideas and the generation of coherent topics from textual data. The dataset for our study was assembled from a brainstorming session on “scholar dropouts”, where ideas were captured on Post-it notes, digitized through an optical character recognition (OCR) model, and enhanced using data augmentation with a language model, GPT-3.5, to ensure robustness. To assess the performance of our system, we employed both quantitative and qualitative analyses. Quantitative evaluations were conducted independently across various parameters, while qualitative assessments focused on the relevance and alignment of keywords with human-classified topics during brainstorming sessions. Our findings demonstrate that BERTopic outperforms traditional LDA models in generating semantically coherent topics. These results underscore the usefulness of our system in managing the complex nature of Arabic language data and improving the efficiency of brainstorming sessions. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
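After clustering idea embeddings, BERTopic labels each cluster with class-based TF-IDF (c-TF-IDF), treating each cluster as one large document. A minimal pure-Python sketch of that keyword-extraction stage on toy, pre-clustered ideas (the real system clusters embeddings first):

```python
import math
from collections import Counter

# Toy brainstorming ideas already grouped into two clusters (in BERTopic,
# the grouping comes from embedding the ideas and clustering them).
clusters = {
    0: ["tutoring support for students", "peer tutoring after class"],
    1: ["family financial pressure", "financial aid for poor families"],
}

# c-TF-IDF: term frequency per cluster, scaled by log(1 + A / f_w),
# where A is the average words per cluster and f_w the word's total count.
tf = {c: Counter(" ".join(docs).split()) for c, docs in clusters.items()}
avg_words = sum(sum(t.values()) for t in tf.values()) / len(tf)
freq = Counter()
for t in tf.values():
    freq.update(t)

def top_keywords(c, k=3):
    scores = {w: tf[c][w] * math.log(1 + avg_words / freq[w]) for w in tf[c]}
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

print(top_keywords(0), top_keywords(1))
```

Words frequent within one cluster but rare overall ("tutoring", "financial") score highest, yielding interpretable topic labels.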
9 pages, 779 KiB  
Article
Clustering Offensive Strategies in Australian-Rules Football Using Social Network Analysis
by Zachery Born, Marion Mundt, Ajmal Mian, Jason Weber and Jacqueline Alderson
Information 2024, 15(6), 364; https://doi.org/10.3390/info15060364 - 20 Jun 2024
Viewed by 508
Abstract
Sports teams aim to understand the tactical behaviour of their opposition to gain a competitive advantage. Prior research on tactical behaviour in team sports has predominantly focused on the relationship between key performance indicators and match outcomes. However, key performance indicators fail to capture the patterns of ball movement deployed by teams, which provide deeper insight into a team’s playing style. The purpose of this study was to quantify existing ball movement strategies in Australian-rules Football (AF) using detailed descriptions of possession types from 396 matches of the 2019 season. Ball movement patterns were measured by social network analysis for each team during offensive phases of play. K-means clustering identified four unique offensive strategies. The most successful offensive strategy, defined by the number of matches won (83/396), achieved a win/loss ratio of 1.69 and was characterised by low ball movement predictability, low reliance on well-connected athletes, and a high number of passes. This study’s insights into offensive strategy are instructional to AF coaches and high-performance support staff. The outcomes of this study can be used to support the design of tactical training and inform match-day decisions surrounding optimal offensive strategies. Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
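The two descriptors of the winning strategy, ball-movement predictability and reliance on well-connected athletes, map naturally onto network statistics. A hedged sketch (toy pass data, simplified proxies rather than the study's exact SNA measures):

```python
import math
from collections import Counter

# Toy possession chain: (passer, receiver) events for one offensive
# phase. Illustrative only; the study used 396 matches of event data.
passes = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"),
          ("D", "B"), ("B", "A"), ("A", "C")]

def pass_entropy(passes):
    """Shannon entropy of the passing-lane distribution; low entropy
    means the same lanes are reused, i.e. predictable ball movement."""
    counts = Counter(passes)
    n = len(passes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def top_player_share(passes):
    """Share of all pass involvements held by the most connected
    player: a crude proxy for reliance on well-connected athletes."""
    inv = Counter()
    for p, r in passes:
        inv[p] += 1
        inv[r] += 1
    return max(inv.values()) / (2 * len(passes))

print(round(pass_entropy(passes), 2), round(top_player_share(passes), 2))
```

Team-level vectors of such metrics are what a k-means step would then cluster into strategy groups.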
21 pages, 1529 KiB  
Article
ETHICore: Ethical Compliance and Oversight Framework for Digital Forensic Readiness
by Amr Adel, Ali Ahsan and Claire Davison
Information 2024, 15(6), 363; https://doi.org/10.3390/info15060363 - 20 Jun 2024
Viewed by 422
Abstract
How can organisations be forensically ready? As organisations are bound to be criticised in the digitally developing world, they must ensure that they are forensically ready. The readiness of digital forensics ensures compliance in an organisation’s legal, regulatory, and operational structure. Several digital forensic investigative methods and duties are based on specific technological designs. The present study is the first to address the core principles of digital forensic studies, namely, reconnaissance, reliability, and relevance. It reassesses the investigative duties and establishes eight separate positions and their obligations in a digital forensics investigation. A systematic literature review revealed a gap in the form of a missing comprehensive direction for establishing a digital forensic framework for ethical purposes. Digital forensic readiness refers to the ability of a business to collect and respond to digital evidence related to security incidents at low levels of cost and interruption to existing business operations. This study established a digital forensic framework through a systematic literature review to ensure that organisations are forensically ready to conduct an efficient forensic investigation and to cover ethical aspects. Furthermore, the framework was evaluated through focus group discussions to provide further insights. Lastly, a roadmap was provided for integrating the system seamlessly into zero-knowledge data collection technologies. Full article
21 pages, 10748 KiB  
Article
Modeling COVID-19 Transmission in Closed Indoor Settings: An Agent-Based Approach with Comprehensive Sensitivity Analysis
by Amir Hossein Ebrahimi, Ali Asghar Alesheikh, Navid Hooshangi, Mohammad Sharif and Abolfazl Mollalo
Information 2024, 15(6), 362; https://doi.org/10.3390/info15060362 - 19 Jun 2024
Viewed by 533
Abstract
Computational simulation models have been widely used to study the dynamics of COVID-19. Among those, bottom-up approaches such as agent-based models (ABMs) can account for population heterogeneity. While many studies have addressed COVID-19 spread at various scales, insufficient studies have investigated the spread of COVID-19 within closed indoor settings. This study aims to develop an ABM to simulate the spread of COVID-19 in a closed indoor setting using three transmission sub-models. Moreover, a comprehensive sensitivity analysis encompassing 4374 scenarios is performed. The model is calibrated using data from Calabria, Italy. The results indicated a reasonable consistency between the observed and predicted number of infected people (MAPE = 27.94%, RMSE = 0.87, and χ²(1, N = 34) = 44.11, p = 0.11). Notably, the transmission distance was identified as the most influential parameter in this model. In nearly all scenarios, this parameter had a significant impact on the outbreak dynamics (total cases and epidemic peak). Also, the calibration process showed that the movement of agents and the number of initial asymptomatic agents are vital model parameters to simulate COVID-19 spread accurately. The developed model may provide useful insights to investigate different scenarios and dynamics of other similar infectious diseases in closed indoor settings. Full article
(This article belongs to the Special Issue Health Data Information Retrieval)
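The role of the transmission-distance parameter is easy to see in a stripped-down indoor ABM step. A toy sketch with illustrative parameters (not the paper's calibrated sub-models): susceptible agents within the transmission distance of an infected agent become infected with some probability.

```python
import random

random.seed(42)

ROOM = 10.0            # closed indoor room modelled as a 10 x 10 square
TRANS_DIST = 1.5       # transmission distance (the most influential parameter)
P_INF = 0.5            # per-contact infection probability (illustrative)

agents = [{"pos": (random.uniform(0, ROOM), random.uniform(0, ROOM)),
           "infected": i < 2}          # two initially infected agents
          for i in range(50)]

def step(agents):
    """One transmission step: infect susceptibles near infected agents."""
    newly = []
    for a in agents:
        if a["infected"]:
            continue
        for b in agents:
            if not b["infected"]:
                continue
            dx = a["pos"][0] - b["pos"][0]
            dy = a["pos"][1] - b["pos"][1]
            if dx * dx + dy * dy <= TRANS_DIST ** 2 and random.random() < P_INF:
                newly.append(a)
                break
    for a in newly:
        a["infected"] = True

for _ in range(10):
    step(agents)
print(sum(a["infected"] for a in agents), "infected after 10 steps")
```

Agent movement, which the calibration found vital, is omitted here for brevity; adding a small random walk between steps would let the epidemic reach isolated agents.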
4 pages, 156 KiB  
Editorial
Advances in Cybersecurity and Reliability
by Moutaz Alazab and Ammar Alazab
Information 2024, 15(6), 361; https://doi.org/10.3390/info15060361 - 19 Jun 2024
Viewed by 367
Abstract
In recent years, the significant increase in financial and data losses impacting individuals and businesses has highlighted the pressing need to tackle cybersecurity challenges in today’s digital environment [...] Full article
(This article belongs to the Special Issue Advances in Cybersecurity and Reliability)
19 pages, 25362 KiB  
Article
An Anomaly Detection Approach to Determine Optimal Cutting Time in Cheese Formation
by Andrea Loddo, Davide Ghiani, Alessandra Perniciano, Luca Zedda, Barbara Pes and Cecilia Di Ruberto
Information 2024, 15(6), 360; https://doi.org/10.3390/info15060360 - 18 Jun 2024
Viewed by 354
Abstract
The production of cheese, a beloved culinary delight worldwide, faces challenges in maintaining consistent product quality and operational efficiency. One crucial stage in this process is determining the precise cutting time during curd formation, which significantly impacts the quality of the cheese. Misjudging this timing can lead to the production of inferior products, harming a company’s reputation and revenue. Conventional methods often fall short of accurately assessing variations in coagulation conditions due to the inherent potential for human error. To address this issue, we propose an anomaly-detection-based approach. In this approach, we treat the class representing curd formation as the anomaly to be identified. Our proposed solution involves utilizing a one-class, fully convolutional data description network, which we compared against several state-of-the-art methods to detect deviations from the standard coagulation patterns. Encouragingly, our results show F1 scores of up to 0.92, indicating the effectiveness of our approach. Full article
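The one-class idea above, train only on normal data and flag deviations as the curd-formation class, can be illustrated with a much simpler stand-in than the paper's fully convolutional data description network: a z-score threshold on a single hypothetical firmness feature.

```python
import math

# One-class baseline: fit mean/std on "normal" readings only, then flag
# test readings whose z-score exceeds k as the anomalous curd-formation
# class. A toy stand-in for the paper's one-class FCDD network.
normal_train = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1]
mu = sum(normal_train) / len(normal_train)
sd = math.sqrt(sum((x - mu) ** 2 for x in normal_train) / len(normal_train))

def is_anomaly(x, k=3.0):
    return abs(x - mu) / sd > k

test_x = [5.0, 5.1, 7.9, 8.2, 4.9, 8.0]
test_y = [0, 0, 1, 1, 0, 1]              # 1 = curd formation (anomaly)
pred = [int(is_anomaly(x)) for x in test_x]

# F1 score, the metric reported in the abstract (up to 0.92 there).
tp = sum(p and y for p, y in zip(pred, test_y))
fp = sum(p and not y for p, y in zip(pred, test_y))
fn = sum((not p) and y for p, y in zip(pred, test_y))
f1 = 2 * tp / (2 * tp + fp + fn)
print("F1 =", f1)
```

On this cleanly separated toy data the threshold recovers every label; real curd-firmness signals are far noisier, which is why a learned one-class model is needed.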
32 pages, 8136 KiB  
Article
Social Media Influencers: Customer Attitudes and Impact on Purchase Behaviour
by Galina Ilieva, Tania Yankova, Margarita Ruseva, Yulia Dzhabarova, Stanislava Klisarova-Belcheva and Marin Bratkov
Information 2024, 15(6), 359; https://doi.org/10.3390/info15060359 - 18 Jun 2024
Viewed by 500
Abstract
Social media marketing has become a crucial component of contemporary business strategies, significantly influencing brand visibility, customer engagement, and sales growth. The aim of this study is to investigate and determine the key factors guiding customer attitudes towards social media influencers, and, on that basis, to explore their effects on purchase intentions regarding advertised products or services. A total of 376 filled-in questionnaires from an online survey were analysed. The main characteristics of digital influencers’ behaviour that affect consumer perceptions have been systematized and categorized through a combination of both traditional and advanced data analysis methods. Structural equation modelling (SEM), machine learning and multi-criteria decision-making (MCDM) methods were selected to uncover the hidden dependencies between variables from the perspective of social media users. The developed models elucidate the underlying relationships that shape the acceptance mechanism of influencers’ messages. The obtained results provide specific recommendations for stakeholders across the social media marketing value chain. Marketers can make informed decisions and optimize influencer marketing strategies to enhance user experience and increase conversion rates. Working collaboratively, marketers and influencers can create impactful and successful marketing campaigns that resonate with the target audience and drive meaningful results. Customers benefit from more tailored and engaging influencer content that aligns with their interests and preferences, fostering a stronger connection with brands and potentially affecting their purchase decisions. As the perception of customer satisfaction is an individual and evolving process, stakeholders should organize regular evaluations of influencer marketing data and explore the possibilities to ensure the continuous improvement of this e-marketing channel. Full article
18 pages, 1789 KiB  
Article
Multivariate Hydrological Modeling Based on Long Short-Term Memory Networks for Water Level Forecasting
by Jackson B. Renteria-Mena, Douglas Plaza and Eduardo Giraldo
Information 2024, 15(6), 358; https://doi.org/10.3390/info15060358 - 15 Jun 2024
Viewed by 480
Abstract
In the Department of Chocó, flooding poses a recurrent and significant challenge due to heavy rainfall and the dense network of rivers characterizing the region. However, the lack of adequate infrastructure to prevent and predict floods exacerbates this situation. The absence of early warning systems, the scarcity of meteorological and hydrological monitoring stations, and deficiencies in urban planning contribute to the vulnerability of communities to these phenomena. It is imperative to invest in flood prediction and prevention infrastructure, including advanced monitoring systems, the development of hydrological prediction models, and the construction of hydraulic infrastructure, to reduce risk and protect vulnerable communities in Chocó. Additionally, raising public awareness of the associated risks and encouraging the adoption of mitigation and preparedness measures throughout the population are essential. This study introduces a novel approach for the multivariate prediction of hydrological variables, specifically focusing on water level forecasts for two hydrological stations along the Atrato River in Colombia. The model, utilizing a specialized type of recurrent neural network (RNN) called the long short-term memory (LSTM) network, integrates data from hydrological variables, such as the flow, precipitation, and level. With a model architecture featuring four inputs and two outputs, where flow and precipitation serve as inputs and the level serves as the output for each station, the LSTM model is adept at capturing the complex dynamics and cross-correlations among these variables. Validation involves comparing the LSTM model’s performance with linear and nonlinear Autoregressive with Exogenous Input (NARX) models, considering factors such as the estimation error and computational time. Furthermore, this study explores different scenarios for water level prediction, aiming to utilize the proposed approach as an effective flood early warning system. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
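The data-preparation step implied by such an architecture, turning multivariate series into supervised (lookback, features) windows for an LSTM, can be sketched in plain Python. Values and the lookback length are illustrative; training itself would use a deep learning framework.

```python
# Shape multivariate hydrological series into LSTM-style windows:
# inputs are flow and precipitation at one station, and the target is
# the water level one step ahead. Toy values for illustration only.
flow   = [120, 130, 128, 140, 155, 160, 158, 170]
precip = [0.0, 2.1, 0.5, 4.0, 6.2, 1.0, 0.0, 3.3]
level  = [2.0, 2.1, 2.1, 2.3, 2.6, 2.7, 2.6, 2.8]

LOOKBACK = 3   # hypothetical window length, not the paper's setting

def make_windows(flow, precip, level, lookback):
    X, y = [], []
    for t in range(lookback, len(level)):
        # one sample: `lookback` rows of [flow, precip] features
        X.append([[flow[i], precip[i]] for i in range(t - lookback, t)])
        y.append(level[t])             # level at the next time step
    return X, y

X, y = make_windows(flow, precip, level, LOOKBACK)
print(len(X), len(X[0]), len(X[0][0]))  # samples, lookback, features
```

Each `X[i]` is one (lookback × features) window, exactly the 3-D tensor shape (samples, timesteps, features) that LSTM layers consume; the two-station model in the paper would carry four input series instead of two.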
11 pages, 1100 KiB  
Brief Report
Autonomous Vehicle Safety through the SIFT Method: A Conceptual Analysis
by Muhammad Anshari, Mohammad Nabil Almunawar, Masairol Masri, Norma Latif Fitriyani and Muhammad Syafrudin
Information 2024, 15(6), 357; https://doi.org/10.3390/info15060357 - 15 Jun 2024
Viewed by 291
Abstract
This study aims to provide a conceptual analysis of the dynamic transformations occurring in autonomous vehicles (AVs), placing a specific emphasis on the safety implications for pedestrians and passengers. AVs, also known as self-driving automobiles, are positioned as potential disruptors in the contemporary transportation landscape, offering heightened safety and improved traffic efficiency. Despite these promises, the intricate nature of road scenarios and the looming specter of misinformation pose challenges that can compromise the efficacy of AV decision-making. A crucial aspect of the proposed verification process is the incorporation of the stop, investigate the source, find better coverage, trace claims, quotes, and media to the original context (SIFT) method. The SIFT method, originally designed to combat misinformation, emerges as a valuable mechanism for enhancing AV safety by ensuring the accuracy and reliability of information influencing autonomous decision-making processes. Full article
(This article belongs to the Special Issue Automotive System Security: Recent Advances and Challenges)
21 pages, 1861 KiB  
Article
Prominent User Segments in Online Consumer Recommendation Communities: Capturing Behavioral and Linguistic Qualities with User Comment Embeddings
by Apostolos Skotis and Christos Livas
Information 2024, 15(6), 356; https://doi.org/10.3390/info15060356 - 15 Jun 2024
Viewed by 303
Abstract
Online conversation communities have become an influential source of consumer recommendations in recent years. We propose a set of meaningful user segments which emerge from user embedding representations, based exclusively on comments’ text input. Data were collected from three popular recommendation communities on Reddit, covering the domains of book and movie suggestions. We utilized two neural language model methods to produce user embeddings, namely Doc2Vec and Sentence-BERT. Embedding interpretation issues were addressed by examining latent factors’ associations with behavioral, sentiment, and linguistic variables, acquired using the VADER, LIWC, and LFTK libraries in Python. User clusters were identified, having different levels of engagement and linguistic characteristics. The latent features of both approaches were strongly correlated with several user behavioral and linguistic indicators. Both approaches managed to capture significant variability in writing styles and quality, such as length, readability, use of function words, and complexity. However, the Doc2Vec features better described users by varying levels of contribution, while S-BERT-based features were more closely adapted to users’ varying emotional engagement. Prominent segments revealed prolific users with formal, intuitive, emotionally distant, and highly analytical styles, as well as users who were less elaborate, less consistent, but more emotionally connected. The observed patterns were largely similar across communities. Full article
(This article belongs to the Section Information Processes)
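The interpretation step described above, checking whether a latent embedding factor tracks a linguistic indicator, boils down to computing correlations. A pure-Python sketch with hypothetical numbers (one latent dimension per user paired with mean comment length):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: one latent dimension of each user's comment
# embedding, paired with that user's mean comment length in words.
latent_dim  = [0.9, 0.4, 0.7, 0.1, 0.6]
mean_length = [210, 95, 160, 40, 150]

print(round(pearson(latent_dim, mean_length), 2))
```

A strong correlation like this one would suggest the latent dimension encodes verbosity, the kind of association the study probed with VADER/LIWC/LFTK features.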
22 pages, 2569 KiB  
Article
The Application of Machine Learning in Diagnosing the Financial Health and Performance of Companies in the Construction Industry
by Jarmila Horváthová, Martina Mokrišová and Alexander Schneider
Information 2024, 15(6), 355; https://doi.org/10.3390/info15060355 - 14 Jun 2024
Viewed by 318
Abstract
Diagnosing the financial health of companies and their performance is currently one of the basic questions that attracts the attention of researchers and experts in the field of finance and management. In this study, we focused on the proposal of models for measuring the financial health and performance of businesses. These models were built for companies doing business within the Slovak construction industry. Construction companies are characterised by higher liquidity and a different capital structure compared to other industries. Therefore, simple classifiers are not able to effectively predict their financial health. In this paper, we investigated whether boosting ensembles are a suitable alternative for performance analysis. The result of the research is the finding that deep learning is a suitable approach aimed at measuring the financial health and performance of the analyzed sample of companies. The developed models achieved perfect classification accuracy when using the AdaBoost and Gradient-boosting algorithms. The application of a decision tree as a base learner also proved to be very appropriate. The result is a decision tree with adequate depth and very good interpretability. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
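AdaBoost with shallow-tree base learners, as used above, is compact enough to sketch from scratch. A minimal decision-stump AdaBoost on a toy one-feature "financial health" dataset (illustrative data, not the paper's indicators):

```python
import math

# Toy data: feature = liquidity ratio; label = +1 healthy, -1 not.
X = [0.5, 0.8, 1.1, 1.5, 2.0, 2.4, 3.0, 3.5]
y = [-1, -1, -1, -1, 1, 1, 1, 1]

def best_stump(X, y, w):
    """Weighted-error-minimizing threshold stump over the feature."""
    best = None
    for thr in X:
        for sign in (1, -1):
            pred = [sign if x > thr else -sign for x in X]
            err = sum(wi for wi, p, t in zip(w, pred, y) if p != t)
            if best is None or err < best[0]:
                best = (err, thr, sign, pred)
    return best

w = [1 / len(X)] * len(X)          # uniform sample weights
ensemble = []
for _ in range(3):                 # three boosting rounds
    err, thr, sign, pred = best_stump(X, y, w)
    err = max(err, 1e-10)          # avoid log(inf) on perfect stumps
    alpha = 0.5 * math.log((1 - err) / err)
    ensemble.append((alpha, thr, sign))
    # re-weight: mistakes gain weight, correct samples lose weight
    w = [wi * math.exp(-alpha * p * t) for wi, p, t in zip(w, pred, y)]
    s = sum(w)
    w = [wi / s for wi in w]

def predict(x):
    score = sum(a * (s if x > t else -s) for a, t, s in ensemble)
    return 1 if score > 0 else -1

print([predict(x) for x in X])
```

On this separable toy set the ensemble reaches perfect training accuracy, mirroring (in miniature) the abstract's report of perfect classification with boosting.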
14 pages, 1414 KiB  
Review
The Use of AI in Software Engineering: A Synthetic Knowledge Synthesis of the Recent Research Literature
by Peter Kokol
Information 2024, 15(6), 354; https://doi.org/10.3390/info15060354 - 14 Jun 2024
Viewed by 572
Abstract
Artificial intelligence (AI) has witnessed an exponential increase in use in various applications. Recently, the academic community started to research and inject new AI-based approaches to provide solutions to traditional software-engineering problems. However, a comprehensive and holistic understanding of the current status is still lacking. To close the above gap, synthetic knowledge synthesis was used to induce the research landscape of the contemporary research literature on the use of AI in software engineering. The synthesis resulted in 15 research categories and 5 themes—namely, natural language processing in software engineering, use of artificial intelligence in the management of the software development life cycle, use of machine learning in fault/defect prediction and effort estimation, employment of deep learning in intelligent software engineering and code management, and mining software repositories to improve software quality. The most productive country was China (n = 2042), followed by the United States (n = 1193), India (n = 934), Germany (n = 445), and Canada (n = 381). A high percentage of papers (47.4%) were funded, showing the strong interest in this research topic. The convergence of AI and software engineering can significantly reduce the required resources, improve the quality, enhance the user experience, and improve the well-being of software developers. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
16 pages, 1604 KiB  
Review
Strategic Approaches in Network Communication and Information Security Risk Assessment
by Nadher Alsafwani, Yousef Fazea and Fuad Alnajjar
Information 2024, 15(6), 353; https://doi.org/10.3390/info15060353 - 14 Jun 2024
Viewed by 514
Abstract
Risk assessment is a critical sub-process in information security risk management (ISRM) that is used to identify an organization’s vulnerabilities and threats as well as evaluate current and planned security controls. Therefore, adequate resources and return on investments should be considered when reviewing assets. However, many existing frameworks lack granular guidelines and mostly operate on qualitative human input and feedback, which increases subjective and unreliable judgment within organizations. Consequently, current risk assessment methods require additional time and cost to test all information security controls thoroughly. The principal aim of this study is to critically review the Information Security Control Prioritization (ISCP) models that improve the Information Security Risk Assessment (ISRA) process, by using literature analysis to investigate ISRA’s main problems and challenges. We recommend that designing a streamlined and standardized Information Security Control Prioritization model would greatly reduce the uncertainty, cost, and time associated with the assessment of information security controls, thereby helping organizations prioritize critical controls reliably and more efficiently based on clear and practical guidelines. Full article
(This article belongs to the Section Information Security and Privacy)
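The prioritization idea the review argues for, ranking controls by clear, quantitative criteria rather than ad hoc qualitative judgment, can be illustrated with a simple scoring scheme. The fields and numbers below are hypothetical and are not taken from the reviewed ISCP models:

```python
# Hypothetical control-prioritization sketch: rank security controls by
# mitigated risk (likelihood x impact) per unit cost. Purely illustrative.
controls = [
    {"name": "MFA rollout",      "likelihood": 0.6, "impact": 9, "cost": 3},
    {"name": "Patch automation", "likelihood": 0.8, "impact": 7, "cost": 4},
    {"name": "Annual pen test",  "likelihood": 0.3, "impact": 8, "cost": 6},
]

def priority(c):
    """Risk reduction per unit cost for one control."""
    return c["likelihood"] * c["impact"] / c["cost"]

ranked = sorted(controls, key=priority, reverse=True)
for c in ranked:
    print(f'{c["name"]}: {priority(c):.2f}')
```

A standardized scheme of this kind is what would let organizations test the highest-value controls first instead of exhaustively assessing all of them.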
24 pages, 1627 KiB  
Article
A Novel Radio Network Information Service (RNIS) to MEC Framework in B5G Networks
by Kaíque M. R. Cunha, Sand Correa, Fabrizzio Soares, Maria Ribeiro, Waldir Moreira, Raphael Gomes, Leandro A. Freitas and Antonio Oliveira-Jr
Information 2024, 15(6), 352; https://doi.org/10.3390/info15060352 - 13 Jun 2024
Viewed by 317
Abstract
Multi-Access Edge Computing (MEC) reduces latency and provides high-bandwidth applications with real-time performance and reliability, supporting new applications and services for present and future Beyond-Fifth-Generation (B5G) networks. Radio Network Information Service (RNIS) plays a crucial role in obtaining information from the Radio Access Network (RAN). With the advent of 5G, RNIS requires improvements to handle information from the new generations of RAN. In this scenario, improving the RNIS is essential to boost new applications according to the strict requirements imposed. Hence, this work proposes a new RNIS as a service to the MEC framework in B5G networks to improve MEC applications. The service is validated and evaluated, demonstrating the ability to adequately serve varying numbers of MEC apps (two, four, six, and eight) and from 100 to 2000 types of User Equipment (UE). Full article
(This article belongs to the Special Issue Advances in Communication Systems and Networks)
22 pages, 3983 KiB  
Article
Leveraging Machine Learning to Analyze Semantic User Interactions in Visual Analytics
by Dong Hyun Jeong, Bong Keun Jeong and Soo Yeon Ji
Information 2024, 15(6), 351; https://doi.org/10.3390/info15060351 - 13 Jun 2024
Viewed by 326
Abstract
In the field of visualization, understanding users’ analytical reasoning is important for evaluating the effectiveness of visualization applications. Several studies have been conducted to capture and analyze user interactions to comprehend this reasoning process. However, few have successfully linked these interactions to users’ reasoning processes. This paper introduces an approach that addresses this limitation by correlating semantic user interactions with analysis decisions using an interactive wire transaction analysis system and a visual state transition matrix, both designed as visual analytics applications. The system enables interactive analysis for evaluating financial fraud in wire transactions. It also allows mapping captured user interactions and analytical decisions back onto the visualization to reveal decision differences. The visual state transition matrix further aids in understanding users’ analytical flows, revealing their decision-making processes. Classification machine learning algorithms are applied to evaluate the effectiveness of our approach in understanding users’ analytical reasoning process by connecting the captured semantic user interactions to their decisions (i.e., suspicious, not suspicious, and inconclusive) on wire transactions. With these algorithms, the semantic user interactions are classified with an average accuracy of 72%. For classifying individual decisions, the average accuracy is 70%. Notably, the accuracy for classifying ‘inconclusive’ decisions is 83%. Overall, the proposed approach improves the understanding of users’ analytical decisions and provides a robust method for evaluating user interactions in visualization tools. Full article
(This article belongs to the Special Issue Information Visualization Theory and Applications)
17 pages, 5348 KiB  
Article
Machine Learning-Based Channel Estimation Techniques for ATSC 3.0
by Yu-Sun Liu, Shingchern D. You and Yu-Chun Lai
Information 2024, 15(6), 350; https://doi.org/10.3390/info15060350 - 13 Jun 2024
Viewed by 327
Abstract
Channel estimation accuracy significantly affects the performance of orthogonal frequency-division multiplexing (OFDM) systems. In the literature, there are quite a few channel estimation methods. However, the performances of these methods deteriorate considerably when the wireless channels suffer from nonlinear distortions and interferences. Machine learning (ML) shows great potential for solving nonparametric problems. This paper proposes ML-based channel estimation methods for systems with comb-type pilot patterns and random pilot symbols, such as ATSC 3.0. We compare their performances with conventional channel estimations in ATSC 3.0 systems for linear and nonlinear channel models. We also evaluate the robustness of the ML-based methods against channel model mismatch and signal-to-noise ratio (SNR) mismatch. The results show that the ML-based channel estimations achieve good mean squared error (MSE) performance for linear and nonlinear channels if the channel statistics used for the training stage match those of the deployment stage. Otherwise, the ML estimation models may overfit the training channel, leading to poor deployment performance. Furthermore, the deep neural network (DNN)-based method does not outperform the linear channel estimation methods in nonlinear channels. Full article
(This article belongs to the Special Issue Recent Advances in Communications Technology)
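For context on the conventional baseline this abstract compares against, a textbook least-squares channel estimate at comb-type pilot tones, linearly interpolated across subcarriers, can be sketched as follows (an illustrative sketch only, not the paper's ML method; the pilot spacing and channel values below are invented):

```python
import numpy as np

def ls_comb_pilot_estimate(rx, tx_pilots, pilot_idx, n_subcarriers):
    """Least-squares channel estimate at comb-type pilot positions,
    linearly interpolated (real and imaginary parts separately)
    to all subcarriers."""
    h_pilots = rx[pilot_idx] / tx_pilots  # LS estimate: H = Y / X at pilots
    tones = np.arange(n_subcarriers)
    h_real = np.interp(tones, pilot_idx, h_pilots.real)
    h_imag = np.interp(tones, pilot_idx, h_pilots.imag)
    return h_real + 1j * h_imag

# Hypothetical example: flat channel on 16 subcarriers, pilots every 4 tones
h_true = (0.8 - 0.2j) * np.ones(16)
pilots = np.arange(0, 16, 4)
tx = np.ones(4, dtype=complex)   # known all-ones pilot symbols
rx = h_true.copy()               # noiseless received symbols (H * 1)
h_est = ls_comb_pilot_estimate(rx, tx, pilots, 16)
```

For a flat, noiseless channel the interpolated estimate recovers the true response exactly; the nonlinear distortions discussed in the abstract are precisely where such a linear baseline breaks down.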
25 pages, 2800 KiB  
Article
Driving across Markets: An Analysis of a Human–Machine Interface in Different International Contexts
by Denise Sogemeier, Yannick Forster, Frederik Naujoks, Josef F. Krems and Andreas Keinath
Information 2024, 15(6), 349; https://doi.org/10.3390/info15060349 - 12 Jun 2024
Viewed by 418
Abstract
The design of automotive human–machine interfaces (HMIs) for global consumers needs to cater to a broad spectrum of drivers. This paper comprises benchmark studies and explores how users from international markets—Germany, China, and the United States—engage with the same automotive HMI. In real driving scenarios, N = 301 participants (premium vehicle owners) completed several tasks using different interaction modalities. The multi-method approach included both self-report measures, assessing preference and satisfaction through well-established questionnaires, and observational measures, namely experimenter ratings, to capture interaction performance. We observed a trend towards lower preference ratings in the Chinese sample. Further, interaction performance differed across the user groups, with self-reported preference not consistently aligning with observed performance. This dissociation accentuates the importance of integrating both measures in user studies. By employing benchmark data, we provide insights into varied market-based perspectives on automotive HMIs. The findings highlight the necessity for a nuanced approach to HMI design that considers diverse user preferences and interaction patterns. Full article
20 pages, 1007 KiB  
Article
HitSim: An Efficient Algorithm for Single-Source and Top-k SimRank Computation
by Jing Bai, Junfeng Zhou, Shuotong Chen, Ming Du, Ziyang Chen and Mengtao Min
Information 2024, 15(6), 348; https://doi.org/10.3390/info15060348 - 12 Jun 2024
Viewed by 270
Abstract
SimRank is a widely used metric for evaluating vertex similarity based on graph topology, with diverse applications such as large-scale graph mining and natural language processing. The objective of the single-source and top-k SimRank query problem is to retrieve the k vertices with the largest SimRank to the source vertex. However, existing algorithms suffer from inefficiency as they require computing SimRank for all vertices to retrieve the top-k results. To address this issue, we propose an algorithm named HitSim that utilizes a branch and bound strategy for the single-source and top-k query. HitSim initially partitions vertices into distinct sets based on their shortest-meeting lengths to the source vertex. Subsequently, it computes an upper bound of SimRank for each set. If the upper bound of a set is no larger than the minimum value of the current top-k results, HitSim efficiently batch-prunes the unpromising vertices within the set. However, in scenarios where the graph becomes dense, certain sets with large upper bounds may contain numerous vertices with small SimRank, leading to redundant overhead when processing these vertices. To address this issue, we propose an optimized algorithm named HitSim-OPT that computes the upper bound of SimRank for each vertex instead of each set, resulting in a fine-grained and efficient pruning process. The experimental results conducted on six real-world datasets demonstrate the performance of our algorithms in efficiently addressing the single-source and top-k query problem. Full article
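The batch-pruning idea described in this abstract can be illustrated with a generic branch-and-bound sketch (not the authors' implementation; the set partitioning, upper bounds, and scoring function below are hypothetical placeholders standing in for HitSim's SimRank machinery):

```python
import heapq

def topk_with_pruning(sets_with_bounds, exact_score, k):
    """Generic branch-and-bound top-k selection.

    sets_with_bounds: (upper_bound, vertices) pairs, sorted by descending
    upper bound; exact_score: vertex -> exact similarity. A set whose upper
    bound is no larger than the current k-th best score is pruned in one
    batch, mirroring the strategy sketched in the HitSim abstract.
    """
    heap = []  # min-heap of (score, vertex) holding the current top-k
    for ub, vertices in sets_with_bounds:
        # Batch-prune: no vertex in this set can enter the top-k.
        if len(heap) == k and ub <= heap[0][0]:
            continue
        for v in vertices:
            s = exact_score(v)
            if len(heap) < k:
                heapq.heappush(heap, (s, v))
            elif s > heap[0][0]:
                heapq.heapreplace(heap, (s, v))
    return sorted(heap, reverse=True)
```

HitSim-OPT's refinement corresponds to applying the same bound test per vertex rather than per set, trading more bound computations for finer-grained pruning.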
9 pages, 6444 KiB  
Correction
Correction: Yi et al. SFS-AGGL: Semi-Supervised Feature Selection Integrating Adaptive Graph with Global and Local Information. Information 2024, 15, 57
by Yugen Yi, Haoming Zhang, Ningyi Zhang, Wei Zhou, Xiaomei Huang, Gengsheng Xie and Caixia Zheng
Information 2024, 15(6), 347; https://doi.org/10.3390/info15060347 - 12 Jun 2024
Viewed by 171
Abstract
In the original publication [...] Full article
18 pages, 462 KiB  
Article
Factors for Customers’ AI Use Readiness in Physical Retail Stores: The Interplay of Consumer Attitudes and Gender Differences
by Nina Kolar, Borut Milfelner and Aleksandra Pisnik
Information 2024, 15(6), 346; https://doi.org/10.3390/info15060346 - 12 Jun 2024
Viewed by 491
Abstract
In addressing the nuanced interplay between consumer attitudes and Artificial Intelligence (AI) use readiness in physical retail stores, the main objective of this study is to test the impacts of prior experience, as well as perceived risks with AI technologies, self-assessment of consumers’ ability to manage AI technologies, and the moderator role of gender in this relationship. Using a quantitative cross-sectional survey, data from 243 consumers familiar with AI technologies were analyzed using structural equation modeling (SEM) methods to explore these dynamics in the context of physical retail stores. Additionally, the moderating impacts were tested after the invariance analysis across both gender groups. Key findings indicate that positive prior experience with AI technologies positively influences AI use readiness in physical retail stores, while perceived risks with AI technologies serve as a deterrent. Gender differences significantly moderate these effects, with perceived risks with AI technologies more negatively impacting women’s AI use readiness and self-assessment of the ability to manage AI technologies showing a stronger positive impact on men’s AI use readiness. The study concludes that retailers must consider these gender-specific perceptions and attitudes toward AI to develop more effective strategies for technology integration. Our research also highlights the need to address gender-specific barriers and biases when adopting AI technology. Full article
(This article belongs to the Section Information Applications)
19 pages, 2810 KiB  
Article
Large Language Models (LLMs) in Engineering Education: A Systematic Review and Suggestions for Practical Adoption
by Stefano Filippi and Barbara Motyl
Information 2024, 15(6), 345; https://doi.org/10.3390/info15060345 - 12 Jun 2024
Viewed by 445
Abstract
The use of large language models (LLMs) is now spreading in several areas of research and development. This work is concerned with systematically reviewing LLMs’ involvement in engineering education. Starting from a general research question, two queries were used to select 370 papers from the literature. Filtering them through several inclusion/exclusion criteria led to the selection of 20 papers. These were investigated based on eight dimensions to identify areas of engineering disciplines that involve LLMs, where they are most present, how this involvement takes place, and which LLM-based tools are used, if any. Addressing these key issues allowed three more specific research questions to be answered, offering a clear overview of the current involvement of LLMs in engineering education. The research outcomes provide insights into the potential and challenges of LLMs in transforming engineering education, contributing to its responsible and effective future implementation. This review’s outcomes could help identify the best ways to involve LLMs in engineering education activities and measure their effectiveness as time progresses. For this reason, this study offers suggestions on how to improve engineering education activities. The systematic review on which this research is based conforms to the current literature’s conventions regarding inclusion/exclusion criteria and quality assessments, in order to make the results as objective as possible and easily replicable. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
21 pages, 7046 KiB  
Article
Knowledge-Driven and Diffusion Model-Based Methods for Generating Historical Building Facades: A Case Study of Traditional Minnan Residences in China
by Sirui Xu, Jiaxin Zhang and Yunqin Li
Information 2024, 15(6), 344; https://doi.org/10.3390/info15060344 - 11 Jun 2024
Viewed by 315
Abstract
The preservation of historical traditional architectural ensembles faces multifaceted challenges, and the need for facade renovation and updates has become increasingly prominent. In conventional architectural updating and renovation processes, assessing design schemes and redesigning components are often time-consuming and labor-intensive. The knowledge-driven method utilizes a wide range of knowledge resources, such as historical documents, architectural drawings, and photographs, commonly used to guide and optimize the conservation, restoration, and management of architectural heritage. Recently, the emergence of artificial intelligence-generated content (AIGC) technologies has provided new solutions for creating architectural facades, introducing a new research paradigm to renovation planning for historic districts with its variety of options and high efficiency. In this study, we propose a workflow combining Grasshopper with Stable Diffusion: starting with Grasshopper to generate concise line drawings, then using the ControlNet and low-rank adaptation (LoRA) models to produce images of traditional Minnan architectural facades, allowing designers to quickly preview and modify facade designs during the renovation of traditional architectural clusters. Our results demonstrate Stable Diffusion’s precise understanding and execution ability concerning architectural facade elements: it is capable of generating regional traditional architectural facades that meet architects’ requirements for style, size, and form based on existing images and prompt descriptions, revealing immense potential for application in the renovation of traditional architectural groups and historic districts. It should be noted that the correlation between specific architectural images and proprietary term prompts still requires further additions due to the limitations of the database. Although the model generally performs well when trained on traditional Chinese ancient buildings, the accuracy and clarity of more complex decorative parts still need enhancement, necessitating further exploration of solutions for handling facade details in the future. Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
33 pages, 2156 KiB  
Article
Identification of Optimal Data Augmentation Techniques for Multimodal Time-Series Sensory Data: A Framework
by Nazish Ashfaq, Muhammad Hassan Khan and Muhammad Adeel Nisar
Information 2024, 15(6), 343; https://doi.org/10.3390/info15060343 - 11 Jun 2024
Viewed by 465
Abstract
Recently, the research community has shown significant interest in the continuous temporal data obtained from motion sensors in wearable devices. These data are useful for classifying and analysing different human activities in many application areas such as healthcare, sports and surveillance. The literature has presented a multitude of deep learning models that aim to derive a suitable feature representation from temporal sensory input. However, a substantial quantity of annotated training data is crucial to adequately train deep networks. The data originating from wearable devices are vast but largely unlabeled, which hinders our ability to train the models with optimal efficiency and leads to overfitting. The contribution of the proposed research is twofold: firstly, it involves a systematic evaluation of fifteen different augmentation strategies to solve the inadequacy problem of labeled data, which plays a critical role in classification tasks. Secondly, it introduces an automatic feature-learning technique, proposing a Multi-Branch Hybrid Conv-LSTM network to classify human activities of daily living using multimodal data from different wearable smart devices. The objective of this study is to introduce an ensemble deep model that effectively captures intricate patterns and interdependencies within temporal data. The term “ensemble model” refers to the fusion of distinct deep models, with the objective of leveraging their individual strengths and capabilities to develop a solution that is more robust and efficient. A comprehensive assessment of ensemble models is conducted using data-augmentation techniques on two prominent benchmark datasets: CogAge and UniMiB-SHAR. The proposed network employs a range of data-augmentation methods to improve the accuracy of atomic and composite activities, resulting in a 5% increase in accuracy for composite activities and a 30% increase for atomic activities. Full article
(This article belongs to the Special Issue Human Activity Recognition and Biomedical Signal Processing)
24 pages, 4230 KiB  
Article
Understanding Local Government Cybersecurity Policy: A Concept Map and Framework
by Sk Tahsin Hossain, Tan Yigitcanlar, Kien Nguyen and Yue Xu
Information 2024, 15(6), 342; https://doi.org/10.3390/info15060342 - 10 Jun 2024
Viewed by 532
Abstract
Cybersecurity is a crucial concern for local governments as they serve as the primary interface between public and government services, managing sensitive data and critical infrastructure. While technical safeguards are integral to cybersecurity, the role of a well-structured policy is equally important as it provides structured guidance to translate technical requirements into actionable protocols. This study reviews local governments’ cybersecurity policies to provide a comprehensive assessment of how these policies align with the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, which is a widely adopted and commonly used cybersecurity assessment framework. This review offers local governments a mirror to reflect on their cybersecurity stance, identifying potential vulnerabilities and areas needing urgent attention. This study further extends the development of a cybersecurity policy framework, which local governments can use as a strategic tool. It provides valuable information on crucial cybersecurity elements that local governments must incorporate into their policies to protect confidential data and critical infrastructure. Full article
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
15 pages, 21530 KiB  
Article
Social-STGMLP: A Social Spatio-Temporal Graph Multi-Layer Perceptron for Pedestrian Trajectory Prediction
by Dexu Meng, Guangzhe Zhao and Feihu Yan
Information 2024, 15(6), 341; https://doi.org/10.3390/info15060341 - 10 Jun 2024
Viewed by 315
Abstract
As autonomous driving technology advances, the imperative of ensuring pedestrian traffic safety becomes increasingly prominent within the design framework of autonomous driving systems. Pedestrian trajectory prediction stands out as a pivotal technology aiming to address this challenge by striving to precisely forecast pedestrians’ future trajectories, thereby enabling autonomous driving systems to execute timely and accurate decisions. However, the prevailing state-of-the-art models often rely on intricate structures and a substantial number of parameters, posing challenges in meeting the imperative demand for lightweight models within autonomous driving systems. To address these challenges, we introduce Social Spatio-Temporal Graph Multi-Layer Perceptron (Social-STGMLP), a novel approach that utilizes solely fully connected layers and layer normalization. Social-STGMLP operates by abstracting pedestrian trajectories into a spatio-temporal graph, facilitating the modeling of both the spatial social interaction among pedestrians and the temporal motion tendency inherent to pedestrians themselves. Our evaluation of Social-STGMLP reveals its superiority over the reference method, as evidenced by experimental results indicating reductions of 5% in average displacement error (ADE) and 17% in final displacement error (FDE). Full article
(This article belongs to the Section Artificial Intelligence)
30 pages, 1001 KiB  
Article
Genre Classification of Books in Russian with Stylometric Features: A Case Study
by Natalia Vanetik, Margarita Tiamanova, Genady Kogan and Marina Litvak
Information 2024, 15(6), 340; https://doi.org/10.3390/info15060340 - 7 Jun 2024
Viewed by 422
Abstract
Within the literary domain, genres function as fundamental organizing concepts that provide readers, publishers, and academics with a unified framework. Genres are discrete categories that are distinguished by common stylistic, thematic, and structural components. They facilitate the categorization process and improve our understanding of a wide range of literary expressions. In this paper, we introduce a new dataset for genre classification of Russian books, covering 11 literary genres. We also perform dataset evaluation for the tasks of binary and multi-class genre identification. Through extensive experimentation and analysis, we explore the effectiveness of different text representations, including stylometric features, in genre classification. Our findings clarify the challenges present in classifying Russian literature by genre, revealing insights into the performance of different models across various genres. Furthermore, we address several research questions regarding the difficulty of multi-class classification compared to binary classification, and the impact of stylometric features on classification accuracy. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
16 pages, 1739 KiB  
Article
Light-Field Image Compression Based on a Two-Dimensional Prediction Coding Structure
by Jianrui Shao, Enjian Bai, Xueqin Jiang and Yun Wu
Information 2024, 15(6), 339; https://doi.org/10.3390/info15060339 - 7 Jun 2024
Viewed by 353
Abstract
Light-field images (LFIs) are gaining increased attention within the field of 3D imaging, virtual reality, and digital refocusing, owing to their wealth of spatial and angular information. The escalating volume of LFI data poses challenges in terms of storage and transmission. To address this problem, this paper introduces an MSHPE (most-similar hierarchical prediction encoding) structure based on light-field multi-view images. By systematically exploring the similarities among sub-views, our structure obtains residual views through the subtraction of the encoded view from its corresponding reference view. Regarding the encoding process, this paper implements a new encoding scheme to process all residual views, achieving lossless compression. High-efficiency video coding (HEVC) is applied to encode select residual views, thereby achieving lossy compression. Furthermore, the introduced structure is conceptualized as a layered coding scheme, enabling progressive transmission and showing good random access performance. Experimental results demonstrate the superior compression performance attained by encoding residual views according to the proposed structure, outperforming alternative structures. Notably, when HEVC is employed for encoding residual views, significant bit savings are observed compared to the direct encoding of original views. The final restored view presents better detail quality, reinforcing the effectiveness of this approach. Full article
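The residual-view idea in this abstract (store the difference between a sub-view and its reference, then add the reference back for lossless reconstruction) can be sketched as follows (illustrative only, not the paper's MSHPE implementation; the array shapes and values are invented):

```python
import numpy as np

def to_residual(view, reference):
    """Residual view: signed difference from the reference view.
    This residual, rather than the original view, is what gets encoded."""
    return view.astype(np.int16) - reference.astype(np.int16)

def from_residual(residual, reference):
    """Lossless reconstruction: add the reference back and restore uint8."""
    return (residual + reference.astype(np.int16)).astype(np.uint8)
```

When neighbouring sub-views are similar, the residual is concentrated near zero and compresses far better than the original pixels, which is the intuition behind selecting the most-similar reference in the proposed structure.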
14 pages, 1202 KiB  
Article
The Impact of Operant Resources on the Task Performance of Learners via Knowledge Management Process
by Quoc Trung Pham, Canh Khiem Le, Dinh Thai Linh Huynh and Sanjay Misra
Information 2024, 15(6), 338; https://doi.org/10.3390/info15060338 - 7 Jun 2024
Viewed by 801
Abstract
In human resource management, training is considered one of the most effective ways to improve employees’ task performance. However, the effectiveness of training depends mostly on the resources and effort of learners, especially the operant resources. This study investigates the influence of operant resources on individual task performance within the framework of knowledge management. Building on existing research, a quantitative model was developed and tested using data from 296 Vietnamese managers and senior employees. Data analysis employed SPSS 21 and AMOS 24 software. The findings provide strong support for all nine proposed hypotheses, demonstrating a positive impact of operant resources on both learner behavior and subsequent task performance. The research highlights the significant role of individual operant resources in enhancing learning outcomes and employee effectiveness. Managerial implications are derived from these results, offering practical guidance for businesses to improve training activities and ultimately boost employee task performance. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")
23 pages, 4788 KiB  
Article
Production Scheduling Based on a Multi-Agent System and Digital Twin: A Bicycle Industry Case
by Vasilis Siatras, Emmanouil Bakopoulos, Panagiotis Mavrothalassitis, Nikolaos Nikolakis and Kosmas Alexopoulos
Information 2024, 15(6), 337; https://doi.org/10.3390/info15060337 - 6 Jun 2024
Viewed by 401
Abstract
The emerging digitalization in today’s industrial environments allows manufacturers to store online knowledge about production and use it to make better informed management decisions. This paper proposes a multi-agent framework enhanced with digital twin (DT) for production scheduling and optimization. Decentralized scheduling agents interact to efficiently manage the work allocation in different segments of production. A DT is used to evaluate the performance of different scheduling decisions and to avoid potential risks and bottlenecks. Production managers can supervise the system’s decision-making processes and manually regulate them online. The multi-agent system (MAS) uses asset administration shells (AASs) for data modelling and communication, enabling interoperability and scalability. The framework was deployed and tested in an industrial pilot coming from the bicycle production industry, optimizing and controlling the short-term production schedule of the different departments. The evaluation resulted in a higher production rate, thus achieving higher production volume in a shorter time span. Managers were also able to coordinate schedules from different departments in a dynamic way and achieve early bottleneck detection. Full article
(This article belongs to the Special Issue Intelligent Agent and Multi-Agent System)