Future Internet, Volume 13, Issue 2 (February 2021) – 34 articles

Cover Story: The latest findings in Artificial Intelligence and Robotics have paved the way for an increasing number of advanced robot-oriented applications, leading to the hypothesis that robotics and Artificial Intelligence will eventually outperform humans at any kind of job. It therefore falls to developers to identify the features humans need in order to trust and engage with robots. Following this direction, in this paper we provide a general, flexible, and scalable architecture for enhanced learning through human–robot interaction, in which different modules tailored for the learning domain can be developed and whose priority of execution can be easily configured. We developed four modules using Zora (an extension of NAO) as the humanoid robotic platform. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
18 pages, 651 KiB  
Article
Video Captioning Based on Channel Soft Attention and Semantic Reconstructor
by Zhou Lei and Yiyong Huang
Future Internet 2021, 13(2), 55; https://doi.org/10.3390/fi13020055 - 23 Feb 2021
Cited by 12 | Viewed by 2161
Abstract
Video captioning is a popular task which automatically generates a natural-language sentence to describe video content. Previous video captioning works mainly use the encoder–decoder framework and exploit special techniques such as attention mechanisms to improve the quality of generated sentences. In addition, most attention mechanisms focus on global features and spatial features. However, global features are usually fully connected features. Recurrent convolution networks (RCNs) receive 3-dimensional features as input at each time step, but the temporal structure of each channel, which provides temporal relation information for that channel, has been ignored. In this paper, a video captioning model based on channel soft attention and a semantic reconstructor is proposed, which considers the global information for each channel. In a video feature map sequence, the same channel of every time step is generated by the same convolutional kernel. We selectively collect the features generated by each convolutional kernel and then input the weighted sum of each channel to the RCN at each time step to encode the video representation. Furthermore, a semantic reconstructor is proposed to rebuild semantic vectors to ensure the integrity of semantic information in the training process, which takes advantage of both forward (semantic to sentence) and backward (sentence to semantic) flows. Experimental results on the popular datasets MSVD and MSR-VTT demonstrate the effectiveness and feasibility of our model. Full article
(This article belongs to the Section Techno-Social Smart Systems)

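The channel soft attention described in the abstract above can be sketched roughly as follows: additive attention scores are computed for every (time step, channel) pair, normalized over time per channel, and the per-channel weighted sum is fed to the recurrent network. All names, shapes, and the randomly initialized parameters below are hypothetical stand-ins for the paper's learned model, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, C, F, D = 8, 16, 49, 32          # time steps, channels, pooled spatial dim, hidden size
feats = rng.standard_normal((T, C, F))   # per-channel descriptors from the CNN
hidden = rng.standard_normal(D)          # current decoder/RCN hidden state

# Stand-ins for learned attention parameters.
W_f = rng.standard_normal((F, D)) * 0.1
W_h = rng.standard_normal((D, D)) * 0.1
v = rng.standard_normal(D) * 0.1

def channel_soft_attention(feats, hidden):
    # Additive attention score for every (time step, channel) pair.
    scores = np.tanh(feats @ W_f + hidden @ W_h) @ v        # (T, C)
    # Normalize over time separately for each channel, so each
    # convolutional kernel's responses are weighted across the sequence.
    alpha = np.exp(scores - scores.max(axis=0))
    alpha /= alpha.sum(axis=0)                              # (T, C)
    # The weighted sum per channel becomes the RCN input for this step.
    return np.einsum('tc,tcf->cf', alpha, feats)            # (C, F)

context = channel_soft_attention(feats, hidden)
print(context.shape)   # (16, 49)
```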
13 pages, 787 KiB  
Article
Load Balancing Oriented Predictive Routing Algorithm for Data Center Networks
by Yazhi Liu, Jiye Zhang, Wei Li, Qianqian Wu and Pengmiao Li
Future Internet 2021, 13(2), 54; https://doi.org/10.3390/fi13020054 - 22 Feb 2021
Cited by 9 | Viewed by 2767
Abstract
A data center undertakes increasing background services of various applications, and the data flows transmitted between the nodes in data center networks (DCNs) increase accordingly. At the same time, the traffic of each link in a DCN changes dynamically over time. Flow scheduling algorithms can improve the distribution of data flows among the network links so as to improve the balance of link loads in a DCN. However, most current load-balancing works make flow scheduling decisions for the current links on the basis of past link load conditions, which prevents existing link scheduling methods from making optimal decisions for scheduling data flows among the network links in a DCN. This paper proposes a predictive link load balance routing algorithm for a DCN based on residual networks (ResNet), i.e., the link load balance route (LLBR) algorithm. The LLBR algorithm predicts the occupancy of the network links in the next duty cycle using the ResNet architecture, and then the optimal traffic route is selected according to the predicted network environment. The LLBR algorithm, round-robin scheduling (RRS), and weighted round-robin scheduling (WRRS) are used in the same experimental environment. Experimental results show that, compared with the WRRS and RRS, the LLBR algorithm can reduce the transmission time by approximately 50%, reduce the packet loss rate from 0.05% to 0.02%, and improve the bandwidth utilization by 30%. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

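A minimal sketch of the routing step only, with the ResNet predictor replaced by stub per-link load predictions; the toy topology, node names, and the additive path cost below are illustrative assumptions, not the paper's exact formulation.

```python
import networkx as nx

# Toy leaf-spine fragment of a DCN; each edge's "load" attribute holds the
# occupancy predicted for the next duty cycle (a stub in place of ResNet).
G = nx.Graph()
predicted = [("leaf1", "spine1", 0.7), ("leaf1", "spine2", 0.2),
             ("leaf2", "spine1", 0.4), ("leaf2", "spine2", 0.6)]
for u, v, load in predicted:
    G.add_edge(u, v, load=load)

def llbr_route(G, src, dst):
    # Route along the path with the lowest total predicted load.
    return nx.shortest_path(G, src, dst, weight="load")

print(llbr_route(G, "leaf1", "leaf2"))
# ['leaf1', 'spine2', 'leaf2']  (0.2 + 0.6 beats 0.7 + 0.4)
```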
17 pages, 628 KiB  
Article
ARIBC: Online Reporting Based on Identity-Based Cryptography
by Athanasios Goudosis and Sokratis Katsikas
Future Internet 2021, 13(2), 53; https://doi.org/10.3390/fi13020053 - 21 Feb 2021
Viewed by 2251
Abstract
The reporting of incidents of misconduct, violence, sexual assault, harassment, and other types of crime that constitute a major concern in modern society is of significant value when investigating such incidents. Unfortunately, people involved in such incidents, either as witnesses or victims, are often reluctant to report them when such reporting demands revealing the reporter’s true identity. In this paper, we propose an online reporting system that leverages Identity-Based Cryptography (IBC) and offers data authentication, data integrity, and data confidentiality services to both eponymous and anonymous users. The system, called ARIBC, is founded on a certificate-less, public-key, IBC infrastructure, implemented by employing the Sakai–Kasahara approach and by following the IEEE 1363.3-2013 standard. We develop a proof-of-concept implementation of the proposed scheme, and demonstrate its applicability in environments with constrained human, organizational and/or computational resources. The computational overheads imposed by the scheme are found to be well within the capabilities of modern fixed or mobile devices. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

16 pages, 1932 KiB  
Article
Assessing Digital Transformation in Universities
by Guillermo Rodríguez-Abitia and Graciela Bribiesca-Correa
Future Internet 2021, 13(2), 52; https://doi.org/10.3390/fi13020052 - 20 Feb 2021
Cited by 74 | Viewed by 12300
Abstract
Industry 4.0 and Society 5.0 are reshaping the way organizations function and interact with the communities they serve. The massive penetration of computer and network applications forces organizations to digitalize their processes and provide innovative products, services, and business models. The education market is undergoing changes as well, but universities seem slow to react. This paper proposes the application of an integrated digital transformation model to assess the maturity level that educational institutions have reached in their digital transformation processes and compares them to other industries. Particular considerations to address when using the model for higher-education institutions are discussed. Our results show that universities fall behind other sectors, probably due to a lack of effective leadership and changes in culture, compounded by insufficient innovation and financial support. Full article

15 pages, 1335 KiB  
Article
Interpretable Variational Graph Autoencoder with Noninformative Prior
by Lili Sun, Xueyan Liu, Min Zhao and Bo Yang
Future Internet 2021, 13(2), 51; https://doi.org/10.3390/fi13020051 - 18 Feb 2021
Cited by 2 | Viewed by 3811
Abstract
Variational graph autoencoders, which can encode structural information and attribute information in a graph into low-dimensional representations, have become a powerful method for studying graph-structured data. However, most existing methods based on variational (graph) autoencoders assume that the prior of the latent variables obeys the standard normal distribution, which encourages all nodes to gather around 0 and makes it impossible to fully utilize the latent space. Choosing a suitable prior without incorporating additional expert knowledge therefore becomes a challenge. Given this, we propose a novel noninformative prior-based interpretable variational graph autoencoder (NPIVGAE). Specifically, we exploit the noninformative prior as the prior distribution of the latent variables. This prior enables the posterior distribution parameters to be learned almost entirely from the sample data. Furthermore, we regard each dimension of a latent variable as the probability that the node belongs to each block, thereby improving the interpretability of the model. The correlation within and between blocks is described by a block–block correlation matrix. We compare our model with state-of-the-art methods on three real datasets, verifying its effectiveness and superiority. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

16 pages, 1393 KiB  
Article
A Sign of Things to Come: Predicting the Perception of Above-the-Fold Time in Web Browsing
by Hamed Z. Jahromi, Declan Delaney and Andrew Hines
Future Internet 2021, 13(2), 50; https://doi.org/10.3390/fi13020050 - 17 Feb 2021
Cited by 1 | Viewed by 2652
Abstract
Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user's satisfaction can be influenced by how long it takes to render and visualize the visible parts of the web page in the browser. This is referred to as the Above-the-Fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users' attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using the state-of-the-art metrics may not accurately match the ATF completion time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates ATF time as perceived by users for websites with and without animations. PCT can be integrated with SI and web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric holds a high positive Spearman's correlation (rs = 0.89) with the perceived ATF reported by users for websites with and without animated content. This study demonstrates that using PCT as a KPI in QoE estimation models can improve the robustness of QoE estimation in comparison to using the state-of-the-art ATF time metric. Furthermore, experimental results showed that estimating SI using PCT improves the robustness of SI for websites with animated content. The PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and refactor their implementation before it impacts the end-user experience. Full article

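The Spearman rank correlation reported above can be reproduced in form (not in value) with SciPy; the per-page numbers below are made up purely for illustration.

```python
from scipy.stats import spearmanr

# Made-up per-page values: metric estimates vs. user-reported ATF times (s).
pct_estimate  = [1.2, 2.8, 0.9, 3.5, 2.1, 4.0]
perceived_atf = [1.0, 3.0, 1.1, 3.9, 2.0, 4.4]

rho, p = spearmanr(pct_estimate, perceived_atf)
print(f"Spearman rs = {rho:.2f} (p = {p:.3f})")
```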
19 pages, 652 KiB  
Article
Semantic Task Planning for Service Robots in Open Worlds
by Guowei Cui, Wei Shuai and Xiaoping Chen
Future Internet 2021, 13(2), 49; https://doi.org/10.3390/fi13020049 - 17 Feb 2021
Cited by 8 | Viewed by 2506
Abstract
This paper presents a planning system based on semantic reasoning for a general-purpose service robot, which is aimed at behaving more intelligently in domains that contain incomplete information, under-specified goals, and dynamic changes. First, two kinds of data are generated from speech by a Natural Language Processing module: (i) action frames and their relationships; and (ii) the modifiers used to indicate some property or characteristic of a variable in an action frame. Next, the task's goals are generated from these action frames and modifiers. These goals are represented as AI symbols, combining world state and domain knowledge, which are used to generate plans by an Answer Set Programming solver. Finally, the plan's actions are executed one by one, and continuous sensing grounds useful information, which lets the robot use contingent knowledge to adapt to dynamic changes and faults. For each action in the plan, the planner gets its preconditions and effects from domain knowledge, so during the execution of the task, environmental changes, especially those that conflict with the actions (not only the action being performed but also subsequent actions), can be detected and handled as early as possible. A series of case studies are used to evaluate the system and verify its ability to acquire knowledge through dialogue with users, solve problems with the acquired causal knowledge, and plan for complex tasks autonomously in the open world. Full article
(This article belongs to the Special Issue Service-Oriented Systems and Applications)

21 pages, 1463 KiB  
Review
Blockchain-Enabled Edge Intelligence for IoT: Background, Emerging Trends and Open Issues
by Yao Du, Zehua Wang and Victor C. M. Leung
Future Internet 2021, 13(2), 48; https://doi.org/10.3390/fi13020048 - 17 Feb 2021
Cited by 30 | Viewed by 6629
Abstract
Blockchain, a distributed ledger technology (DLT), refers to a list of records with consecutive time stamps. This decentralization technology has become a powerful model to establish trust among trustless entities, in a verifiable manner. Motivated by the recent advancement of multi-access edge computing (MEC) and artificial intelligence (AI), blockchain-enabled edge intelligence has become an emerging technology for the Internet of Things (IoT). We review how blockchain-enabled edge intelligence works in the IoT domain, identify the emerging trends, and suggest open issues for further research. To be specific: (1) we first offer some basic knowledge of DLT, MEC, and AI; (2) a comprehensive review of current peer-reviewed literature is given to identify emerging trends in this research area; and (3) we discuss some open issues and research gaps for future investigations. We expect that blockchain-enabled edge intelligence will become an important enabler of future IoT, providing trust and intelligence to satisfy the sophisticated needs of industries and society. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Internet of Things Section)

13 pages, 2520 KiB  
Article
An Accelerating Approach for Blockchain Information Transmission Based on NDN
by Zhi-Peng Yang, Lu Hua, Ning-Jie Gao, Ru Huo, Jiang Liu and Tao Huang
Future Internet 2021, 13(2), 47; https://doi.org/10.3390/fi13020047 - 14 Feb 2021
Cited by 4 | Viewed by 2458
Abstract
Blockchain is becoming more and more popular in various fields. Since the information transmission mode of the blockchain is data broadcasting, the traditional TCP/IP network cannot support the blockchain system well, but Named-Data Networking (NDN) could be a good choice because of its multi-path forwarding and in-network caching functions. In this article, we propose a new blockchain information transmission acceleration strategy (AITS) that combines graph theory and probability theory, based on the NDN architecture. We select the more important nodes in the network as "secondary nodes" and give them more bandwidth and cache space to assist the NDN network in data transmission. In order to select the correct nodes as secondary nodes, we present a method to calculate the number of secondary nodes and give a function to calculate the importance of each node. The simulation results show that, in complex networks, the proposed method has superior performance in accelerating information transmission and reducing data overhead. Full article
(This article belongs to the Special Issue The Next Blockchain Wave Current Challenges and Future Prospects)

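A sketch of the secondary-node selection idea, using betweenness centrality as a stand-in importance function; the paper derives both the number of secondary nodes and the importance function itself, so the budget and score below are illustrative assumptions only.

```python
import networkx as nx

G = nx.barabasi_albert_graph(50, 2, seed=1)   # stand-in for an NDN topology

k = max(1, G.number_of_nodes() // 10)         # assumed budget: ~10% of nodes

# Stand-in importance score; the paper defines its own importance function.
importance = nx.betweenness_centrality(G)
secondary = sorted(importance, key=importance.get, reverse=True)[:k]
print("secondary nodes:", secondary)
```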
24 pages, 1040 KiB  
Article
Use of Social Media Data in Disaster Management: A Survey
by Jedsada Phengsuwan, Tejal Shah, Nipun Balan Thekkummal, Zhenyu Wen, Rui Sun, Divya Pullarkatt, Hemalatha Thirugnanam, Maneesha Vinodini Ramesh, Graham Morgan, Philip James and Rajiv Ranjan
Future Internet 2021, 13(2), 46; https://doi.org/10.3390/fi13020046 - 12 Feb 2021
Cited by 43 | Viewed by 10919
Abstract
Social media has played a significant role in disaster management, as it enables the general public to contribute to the monitoring of disasters by reporting incidents related to disaster events. However, the vast volume and wide variety of generated social media data create an obstacle in disaster management by limiting the availability of actionable information from social media. Several approaches have therefore been proposed in the literature to cope with the challenges of social media data for disaster management. To the best of our knowledge, there is no published literature on social media data management and analysis that identifies the research problems and provides a research taxonomy for the classification of the common research issues. In this paper, we provide a survey of how social media data contribute to disaster management and the methodologies for social media data management and analysis in disaster management. This survey includes the methodologies for social media data classification and event detection as well as spatial and temporal information extraction. Furthermore, a taxonomy of the research dimensions of social media data management and analysis for disaster management is also proposed, which is then applied to a survey of existing literature and to discuss the core advantages and disadvantages of the various methodologies. Full article
(This article belongs to the Special Issue AI and IoT technologies in Smart Cities)

17 pages, 5444 KiB  
Article
Dashboard COMPRIME_COMPRI_MOv: Multiscalar Spatio-Temporal Monitoring of the COVID-19 Pandemic in Portugal
by Nuno Marques da Costa, Nelson Mileu and André Alves
Future Internet 2021, 13(2), 45; https://doi.org/10.3390/fi13020045 - 12 Feb 2021
Cited by 9 | Viewed by 3008
Abstract
Due to its novelty, the recent pandemic of the coronavirus disease (COVID-19), which is associated with the spread of the new severe acute respiratory syndrome coronavirus (SARS-CoV-2), triggered the public's interest in accessing information, demonstrating the importance of obtaining and analyzing credible and updated information in an epidemiological surveillance context. For this purpose, health authorities, international organizations, and university institutions have published online various graphic and cartographic representations of the evolution of the pandemic, with daily updates that allow almost real-time monitoring of the evolutionary behavior of the spread, lethality, and territorial distribution of the disease. The purpose of this article is to describe the technical solution and the main results associated with the publication of the COMPRIME_COMPRI_MOv dashboard for the dissemination of information and multi-scale knowledge of COVID-19. Under two rapid-implementation research projects for innovative solutions in response to the COVID-19 pandemic, promoted in Portugal by the FCT (Foundation for Science and Technology), a website was created that brings together a diverse set of variables and indicators in a dynamic and interactive way, reflecting the evolutionary behavior of the pandemic in Portugal from a multi-scale perspective and constituting a system for monitoring its evolution. In the current situation, this type of exploratory solution proves crucial to guarantee everyone's access to information, while simultaneously emerging as an epidemiological surveillance tool capable of assisting decision-making by public authorities with competence in defining policies to control and fight the spread of the new coronavirus. Full article
(This article belongs to the Special Issue Data Science and Knowledge Discovery)

28 pages, 846 KiB  
Article
Digital Regulation of Intellectual Capital for Open Innovation: Industries’ Expert Assessments of Tacit Knowledge for Controlling and Networking Outcome
by Nadezhda N. Pokrovskaia, Olga N. Korableva, Lucio Cappelli and Denis A. Fedorov
Future Internet 2021, 13(2), 44; https://doi.org/10.3390/fi13020044 - 10 Feb 2021
Cited by 5 | Viewed by 3023
Abstract
Digital regulation implies quantified measuring and a network infrastructure allowing managers to control the processes of value creation. Digital regulation needs to take into account tacit elements of the value creation process, including unconscious competency, creativity, and intuitive anticipation, to assure the resulting network's innovation growth. Digital society in developing countries is built on rapid change of the economy and social relations and on a transition towards an emerging market within the global offline network of interactions and online activities through the Internet; innovative growth imposes an evolution of managerial behavior and attitudes. The main objective of the paper is to obtain indications of the perception of intellectual capital by corporate managers. The exploratory study was carried out in Russian companies operating in different sectors, with the use of an open-ended approach, including focused interviews and group discussion among experts and middle and senior managers from marketing or corporate governance backgrounds. The data were complemented by documentary analysis of descriptions of the internal processes of implementing digital accounting tools, including the human resources control applied to remote work during the pandemic. Networking helps to coordinate functions between team members working remotely and between teams and administrators. The interviews demonstrated an administrative tendency to underestimate the non-formalized factors of innovation activity, such as awareness of corporate strategy, creativity, motivation, and the affective and behavioral components of communication of the persons involved in the enrichment of intellectual capital. The results show fuzzy boundaries between the intellectual capital components that are difficult to control. This difficulty provokes a preference for "traditional" quantitative indicators that were implemented at the stage of financial digitalization, instead of developing new parameters or measuring approaches. Networking produces a synergetic effect if administrators give up their monopoly on the uncertainty zones and are oriented towards constructing a trustful atmosphere of personal responsibility within the network. Full article
(This article belongs to the Special Issue Digital Society Challenges in Developing Countries)

22 pages, 685 KiB  
Article
Digital Communication Tools and Knowledge Creation Processes for Enriched Intellectual Outcome—Experience of Short-Term E-Learning Courses during Pandemic
by Nadezhda N. Pokrovskaia, Veronika L. Leontyeva, Marianna Yu. Ababkova, Lucio Cappelli and Fabrizio D’Ascenzo
Future Internet 2021, 13(2), 43; https://doi.org/10.3390/fi13020043 - 05 Feb 2021
Cited by 20 | Viewed by 8604
Abstract
Social isolation during the pandemic contributed to the transition of educational processes to e-learning. A short-term e-marketing education program for a variety of students was introduced in May 2020 and is taught entirely online. A survey was conducted regularly in the last week of training using Google Forms, and three cohorts were surveyed in July, September, and December 2020. A high level of satisfaction indicates an interest in the content and a positive assessment of the comfort of an organization adapted to the needs of students; this positive result contrasted with the negative opinion of remote learning in Russia since March 2020, and this surprising satisfaction motivated the study to try to explain its reasons. This result was compared with a short-term course taught through the pedagogical platform of a university. The students of traditional short- and long-term university programs were asked to assess their satisfaction with the different digital communication tools used for e-learning. They showed low satisfaction with the pedagogical platform and a positive reaction to the e-communication tools (messengers, social media, short surveys, video conferences, etc.). The qualitative responses helped to better understand the real problems of the cognitive process and the triple structure of intellectual production during e-learning, including interest in the intellectual outcome, the need for emotional and motivational elements of cooperation and competition between students, and smooth behavioral enrichment, which requires special effort from students and guidance from teachers. The main conclusion concerns a practical decision to continue the implementation of the educational program in the form of an online course using the mixed digital communication tools of social media, messengers, and video conferences, which most likely meets the expectations and capabilities of students. Full article
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)

20 pages, 318 KiB  
Review
Intelligent and Autonomous Management in Cloud-Native Future Networks—A Survey on Related Standards from an Architectural Perspective
by Qiang Duan
Future Internet 2021, 13(2), 42; https://doi.org/10.3390/fi13020042 - 05 Feb 2021
Cited by 20 | Viewed by 3919
Abstract
Cloud-native network design, which leverages network virtualization and softwarization together with the service-oriented architectural principle, is transforming communication networks into a versatile platform for converged network-cloud/edge service provisioning. Intelligent and autonomous management is one of the most challenging issues in cloud-native future networks, and a wide range of machine learning (ML)-based technologies have been proposed for addressing different aspects of the management challenge. It is critical that the various management technologies are applied on the foundation of a consistent architectural framework with a holistic vision. This calls for the standardization of a new management architecture that supports the seamless integration of diverse ML-based technologies in cloud-native future networks. The goal of this paper is to provide a big picture of the recent developments of architectural frameworks for intelligent and autonomous management for future networks. The paper surveys the latest progress in the standardization of network management architectures, including works by 3GPP, ETSI, and ITU-T, and analyzes how cloud-native network design may facilitate the architecture development for addressing management challenges. Open issues related to intelligent and autonomous management in cloud-native future networks are also discussed to identify possible directions for future research and development. Full article
(This article belongs to the Special Issue Cloud-Native Applications and Services)

14 pages, 1653 KiB  
Article
Research Professors’ Self-Assessment of Competencies
by Gabriela Torres Delgado and Neil Hernández-Gress
Future Internet 2021, 13(2), 41; https://doi.org/10.3390/fi13020041 - 04 Feb 2021
Cited by 3 | Viewed by 2748
Abstract
Research professors develop scientific products that impact and benefit society, but their competencies in doing so are rarely evaluated. Therefore, by employing a mixed two-stage sequential design, this study developed a self-assessment model of research professors’ competencies with four domains, seven competencies, and 30 competency elements. Next, we conducted descriptive statistical analysis of those elements. In the first year, 320 respondents rated themselves on four levels: initial, basic, autonomous, and consolidated. In the assessment model’s second year, we compared 30 respondents’ results with those of their initial self-assessment. The main developmental challenge was Originality and Innovation, which remained at the initial level. Both Training of Researchers and Transformation of Society were at the basic level, and Digital Competency was at the autonomous level. Both Teaching Competence and Ethics and Citizenship attained the consolidated level. This information helps establish priorities for accelerating researchers’ training and the quality of their research. Full article

19 pages, 1781 KiB  
Article
An Automatic Generation Approach of the Cyber Threat Intelligence Records Based on Multi-Source Information Fusion
by Tianfang Sun, Pin Yang, Mengming Li and Shan Liao
Future Internet 2021, 13(2), 40; https://doi.org/10.3390/fi13020040 - 02 Feb 2021
Cited by 17 | Viewed by 4040
Abstract
With the progressive deterioration of cyber threats, collecting cyber threat intelligence (CTI) from open-source threat intelligence publishing platforms (OSTIPs) can help information security personnel grasp public opinions with specific pertinence, handle emergency events, and even confront advanced persistent threats. However, due to the explosive growth of information shared on multi-type OSTIPs, manually collecting CTI is inefficient. Articles published on OSTIPs are unstructured, making it a pressing challenge to automatically gather CTI records through natural language processing (NLP) methods alone. To remedy these limitations, this paper proposes an automatic approach to generate CTI records based on multi-type OSTIPs (GCO), combining NLP methods, machine learning methods, and cybersecurity threat intelligence knowledge. The experiment results demonstrate that the proposed GCO outperformed some state-of-the-art approaches on article classification and cybersecurity intelligence details (CSIs) extraction, with accuracy, precision, and recall all over 93%; finally, the generated records in the Neo4j-based CTI database can help reveal malicious threat groups. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

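A toy sketch of the article-classification stage only, assuming a TF-IDF plus logistic regression pipeline; the paper's actual classifier, features, and corpus are not given here, so everything below (documents, labels, model choice) is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: 1 = threat-relevant article, 0 = not.
docs = ["new ransomware campaign targets hospitals via phishing",
        "APT group exploits zero-day in VPN appliances",
        "vendor announces quarterly earnings and product roadmap",
        "conference schedule for the cloud developer summit"]
labels = [1, 1, 0, 0]

# Classify incoming OSTIP articles before extracting CSIs from them.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["phishing kit spreads banking trojan"]))  # expected: [1]
```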
40 pages, 620 KiB  
Review
A Systematic Review of Cybersecurity Risks in Higher Education
by Joachim Bjørge Ulven and Gaute Wangen
Future Internet 2021, 13(2), 39; https://doi.org/10.3390/fi13020039 - 02 Feb 2021
Cited by 54 | Viewed by 23996
Abstract
The demands for information security in higher education will continue to increase. Serious data breaches have occurred already and are likely to happen again without proper risk management. This paper applies the Comprehensive Literature Review (CLR) Model to synthesize research within cybersecurity risk by reviewing existing literature on known assets, threat events, threat actors, and vulnerabilities in higher education. The review included published studies from the last twelve years and aims to expand our understanding of cybersecurity's critical risk areas. The primary finding was that empirical research on cybersecurity risks in higher education is scarce, and there are large gaps in the literature. Despite this issue, our analysis found a high level of agreement regarding cybersecurity issues among the reviewed sources. This paper synthesizes an overview of mission-critical assets and everyday threat events, proposes a generic threat model, and summarizes common cybersecurity vulnerabilities. The report concludes with nine strategic cyber risks, with descriptions of frequencies from the compiled dataset and consequence descriptions. The results will serve as input for security practitioners in higher education, and the research contains multiple paths for future work. It will serve as a starting point for security researchers in the sector. Full article
(This article belongs to the Special Issue Feature Papers for Future Internet—Cybersecurity Section)

16 pages, 873 KiB  
Article
Adaptive Weighted Multi-Level Fusion of Multi-Scale Features: A New Approach to Pedestrian Detection
by Yao Xu and Qin Yu
Future Internet 2021, 13(2), 38; https://doi.org/10.3390/fi13020038 - 02 Feb 2021
Cited by 3 | Viewed by 2402
Abstract
Great achievements have been made in pedestrian detection through deep learning. For detectors based on deep learning, making better use of features has become the key to their detection effect. While current pedestrian detectors have made efforts in feature utilization to improve their detection performance, feature utilization is still inadequate. To solve this problem, we propose the Multi-Level Feature Fusion Module (MFFM) and its Multi-Scale Feature Fusion Unit (MFFU) sub-module, which connect feature maps of the same scale and of different scales by using horizontal and vertical connections and shortcut structures. All of these connections carry learnable weights; thus, they serve as adaptive multi-level and multi-scale feature fusion modules that fuse the best features. We then built a complete pedestrian detector, the Adaptive Feature Fusion Detector (AFFDet), an anchor-free one-stage pedestrian detector that can make full use of features for detection. As a result, compared with other methods, our method performs better on the challenging Caltech Pedestrian Detection Benchmark (Caltech) at quite competitive speed. It is the current state-of-the-art one-stage pedestrian detection method. Full article

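A minimal sketch of adaptive weighted fusion in isolation: feature maps from several levels, already resized to a common shape, are combined with softmax-normalized learnable weights. The weights are randomly initialized here purely for illustration; MFFM/MFFU additionally use horizontal, vertical, and shortcut connections not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature maps from three pyramid levels, resized to a common (C, H, W) shape.
maps = [rng.standard_normal((16, 32, 32)) for _ in range(3)]

# Learnable per-level fusion weights (random stand-ins in this sketch);
# softmax keeps them positive and summing to one, letting training
# adaptively emphasize the most useful level.
w = rng.standard_normal(3)
alpha = np.exp(w - w.max())
alpha /= alpha.sum()

fused = sum(a * m for a, m in zip(alpha, maps))
print(fused.shape)  # (16, 32, 32)
```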
11 pages, 706 KiB  
Article
Collaborative Filtering Based on a Variational Gaussian Mixture Model
by FengLei Yang, Fei Liu and ShanShan Liu
Future Internet 2021, 13(2), 37; https://doi.org/10.3390/fi13020037 - 01 Feb 2021
Cited by 5 | Viewed by 2528
Abstract
Collaborative filtering (CF) is a widely used method in recommendation systems. Linear models are still the mainstream of collaborative filtering research, but non-linear probabilistic models go beyond the limits of linear model capacity. For example, variational autoencoders (VAEs) have been extensively used in CF and have achieved excellent results. However, the prior distribution over the latent codes of VAEs in traditional CF is too simple, which makes the latent representations of users and items too poor. This paper proposes a variational autoencoder that uses a Gaussian mixture model for the latent factor distribution in CF, GVAE-CF. On this basis, an optimization function suitable for GVAE-CF is proposed. In our experimental evaluation, we show that the recommendation performance of GVAE-CF outperforms the previously proposed VAE-based models on several popular benchmark datasets in terms of recall and normalized discounted cumulative gain (NDCG), thus proving the effectiveness of the algorithm. Full article

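To see why a Gaussian mixture prior avoids forcing all latent codes toward 0, here is a minimal sketch that samples latent factors from an assumed K-component mixture; the weights, means, and scales are placeholders, not the values GVAE-CF would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 5, 16                               # mixture components, latent dimension
weights = np.full(K, 1.0 / K)              # uniform mixture weights (assumed)
means = rng.standard_normal((K, D)) * 2.0  # component centres spread across space
stds = np.full((K, D), 0.5)

def sample_prior(n):
    """Draw latent codes from the GMM prior instead of N(0, I)."""
    comps = rng.choice(K, size=n, p=weights)
    return means[comps] + stds[comps] * rng.standard_normal((n, D))

z = sample_prior(1000)
print(z.mean(axis=0)[:4])   # component structure: no longer piled up at 0
```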
17 pages, 1430 KiB  
Article
High Performance Graph Data Imputation on Multiple GPUs
by Chao Zhou and Tao Zhang
Future Internet 2021, 13(2), 36; https://doi.org/10.3390/fi13020036 - 31 Jan 2021
Cited by 1 | Viewed by 1941
Abstract
In real applications, massive data with graph structures are often incomplete due to various restrictions. Therefore, graph data imputation algorithms have been widely used in the fields of social networks, sensor networks, and MRI to solve the graph data completion problem. To keep the data relevant, the data structure is represented by a graph-tensor, in which each matrix is the value of a vertex of a weighted graph. The convolutional imputation algorithm has been proposed to solve the low-rank graph-tensor completion problem in which some data matrices are entirely unobserved. However, this data imputation algorithm has limited application scope because it is compute-intensive and performs poorly on CPUs. In this paper, we propose a scheme to perform the convolutional imputation algorithm with higher time performance on GPUs (Graphics Processing Units) by exploiting the multi-core CUDA architecture of GPUs. We propose optimization strategies to achieve coalesced memory access for graph Fourier transform (GFT) computation and to improve the utilization of GPU SM resources for singular value decomposition (SVD) computation. Furthermore, we design a scheme to extend the GPU-optimized implementation to multiple GPUs for large-scale computing. Experimental results show that the GPU implementation is both fast and accurate. On synthetic data of varying sizes, the GPU-optimized implementation running on a single Quadro RTX6000 GPU achieves up to 60.50× speedups over the GPU-baseline implementation. The multi-GPU implementation achieves up to 1.81× speedups on two GPUs versus the GPU-optimized implementation on a single GPU. On the ego-Facebook dataset, the GPU-optimized implementation achieves up to 77.88× speedups over the GPU-baseline implementation. Meanwhile, the GPU implementation and the CPU implementation achieve similarly low recovery errors. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)

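A CPU-side NumPy sketch of the core pipeline the GPU scheme accelerates: a graph Fourier transform via Laplacian eigenvectors, a truncated SVD of each spectral slice, and the inverse transform. Sizes, rank, and data are illustrative; the real convolutional imputation algorithm iterates with observed-entry constraints rather than doing a single pass.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 6, 8, 8                      # graph vertices; matrix size per vertex
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(1)) - A              # graph Laplacian of the weighted graph
_, U = np.linalg.eigh(L)               # GFT basis: Laplacian eigenvectors

X = rng.standard_normal((n, p, q))     # graph-tensor: one matrix per vertex

# Forward GFT mixes the vertex dimension: X_hat[k] = sum_i U[i, k] * X[i].
X_hat = np.einsum('ik,ipq->kpq', U, X)

# Low-rank projection of each spectral slice via truncated SVD (rank r).
r = 2
for k in range(n):
    u, s, vt = np.linalg.svd(X_hat[k], full_matrices=False)
    X_hat[k] = (u[:, :r] * s[:r]) @ vt[:r]

# Inverse GFT returns to the vertex domain.
X_rec = np.einsum('ik,kpq->ipq', U, X_hat)
print(X_rec.shape)  # (6, 8, 8)
```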
24 pages, 1004 KiB  
Article
Teaching Physics for Computer Science Students in Higher Education During the COVID-19 Pandemic: A Fully Internet-Supported Course
by Francisco Delgado
Future Internet 2021, 13(2), 35; https://doi.org/10.3390/fi13020035 - 29 Jan 2021
Cited by 25 | Viewed by 4843
Abstract
The COVID-19 pandemic has modified and diversified the ways that students receive education. During confinements, complex courses integrating previous knowledge must be carefully designed and implemented to effectively replace the elements present in face-to-face learning and to improve the students' experience. This work assesses the implementation of a digital-learning physics course for computer science students in a skill-based education program in higher education. The assessment was useful for the institution to evaluate whether the digital strategy implemented in the course fulfilled the original premises and objectives. The analyses performed provide useful knowledge of the theoretical and operational actions applied in this methodology that could be adapted to similar courses for the younger generations at this university. COVID-19 confinement will continue in Mexico in 2021; this assessment resulted in a positive evaluation of the digital strategy being followed, which can be continued while the contingency lasts. Three teachers came together to design math, physics, and computational sciences content for various sections of a physics course. The analysis was developed and implemented according to an institutional digital delivery model for the COVID-19 pandemic. Elements related to attendance, digital access, performance distribution by gender, activity types, and the course learning sections were considered. The analysis was performed with some techniques found in the literature for small groups, complemented when necessary by standard statistical tests to discern meaningful trends. A primary goal was to assess skill-based learning in the course delivered digitally due to the COVID-19 confinement. Furthermore, additional issues concerning the learning dynamics were investigated, reported, and analyzed. Finally, the outcomes of an institutional exit survey collecting students' opinions supported certain observed behaviors. The analysis produced meaningful evidence that the course's skill-based development was well supported by the digital delivery during the confinement. Furthermore, differences in the students' performances in the various course content sections proved statistically significant and are discussed in this work. Full article

17 pages, 296 KiB  
Article
Council Press Offices as Sources of Political Information: Between Journalism for Accountability and Propaganda
by Vanessa Rodríguez-Breijo, Núria Simelio and Pedro Molina-Rodríguez-Navas
Future Internet 2021, 13(2), 34; https://doi.org/10.3390/fi13020034 - 29 Jan 2021
Cited by 4 | Viewed by 2407
Abstract
This study uses a qualitative approach to examine what political and technical leaders of municipalities understand transparency and public information to mean, and what role they believe the different subjects involved (government, opposition, and the public) should have. The websites of 605 Spanish councils with more than 100,000 inhabitants were analysed, and three focus groups were held with political and technical leaders from a selection of sample councils. The results show that the technical and political leaders of the councils do not have a clear awareness of their accountability function or of the need to apply journalistic criteria to the information they publish, defending, with nuances, the use of propaganda criteria: a focus on the actions of the local government and its information, a lack of space dedicated to public debate, and little attention to the opposition's actions. In relation to accountability and citizen participation, they have a negative view of citizens, whom they describe as disengaged. However, they emphasize that, internally, it is essential to continue improving the culture of transparency and the public information they provide to citizens. Full article
27 pages, 8823 KiB  
Article
Cooperation in Social Dilemmas: A Group Game Model with Double-Layer Networks
by Dongwei Guo, Mengmeng Fu and Hai Li
Future Internet 2021, 13(2), 33; https://doi.org/10.3390/fi13020033 - 27 Jan 2021
Cited by 3 | Viewed by 2083
Abstract
The combination of complex networks and game theory is one of the most suitable ways to describe the evolutionary laws of various complex systems. In order to explore the evolution of group cooperation in multiple social dilemmas, a model of a group game with a double-layer network is proposed here. Firstly, to simulate a multiplayer game under multiple identities, we combine a double-layer network and the public goods game. Secondly, in order to make an individual's strategy selection process more in line with a practical context, a new strategy learning method that incorporates individual attributes is designed here, referred to as a "public goods game with selection preferences" (PGG-SP), which makes strategic choices more human-like and diversified. Finally, a co-evolution mechanism for strategies and topologies is introduced based on the double-layer network, which effectively explains the dynamic game process in real life. To verify the role of multiple double-layer networks with a PGG-SP, four types of double-layer networks are applied in this paper. In addition, the corresponding game results are compared between single-layer, double-layer, static, and dynamic networks. Accordingly, the results show that double-layer networks can facilitate cooperation in group games. Full article

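For readers unfamiliar with the underlying game, here is a minimal public goods game payoff computation; the group size, cost, and multiplication factor are arbitrary choices, and the selection-preference learning rule and double-layer structure of the paper are not shown.

```python
import numpy as np

def pgg_payoffs(strategies, r=3.0, c=1.0):
    """Payoffs for one public goods game round.

    strategies: 0/1 array (1 = cooperate). Cooperators pay cost c;
    the pot is multiplied by r and shared equally by the whole group.
    """
    s = np.asarray(strategies, dtype=float)
    pot = r * c * s.sum()
    share = pot / s.size
    return share - c * s        # defectors keep the share without paying

print(pgg_payoffs([1, 1, 0, 0]))   # cooperators earn 0.5, defectors 1.5
```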
17 pages, 1563 KiB  
Article
Technology Enhanced Learning Using Humanoid Robots
by Diego Reforgiato Recupero
Future Internet 2021, 13(2), 32; https://doi.org/10.3390/fi13020032 - 27 Jan 2021
Cited by 3 | Viewed by 2593
Abstract
In this paper we present a mixture of technologies tailored for e-learning related to the Deep Learning, Sentiment Analysis, and Semantic Web domains, which we have employed in four different use cases validated in the field of Human-Robot Interaction. The approach has been designed using Zora, a humanoid robot that can be easily extended with new software behaviors. The goal is to make the robot able to engage users through natural language for different tasks. Using our software the robot can (i) talk to the user and understand their sentiments through a dedicated Semantic Sentiment Analysis engine; (ii) answer open-dialog natural language utterances by means of a Generative Conversational Agent; (iii) perform action commands leveraging a defined Robot Action ontology and open-dialog natural language utterances; and (iv) detect which objects the user is handing to it by using convolutional neural networks trained on a large collection of annotated objects. Each module can be extended with more data and information, and the overall architectural design is general, flexible, and scalable, so it can be expanded with other components, thus enriching the interaction with the human. Different applications within the e-learning domain are foreseen: the robot can either be a trainer and autonomously perform physical actions (e.g., in rehabilitation centers), or it can interact with the users (performing simple tests or even identifying emotions) according to the program developed by the teachers. Full article
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)

17 pages, 2829 KiB  
Article
Language Bias in the Google Scholar Ranking Algorithm
by Cristòfol Rovira, Lluís Codina and Carlos Lopezosa
Future Internet 2021, 13(2), 31; https://doi.org/10.3390/fi13020031 - 27 Jan 2021
Cited by 22 | Viewed by 10269
Abstract
The visibility of academic articles or conference papers depends on their being easily found in academic search engines, above all in Google Scholar. To enhance this visibility, search engine optimization (SEO) has been applied in recent years to academic search engines in order to optimize documents and, thereby, ensure they are better ranked in search pages (i.e., academic search engine optimization, or ASEO). To achieve this degree of optimization, we first need to further our understanding of Google Scholar's relevance ranking algorithm, so that, based on this knowledge, we can highlight or improve those characteristics that academic documents already present and which are taken into account by the algorithm. This study seeks to advance our knowledge in this line of research by determining whether the language in which a document is published is a positioning factor in the Google Scholar relevance ranking algorithm. Here, we employ a reverse engineering research methodology based on a statistical analysis that uses Spearman's correlation coefficient. The results obtained point to a bias in multilingual searches conducted in Google Scholar, with documents published in languages other than English being systematically relegated to positions that make them virtually invisible. This finding has important repercussions, both for conducting searches and for optimizing positioning in Google Scholar, and is especially critical for articles on subjects that are expressed in the same way in English and other languages, as is the case, for example, with trademarks, chemical compounds, industrial products, acronyms, drugs, and diseases. Full article
(This article belongs to the Special Issue The Current State of Search Engines and Search Engine Optimization)

34 pages, 3854 KiB  
Article
A Perfect Match: Converging and Automating Privacy and Security Impact Assessment On-the-Fly
by Dimitrios Papamartzivanos, Sofia Anna Menesidou, Panagiotis Gouvas and Thanassis Giannetsos
Future Internet 2021, 13(2), 30; https://doi.org/10.3390/fi13020030 - 27 Jan 2021
Cited by 7 | Viewed by 3402
Abstract
As the upsurge of information and communication technologies has become the foundation of all modern application domains, fueled by the unprecedented amount of data being processed and exchanged, besides security concerns there are also pressing privacy considerations that come into play. Compounding this issue, there is currently a documented gap between the cybersecurity and privacy risk assessment (RA) avenues, which are treated as distinct management processes and capitalise on rather rigid approaches. In this paper, we aim to combine the best of both worlds by proposing the APSIA (Automated Privacy and Security Impact Assessment) methodology. APSIA is powered by the use of interdependency graph models and data processing flows used to create a digital reflection of the cyber-physical environment of an organisation. Along with this model, we present a novel and extensible privacy risk scoring system for quantifying the privacy impact triggered by the identified vulnerabilities of the ICT infrastructure of an organisation. We provide a prototype implementation and demonstrate its applicability and efficacy through a specific case study in the context of a heavily regulated sector (i.e., the assistive healthcare domain), where strict security and privacy considerations are not only expected but mandated, so as to better showcase the beneficial characteristics of APSIA. Our approach can complement any existing security-based RA tool and provide the means to conduct an enhanced, dynamic and generic assessment as an integral part of an iterative and unified risk assessment process on-the-fly. Based on our findings, we posit open issues and challenges, and discuss possible ways to address them, so that such holistic security and privacy mechanisms can reach their full potential towards solving this conundrum. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy)
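The following toy Python sketch, built on networkx, illustrates the general idea behind scoring privacy impact on an interdependency graph: a vulnerability’s effect propagates from the affected asset to every asset that depends on it. The asset names, sensitivity weights, and scoring formula are illustrative assumptions, not APSIA’s actual model.

    import networkx as nx

    # Edges point from an asset to the assets that depend on it.
    g = nx.DiGraph()
    g.add_edge("db_server", "ehr_app")
    g.add_edge("ehr_app", "clinician_portal")

    # Hypothetical per-asset privacy sensitivity weights (0..1).
    sensitivity = {"db_server": 0.9, "ehr_app": 0.8, "clinician_portal": 0.6}

    def privacy_impact(graph, asset, base_score):
        """Sum a CVSS-like base score over the asset and all its dependents,
        weighted by each asset's privacy sensitivity."""
        affected = {asset} | nx.descendants(graph, asset)
        return sum(base_score * sensitivity[a] for a in affected)

    # A vulnerability on the database propagates to the whole chain.
    print(privacy_impact(g, "db_server", base_score=7.5))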
32 pages, 1360 KiB  
Article
How Dramatic Events Can Affect Emotionality in Social Posting: The Impact of COVID-19 on Reddit
by Valerio Basile, Francesco Cauteruccio and Giorgio Terracina
Future Internet 2021, 13(2), 29; https://doi.org/10.3390/fi13020029 - 27 Jan 2021
Cited by 29 | Viewed by 4638
Abstract
The COVID-19 outbreak impacted almost all aspects of ordinary life. In this context, social networks quickly started playing the role of a sounding board for the content produced by people. Studying how dramatic events affect the way people interact with each other and react to poorly known situations is recognized as a relevant research task. Since automatically identifying country-based COVID-19 social posts on generalized social networks, like Twitter and Facebook, is a difficult task, in this work we concentrate on Reddit megathreads, which provide a unique opportunity to study focused reactions of people by both topic and country. We analyze specific reactions and compare them with those from a “normal” period unaffected by the pandemic; in particular, we consider structural variations in social posting behavior, emotional reactions under the Plutchik model of basic emotions, and emotional reactions under unconventional emotions, such as skepticism, which is particularly relevant in the COVID-19 context. Full article
(This article belongs to the Section Techno-Social Smart Systems)
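As a sketch of what lexicon-based emotion analysis of megathread comments can look like, the short Python snippet below counts Plutchik basic emotions with a tiny invented word-to-emotion lexicon. The authors’ actual resources, including any lexicon for unconventional emotions such as skepticism, are not reproduced here.

    from collections import Counter

    # Hypothetical fragment of a Plutchik-style emotion lexicon.
    lexicon = {
        "outbreak": ["fear"],
        "hope": ["anticipation", "joy"],
        "lockdown": ["fear", "sadness"],
        "recover": ["joy", "trust"],
    }

    def emotion_profile(comments):
        """Count emotion labels over all lexicon words found in the comments."""
        counts = Counter()
        for comment in comments:
            for word in comment.lower().split():
                counts.update(lexicon.get(word, []))
        return counts

    megathread = ["Lockdown again but I hope we recover soon",
                  "Outbreak numbers rising"]
    print(emotion_profile(megathread))

Comparing the resulting emotion profiles for a COVID-19 megathread against those from a pre-pandemic baseline period is the kind of contrast the paper quantifies.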
7 pages, 191 KiB  
Editorial
Acknowledgment to Reviewers of Future Internet in 2020
by Future Internet Editorial Office
Future Internet 2021, 13(2), 28; https://doi.org/10.3390/fi13020028 - 24 Jan 2021
Cited by 1 | Viewed by 1703
Abstract
Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Future Internet maintains its standards for the high quality of its published papers [...] Full article
19 pages, 3614 KiB  
Article
Experiment Information System Based on an Online Virtual Laboratory
by Chuanyan Hao, Anqi Zheng, Yuqi Wang and Bo Jiang
Future Internet 2021, 13(2), 27; https://doi.org/10.3390/fi13020027 - 24 Jan 2021
Cited by 15 | Viewed by 5034
Abstract
In the information age, MOOCs (Massive Open Online Courses), micro-classes, flipped classrooms, and other blended teaching scenarios have improved students’ learning outcomes. However, incorporating technologies into experimental courses, especially electronic and electrical experiments, has its own characteristics and difficulties. The focus of this paper is to introduce virtual technology into an electronic circuit experiment course and to explore its teaching strategy, thereby realizing the informatization of experiment teaching. First, this paper explores the design concepts and implementation details of a digital circuit virtual laboratory, which is developed on the basis of the previous literature and a preliminary questionnaire given to users. Second, the informatization of the experiment learning model based on traditional custom lab benches is achieved through a blended learning scheme that integrates the online virtual laboratory. Finally, the experiment information system is verified and analyzed with a control-group experiment and questionnaires. The blended program turned out to be an effective teaching model that complements the deficiencies of existing physical laboratories. The results show that the virtual experiment system provides students with a rich, efficient, and expansive experimental experience; in particular, the flexibility, repeatability, and visual appeal of a virtual platform can promote the development of students’ abilities in active learning, reflective thinking, and creativity. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
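A control-group evaluation of this kind typically reduces to a between-groups significance test. The Python sketch below runs an independent-samples t-test on invented post-course scores for a traditional group and a blended (virtual laboratory) group; the paper’s actual questionnaire data are not reproduced.

    from scipy.stats import ttest_ind

    # Hypothetical post-course scores for the two groups.
    traditional = [72, 68, 75, 70, 66, 74, 69, 71]
    blended = [78, 82, 76, 85, 80, 79, 83, 77]

    t_stat, p_value = ttest_ind(blended, traditional)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A significant positive difference would support the claim that the blended model complements the physical laboratory.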
27 pages, 2054 KiB  
Article
Integrative Factors of E-Health Laboratory Adoption: A Case of Indonesia
by Dwiza Riana, Achmad Nizar Hidayanto, Sri Hadianti and Darmawan Napitupulu
Future Internet 2021, 13(2), 26; https://doi.org/10.3390/fi13020026 - 24 Jan 2021
Cited by 8 | Viewed by 3790
Abstract
Around the world, the adoption of digital health applications is growing very fast. The use of e-health laboratory systems is increasing, yet little research has examined the factors that drive users to adopt such systems in Indonesia. The objective of this study is to analyze the behavioral factors of e-health laboratory users. The study is based on a survey of Indonesian users, followed by a thorough analysis of the data. Building on the Technology Acceptance Model, the research framework combines task-driven, technology-driven, human-driven, and adoption variables to form the model proposed in this study. This model was verified using the Structural Equation Modeling (SEM) method for factor analysis, path analysis, and regression. Responses from 163 individuals were collected to evaluate the model empirically, with the individual as the unit of analysis. The task-driven, technology-driven, and human-driven factors all prove essential in shaping usage intentions when adopting an e-health laboratory system. Specifically, task-technology fit, information quality, and accessibility show a direct effect on both perceived usefulness and perceived ease of use as perceived by the user, and have an indirect influence on the adoption of an e-health laboratory system through these two factors. The design of the online laboratory system affects perceived ease of use, and personal innovativeness affects the perceived usefulness that users experience when adopting the system, while task-technology fit and personal innovativeness do not affect perceived ease of use. Overall, technology characteristics and perceived usefulness, followed by design, are the main predictors of adopting an e-health laboratory system in Indonesia. Full article
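For readers unfamiliar with the path-analysis step of such a TAM/SEM study, the Python sketch below approximates it with two ordinary least-squares regressions on simulated data (a full SEM would use dedicated tooling such as semopy or lavaan). Variable names mirror the abstract; the coefficients and data are invented for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 163  # matches the study's sample size
    df = pd.DataFrame({
        "task_tech_fit": rng.normal(size=n),
        "info_quality": rng.normal(size=n),
        "accessibility": rng.normal(size=n),
    })
    # Simulated structural relations, for illustration only.
    df["perceived_usefulness"] = (0.4 * df.task_tech_fit
                                  + 0.3 * df.info_quality
                                  + 0.2 * df.accessibility
                                  + rng.normal(scale=0.5, size=n))
    df["intention"] = (0.6 * df.perceived_usefulness
                       + rng.normal(scale=0.5, size=n))

    # Two stages of the hypothesised path model: antecedents -> usefulness,
    # then usefulness -> usage intention.
    stage1 = smf.ols("perceived_usefulness ~ task_tech_fit + info_quality"
                     " + accessibility", df).fit()
    stage2 = smf.ols("intention ~ perceived_usefulness", df).fit()
    print(stage1.params, stage2.params, sep="\n")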