Information, Volume 13, Issue 12 (December 2022) – 39 articles

Cover Story (view full-size image): MV-HEVC uses a multi-layer coding approach, which requires all frames from other reference layers to be decoded before a new layer can be decoded. The multi-layer coding architecture therefore becomes a bottleneck when fast frame streaming across different views is required. This paper presents an HEVC-based frame-interleaved stereo/multiview video codec that uses a single-layer encoding approach to encode stereo and multiview video sequences. The frames of stereo or multiview videos are interleaved in such a way that encoding the resulting monoscopic video stream maximizes the exploitation of temporal, inter-view, and cross-view correlations, thus improving the overall coding efficiency. Results show the superior coding performance of the proposed codec over its anchor MV-HEVC codec. View this paper
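The interleaving idea can be illustrated with a toy sketch (ours, not the authors' code): frames from the views are woven into one monoscopic sequence so that a single-layer encoder sees temporally and inter-view correlated frames next to each other. The paper's actual interleaving pattern may differ.

```python
# Toy illustration of frame interleaving for single-layer multiview coding.
# The round-robin pattern below is an assumption for demonstration only.
def interleave(views):
    """views: list of per-view frame lists, e.g. [[L0, L1, ...], [R0, R1, ...]]"""
    return [frame for group in zip(*views) for frame in group]

left = ["L0", "L1", "L2"]
right = ["R0", "R1", "R2"]
print(interleave([left, right]))   # ['L0', 'R0', 'L1', 'R1', 'L2', 'R2']
```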
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 892 KiB  
Article
Construction of a Compact and High-Precision Classifier in the Inductive Learning Method for Prediction and Diagnostic Problems
by Roman Kuzmich, Alena Stupina, Andrey Yasinskiy, Mariia Pokushko, Roman Tsarev and Ivan Boubriak
Information 2022, 13(12), 589; https://doi.org/10.3390/info13120589 - 18 Dec 2022
Viewed by 1935
Abstract
The study is motivated by the need to make reasonable decisions in the classification of observations, for example, in problems of medical prediction and diagnostics. Today, as part of the digitalization of healthcare, decision-making by a doctor is carried out using intelligent information systems. The introduction of such systems contributes to the implementation of policies aimed at ensuring sustainable development in the health sector. The paper discusses the method of inductive learning, which can serve as the algorithmic basis of such systems. In order to build a compact and high-precision classifier for the studied method, it is necessary to obtain a set of informative patterns and to create a method for building a classifier with high generalizing ability from this set of patterns. Three optimization models for the building of informative patterns have been developed, which are based on different concepts. Additionally, two algorithmic procedures have been developed that are used to obtain a compact and high-precision classifier. Experimental studies were carried out on problems of medical prediction and diagnostics, aimed at finding the best optimization model for the building of informative patterns and at proving the effectiveness of the developed algorithmic procedures. Full article

16 pages, 4333 KiB  
Article
Hybrid No-Reference Quality Assessment for Surveillance Images
by Zhongchang Ye, Xin Ye and Zhonghua Zhao
Information 2022, 13(12), 588; https://doi.org/10.3390/info13120588 - 16 Dec 2022
Cited by 2 | Viewed by 1802
Abstract
Intelligent video surveillance (IVS) technology is widely used in various security systems. However, quality degradation in surveillance images (SIs) may affect its performance on vision-based tasks, making it difficult for the IVS system to extract valid information from SIs. In this paper, we propose a hybrid no-reference image quality assessment (NR IQA) model for SIs that can help to identify undesired distortions and provide useful guidelines for IVS technology. Specifically, we first extract two main types of quality-aware features: low-level visual features related to various distortions, and high-level semantic information extracted by a state-of-the-art (SOTA) vision transformer backbone. Then, we fuse these two kinds of features into a final quality-aware feature vector, which is mapped to a quality index through a feature regression module. Our experimental results on two surveillance content quality databases demonstrate that the proposed model achieves the best performance compared to SOTA NR IQA metrics. Full article
(This article belongs to the Special Issue Deep Learning for Human-Centric Computer Vision)

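As a rough illustration of the hybrid pipeline described in the abstract above, the sketch below fuses hand-crafted low-level features with deep semantic features and regresses the fused vector to a quality score. The shapes, the SVR regressor, and the placeholder feature arrays are our assumptions, not the authors' implementation.

```python
# Hedged sketch of a hybrid NR-IQA pipeline: concatenate low-level distortion
# features with deep semantic embeddings, then learn a quality regressor.
# The random arrays stand in for real feature extractors (placeholders).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
low_level = rng.normal(size=(200, 36))    # e.g., distortion statistics per image
semantic = rng.normal(size=(200, 768))    # e.g., vision-transformer embeddings
mos = rng.uniform(1, 5, size=200)         # mean opinion scores (quality labels)

fused = np.concatenate([low_level, semantic], axis=1)  # feature fusion
quality_model = SVR().fit(fused, mos)                  # feature regression module
print(quality_model.predict(fused[:3]))                # predicted quality indices
```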
15 pages, 588 KiB  
Article
LPCOCN: A Layered Paddy Crop Optimization-Based Capsule Network Approach for Anomaly Detection at IoT Edge
by Bhuvaneswari Amma Narayanavadivoo Gopinathan, Velliangiri Sarveshwaran, Vinayakumar Ravi and Rajasekhar Chaganti
Information 2022, 13(12), 587; https://doi.org/10.3390/info13120587 - 16 Dec 2022
Cited by 1 | Viewed by 1794
Abstract
Cyberattacks have increased as a consequence of the expansion of the Internet of Things (IoT). It is therefore necessary to detect anomalies so that smart devices can be protected from these attacks, which must be mitigated at the edge of the IoT network. Efficient detection depends on the selection of an optimal IoT traffic feature set and the learning algorithm that classifies the IoT traffic. Existing anomaly detection systems are flawed because their feature selection algorithms do not identify the most appropriate set of features. In this article, a layered paddy crop optimization (LPCO) algorithm is suggested to choose the optimal set of features. Furthermore, the use of smart devices generates tremendous traffic, which can be labelled as either normal or attack using a capsule network (CN) approach. Five network traffic benchmark datasets are utilized to evaluate the proposed approach: NSL KDD, UNSW NB, CICIDS, CSE-CIC-IDS, and UNSW Bot-IoT. Based on the experiments, the presented approach yields promising results in comparison with the existing base classifiers and feature selection approaches, and performs better than the current state-of-the-art approaches. Full article
(This article belongs to the Special Issue Enhanced Cyber-Physical Security in IoT)

17 pages, 558 KiB  
Article
Cyberbullying in COVID-19 Pandemic Decreases? Research of Internet Habits of Croatian Adolescents
by Lucija Vejmelka, Roberta Matkovic and Miroslav Rajter
Information 2022, 13(12), 586; https://doi.org/10.3390/info13120586 - 16 Dec 2022
Cited by 4 | Viewed by 3946
Abstract
Online contacts and other activities on the Internet came into focus given their increased use during the COVID-19 pandemic. The online environment is a setting for problematic Internet use, including cyberbullying, and research so far shows that involvement in cyberbullying depends on the amount of screen time. Increases in screen time during the pandemic could therefore raise the prevalence rates of children’s involvement in cyberbullying. The aim of this paper is to compare Internet habits, cyberbullying and the parental role in children’s online activities before and during the COVID-19 pandemic, when the use of the Internet increased due to online classes and the measures implemented to prevent the spread of the infection. The Institute of Public Health of Split-Dalmatia County conducted a quantitative online survey of Internet habits and problematic Internet use in two waves, in 2017 and 2020, with adolescents aged 12–18 (N = 536 in 2017; N = 284 in 2020). The research adhered to ethical standards for research with children. An online activity questionnaire for children, a questionnaire on parental behaviors and the European Cyberbullying Intervention Project Questionnaire (ECIPQ) were used. The results indicate that cyberbullying rates decreased during the pandemic. The cumulative effect of parental monitoring is medium, with approximately 5% of explained variance for experiencing and 6% for committing violence. A similar set of predictors is statistically significant in both regressions. Parental actions of monitoring applications, informing children and monitoring search history are identified as protective factors against committing or experiencing cyber violence. These findings are important for understanding the effect of the general digitization of society, which has led to an extensive increase in the use of online content and various digital tools, and the role of parents, especially as a protective potential against cyberbullying among children. Full article

17 pages, 4108 KiB  
Article
HIL Flight Simulator for VTOL-UAV Pilot Training Using X-Plane
by Daniel Aláez, Xabier Olaz, Manuel Prieto, Pablo Porcellinis and Jesús Villadangos
Information 2022, 13(12), 585; https://doi.org/10.3390/info13120585 - 16 Dec 2022
Cited by 6 | Viewed by 5074
Abstract
With the increasing popularity of vertical take-off and landing unmanned aerial vehicles (VTOL UAVs), a new problem arises: pilot training. Most conventional pilot training simulators are designed for full-scale aircraft, while most UAV simulators focus only on conceptual testing and design validation. The X-Plane flight simulator was extended to include new functionalities such as complex wind dynamics, ground effect, and accurate real-time weather. A commercial HIL flight controller was coupled with a VTOL convertiplane UAV model to provide realistic flight control. A real flight case scenario was tested in simulation to show the importance of including an accurate wind model. The result is a complete simulation environment that has been successfully deployed for pilot training of the Marvin aircraft manufactured by FuVeX. Full article
(This article belongs to the Special Issue Advanced Computer and Digital Technologies)

24 pages, 491 KiB  
Article
Disentangling Determinants of Ride-Hailing Services among Malaysian Drivers
by Maryum Zaigham, Christie Pei-Yee Chin and Jakaria Dasan
Information 2022, 13(12), 584; https://doi.org/10.3390/info13120584 - 16 Dec 2022
Cited by 3 | Viewed by 3150
Abstract
Ride-hailing has emerged as one of the progressive sharing economy platforms. As a digital platform, both riders and drivers are critical to achieving sustainable ride-hailing transactions. Previous studies have gained little insight into ride-hailing services from drivers’ perspectives. This study investigates the salient factors that determine the usage of ride-hailing services among drivers in Malaysia by extending the technology acceptance model (TAM), introducing governmental regulations, and integrating perceived risk and trust into the model. We collected data from a total of 495 ride-hailing drivers across Malaysia. Our results suggest that a driver’s intention to use ride-hailing services is determined by perceived ease of use, perceived usefulness, and governmental regulations, which lead to actual usage. However, unexpectedly, the results signify that perceived risk does not affect the intention to use ride-hailing unless there is trust among the drivers. Overall, this paper draws attention to the substantial contrast between its results and the majority of prior TAM literature, and it improves the explanatory power of TAM by introducing new variables into the model, particularly from the perspective of ride-hailing drivers. This study is expected to bring theoretical and practical contributions to improve the country’s ride-hailing industry. Full article
(This article belongs to the Section Information Processes)

16 pages, 1588 KiB  
Article
Wavelet-Based Classification of Enhanced Melanoma Skin Lesions through Deep Neural Architectures
by Premaladha Jayaraman, Nirmala Veeramani, Raghunathan Krishankumar, Kattur Soundarapandian Ravichandran, Fausto Cavallaro, Pratibha Rani and Abbas Mardani
Information 2022, 13(12), 583; https://doi.org/10.3390/info13120583 - 15 Dec 2022
Cited by 7 | Viewed by 2024
Abstract
In recent years, skin cancer diagnosis has been aided by the most sophisticated and advanced machine learning algorithms, primarily implemented in the spatial domain. In this research work, we concentrated on two crucial phases of a computer-aided diagnosis system: (i) image enhancement through enhanced median filtering algorithms based on the range method, fuzzy relational method, and similarity coefficient, and (ii) wavelet decomposition using DB4, Symlet, and RBIO, extracting seven unique entropy features and eight statistical features from the segmented image. The extracted features were then normalized and provided for classification based on supervised and deep-learning algorithms. The proposed system comprises enhanced filtering algorithms, Normalized Otsu’s Segmentation, and wavelet-based entropy. Statistical feature extraction led to a classification accuracy of 93.6%, 0.71% higher than the spatial domain-based classification. With better classification accuracy, the proposed system will assist clinicians and dermatology specialists in identifying skin cancer early in its stages. Full article

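The wavelet feature-extraction stage lends itself to a short sketch. Below is a minimal, assumption-laden reading of that step using PyWavelets: a DB4 decomposition of a (pre-segmented) grayscale lesion image, with simple entropy and statistical descriptors computed per sub-band. The paper's exact seven entropy and eight statistical features are not reproduced here.

```python
# Hypothetical sketch of wavelet-domain feature extraction: DB4 decomposition
# plus per-sub-band Shannon entropy and basic statistics. Feature choices are
# illustrative, not the authors' exact set.
import numpy as np
import pywt

def wavelet_features(image: np.ndarray, wavelet: str = "db4", level: int = 2):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    # coeffs[0] is the approximation; the rest are (cH, cV, cD) detail tuples.
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    features = []
    for band in subbands:
        band = np.abs(band).ravel()
        p = band / (band.sum() + 1e-12)            # normalise to a distribution
        shannon = -np.sum(p * np.log2(p + 1e-12))  # Shannon entropy
        features += [shannon, band.mean(), band.std(), band.max()]
    return np.array(features)

feats = wavelet_features(np.random.rand(64, 64))
print(feats.shape)   # 7 sub-bands x 4 descriptors = (28,) for level=2
```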
13 pages, 350 KiB  
Article
EREC: Enhanced Language Representations with Event Chains
by Huajie Wang and Yinglin Wang
Information 2022, 13(12), 582; https://doi.org/10.3390/info13120582 - 15 Dec 2022
Cited by 1 | Viewed by 1687
Abstract
The natural language model BERT uses a large-scale unsupervised corpus to accumulate rich linguistic knowledge during its pretraining stage; the model is then fine-tuned for specific downstream tasks, which greatly improves performance on various natural language tasks. For some specific tasks, the capability of the model can be enhanced by introducing external knowledge. Indeed, methods such as ERNIE have been proposed to integrate knowledge graphs into BERT models, significantly enhancing their capabilities in related tasks such as entity recognition. However, for two types of tasks, commonsense causal reasoning and predicting the ending of stories, few previous studies have combined model modification and process optimization to integrate external knowledge. Therefore, drawing on ERNIE, in this paper we propose enhanced language representation with event chains (EREC), which focuses on keywords in the text corpus and their implied relations. Event chains are integrated into EREC as external knowledge. Furthermore, various graph networks are used to generate embeddings and to associate keywords in the corpus. Finally, via multi-task training, external knowledge is integrated into the model generated in the pretraining stage so as to enhance the effect of the model on downstream tasks. The experiments follow a three-stage design, and the results show that, by integrating event chains, EREC gains a deeper understanding of the causal and event relationships contained in text and achieves significant improvements on the two specific tasks. Full article
(This article belongs to the Special Issue Intelligence Computing and Systems)

13 pages, 1394 KiB  
Article
Medical QA Oriented Multi-Task Learning Model for Question Intent Classification and Named Entity Recognition
by Turdi Tohti, Mamatjan Abdurxit and Askar Hamdulla
Information 2022, 13(12), 581; https://doi.org/10.3390/info13120581 - 14 Dec 2022
Cited by 2 | Viewed by 2662
Abstract
Intent classification and named entity recognition of medical questions are two key subtasks of the natural language understanding module in a question answering system. Most existing methods treat medical query intent classification and named entity recognition as two separate tasks, ignoring the close relationship between them. In order to optimize the effect of both tasks, a multi-task learning model based on ALBERT-BiLSTM is proposed for intent classification and named entity recognition of Chinese online medical questions. The model shares encoder parameters, which enables its underlying network to take into account both named entity recognition and intent classification features; it learns the information shared between the two tasks while maintaining each task's unique characteristics during the decoding phase. The ALBERT pretrained language model is used to obtain word vectors containing semantic information, and a bidirectional LSTM network is used for training. A comparative experiment with different models was conducted on a Chinese medical questions dataset. Experimental results show that the proposed multi-task learning method outperforms the benchmark method in terms of precision, recall and F1 score, and that the generalization ability of the model is improved compared with the single-task model. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

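The parameter-sharing idea reads naturally as a two-headed network. The PyTorch sketch below is our minimal rendering of that pattern, with a plain embedding layer standing in for ALBERT and all sizes chosen purely for illustration.

```python
# Minimal sketch of a shared-encoder multi-task model: one bidirectional LSTM
# feeds both a sentence-level intent head and a token-level entity-tagging
# head. The embedding layer is a stand-in for ALBERT (an assumption).
import torch
import torch.nn as nn

class MultiTaskNLU(nn.Module):
    def __init__(self, vocab=10000, emb=128, hidden=256, intents=10, tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)            # stand-in for ALBERT
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, intents)  # sentence level
        self.tag_head = nn.Linear(2 * hidden, tags)        # token level (NER)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))       # shared representation
        intent_logits = self.intent_head(h.mean(dim=1))  # pooled for intent
        tag_logits = self.tag_head(h)                    # per-token for NER
        return intent_logits, tag_logits

model = MultiTaskNLU()
intent_logits, tag_logits = model(torch.randint(0, 10000, (4, 20)))
# Joint training would sum the two task losses over the shared encoder, e.g.:
# loss = ce(intent_logits, intent_y) + ce(tag_logits.transpose(1, 2), tag_y)
print(intent_logits.shape, tag_logits.shape)  # (4, 10) and (4, 20, 9)
```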
10 pages, 2070 KiB  
Review
Initial Cybersecurity Framework in the New Capital City of Indonesia: Factors, Objectives, and Technology
by Dana Indra Sensuse, Prasetyo Adi Wibowo Putro, Rini Rachmawati and Wikan Danar Sunindyo
Information 2022, 13(12), 580; https://doi.org/10.3390/info13120580 - 14 Dec 2022
Cited by 6 | Viewed by 3287
Abstract
As a newly built city and the new capital of Indonesia, Ibu Kota Nusantara (IKN) is expected to become known worldwide as an economic driver, a symbol of national identity, and a sustainable city. As the nation’s capital, IKN will become the location for running central government activities and hosting representatives of foreign countries and international organizations or institutions. However, there is as yet no concept of cybersecurity in IKN associated with the city's existing functions and expectations. This study identifies an initial cybersecurity framework for the new capital city of Indonesia, IKN. A PRISMA systematic review was used to identify variables and design an initial framework, which was then validated by cybersecurity and smart city experts. The results show that the recommended cybersecurity framework addresses IKN’s roles as a livable city, a smart city, and a city with critical infrastructure. We applied five security objectives, supported by risk management, governance, security awareness, and the latest security technology, to these factors. Full article

13 pages, 2495 KiB  
Article
Cloud Gamification: Bibliometric Analysis and Research Advances
by Myriam González-Limón and Asunción Rodríguez-Ramos
Information 2022, 13(12), 579; https://doi.org/10.3390/info13120579 - 13 Dec 2022
Cited by 3 | Viewed by 2142
Abstract
Research on gamification in the cloud has been increasing in recent years. The main objective of this work was to analyse, from a bibliometric perspective, the advances and progress reported in the international scientific literature on cloud gamification. The scientific production in this field was identified using the Web of Science (WoS) database. The analysis was carried out with the support of the VOSviewer software, version 1.6.18, developed by van Eck and Waltman, for the graphical visualisation of bibliometric networks. The study period covered the time from the first publication on the subject in 2012 to 31 July 2022, with 108 documents detected. The most prolific author was Jakub Swacha from the University of Szczecin, Poland. Forty-seven countries published on Cloud Gamification, with Spain and Italy being the countries with the highest scientific production. The most productive organisations were Bucharest University of Economic Studies, Complutense University of Madrid, Liverpool John Moores University and the University of Szczecin. The journal with the highest output was Information. The groups in the producing countries, the authors, the organisations to which they belonged and the thematic areas of the studies were identified, as well as their evolution over time. Full article
(This article belongs to the Special Issue Cloud Gamification 2021 & 2022)

37 pages, 1158 KiB  
Article
Automatically Testing Containedness between Geometric Graph Classes defined by Inclusion, Exclusion, and Transfer Axioms under Simple Transformations
by Lucas Böltz and Hannes Frey
Information 2022, 13(12), 578; https://doi.org/10.3390/info13120578 - 12 Dec 2022
Viewed by 1229
Abstract
We study classes of geometric graphs, which all correspond to the following structural characteristic. For each instance of a vertex set drawn from a universe of possible vertices, each pair of vertices is either required to be connected, forbidden to be connected, or the existence or non-existence of an edge is undetermined. The conditions that require or forbid edges are universally quantified predicates defined over the vertex pair, and optionally over the existence or non-existence of another edge originating at the vertex pair. We further consider a set of simple graph transformations, where the existence of an edge between two vertices is logically determined by the existence or non-existence of directed edges between both vertices in the original graph. We derive and prove the correctness of a logical expression that is a necessary and sufficient condition for containedness relations between graph classes described this way. We apply the expression to classes of geometric graphs, which are used as theoretical wireless network graph models. The models are constructed from three base class types and intersection combinations of them, with some considered directly and some considered as symmetrized variants using two of the simple graph transformations. Our study then goes systematically over all possible graph classes resulting from those base classes and all possible simple graph transformations. We automatically derive containedness relations between those graph classes. Moreover, in those cases where containedness does not hold, we provide automatically derived counterexamples. Full article
(This article belongs to the Special Issue Advances in Discrete and Computational Geometry)

16 pages, 2860 KiB  
Article
SWAR: A Deep Multi-Model Ensemble Forecast Method with Spatial Grid and 2-D Time Structure Adaptability for Sea Level Pressure
by Jingyun Zhang, Lingyu Xu and Baogang Jin
Information 2022, 13(12), 577; https://doi.org/10.3390/info13120577 - 12 Dec 2022
Cited by 3 | Viewed by 1821
Abstract
The multi-model ensemble (MME) forecast for meteorological elements has repeatedly been shown to be more skillful than any single model. It improves forecast quality by integrating multiple sets of numerical forecast results with different spatial-temporal characteristics. Current numerical forecast results have a grid structure in space, formed by longitude and latitude lines, and a special two-dimensional time structure, namely the initial time and the lead time, in contrast to traditional one-dimensional time. These characteristics mean that many MME methods are limited in how far they can improve forecast quality. Focusing on this problem, we propose a deep MME forecast method suited to this special structure. At the spatial level, our model uses window self-attention and shifted window attention to aggregate information. At the temporal level, we propose a recurrent-like neural network with a rolling structure (Roll-RLNN) that is more suitable for the two-dimensional time structure widely used at institutions of numerical weather prediction (NWP) with running services. In this paper, we test the MME forecast for sea level pressure, as the forecast characteristics of this essential meteorological element vary clearly across institutions, and the results show that our model structure is effective and can deliver significant forecast improvements. Full article

28 pages, 441 KiB  
Article
A Comparative Study of Machine Learning and Deep Learning Techniques for Fake News Detection
by Jawaher Alghamdi, Yuqing Lin and Suhuai Luo
Information 2022, 13(12), 576; https://doi.org/10.3390/info13120576 - 12 Dec 2022
Cited by 23 | Viewed by 8316
Abstract
Efforts have been dedicated by researchers in the field of natural language processing (NLP) to detecting and combating fake news using an assortment of machine learning (ML) and deep learning (DL) techniques. In this paper, a review of the existing studies is conducted to understand and curtail the dissemination of fake news. Specifically, we conducted a benchmark study using a wide range of (1) classical ML algorithms such as logistic regression (LR), support vector machines (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), XGBoost (XGB) and an ensemble of such algorithms, (2) advanced ML algorithms such as convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM), bidirectional gated recurrent units (BiGRU), CNN-BiLSTM, CNN-BiGRU and hybrids of such techniques and (3) DL transformer-based models such as BERT-base and RoBERTa-base. The experiments are carried out using different pretrained word embedding methods across four well-known real-world fake news datasets (LIAR, PolitiFact, GossipCop and COVID-19) to examine the performance of different techniques across various datasets. Furthermore, a comparison is made between context-independent embedding methods (e.g., GloVe) and BERT-base's contextualised representations in detecting fake news. Compared with state-of-the-art results on these datasets, we achieve better results by relying solely on news text. We hope this study can provide useful insights for researchers working on fake news detection. Full article
(This article belongs to the Special Issue Advanced Natural Language Processing and Machine Translation)

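For concreteness, one of the classical baselines named in the abstract (logistic regression over text features) can be set up in a few lines. The toy texts and labels below are placeholders, not data from LIAR, PolitiFact, GossipCop or COVID-19.

```python
# Sketch of a classical fake-news baseline: TF-IDF features + logistic
# regression. Real experiments would use a proper train/test split on one of
# the benchmark datasets; the two toy documents here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["miracle cure verified by no one", "senate passes budget bill"]
labels = [1, 0]                     # 1 = fake, 0 = real (toy labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["new miracle cure announced"]))
```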
13 pages, 960 KiB  
Article
DEGAIN: Generative-Adversarial-Network-Based Missing Data Imputation
by Reza Shahbazian and Irina Trubitsyna
Information 2022, 13(12), 575; https://doi.org/10.3390/info13120575 - 12 Dec 2022
Cited by 8 | Viewed by 4509
Abstract
Insights and analysis are only as good as the available data. Data cleaning is one of the most important steps in creating quality data for decision making. Machine learning (ML) helps to deal with data quickly and to create error-free or limited-error datasets. One of the quality standards for cleaning data is the handling of missing data, also known as data imputation. This research focuses on the use of machine learning methods to deal with missing data. In particular, we propose a generative adversarial network (GAN) based model called DEGAIN to estimate the missing values in a dataset. We evaluate the performance of the presented method and compare the results with some existing methods on the publicly available Letter Recognition and SPAM datasets. The Letter dataset consists of 20,000 samples and 16 input features; the SPAM dataset consists of 4601 samples and 57 input features. The results show that the proposed DEGAIN outperforms the existing methods in terms of root mean square error and Fréchet inception distance metrics. Full article
(This article belongs to the Special Issue Best IDEAS: International Database Engineered Applications Symposium)

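The mask-based imputation setting in which GAIN-style models operate can be made concrete with a small sketch. Everything below, including the column-mean "generator" stand-in and the RMSE evaluation, is illustrative; DEGAIN itself trains a generator adversarially.

```python
# Sketch of the missing-data imputation setup (our reading, not the authors'
# code): a mask marks observed entries, imputed values fill the rest, and
# quality is scored by RMSE on the artificially masked entries.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))       # complete toy data (e.g., 16 features)
mask = rng.random(X.shape) > 0.2     # True where a value is observed

X_obs = np.where(mask, X, np.nan)    # dataset with missing values

# A real GAN generator would be trained adversarially; column means serve
# here only to make the evaluation pipeline concrete.
col_means = np.nanmean(X_obs, axis=0)
X_imputed = np.where(mask, X_obs, col_means)

rmse = np.sqrt(np.mean((X[~mask] - X_imputed[~mask]) ** 2))
print(f"RMSE on missing entries: {rmse:.3f}")
```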
19 pages, 2926 KiB  
Article
User-Generated Content in Social Media: A Twenty-Year Bibliometric Analysis in Hospitality
by Fotis Kitsios, Eleftheria Mitsopoulou, Eleni Moustaka and Maria Kamariotou
Information 2022, 13(12), 574; https://doi.org/10.3390/info13120574 - 12 Dec 2022
Cited by 6 | Viewed by 5951
Abstract
This article presents a bibliometric analysis of social media platforms and User-Generated Content (UGC) in hospitality. One hundred fifty-one peer-reviewed articles were analyzed using Webster and Watson’s (2002) methodology, a concept-driven methodology that helps analyze different concepts and contexts of a research field. Articles were classified into five areas, and a bibliometric analysis was presented discussing publication year, journals and publishers, authors, number of citations, research method implemented, social networking and users’ perceived value, user-generated content and travel planning, e-Word-of-Mouth (e-WOM) and brand image building, and hotel performance. The findings show that the number of studies in this field has increased over the last decade. However, exploration of the subject needs to be promoted (particularly experimental work), because research on hospitality social media is still in its early phases, with publications concentrating on specific subjects, regions, and sources of publication. Full article

25 pages, 637 KiB  
Article
Development of a Method for the Engineering of Digital Innovation Using Design Science Research
by Murad Huseynli, Udo Bub and Michael Chima Ogbuachi
Information 2022, 13(12), 573; https://doi.org/10.3390/info13120573 - 12 Dec 2022
Cited by 2 | Viewed by 2778
Abstract
This paper outlines the path towards a method focusing on a process model for the integrated engineering of Digital Innovation (DI) and Design Science Research (DSR). The use of the DSR methodology allows for achieving both scientific rigor and practical relevance, while integrating the concept of innovation strategies into the proposed method enables a conscious approach to classify different Information Systems (IS) artifacts, and provides a way to create, transfer, and generalize their design. The resulting approach allows for the systematic creation of innovative IS artifacts. On top of that, cumulative DSR knowledge can be systematically built up, facilitating description, comparability, and reuse of the artifacts. We evaluate this newly completed approach in a case study for an automated conversational call center interface leveraging the identification of the caller’s age and gender for dialog optimization, based on machine learning models trained on the SpeechDat spoken-language resource database. Moreover, we validate innovation strategies by analyzing additional innovative projects. Full article

16 pages, 387 KiB  
Article
Fast Training Set Size Reduction Using Simple Space Partitioning Algorithms
by Stefanos Ougiaroglou, Theodoros Mastromanolis, Georgios Evangelidis and Dionisis Margaris
Information 2022, 13(12), 572; https://doi.org/10.3390/info13120572 - 10 Dec 2022
Cited by 2 | Viewed by 1327
Abstract
The Reduction by Space Partitioning (RSP3) algorithm is a well-known data reduction technique. It summarizes the training data and generates representative prototypes. Its goal is to reduce the computational cost of an instance-based classifier without penalty in accuracy. The algorithm keeps on dividing the initial training data into subsets until all of them become homogeneous, i.e., they contain instances of the same class. To divide a non-homogeneous subset, the algorithm computes its two furthest instances and assigns all instances to their closest furthest instance. This is a very expensive computational task, since all distances among the instances of a non-homogeneous subset must be calculated. Moreover, noise in the training data leads to a large number of small homogeneous subsets, many of which have only one instance. These instances are probably noise, but the algorithm mistakenly generates prototypes for these subsets. This paper proposes simple and fast variations of RSP3 that avoid the computationally costly partitioning tasks and remove the noisy training instances. The experimental study conducted on sixteen datasets and the corresponding statistical tests show that the proposed variations of the algorithm are much faster and achieve higher reduction rates than the conventional RSP3 without negatively affecting the accuracy. Full article
(This article belongs to the Special Issue Computing and Embedded Artificial Intelligence)

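The recursive partitioning step the abstract describes is compact enough to sketch. The following is a plain reading of classic RSP3 (not the paper's faster variants): split non-homogeneous subsets around their two furthest instances and emit one mean prototype per homogeneous subset.

```python
# Sketch of the core RSP3 idea. The all-pairs distance computation below is
# exactly the expensive step the paper's variants avoid; this toy version
# also keeps the single-instance "noise" prototypes the paper removes.
import numpy as np

def rsp3(X: np.ndarray, y: np.ndarray, prototypes: list):
    if len(set(y)) <= 1:                       # homogeneous: one prototype
        if len(y):
            prototypes.append((X.mean(axis=0), y[0]))
        return
    # Expensive part: all pairwise distances to find the two furthest points.
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    to_i = np.linalg.norm(X - X[i], axis=1) <= np.linalg.norm(X - X[j], axis=1)
    rsp3(X[to_i], y[to_i], prototypes)         # recurse on both partitions
    rsp3(X[~to_i], y[~to_i], prototypes)

protos = []
rsp3(np.random.rand(50, 2), np.random.randint(0, 2, 50), protos)
print(len(protos), "prototypes generated")
```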
19 pages, 3074 KiB  
Article
An Efficient Malware Classification Method Based on the AIFS-IDL and Multi-Feature Fusion
by Xuan Wu and Yafei Song
Information 2022, 13(12), 571; https://doi.org/10.3390/info13120571 - 9 Dec 2022
Viewed by 1926
Abstract
In recent years, the presence of malware has been growing exponentially, resulting in enormous demand for efficient malware classification methods. However, existing machine learning-based classifiers have high false positive rates and cannot effectively classify malware variants, packers, and obfuscation. To address this shortcoming, this paper proposes an efficient deep learning-based method named AIFS-IDL (Atanassov Intuitionistic Fuzzy Sets-Integrated Deep Learning), which uses static features to classify malware. The proposed method first extracts six types of features from the disassembly and byte files and then fuses them to solve the single-feature problem of traditional classification methods. Next, an Atanassov intuitionistic fuzzy set-based method is used to integrate the results of three deep learning models, namely, GRU (Gated Recurrent Unit), TCN (Temporal Convolutional Network), and CNN (Convolutional Neural Network), which improves the classification accuracy and generalizability of the classification model. The proposed method was verified by experiments on the six types of features of malicious code and compared with traditional classification algorithms and ensemble learning algorithms. A variety of comparative experiments show that the classification accuracy of the multi-feature, multi-model ensemble can reach 99.92%. The results show that, compared with other static classification methods, this method has better malware identification and classification ability. Full article
(This article belongs to the Special Issue Malware Behavior Analysis Applying Machine Learning)

20 pages, 5557 KiB  
Article
How the V4 Nations Handle the Idea of Smart Cities
by Roman Blazek, Pavol Durana and Jaroslav Jaros
Information 2022, 13(12), 570; https://doi.org/10.3390/info13120570 - 8 Dec 2022
Cited by 1 | Viewed by 2555
Abstract
Smart city is a term that encompasses the digital, information, and communication technologies that contribute to increasing the level and quality of life in individual cities. It focuses primarily on the efficient use of existing resources but also on the discovery of new ones, with the goal of lowering energy consumption while also reducing environmental impact and optimizing traffic in specific areas of the city. This concept is increasingly coming to the fore. Thus, the aim of this article was to determine the level of involvement of Slovak, Czech, Polish, and Hungarian authors in solutions for Smart cities using Web of Science data. A VOSviewer-based analysis of the countries that form the Visegrad Four (V4) region reveals how the region ranks compared to other countries actively involved in Smart cities. To map a specific region of countries, it is necessary to first understand the state of the problem worldwide. Then, the status of the authors, the number of articles and citations, and the universities may be actively discussed and graphically depicted for each Visegrad nation. Based on the discovered results, academics can identify the contributors and institutions that have addressed the issue individually or in co-authorships over a long period. The findings provide data for future testing of selected dependencies and a platform for creating a scientific model to rank countries. In addition, the authorities may focus on the identified clusters of key areas that are an essential part of Smart cities and provide a higher quality of life for the people in their city. Full article

16 pages, 3739 KiB  
Article
Serious Games for Vision Training Exercises with Eye-Tracking Technologies: Lessons from Developing a Prototype
by Qasim Ali, Ilona Heldal, Carsten Gunnar Helgesen and Are Dæhlen
Information 2022, 13(12), 569; https://doi.org/10.3390/info13120569 - 7 Dec 2022
Cited by 3 | Viewed by 3540
Abstract
Eye-tracking technologies (ETs) and serious games (SGs) have emerged as new methods promising better support for vision screening and training. Previous research has shown the practicality of eye-tracking technology for vision screening in health care, but there remains a need for studies showing that the effective utilization of SGs and ETs is beneficial for vision training. This study investigates the feasibility of SGs and ETs for vision training by designing, developing, and evaluating a prototype influenced by commercially available games and based on a battery of exercises previously defined by vision experts. Data were collected from five participants, including a vision teacher, through a user experience questionnaire (UEQ) and interviews, following a mixed-methods approach. Analysis of the UEQ results and interviews highlighted the current challenges and positive attitudes towards using SGs and ETs for vision training. Together with UEQ indicators such as attractiveness and perspicuity, participants' experiences of the vision training battery provided insights into using ETs and further developing SGs to better address different eye movements in vision training. Full article
(This article belongs to the Special Issue Cloud Gamification 2021 & 2022)

38 pages, 2392 KiB  
Article
Incremental Entity Blocking over Heterogeneous Streaming Data
by Tiago Brasileiro Araújo, Kostas Stefanidis, Carlos Eduardo Santos Pires, Jyrki Nummenmaa and Thiago Pereira da Nóbrega
Information 2022, 13(12), 568; https://doi.org/10.3390/info13120568 - 5 Dec 2022
Cited by 1 | Viewed by 2054
Abstract
Web systems have become a valuable source of semi-structured and streaming data. In this sense, Entity Resolution (ER) has become a key solution for integrating multiple data sources or identifying similarities between data items, namely entities. To avoid the quadratic costs of the ER task and improve efficiency, blocking techniques are usually applied. Beyond the traditional challenges faced by ER and, consequently, by the blocking techniques, there are also challenges related to streaming data, incremental processing, and noisy data. To address them, we propose a schema-agnostic blocking technique capable of handling noisy and streaming data incrementally through a distributed computational infrastructure. To the best of our knowledge, there is a lack of blocking techniques that address these challenges simultaneously. This work proposes two strategies (attribute selection and top-n neighborhood entities) to minimize resource consumption and improve blocking efficiency. Moreover, this work presents a noise-tolerant algorithm, which minimizes the impact of noisy data (e.g., typos and misspellings) on blocking effectiveness. In our experimental evaluation, we use real-world pairs of data sources, including a case study that involves data from Twitter and Google News. The proposed technique achieves better results regarding effectiveness and efficiency compared to the state-of-the-art technique (metablocking). More precisely, the application of the two strategies over the proposed technique alone improves efficiency by 56%, on average. Full article
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)

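The schema-agnostic blocking family this technique belongs to can be illustrated with basic token blocking: every token of every attribute value keys a block, and entities sharing a block become comparison candidates. The toy entities below are ours; the paper's incremental, noise-tolerant machinery is out of scope here.

```python
# Sketch of schema-agnostic token blocking, the general idea the proposed
# technique builds on. Attribute names are deliberately ignored, so entities
# with different schemas can still co-occur in a block.
from collections import defaultdict

entities = {
    "e1": {"name": "iPhone 13", "brand": "Apple"},
    "e2": {"title": "Apple iPhone13", "shop": "X"},
    "e3": {"name": "Galaxy S21"},
}

blocks = defaultdict(set)
for eid, attrs in entities.items():
    for value in attrs.values():
        for token in str(value).lower().split():
            blocks[token].add(eid)        # schema-agnostic: attribute ignored

candidates = {frozenset(p) for ids in blocks.values() if len(ids) > 1
              for p in [(a, b) for a in ids for b in ids if a < b]}
print(candidates)                         # e.g., {frozenset({'e1', 'e2'})}
```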
16 pages, 2124 KiB  
Article
The Impact of Story Structure, Meaningfulness, and Concentration in Serious Games
by Sofia Pescarin and Delfina S. Martinez Pandiani
Information 2022, 13(12), 567; https://doi.org/10.3390/info13120567 - 3 Dec 2022
Cited by 2 | Viewed by 2302
Abstract
This contribution analyzes the impact of factors related to story structure, meaningfulness, and concentration in the design of Serious Games. To explore them, the authors carried out an experimental evaluation aiming to identify relevant aspects affecting the cognitive-emotional impact of immersive Virtual Reality (VR) games, specifically Educational Environmental Narrative (EEN) Games. The experiment was designed around three main research questions: whether passive or active interaction is preferable for factual and spatial knowledge acquisition; whether meaningfulness is a relevant experience in a serious game (SG) context; and whether concentration impacts knowledge acquisition and engagement in VR educational games. The findings highlight that passive interaction should only be encouraged for factual knowledge acquisition, that meaningfulness is a relevant experience and should be included in serious game design, and, finally, that concentration is a factor that impacts the experience in immersive games. The authors discuss potential design paths to improve both factual and spatial knowledge acquisition, such as abstract concept-oriented design, concluding that SGs should contain game mechanics that explicitly support players’ moments of reflection, and story structures explicitly aligned with educational facts. Full article
(This article belongs to the Special Issue eXtended Reality for Social Inclusion and Educational Purpose)

21 pages, 1751 KiB  
Article
Freehand Gestural Selection with Haptic Feedback in Wearable Optical See-Through Augmented Reality
by Gang Wang, Gang Ren, Xinye Hong, Xun Peng, Wenbin Li and Eamonn O’Neill
Information 2022, 13(12), 566; https://doi.org/10.3390/info13120566 - 2 Dec 2022
Cited by 4 | Viewed by 2777
Abstract
Augmented reality (AR) technologies can blend digital and physical space and serve a variety of applications intuitively and effectively. Specifically, wearable AR enabled by optical see-through (OST) AR head-mounted displays (HMDs) can provide users with a direct view of the physical environment containing digital objects. In addition, users can directly interact with three-dimensional (3D) digital artefacts using freehand gestures captured by OST HMD sensors. However, as an emerging user interaction paradigm, freehand interaction with OST AR still requires further investigation to improve user performance and satisfaction. Thus, we conducted two studies to investigate various aspects of freehand selection design in OST AR, including target placement, size, distance, position, and haptic feedback on the hand and body. The user evaluation results indicated that 40 cm might be an appropriate target distance for freehand gestural selection. A large target size might lower the selection time and error rate, and a small target size could minimise selection effort. Targets positioned in the centre are the easiest to select, while those in the corners require extra time and effort. Furthermore, we discovered that haptic feedback on the body could lead to high user preference and satisfaction. Based on the research findings, we conclude with design recommendations for effective and comfortable freehand gestural interaction in OST AR. Full article
(This article belongs to the Special Issue Extended Reality: A New Way of Interacting with the World)

14 pages, 1256 KiB  
Article
CA-STD: Scene Text Detection in Arbitrary Shape Based on Conditional Attention
by Xing Wu, Yangyang Qi, Jun Song, Junfeng Yao, Yanzhong Wang, Yang Liu, Yuexing Han and Quan Qian
Information 2022, 13(12), 565; https://doi.org/10.3390/info13120565 - 1 Dec 2022
Cited by 6 | Viewed by 1747
Abstract
Scene Text Detection (STD) is critical for obtaining textual information from natural scenes, serving applications such as automated driving and security surveillance. However, existing text detection methods fall short when dealing with the variation in text curvatures, orientations, and aspect ratios in complex backgrounds. To meet this challenge, we propose a method called CA-STD to detect arbitrarily shaped text against a complicated background. First, a Feature Refinement Module (FRM) is proposed to enhance feature representation. Additionally, a conditional attention mechanism is proposed not only to decouple the spatial and textual information from scene text images, but also to model the relationship among different feature vectors. Finally, Contour Information Aggregation (CIA) is presented to enrich the feature representation of text contours by considering circular topology and semantic information simultaneously, obtaining detection curves with arbitrary shapes. The proposed CA-STD method is evaluated on different datasets with extensive experiments: it outperforms state-of-the-art methods, achieving a precision of 82.9 on the TotalText dataset and an F1 score of 83.8 on the CTW-1500 dataset. The quantitative and qualitative analysis proves that CA-STD can detect variably shaped scene text effectively. Full article
(This article belongs to the Special Issue Intelligence Computing and Systems)

21 pages, 2153 KiB  
Article
Deep Reinforcement Learning-Based iTrain Serious Game for Caregivers Dealing with Post-Stroke Patients
by Rytis Maskeliunas, Robertas Damasevicius, Andrius Paulauskas, Maria Gabriella Ceravolo, Marina Charalambous, Maria Kambanaros, Eliada Pampoulou, Francesco Barbabella, Arianna Poli and Carlos V. Carvalho
Information 2022, 13(12), 564; https://doi.org/10.3390/info13120564 - 30 Nov 2022
Cited by 6 | Viewed by 3564
Abstract
This paper describes a serious game based on a knowledge transfer model using deep reinforcement learning, with the aim of improving caregivers’ knowledge and abilities in post-stroke care. The iTrain game was designed to improve caregiver knowledge and abilities by providing non-traditional training to formal and informal caregivers who deal with stroke survivors. The methodologies utilized professional medical experience and real-life evidence data gathered during the iTrain project to create the scenarios for the game’s deep reinforcement caregiver behavior improvement model, as well as the design of game mechanics, game images and characters, and gameplay implementation. Furthermore, the results of the game’s direct impact on caregivers (n = 25) and stroke survivors (n = 21) in Lithuania, measured using the Geriatric Depression Scale (GDS) and a user experience questionnaire (UEQ), are presented. Both surveys had favorable outcomes, showing the effectiveness of the approach: the GDS (score 10) indicated that a comparatively low 28% of individuals were depressed, and the UEQ received a very favorable grade of +0.8. Full article
(This article belongs to the Special Issue Cloud Gamification 2021 & 2022)

25 pages, 4921 KiB  
Article
A Context-Aware Android Malware Detection Approach Using Machine Learning
by Mohammed N. AlJarrah, Qussai M. Yaseen and Ahmad M. Mustafa
Information 2022, 13(12), 563; https://doi.org/10.3390/info13120563 - 30 Nov 2022
Cited by 12 | Viewed by 4190 | Correction
Abstract
The Android platform has become the most popular smartphone operating system, which makes it a target for malicious mobile apps. This paper proposes a machine learning-based approach for Android malware detection based on application features. Unlike much prior research that focused exclusively on API call and permission features to improve detection efficiency and accuracy, this paper incorporates applications’ contextual features alongside API call and permission features. Moreover, the proposed approach extracted a new dataset of static API call and permission features from a large dataset of malicious and benign Android APK samples. Furthermore, the proposed approach used the Information Gain algorithm to reduce the API and permission feature space from 527 features to the 50 most relevant ones. Several combinations of API call, permission, and contextual features were fed into different machine-learning algorithms to show the significance of using the selected contextual features in detecting Android malware. The experiments show that the proposed model achieved a very high accuracy of about 99.4% when using contextual features, in comparison to 97.2% without them. Moreover, the paper shows that the proposed approach outperformed the state-of-the-art models considered in this work. Full article
(This article belongs to the Section Information Systems)

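The feature-reduction step (527 features scored by Information Gain, top 50 kept) maps directly onto a standard mutual-information selector. The sketch below uses random binary data as a placeholder for the extracted API-call/permission matrix; it is not the authors' pipeline.

```python
# Hedged sketch of Information Gain-based feature reduction: score binary
# permission/API-call features by mutual information and keep the top 50.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 527))   # 527 binary static features (toy)
y = rng.integers(0, 2, size=1000)          # 1 = malware, 0 = benign (toy)

selector = SelectKBest(score_func=mutual_info_classif, k=50)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)                     # (1000, 50)

top_features = np.argsort(selector.scores_)[::-1][:50]  # indices of top 50
```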
14 pages, 494 KiB  
Article
Systematic Construction of Knowledge Graphs for Research-Performing Organizations
by David Chaves-Fraga, Oscar Corcho, Francisco Yedro, Roberto Moreno, Juan Olías and Alejandro De La Azuela
Information 2022, 13(12), 562; https://doi.org/10.3390/info13120562 - 30 Nov 2022
Cited by 7 | Viewed by 2874
Abstract
Research-Performing Organizations (e.g., research centers, universities) usually accumulate a wealth of data related to their researchers, the scientific results and research outputs they generate, and the publicly and privately funded projects that support their activities. Even though the types of data handled may look similar across organizations, it is common to see that each institution has developed its own data model to support many of its administrative activities (project reporting, curriculum management, personnel management, etc.). This creates obstacles to the integration and linking of knowledge across organizations, as well as difficulties when researchers move from one institution to another. In this paper, we take advantage of the ontology network created by the Spanish HERCULES initiative to facilitate the construction of knowledge graphs from existing information systems, such as the one managed by the company Universitas XXI, which provides support to more than 100 Spanish-speaking research-performing organizations worldwide. Our effort is not just focused on following the modeling choices from that ontology, but also on demonstrating how the use of standard declarative mapping rules (i.e., R2RML) guarantees a systematic and sustainable workflow for constructing and maintaining a KG. We also present several real-world use cases in which the proposed workflow is adopted, together with a set of lessons learned and general recommendations that may also apply to other domains. The next steps include research into automating the creation of the mapping rules, enriching the KG with external sources, and exploiting it through distributed environments. Full article
(This article belongs to the Special Issue Knowledge Graph Technology and Its Applications)

16 pages, 328 KiB  
Article
Generalized Zero-Shot Learning for Image Classification—Comparing Performance of Popular Approaches
by Elie Saad, Marcin Paprzycki, Maria Ganzha, Amelia Bădică, Costin Bădică, Stefka Fidanova, Ivan Lirkov and Mirjana Ivanović
Information 2022, 13(12), 561; https://doi.org/10.3390/info13120561 - 30 Nov 2022
Cited by 1 | Viewed by 2763
Abstract
There are many areas where conventional supervised machine learning does not work well, for instance, in cases with a large, systematically increasing, or countably infinite number of classes. Zero-shot learning has been proposed to address this. In generalized settings, the zero-shot learning problem represents real-world applications where test instances from both seen and unseen classes are present during inference. Separately, there has recently been increasing interest in meta-classifiers, which combine the results of individual classifiers to improve the overall classification quality. In this context, the purpose of the present paper is two-fold: First, the performance of five state-of-the-art generalized zero-shot learning methods is compared on five popular benchmark datasets. Second, six standard meta-classification approaches are tested by experiment. In the experiments undertaken, all meta-classifiers were applied to the same datasets; their performance was compared to each other and to the original classifiers. Full article
(This article belongs to the Section Artificial Intelligence)
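As a concrete instance of the kind of meta-classifier compared in the paper, the sketch below combines three generic base classifiers by soft voting. The data and base models are stand-ins, not the paper's generalized zero-shot learners.

```python
# Sketch of one standard meta-classification approach: soft-voting over the
# predicted probabilities of several base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
meta = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),   # probability for soft vote
], voting="soft")
print(meta.fit(X, y).score(X, y))
```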
26 pages, 16128 KiB  
Article
Using Crypto-Asset Pricing Methods to Build Technical Oscillators for Short-Term Bitcoin Trading
by Zixiu Yang and Dean Fantazzini
Information 2022, 13(12), 560; https://doi.org/10.3390/info13120560 - 29 Nov 2022
Cited by 1 | Viewed by 3475
Abstract
This paper examines the trading performance of several technical oscillators created using crypto-asset pricing methods for short-term bitcoin trading. Seven pricing models proposed in the professional and academic literature were transformed into oscillators, and two thresholds were introduced to create buy and sell signals. The empirical back-testing analysis showed that some of these methods proved to be profitable, with good Sharpe ratios and limited maximum drawdowns. However, the trading performance of almost all methods worsened significantly after 2017, indirectly confirming a growing body of financial literature showing that the introduction of bitcoin futures in 2017 improved the efficiency of bitcoin markets. Full article

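The two-threshold signal rule can be sketched generically: a pricing model's value is turned into an oscillator around zero, and band crossings generate signals. The rolling-mean "model value" and the ±0.2 thresholds below are our assumptions, not any of the seven models from the paper.

```python
# Illustrative two-threshold oscillator: price relative to a model value,
# with lower/upper bands generating buy/sell signals.
import numpy as np
import pandas as pd

def oscillator_signals(price: pd.Series, lower=-0.2, upper=0.2) -> pd.Series:
    model_value = price.rolling(90).mean()        # stand-in pricing model
    osc = np.log(price / model_value)             # oscillator around zero
    signal = pd.Series(0, index=price.index)
    signal[osc < lower] = 1                       # undervalued -> buy
    signal[osc > upper] = -1                      # overvalued -> sell
    return signal

prices = pd.Series(np.exp(np.cumsum(np.random.normal(0, 0.04, 500))))
print(oscillator_signals(prices).value_counts())
```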