Information, Volume 13, Issue 2 (February 2022) – 59 articles

Cover Story (view full-size image): Adversarial attacks against authentication systems can substantially reduce their accuracy, despite the complexity of generating an adversarial example. As part of this study, a white-box adversarial attack was carried out on an authentication system whose basis is a neural network perceptron trained on a dataset of frequency signatures of signs. For an attack on an atypical dataset, the following results were obtained: at an attack intensity of 25%, the availability of the authentication system decreases to 50% for a particular user; with a further increase in attack intensity, the accuracy decreases to 5%. View this paper.
16 pages, 1858 KiB  
Article
Automatic Hemiplegia Type Detection (Right or Left) Using the Levenberg-Marquardt Backpropagation Method
by Vasileios Christou, Alexandros Arjmand, Dimitrios Dimopoulos, Dimitrios Varvarousis, Ioannis Tsoulos, Alexandros T. Tzallas, Christos Gogos, Markos G. Tsipouras, Evripidis Glavas, Avraam Ploumis and Nikolaos Giannakeas
Information 2022, 13(2), 101; https://doi.org/10.3390/info13020101 - 21 Feb 2022
Cited by 7 | Viewed by 4682
Abstract
Hemiplegia affects a significant portion of the human population. It is a condition that causes motor impairment and severely reduces the patient’s quality of life. This paper presents an automatic system for identifying the hemiplegia type (whether the right or left part of the body is affected). The proposed system utilizes data taken from patients and healthy subjects using the accelerometer sensor of the RehaGait mobile gait analysis system. The collected data undergo a pre-processing procedure followed by a feature extraction stage. The extracted features are then sent to a neural network trained by the Levenberg-Marquardt backpropagation (LM-BP) algorithm. The experimental part of this research involved creating a custom dataset containing entries taken from ten healthy and twenty non-healthy subjects. The data were taken from seven different sensors placed in specific areas of the subjects’ bodies. These sensors can capture a three-dimensional (3D) signal using accelerometer, magnetometer, and gyroscope device types. The proposed system used the signals taken from the accelerometers, which were split into 2 s windows. The proposed system achieved a classification accuracy of 95.12% and was compared with fourteen commonly used machine learning approaches. Full article
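The 2 s windowing of accelerometer signals described above can be sketched as follows. This is a minimal illustration: the sampling rate and the particular time-domain features are assumptions for the example, not the paper's exact choices.

```python
import numpy as np

def sliding_windows(signal, fs, win_sec=2.0):
    """Split a 1-D accelerometer signal into non-overlapping windows of win_sec seconds."""
    win = int(fs * win_sec)
    n = len(signal) // win
    return signal[:n * win].reshape(n, win)

def window_features(w):
    """Illustrative time-domain features per window: mean, std, min, max, RMS."""
    return np.array([w.mean(), w.std(), w.min(), w.max(), np.sqrt((w ** 2).mean())])

# Example: 10 s of synthetic 100 Hz accelerometer data -> five 2 s windows
fs = 100
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.5 * t)
wins = sliding_windows(sig, fs)
feats = np.vstack([window_features(w) for w in wins])
print(wins.shape, feats.shape)  # (5, 200) (5, 5)
```

In the paper's pipeline, feature matrices of this kind would then be fed to the LM-BP-trained network.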

21 pages, 2121 KiB  
Article
Data Quality Barriers for Transparency in Public Procurement
by Ahmet Soylu, Óscar Corcho, Brian Elvesæter, Carlos Badenes-Olmedo, Francisco Yedro-Martínez, Matej Kovacic, Matej Posinkovic, Mitja Medvešček, Ian Makgill, Chris Taggart, Elena Simperl, Till C. Lech and Dumitru Roman
Information 2022, 13(2), 99; https://doi.org/10.3390/info13020099 - 20 Feb 2022
Cited by 15 | Viewed by 6810
Abstract
Governments need to be accountable and transparent for their public spending decisions in order to prevent losses through fraud and corruption as well as to build healthy and sustainable economies. Open data act as a major instrument in this respect by enabling public administrations, service providers, data journalists, transparency activists, and regular citizens to identify fraud or uncompetitive markets by connecting related, heterogeneous, and originally unconnected data sources. To this end, in this article, we present our experience in the case of Slovenia, where we successfully applied a number of anomaly detection techniques over a set of open disparate data sets integrated into a Knowledge Graph, including procurement, company, and spending data, through a linked data-based platform called TheyBuyForYou. We then report a set of guidelines for publishing high-quality procurement data for better procurement analytics, since our experience has shown us that there are significant shortcomings in the quality of data being published. This article contributes to enhanced policy making by guiding public administrations at local, regional, and national levels on how to improve the way they publish and use procurement-related data; developing technologies and solutions that buyers in the public and private sectors can use and adapt to become more transparent, make markets more competitive, and reduce waste and fraud; and providing a Knowledge Graph, which is a data resource that is designed to facilitate integration across multiple data silos by showing how it adds context and domain knowledge to machine-learning-based procurement analytics. Full article
(This article belongs to the Topic Digital Transformation and E-Government)

8 pages, 2605 KiB  
Article
Physical Layer Security for Military IoT Links Using MIMO-Beamforming at 60 GHz
by Ahmed Iyanda Sulyman and Calvin Henggeler
Information 2022, 13(2), 100; https://doi.org/10.3390/info13020100 - 20 Feb 2022
Cited by 7 | Viewed by 2595
Abstract
This paper discusses the concept and practicality of internet-of-things (IoT) link security enhancements using multiple-input multiple-output (MIMO) and beamforming solutions at the physical layer of the wireless system. Large-scale MIMO and beamforming techniques have been studied extensively in the context of 5G cellular systems. The concept of utilizing these transmission techniques for security enhancements in cellular IoT systems, however, has not yet been fully explored. This article lists a variety of options that may be explored in realizing more secure IoT links using MIMO-beamforming techniques. The paper provides a valuable tutorial for both engineers and researchers working in this field. Full article
(This article belongs to the Special Issue Channel Estimation and Detection for Large-Scale MIMO Systems)
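The physical-layer security idea above rests on directing array gain toward the intended receiver while leaking little energy toward other directions. A minimal sketch with a uniform linear array and matched-filter (maximum-ratio) weights, assuming half-wavelength spacing and invented angles:

```python
import numpy as np

def steering_vector(n_ant, theta, d_over_lambda=0.5):
    """ULA steering vector for angle theta (radians), half-wavelength element spacing."""
    k = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * d_over_lambda * k * np.sin(theta))

def beam_gain(weights, theta):
    """Received power gain in direction theta for the given beamforming weights."""
    a = steering_vector(len(weights), theta)
    return np.abs(np.vdot(weights, a)) ** 2

n_ant = 16
theta_user = np.deg2rad(20)    # intended IoT receiver (assumed angle)
theta_eve = np.deg2rad(-45)    # hypothetical eavesdropper direction
w = steering_vector(n_ant, theta_user) / np.sqrt(n_ant)  # matched-filter weights
print(beam_gain(w, theta_user))         # approx. 16: full array gain to the user
print(beam_gain(w, theta_eve) < 1.0)    # far weaker off-beam leakage
```

The gap between on-beam and off-beam gain is what makes interception harder at 60 GHz, where narrow pencil beams are practical.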

13 pages, 3073 KiB  
Article
Robust Segmentation Based on Salient Region Detection Coupled Gaussian Mixture Model
by Xiaoyan Pan, Yuhui Zheng and Byeungwoo Jeon
Information 2022, 13(2), 98; https://doi.org/10.3390/info13020098 - 18 Feb 2022
Cited by 5 | Viewed by 2210
Abstract
Impressive progress in image segmentation has been witnessed recently. In this paper, an improved model introducing frequency-tuned salient region detection into the Gaussian mixture model (GMM) is proposed, which is named FTGMM. Frequency-tuned salient region detection is added to obtain the saliency map of the original image, which is combined with the original image, and the value of the saliency map is added into the Gaussian mixture model in the form of a spatial information weight. The proposed method (FTGMM) calculates the model parameters by the expectation maximization (EM) algorithm with low computational complexity. In the qualitative and quantitative analysis of the experiment, the subjective visual effect and the value of the evaluation index are found to be better than other methods. Therefore, the proposed method (FTGMM) is proven to have high precision and better robustness. Full article
(This article belongs to the Special Issue Recent Advances in Video Compression and Coding)
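The core mechanism, weighting the EM responsibilities of a GMM by a per-pixel saliency value, can be sketched in one dimension. This is a simplified stand-in for FTGMM, not the paper's implementation; the data, component count, and uniform saliency are assumptions.

```python
import numpy as np

def saliency_weighted_gmm(x, sal, k=2, iters=50):
    """1-D GMM fitted by EM, with saliency acting as a per-sample weight (illustrative)."""
    mu = np.percentile(x, np.linspace(10, 90, k))     # deterministic initialization
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    w = sal / sal.mean()                              # normalized saliency weights
    for _ in range(iters):
        # E-step: responsibilities under each Gaussian
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(1, keepdims=True)
        rw = r * w[:, None]                           # saliency-weighted responsibilities
        # M-step: weighted parameter updates
        nk = rw.sum(0)
        mu = (rw * x[:, None]).sum(0) / nk
        var = (rw * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6
        pi = nk / nk.sum()
    return mu, var, pi

# Two well-separated intensity clusters; uniform saliency for the demo
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)])
mu, var, pi = saliency_weighted_gmm(x, np.ones_like(x))
print(np.sort(mu))  # means near 0.2 and 0.8
```

With a real saliency map, salient pixels would receive larger weights and pull the component parameters toward the salient regions.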

13 pages, 310 KiB  
Article
COVID-19 and Science Communication: The Recording and Reporting of Disease Mortality
by Ognjen Arandjelović
Information 2022, 13(2), 97; https://doi.org/10.3390/info13020097 - 18 Feb 2022
Cited by 1 | Viewed by 2130
Abstract
The ongoing COVID-19 pandemic has brought science to the fore of public discourse and, considering the complexity of the issues involved, with it also the challenge of effective and informative science communication. This is a particularly contentious topic, in that it is both highly emotional in and of itself; sits at the nexus of the decision-making process regarding the handling of the pandemic, which has effected lockdowns, social behaviour measures, business closures, and others; and concerns the recording and reporting of disease mortality. To clarify a point that has caused much controversy and anger in the public debate, the first part of the present article discusses the very fundamentals underlying the issue of causative attribution with regards to mortality, lays out the foundations of the statistical means of mortality estimation, and concretizes these by analysing the recording and reporting practices adopted in England and their widespread misrepresentations. The second part of the article is empirical in nature. I present data and an analysis of how COVID-19 mortality has been reported in the mainstream media in the UK and the USA, including a comparative analysis both across the two countries as well as across different media outlets. The findings clearly demonstrate a uniform and worrying lack of understanding of the relevant technical subject matter by the media in both countries. Of particular interest is the finding that with a remarkable regularity (ρ>0.998), the greater the number of articles a media outlet has published on COVID-19 mortality, the greater the proportion of its articles misrepresented the disease mortality figures. Full article
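The reported correlation (ρ > 0.998) between an outlet's article count and the proportion of misrepresenting articles is a rank correlation, which can be computed from ranks alone. A sketch with invented per-outlet figures (the real data are in the paper); ties are assumed absent:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no-ties case)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical outlets: articles on COVID-19 mortality vs. share misrepresenting figures
articles = np.array([12, 40, 7, 95, 60, 23])
misrep_share = np.array([0.31, 0.52, 0.20, 0.80, 0.66, 0.45])
print(spearman_rho(articles, misrep_share))  # 1.0 here: the two rankings agree exactly
```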

16 pages, 2448 KiB  
Article
Tuberculosis Bacteria Detection and Counting in Fluorescence Microscopy Images Using a Multi-Stage Deep Learning Pipeline
by Marios Zachariou, Ognjen Arandjelović, Wilber Sabiiti, Bariki Mtafya and Derek Sloan
Information 2022, 13(2), 96; https://doi.org/10.3390/info13020096 - 18 Feb 2022
Cited by 18 | Viewed by 5598
Abstract
The manual observation of sputum smears by fluorescence microscopy for the diagnosis and treatment monitoring of patients with tuberculosis (TB) is a laborious and subjective task. In this work, we introduce an automatic pipeline which employs a novel deep learning-based approach to rapidly detect Mycobacterium tuberculosis (Mtb) organisms in sputum samples and thus quantify the burden of the disease. Fluorescence microscopy images are used as input in a series of networks, which ultimately produces a final count of present bacteria more quickly and consistently than manual analysis by healthcare workers. The pipeline consists of four stages: annotation by cycle-consistent generative adversarial networks (GANs), extraction of salient image patches, classification of the extracted patches, and finally, regression to yield the final bacteria count. We empirically evaluate the individual stages of the pipeline as well as perform a unified evaluation on previously unseen data that were given ground-truth labels by an experienced microscopist. We show that with no human intervention, the pipeline can provide the bacterial count for a sample of images with an error of less than 5%. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)

18 pages, 6299 KiB  
Article
Satellite Pose Estimation via Only a Single Spatial Circle
by Wei Zhang, Pingguo Xiao and Junlin Li
Information 2022, 13(2), 95; https://doi.org/10.3390/info13020095 - 17 Feb 2022
Cited by 1 | Viewed by 2238
Abstract
The docking ring is an important object for estimating the pose of satellites in space: it has strong rigid-body characteristics and provides a fixed circular feature. However, because estimating the pose of a single spatial circle on the docking ring requires additional constraints, practical applications are greatly limited. In response to the above problems, this paper proposes a pose solution method based on a single spatial circle. First, the spatial circle is discretized into a set of 3D asymmetric specific sparse points, eliminating the strict central symmetry of the circle. Then, a two-stage pose estimation network, Hvnet, based on Hough voting is proposed to locate the 2D sparse points on the image. Finally, the position and orientation of the spatial circle are obtained by the Perspective-n-Point (PnP) algorithm. The effectiveness of the proposed method was verified through experiments, and the method was found to achieve good solution accuracy under a complex lighting environment. Full article
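The geometric idea, discretizing the circle at unevenly spaced angles so no in-plane rotation maps the point set onto itself, and then solving PnP on the 2D projections, can be sketched as below. The radius, angles, and camera intrinsics are assumed values; in practice the (3D, 2D) correspondences would be handed to a PnP solver such as OpenCV's cv2.solvePnP.

```python
import numpy as np

def circle_points_3d(radius, angles_deg):
    """Discretize a circle (in its own plane, z = 0) at asymmetric angles,
    breaking the circle's central symmetry as in the paper's idea."""
    a = np.deg2rad(np.asarray(angles_deg, float))
    return np.stack([radius * np.cos(a), radius * np.sin(a), np.zeros_like(a)], 1)

def project(points, R, t, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3-D points under rotation R and translation t."""
    pc = points @ R.T + t
    return np.stack([fx * pc[:, 0] / pc[:, 2] + cx,
                     fy * pc[:, 1] / pc[:, 2] + cy], 1)

# Unevenly spaced angles: the sparse set has no rotational self-symmetry
pts = circle_points_3d(0.5, [0, 30, 75, 140, 220, 310])
R = np.eye(3)
t = np.array([0.0, 0.0, 4.0])
uv = project(pts, R, t)
print(uv.shape)  # (6, 2)
```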

14 pages, 612 KiB  
Article
Detecting Learning Patterns in Tertiary Education Using K-Means Clustering
by Emmanuel Tuyishimire, Wadzanai Mabuto, Paul Gatabazi and Sylvie Bayisingize
Information 2022, 13(2), 94; https://doi.org/10.3390/info13020094 - 17 Feb 2022
Cited by 5 | Viewed by 3605
Abstract
We are in the era where various processes need to be online. However, data from digital learning platforms are still underutilised in higher education, yet they contain student learning patterns, whose awareness would contribute to educational development. Furthermore, the knowledge of student progress would inform educators whether to adapt teaching conditions for critically performing students. Limited knowledge of performance patterns limits the development of adaptive teaching and learning mechanisms. In this paper, a model for data exploitation to dynamically study students' progress is proposed. Variables to determine current students' progress are defined and are used to group students into different clusters. A model for dynamic clustering is proposed, and the related cluster migration is analysed to isolate poorer or higher performing students. K-means clustering is performed on real data consisting of students from a South African tertiary institution. The proposed model for cluster migration analysis is applied and the corresponding learning patterns are revealed. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Education)
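The cluster-migration analysis can be sketched as follows: cluster students by term, then count how many move between clusters across terms. This toy version uses a minimal 1-D k-means on invented marks, not the institution's data or the paper's exact variables.

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Minimal 1-D k-means with deterministic percentile initialization."""
    c = np.percentile(x, np.linspace(0, 100, k))
    for _ in range(iters):
        lab = np.abs(x[:, None] - c).argmin(1)      # assign to nearest centroid
        for j in range(k):
            if (lab == j).any():
                c[j] = x[lab == j].mean()           # recompute centroid
    return lab, c

def migration_matrix(lab1, lab2, k):
    """m[i, j] = number of students in cluster i (term 1) and cluster j (term 2)."""
    m = np.zeros((k, k), int)
    for a, b in zip(lab1, lab2):
        m[a, b] += 1
    return m

term1 = np.array([35., 40., 45., 78., 82., 90.])    # invented average marks, term 1
term2 = np.array([70., 42., 50., 80., 85., 55.])    # same students, term 2
lab1, _ = kmeans_1d(term1, 2)
lab2, _ = kmeans_1d(term2, 2)
print(migration_matrix(lab1, lab2, 2))
```

Off-diagonal counts reveal students migrating between the low- and high-performing clusters, which is the pattern the paper isolates.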

14 pages, 344 KiB  
Article
Lexical Diversity in Statistical and Neural Machine Translation
by Mojca Brglez and Špela Vintar
Information 2022, 13(2), 93; https://doi.org/10.3390/info13020093 - 15 Feb 2022
Cited by 6 | Viewed by 3943
Abstract
Neural machine translation systems have revolutionized translation processes in terms of quantity and speed in recent years, and they have even been claimed to achieve human parity. However, the quality of their output has also raised serious doubts and concerns, such as loss in lexical variation, evidence of “machine translationese”, and its effect on post-editing, which results in “post-editese”. In this study, we analyze the outputs of three English to Slovenian machine translation systems in terms of lexical diversity in three different genres. Using both quantitative and qualitative methods, we analyze one statistical and two neural systems, and we compare them to a human reference translation. Our quantitative analyses based on lexical diversity metrics show diverging results; however, translation systems, particularly neural ones, mostly exhibit larger lexical diversity than their human counterparts. Nevertheless, a qualitative method shows that these quantitative results are not always a reliable tool to assess true lexical diversity and that a lot of lexical “creativity”, especially by neural translation systems, is often unreliable, inconsistent, and misguided. Full article
(This article belongs to the Special Issue Frontiers in Machine Translation)
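Lexical diversity metrics of the kind used in this study start from the type-token ratio (TTR). A brief sketch of TTR and a moving-average variant (the specific metrics and window size here are illustrative, not necessarily those used in the paper):

```python
def ttr(tokens):
    """Type-token ratio: distinct words divided by total words."""
    return len(set(tokens)) / len(tokens)

def mattr(tokens, window=5):
    """Moving-average TTR: mean TTR over sliding windows, less length-sensitive."""
    if len(tokens) <= window:
        return ttr(tokens)
    vals = [ttr(tokens[i:i + window]) for i in range(len(tokens) - window + 1)]
    return sum(vals) / len(vals)

human = "the cat sat on the mat while the dog dozed".split()
mt = "the cat sat on the mat and the cat sat".split()
print(ttr(human), ttr(mt))  # 0.8 0.6: the repetitive output scores lower
```

As the abstract cautions, such counts alone cannot tell apt lexical variety from inconsistent or misguided word choices, which is why the study pairs them with qualitative analysis.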

23 pages, 5179 KiB  
Article
Multi-Objective Optimization of a Task-Scheduling Algorithm for a Secure Cloud
by Wei Li, Qi Fan, Fangfang Dang, Yuan Jiang, Haomin Wang, Shuai Li and Xiaoliang Zhang
Information 2022, 13(2), 92; https://doi.org/10.3390/info13020092 - 15 Feb 2022
Cited by 4 | Viewed by 3201
Abstract
As more and more power information systems are gradually deployed to cloud servers, the task scheduling of a secure cloud is facing challenges. Optimizing the scheduling strategy from only a single aspect cannot meet the needs of power business. At the same time, the power information system deployed on the security cloud will face different types of business traffic, and each type of business traffic has a different risk level. However, the existing research has not conducted in-depth research on this aspect, so it is difficult to obtain the optimal scheduling scheme. To solve the above problems, we first build a security cloud task-scheduling model combined with the power information system, and then we define the risk level of business traffic and the objective function of task scheduling. Based on the above, we propose a multi-objective optimization task-scheduling algorithm based on the artificial fish swarm algorithm (MOOAFSA). MOOAFSA initializes the fish population through chaotic mapping, which improves the global optimization capability. Moreover, MOOAFSA uses a dynamic step size and field of view, as well as the introduction of an adaptive weight factor, which accelerates the convergence and improves optimization accuracy. Finally, MOOAFSA applies crossovers and mutations, which make it easier to jump out of a local optimum. The experimental results show that compared with ant colony optimization (ACO), particle swarm optimization (PSO) and the artificial fish swarm algorithm (AFSA), MOOAFSA not only significantly accelerates the convergence speed but also reduces task-completion time, load imbalance and execution cost by 15.62–28.69%, 66.91–75.62% and 32.37–41.31%, respectively. Full article
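The chaotic-map initialization mentioned above is commonly done with a logistic map, whose iterates cover the search space more evenly than a plain uniform draw. A sketch under that assumption (the paper may use a different map or parameters):

```python
import numpy as np

def chaotic_population(n_fish, dim, lo, hi, mu=4.0, x0=0.37):
    """Initialize a swarm via the logistic map x_{k+1} = mu * x_k * (1 - x_k),
    then scale the chaotic sequence from (0, 1) to the search range [lo, hi]."""
    seq = np.empty((n_fish, dim))
    x = x0
    for i in range(n_fish):
        for j in range(dim):
            x = mu * x * (1 - x)        # chaotic iterate stays in (0, 1) for mu = 4
            seq[i, j] = x
    return lo + seq * (hi - lo)

pop = chaotic_population(n_fish=10, dim=3, lo=0.0, hi=100.0)
print(pop.shape)  # (10, 3)
```

The starting value x0 must avoid the map's fixed points (0 and 0.75 for mu = 4), otherwise the sequence degenerates.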

18 pages, 3371 KiB  
Article
MEduKG: A Deep-Learning-Based Approach for Multi-Modal Educational Knowledge Graph Construction
by Nan Li, Qiang Shen, Rui Song, Yang Chi and Hao Xu
Information 2022, 13(2), 91; https://doi.org/10.3390/info13020091 - 15 Feb 2022
Cited by 23 | Viewed by 6718
Abstract
The popularity of information technology has given rise to a growing interest in smart education and has provided the possibility of combining online and offline education. Knowledge graphs, an effective technology for knowledge representation and management, have been successfully utilized to manage massive educational resources. However, the existing research on constructing educational knowledge graphs ignores multiple modalities and their relationships, such as teacher speeches and their relationship with knowledge. To tackle this problem, we propose an automatic approach to construct multi-modal educational knowledge graphs that integrate speech as a modal resource to facilitate the reuse of educational resources. Specifically, we first propose a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model based on an education lexicon, called EduBERT, which can adaptively capture effective information in the education field. We also add a Bidirectional Long Short-Term Memory-Conditional Random Field (BiLSTM-CRF) to effectively identify educational entities. Then, the locational information of the entity is incorporated into BERT to extract the educational relationship. In addition, to address the limitations of traditional text-based knowledge graphs, we focus on collecting teacher speech to construct a multi-modal knowledge graph. We propose a speech-fusion method that links these data into the graph as a class of entities. The numerical results show that our proposed approach can manage and present various modes of educational resources and that it can provide better education services. Full article

12 pages, 3551 KiB  
Article
The Influence of Network Public Opinion on Audit Credibility: A Dynamic Rumor Propagation Model Based on User Weight
by Lin Zhu, Jinyu Li and Luyi Bai
Information 2022, 13(2), 90; https://doi.org/10.3390/info13020090 - 14 Feb 2022
Cited by 3 | Viewed by 2581
Abstract
Network public opinion is one of the factors that affects the credibility of audits, especially falsified network public opinion, which can easily result in the public losing trust in audits and may even impact the financial market. As users of social networks are not online 24 h a day, and their network behaviors are dynamic, in this study, we constructed a dynamic rumor-spreading model. Because the influence and authority of different user nodes in the network are different, we added user weights to the rumor propagation model, and finally, we established a dynamic rumor propagation model based on user weights. The experimental results showed that the rumor propagation model had a good monitoring effect, so it could help with managing the public opinion of audit institutions, maintaining the image of audit fairness and justice, and maintaining the stability of the capital market. Full article
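A weighted rumor-propagation process of the kind described above can be sketched as a discrete-time ignorant/spreader/stifler model on a network, where each user's weight scales their influence. The network, probabilities, and weights below are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np

def simulate_rumor(adj, weight, spread_p=0.08, stifle_p=0.05, steps=200, seed=0):
    """Discrete-time rumor spreading on a network.
    States: 0 = ignorant, 1 = spreader, 2 = stifler.
    'weight' scales each user's influence (authority) when spreading."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    state = np.zeros(n, int)
    state[0] = 1                          # rumor starts at node 0
    for _ in range(steps):
        spreaders = np.where(state == 1)[0]
        for s in spreaders:
            for nb in np.where(adj[s])[0]:
                if state[nb] == 0 and rng.random() < spread_p * weight[s]:
                    state[nb] = 1         # influenced in proportion to user weight
            if rng.random() < stifle_p:
                state[s] = 2              # spreader loses interest
    return state

# Toy ring network of 20 users; node 0 is a high-authority account
n = 20
adj = np.zeros((n, n), bool)
for i in range(n):
    adj[i, (i + 1) % n] = adj[i, (i - 1) % n] = True
weight = np.ones(n)
weight[0] = 3.0
final = simulate_rumor(adj, weight)
print(np.bincount(final, minlength=3))  # counts of ignorants, spreaders, stiflers
```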

13 pages, 4205 KiB  
Article
A New Edge Computing Architecture for IoT and Multimedia Data Management
by Olivier Debauche, Saïd Mahmoudi and Adriano Guttadauria
Information 2022, 13(2), 89; https://doi.org/10.3390/info13020089 - 14 Feb 2022
Cited by 29 | Viewed by 8570
Abstract
The Internet of Things and multimedia devices generate a tremendous amount of data. The transfer of this data to the cloud is a challenging problem because of congestion at the network level, and therefore processing time could be too long when we use a pure cloud computing strategy. On the other hand, new applications requiring the processing of large amounts of data in real time have gradually emerged, such as virtual reality and augmented reality. These new applications have gradually won over users and developed a demand for near real-time interaction, which has completely called into question the way we process and store data. To address these two problems of congestion and computing time, edge architecture has emerged with the goal of processing data as close as possible to users, and of ensuring privacy protection and responsiveness in real time. With the continuous increase in computing power, memory and data storage at the level of smartphones and connected objects, it is now possible to process data as close as possible to the sensors or directly on users' devices. The coupling of these two types of processing, as close as possible to the data and to the user, opens up new perspectives in terms of services. In this paper, we present a new distributed edge architecture aiming to process and store Internet of Things and multimedia data close to the data producer, offering fast response time (closer to real time) in order to meet the demands of modern applications. To do this, processing on the side of the data producers collaborates with processing close to the users, establishing a new short-supply-circuit paradigm for data transmission inspired by short supply chains in agriculture. Removing unnecessary intermediaries between the producer and the consumer of the data improves efficiency. We named this new paradigm the Short Supply Circuit Internet of Things (SSCIoT). Full article

14 pages, 841 KiB  
Article
Knowledge Distillation: A Method for Making Neural Machine Translation More Efficient
by Wandri Jooste, Rejwanul Haque and Andy Way
Information 2022, 13(2), 88; https://doi.org/10.3390/info13020088 - 14 Feb 2022
Cited by 15 | Viewed by 4831
Abstract
Neural machine translation (NMT) systems have greatly improved the quality available from machine translation (MT) compared to statistical machine translation (SMT) systems. However, these state-of-the-art NMT models need much more computing power and data than SMT models, a requirement that is unsustainable in the long run and of very limited benefit in low-resource scenarios. To some extent, model compression—more specifically state-of-the-art knowledge distillation techniques—can remedy this. In this work, we investigate knowledge distillation on a simulated low-resource German-to-English translation task. We show that sequence-level knowledge distillation can be used to train small student models on knowledge distilled from large teacher models. Part of this work examines the influence of hyperparameter tuning on model performance when lowering the number of Transformer heads or limiting the vocabulary size. Interestingly, the accuracy of these student models is higher than that of the teachers in some cases even though the student model training times are shorter in some cases. In a novel contribution, we demonstrate for a specific MT service provider that in the post-deployment phase, distilled student models can reduce emissions, as well as cost purely in monetary terms, by almost 50%. Full article
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
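Knowledge distillation trains the student against a mix of the gold labels and the teacher's temperature-softened output distribution. A minimal numpy sketch of that objective (the temperature, mixing weight, and random logits are assumptions; the paper works at sequence level with full NMT models):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix of cross-entropy on gold labels and cross-entropy to the teacher's
    softened distribution (equal to KL divergence up to a constant in the teacher)."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    kd = -(p_t * log_p_s).sum(-1).mean() * T * T   # soft-target term, scaled by T^2
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * ce + (1 - alpha) * kd

rng = np.random.default_rng(0)
student = rng.normal(size=(4, 10))   # 4 positions, 10-word toy vocabulary
teacher = rng.normal(size=(4, 10))
labels = np.array([1, 3, 5, 7])
loss = distillation_loss(student, teacher, labels)
print(loss > 0)  # True
```

Minimizing this loss lets a small student absorb the teacher's output distribution, which is what enables the smaller, cheaper deployed models the abstract describes.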

16 pages, 1938 KiB  
Article
A Privacy-Preserving and Standard-Based Architecture for Secondary Use of Clinical Data
by Mario Ciampi, Mario Sicuranza and Stefano Silvestri
Information 2022, 13(2), 87; https://doi.org/10.3390/info13020087 - 13 Feb 2022
Cited by 9 | Viewed by 6534
Abstract
The heterogeneity of the formats and standards of clinical data, which include structured, semi-structured, and unstructured data, in addition to the sensitive information contained in them, requires the definition of specific approaches that are able to implement methodologies that can permit the extraction of valuable information buried under such data. Although many challenges and issues that have not been fully addressed still exist when this information must be processed and used for further purposes, the most recent techniques based on machine learning and big data analytics can support the information extraction process for the secondary use of clinical data. In particular, these techniques can facilitate the transformation of heterogeneous data into a common standard format. Moreover, they can also be exploited to define anonymization or pseudonymization approaches, respecting the privacy requirements stated in the General Data Protection Regulation, the Health Insurance Portability and Accountability Act, and other national and regional laws. In fact, compliance with these laws requires that only de-identified clinical and personal data can be processed for secondary analyses, in particular when data is shared or exchanged across different institutions. This work proposes a modular architecture capable of collecting clinical data from heterogeneous sources and transforming them into useful data for secondary uses, such as research, governance, and medical education purposes. The proposed architecture is able to exploit appropriate modules and algorithms, carry out the transformations (pseudonymization and standardization) required to use data for secondary purposes, and provide efficient tools to facilitate the retrieval and analysis processes. Preliminary experimental tests show good accuracy in terms of quantitative evaluations. Full article
(This article belongs to the Special Issue Health Data Information Retrieval)
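A common building block for the pseudonymization step described above is a keyed one-way function: the same identifier always maps to the same token, so records remain linkable across datasets, but the identifier cannot be recovered without the key. A sketch of that pattern (not the paper's exact scheme; the key and identifiers are invented):

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Keyed one-way pseudonym via HMAC-SHA256, truncated to a 16-hex-char token."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

key = b"hospital-secret-key"   # hypothetical key, kept separate from shared data
p1 = pseudonymize("patient-12345", key)
p2 = pseudonymize("patient-12345", key)
p3 = pseudonymize("patient-67890", key)
print(p1 == p2, p1 != p3)  # True True: deterministic linkage, distinct patients differ
```

Under GDPR, such tokens are still pseudonymous (not anonymous) data, since the key holder can re-identify them; that distinction drives the architecture's separation of modules.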

16 pages, 681 KiB  
Article
A Study on the Effect of Customer Habits on Revisit Intention Focusing on Franchise Coffee Shops
by Hong-Joo Lee
Information 2022, 13(2), 86; https://doi.org/10.3390/info13020086 - 13 Feb 2022
Cited by 4 | Viewed by 13652
Abstract
As the economy grows in South Korea and the needs of consumers become increasingly diversified, various consumer markets are emerging. In this study, the reasons for customers’ intentions to revisit a franchise coffee shop were explored. Many researchers have found that customer satisfaction is a major factor in the intention to revisit, and the factors affecting this satisfaction have been presented as major research topics. However, in this study, in order to examine the consumers’ behavior toward revisit intentions, the results were analyzed by dividing the consumer data into a group of college students in their 20s and a group of office workers in their 30s and 40s. The results of this study found that in regard to customers’ habitual intentions to revisit, perceived product quality and brand awareness were more influential than service quality or the physical environment. In particular, before returning to a coffee shop, customers habitually recalled the coffee shop they wanted to revisit, suggesting that the taste of the coffee and the quality of the various products had a very important impact on their revisiting habits. Full article
(This article belongs to the Special Issue Data Analytics and Consumer Behavior)
19 pages, 1250 KiB  
Article
Data Processing in Cloud Computing Model on the Example of Salesforce Cloud
by Witold Marańda, Aneta Poniszewska-Marańda and Małgorzata Szymczyńska
Information 2022, 13(2), 85; https://doi.org/10.3390/info13020085 - 12 Feb 2022
Cited by 5 | Viewed by 6178
Abstract
Data processing is integrated with every aspect of enterprise operations, from accounting to marketing, internal communication, and the control of production processes. The best place to store the information is a properly prepared data center. There are many providers of cloud computing and many methods of data storage and processing. Every business must carefully consider how the data at its disposal are to be managed. The main purpose of this paper is the study and comparison of the available methods of data processing and storage outside the enterprise in the cloud computing model. The research used the Salesforce.com cloud in the SaaS (software as a service) model and Force.com, a free development platform offered by Salesforce.com. The paper presents the results of an analysis of the available methods of processing and storing data outside the enterprise in the cloud computing model, using the Salesforce cloud as an example. Salesforce.com offers several benefits, but each service provider offers different services, systems, products, and forms of data protection; the customer's choice depends on individual needs and business plans for the future. A comparison of the available methods of data processing and storage outside the enterprise in the cloud computing model was presented. On the basis of the collected results, it was determined for what purposes the data processing methods available on the platform are suitable and how they can meet the needs of enterprises. Full article
14 pages, 1207 KiB  
Article
Learning Static-Adaptive Graphs for RGB-T Image Saliency Detection
by Zhengmei Xu, Jin Tang, Aiwu Zhou and Huaming Liu
Information 2022, 13(2), 84; https://doi.org/10.3390/info13020084 - 12 Feb 2022
Viewed by 2366
Abstract
Many methods have been proposed for image saliency detection to handle challenging issues such as low illumination, cluttered backgrounds, and low contrast. Although these algorithms achieve good performance, detection results based on the RGB modality alone are still poor. Inspired by the recent progress of multi-modality fusion, we propose a novel RGB-thermal saliency detection algorithm through learning static-adaptive graphs. Specifically, we first extract superpixels from the two modalities and calculate their affinity matrix. Then, we learn the affinity matrix dynamically and construct a static-adaptive graph. Finally, the saliency maps can be obtained by a two-stage ranking algorithm. Our method is evaluated on the RGBT-Saliency Dataset with eleven kinds of challenging subsets. Experimental results show that the proposed method has better generalization performance. The complementary benefits of RGB and thermal images and the more robust feature expression of learning static-adaptive graphs create an effective way to improve the detection effectiveness of image saliency in complex scenes. Full article
(This article belongs to the Topic Soft Computing)
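The two-stage ranking step is not detailed in the abstract; saliency ranking on a superpixel affinity graph is often formulated as manifold ranking, one stage of which can be sketched as follows (the toy chain graph, seed choice, and damping value are illustrative, and the paper's static-adaptive affinity learning is omitted):

```python
import numpy as np

def graph_ranking(W, seed, alpha=0.5):
    """Diffuse a seed label over an affinity graph (one ranking stage):
    solve f = (I - alpha*S)^(-1) y, with S the normalized affinity."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D @ W @ D                        # symmetric normalization
    y = np.zeros(W.shape[0])
    y[seed] = 1.0                        # query/seed indicator vector
    return np.linalg.solve(np.eye(W.shape[0]) - alpha * S, y)

# Toy 4-node chain graph: the score should decay with distance from the seed.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = graph_ranking(W, seed=0)
assert scores[0] > scores[1] > scores[2] > scores[3]
```

In a saliency pipeline, the seeds would be boundary or foreground superpixels and W the learned RGB-thermal affinity matrix.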
39 pages, 794 KiB  
Review
A Survey on Text Classification Algorithms: From Text to Predictions
by Andrea Gasparetto, Matteo Marcuzzo, Alessandro Zangari and Andrea Albarelli
Information 2022, 13(2), 83; https://doi.org/10.3390/info13020083 - 11 Feb 2022
Cited by 84 | Viewed by 21704
Abstract
In recent years, the exponential growth of digital documents has been met by rapid progress in text classification techniques. Newly proposed machine learning algorithms leverage the latest advancements in deep learning methods, allowing for the automatic extraction of expressive features. The swift development of these methods has led to a plethora of strategies to encode natural language into machine-interpretable data. The latest language modelling algorithms are used in conjunction with ad hoc preprocessing procedures, of which the description is often omitted in favour of a more detailed explanation of the classification step. This paper offers a concise review of recent text classification models, with emphasis on the flow of data, from raw text to output labels. We highlight the differences between earlier methods and more recent, deep learning-based methods in both their functioning and in how they transform input data. To give a better perspective on the text classification landscape, we provide an overview of datasets for the English language, as well as supplying instructions for the synthesis of two new multilabel datasets, which we found to be particularly scarce in this setting. Finally, we provide an outline of new experimental results and discuss the open research challenges posed by deep learning-based language models. Full article
(This article belongs to the Topic Big Data and Artificial Intelligence)
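As a concrete anchor for the raw-text-to-labels flow the survey describes, here is a deliberately minimal sketch of the earliest family of methods it covers: bag-of-words features with a nearest-neighbour decision (toy training data; no preprocessing, term weighting, or deep features):

```python
from collections import Counter
import math

def tf_vector(text):
    """Raw-term-frequency bag-of-words representation."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented two-class corpus for illustration only.
train = [("the team won the match", "sports"),
         ("stocks fell on the market", "finance"),
         ("the player scored a goal", "sports"),
         ("the bank raised interest rates", "finance")]

def classify(text):
    """Label a document with the class of its most similar training text."""
    v = tf_vector(text)
    return max(train, key=lambda ex: cosine(v, tf_vector(ex[0])))[1]

assert classify("the striker scored in the match") == "sports"
```

Modern pipelines replace each of these stages, tokenization, vectorization, and the classifier, with learned components, which is exactly the progression the survey traces.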
8 pages, 237 KiB  
Article
Ordered Weighted Averaging (OWA), Decision Making under Uncertainty, and Deep Learning: How Is This All Related?
by Vladik Kreinovich
Information 2022, 13(2), 82; https://doi.org/10.3390/info13020082 - 11 Feb 2022
Viewed by 4075
Abstract
Among many research areas to which Ron Yager contributed are decision making under uncertainty (in particular, under interval and fuzzy uncertainty) and aggregation—where he proposed, analyzed, and utilized ordered weighted averaging (OWA). The OWA algorithm itself provides only a specific type of data aggregation. However, it turns out that if we allow several OWA stages, one after another, we obtain a scheme with a universal approximation property—moreover, a scheme which is perfectly equivalent to modern ReLU-based deep neural networks. In this sense, Ron Yager can be viewed as a (grand)father of ReLU-based deep learning. We also recall that the existing schemes for decision making under uncertainty are also naturally interpretable in OWA terms. Full article
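For readers unfamiliar with the operator, OWA attaches weights to rank positions rather than to particular inputs, so max, min, and the arithmetic mean are all special cases; a minimal sketch:

```python
def owa(values, weights):
    """Ordered weighted averaging: sort inputs in descending order,
    then take the weighted sum; weights attach to rank positions."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

x = [0.2, 0.9, 0.5]
assert owa(x, [1.0, 0.0, 0.0]) == 0.9                    # max
assert owa(x, [0.0, 0.0, 1.0]) == 0.2                    # min
assert abs(owa(x, [1/3, 1/3, 1/3]) - sum(x) / 3) < 1e-9  # arithmetic mean
```

Stacking several such stages, as the paper argues, yields a scheme with the universal approximation property of ReLU-based deep networks.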
24 pages, 2344 KiB  
Article
Towards an Engaging and Gamified Online Learning Environment—A Real Case Study
by Filipe Portela
Information 2022, 13(2), 80; https://doi.org/10.3390/info13020080 - 9 Feb 2022
Cited by 6 | Viewed by 3757
Abstract
Currently, remote work is common, and this trend has come to several areas and processes, such as education and teaching. Regarding higher education, universities have several challenges to overcome, the most challenging being transforming teaching to be more digital and engaging. Therefore, TechTeach has arisen as a new teaching paradigm that creates a unique learning environment and satisfies students’ and professors’ expectations. After the success of the b-learning approach, professors created new experiences utilizing an entirely online learning environment following this paradigm. This article shows the work performed through a real case study, explains the strategy used to implement this paradigm, provides students’ opinions, and analyses the results achieved. The results demonstrated that, while the effort was tremendous, the result was beneficial to all. After 208 online hours of classes, 11,173 downloads, 15,224 messages, 200,000 sessions, 3 rescue requests, and 28t cards, 98.15% of the active participants gave it their approval, 96.53% considered this subject equal to or better than the others, and 85% accepted the gamification system. These results show that a class can be an engaging environment where students can learn and enjoy it regardless of whether it is physical or not. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
15 pages, 3025 KiB  
Article
Architecture of a Hybrid Video/Optical See-through Head-Mounted Display-Based Augmented Reality Surgical Navigation Platform
by Marina Carbone, Fabrizio Cutolo, Sara Condino, Laura Cercenelli, Renzo D’Amato, Giovanni Badiali and Vincenzo Ferrari
Information 2022, 13(2), 81; https://doi.org/10.3390/info13020081 - 8 Feb 2022
Cited by 20 | Viewed by 4543
Abstract
In the context of image-guided surgery, augmented reality (AR) represents a ground-breaking and enticing improvement, especially when paired with wearability in the case of open surgery. Commercially available AR head-mounted displays (HMDs), designed for general purposes, are increasingly used outside their indications to develop surgical guidance applications with the ambition to demonstrate the potential of AR in surgery. The applications proposed in the literature underline the hunger for AR-guidance in the surgical room together with the limitations that hinder commercial HMDs from being the answer to such a need. The medical domain demands specifically developed devices that address, together with ergonomics, the achievement of surgical accuracy objectives and compliance with medical device regulations. In the framework of an EU Horizon2020 project, a hybrid video and optical see-through augmented reality headset paired with a software architecture, both specifically designed to be seamlessly integrated into the surgical workflow, has been developed. In this paper, the overall architecture of the system is described. The developed AR HMD surgical navigation platform was positively tested on seven patients to aid the surgeon while performing Le Fort 1 osteotomy in cranio-maxillofacial surgery, demonstrating the value of the hybrid approach and the safety and usability of the navigation platform. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
15 pages, 1196 KiB  
Article
Audio Classification Algorithm for Hearing Aids Based on Robust Band Entropy Information
by Weiyun Jin and Xiaohua Fan
Information 2022, 13(2), 79; https://doi.org/10.3390/info13020079 - 8 Feb 2022
Cited by 1 | Viewed by 2355
Abstract
Audio classification algorithms for hearing aids require excellent classification accuracy. To achieve effective performance, we first present a novel supervised method involving a spectral entropy-based magnitude feature with a random forest classifier (SEM-RF). A novel SEM feature based on the similarity and stability of band signals is introduced to improve the classification accuracy in each audio environment. The random forest (RF) model is applied to perform the classification process. Subsequently, to resolve the problem of the decreasing classification accuracy of the SEM-RF algorithm in mixed speech environments, an improved algorithm, ImSEM-RF, is proposed. The SEM features and corresponding phase features are fused at multiple time resolutions to form a robust multi-time-resolution magnitude and phase (multi-MP) feature, which improves the stability of the feature when interfering speech is present. The RF model is improved using the linear discriminant analysis (LDA) method to form a joint linear discriminant analysis-random forest (LDA-RF) classification model, which accelerates the model. Through experiments on hearing aid research datasets for acoustic environment recognition, the effectiveness of the SEM-RF algorithm was confirmed on a background audio signal dataset. The classification accuracy increased by approximately 7% compared with a background noise classification algorithm using an RF tree classifier. The validity of the ImSEM-RF algorithm in speech-interference environments was confirmed using the speech-in-background audio signal dataset. Compared with the SEM-RF algorithm, the classification accuracy was improved by approximately 2%. With multi-MP features, LDA-RF reduced the program’s running time by >80% compared with RF. Full article
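The SEM feature builds on band similarity and stability, which the abstract does not fully specify; the underlying spectral-entropy quantity it starts from can be sketched as follows (the FFT length, sampling rate, and test signals are illustrative, not the paper's configuration):

```python
import numpy as np

def spectral_entropy(signal, n_fft=256):
    """Shannon entropy of the normalized power spectrum: low for tonal
    (narrowband) signals, high for broadband ones."""
    power = np.abs(np.fft.rfft(signal, n_fft)) ** 2
    p = power / power.sum()          # treat the spectrum as a distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)       # narrowband: energy in few bins
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)          # broadband: energy spread widely
assert spectral_entropy(tone) < spectral_entropy(noise)
```

Per-band versions of this quantity, tracked over time, give the kind of similarity/stability cues the SEM feature exploits.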
14 pages, 3538 KiB  
Article
Security Service Function Chain Based on Graph Neural Network
by Wei Li, Haomin Wang, Xiaoliang Zhang, Dingding Li, Lijing Yan, Qi Fan, Yuan Jiang and Ruoyu Yao
Information 2022, 13(2), 78; https://doi.org/10.3390/info13020078 - 7 Feb 2022
Cited by 5 | Viewed by 2824
Abstract
With the rapid development and wide application of cloud computing, security protection in the cloud environment has become an urgent problem to be solved. However, traditional security service equipment is closely coupled with the network topology, so it is difficult to upgrade and expand the security service, which cannot adapt as the security requirements of network applications change. Building a security service function chain (SSFC) makes the deployment of security service functions more dynamic and scalable. Based on a software-defined network (SDN) and network function virtualization (NFV) environment, this paper proposes an SSFC construction scheme based on an optimization algorithm that uses a graph neural network to extract network topology features. The experimental results show that, compared with the shortest path, greedy, and hybrid bee colony algorithms, the graph neural network algorithm achieves an average success rate of more than 90% in constructing the security service function chain, far exceeding the other algorithms, while requiring far less construction time. It effectively reduces the end-to-end delay and increases the network throughput. Full article
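The abstract does not describe the network's layers; a standard single graph-convolution step of the kind typically used to extract topology features looks like this (toy topology and random weights; a generic sketch, not the paper's model):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: aggregate neighbour features through a
    degree-normalized adjacency with self-loops, then apply a linear map
    and ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-node service topology with 3 input features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3))      # per-node features (e.g. load, delay)
W = rng.standard_normal((3, 2))      # learned projection (random here)
out = gcn_layer(A, H, W)
assert out.shape == (4, 2) and (out >= 0).all()
```

The resulting per-node embeddings can then feed the chain-placement decision, which is where such a scheme would differ from topology-agnostic heuristics like shortest-path placement.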
12 pages, 6233 KiB  
Article
Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack
by Yakov Usoltsev, Balzhit Lodonova, Alexander Shelupanov, Anton Konev and Evgeny Kostyuchenko
Information 2022, 13(2), 77; https://doi.org/10.3390/info13020077 - 5 Feb 2022
Viewed by 3111
Abstract
Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. The use of attacks against authentication systems greatly reduces the accuracy of such a system, despite the complexity of generating a competitive example. As part of this study, a white-box adversarial attack on an authentication system was carried out. The basis of the authentication system is a neural network perceptron, trained on a dataset of frequency signatures of sign. For an attack on an atypical dataset, the following results were obtained: with an attack intensity of 25%, the authentication system availability decreases to 50% for a particular user, and with a further increase in the attack intensity, the accuracy decreases to 5%. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence Using Real Data)
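The abstract does not name the attack construction; a common white-box method is the fast gradient sign method (FGSM), sketched here for a single logistic neuron (the weights, features, and perturbation size are invented, and this is a generic sketch rather than the paper's exact attack):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """White-box FGSM on a logistic perceptron: perturb each feature by
    eps in the direction that increases the cross-entropy loss."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid output
    grad_x = (p - y) * w             # d(loss)/dx for a logistic output
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.standard_normal(8)           # illustrative trained weights
b = 0.0
x = rng.standard_normal(8)           # illustrative input signature
y = 1.0 if x @ w + b > 0 else 0.0    # treat the model's own label as truth
x_adv = fgsm(x, w, b, y, eps=0.5)

# The perturbation pushes the logit toward the opposite class.
orig, adv = x @ w + b, x_adv @ w + b
assert (adv - orig) * (1 if y == 0 else -1) > 0
```

Scaling eps corresponds to the attack intensity the study varies: small perturbations already shift the decision score, and larger ones flip the authentication outcome.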
17 pages, 2048 KiB  
Article
HIV Patients’ Tracer for Clinical Assistance and Research during the COVID-19 Epidemic (INTERFACE): A Paradigm for Chronic Conditions
by Antonella Cingolani, Konstantina Kostopoulou, Alice Luraschi, Aristodemos Pnevmatikakis, Silvia Lamonica, Sofoklis Kyriazakos, Chiara Iacomini, Francesco Vladimiro Segala, Giulia Micheli, Cristina Seguiti, Stathis Kanavos, Alfredo Cesario, Enrica Tamburrini, Stefano Patarnello, Vincenzo Valentini and Roberto Cauda
Information 2022, 13(2), 76; https://doi.org/10.3390/info13020076 - 5 Feb 2022
Cited by 1 | Viewed by 3545
Abstract
The health emergency linked to the SARS-CoV-2 pandemic has highlighted problems in the health management of chronic patients due to their risk of infection, suggesting the need for new methods to monitor patients. People living with HIV/AIDS (PLWHA) represent a paradigm of chronic patients where e-health-based remote monitoring could have a significant impact in maintaining an adequate standard of care. The key objective of the study is to provide an efficient operating model to “follow” the patient, capture the evolution of their disease, and establish proximity and relief through a remote collaborative model. These dimensions are collected through a dedicated mobile application that triggers questionnaires on the basis of decision-making algorithms, tagging patients and sending alerts to staff in order to tailor interventions. All outcomes and alerts are monitored and processed through an innovative e-Clinical platform. The processing of the collected data aims at learning and evaluating predictive models for possible upcoming alerts on the basis of past data, using machine learning algorithms. The models will be clinically validated as the study collects more data, and, if successful, the resulting multidimensional vector of past attributes will act as a digital composite biomarker capable of predicting HIV-related alerts. Design: All PLWHA > 18 years old with stable disease followed at the outpatient services of a university hospital (n = 1500) will be enrolled in the interventional study. The study is ongoing, and patients are currently being recruited. Preliminary results are yielding monthly data to facilitate learning of predictive models for the alerts of interest. Such models are learnt from one or two months of history of the questionnaire data. In this manuscript, the protocol—including the rationale, detailed technical aspects underlying the study, and some preliminary results—is described.
Conclusions: The management of HIV-infected patients in the pandemic era represents a challenge for future patient management beyond the pandemic period. The application of artificial intelligence and machine learning systems as described in this study could enable remote patient management that takes into account the real needs of the patient and the monitoring of the most relevant aspects of PLWH management today. Full article
(This article belongs to the Special Issue Advances in AI for Health and Medical Applications)
15 pages, 14073 KiB  
Article
SPMOO: A Multi-Objective Offloading Algorithm for Dependent Tasks in IoT Cloud-Edge-End Collaboration
by Liu Liu, Haiming Chen and Zhengtao Xu
Information 2022, 13(2), 75; https://doi.org/10.3390/info13020075 - 5 Feb 2022
Cited by 6 | Viewed by 3345
Abstract
With the rapid development of the internet of things, there are more and more end devices, such as wearable devices, USVs and intelligent automobiles, connected to the internet. These devices tend to require large amounts of computing resources with stringent latency requirements, which inevitably increases the burden on edge server nodes. Therefore, in order to alleviate the problem that the computing capacity of edge server nodes is limited and cannot meet the computing service requirements of a large number of end devices in the internet of things scenario, we combined the characteristics of rich computing resources of cloud servers and low transmission delay of edge servers to build a hybrid computing task-offloading architecture of cloud-edge-end collaboration. Then, we study offloading based on this architecture for complex dependent tasks generated on end devices. We introduce a two-dimensional offloading decision factor to model latency and energy consumption, and formalize the model as a multi-objective optimization problem with the optimization objective of minimizing the average latency and average energy consumption of the task’s computation offloading. Based on this, we propose a multi-objective offloading (SPMOO) algorithm based on an improved strength Pareto evolutionary algorithm (SPEA2) for solving the problem. A large number of experimental results show that the algorithm proposed in this paper has good performance. Full article
(This article belongs to the Topic Wireless Sensor Networks)
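At the core of SPEA2, and hence of the proposed SPMOO algorithm, is the Pareto dominance relation over the two objectives (average latency and average energy, both minimized); a minimal sketch of the dominance check and front extraction, with made-up candidate values:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective (both are
    minimized here) and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the offloading decisions no other decision dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate offloading decisions as (avg latency in ms, avg energy in mJ).
candidates = [(10, 90), (20, 40), (30, 30), (25, 35), (40, 80)]
front = pareto_front(candidates)
assert front == [(10, 90), (20, 40), (30, 30), (25, 35)]  # (40, 80) is dominated
```

SPEA2 layers fitness assignment, density estimation, and archiving on top of this relation; the sketch shows only the relation itself.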
29 pages, 3172 KiB  
Article
Application and Investigation of Multimedia Design Principles in Augmented Reality Learning Environments
by Jule M. Krüger and Daniel Bodemer
Information 2022, 13(2), 74; https://doi.org/10.3390/info13020074 - 4 Feb 2022
Cited by 13 | Viewed by 5191
Abstract
Digital media have changed the way educational instructions are designed. Learning environments addressing different presentation modes, sensory modalities and realities have evolved, with augmented reality (AR) as one of the latest developments in which multiple aspects of all three dimensions can be united. Multimedia learning principles can generally be applied to AR scenarios that combine physical environments and virtual elements, but their AR-specific effectiveness is unclear so far. In the current paper, we describe two studies examining AR-specific occurrences of two basic multimedia learning principles: (1) the spatial contiguity principle with visual learning material, leveraging AR-specific spatiality potentials, and (2) the coherence principle with audiovisual learning material, leveraging AR-specific contextuality potentials. Both studies use video-based implementations of AR experiences combining textual and pictorial representation modes as well as virtual and physical visuals. We examine the effects of integrated and separated visual presentations of virtual and physical elements (study one, N = 80) in addition to the effects of the omission of or the addition of matching or non-matching sounds (study two, N = 130) on cognitive load, task load and knowledge. We find only a few significant effects and interesting descriptive results. We discuss the results and the implementations based on theory and make suggestions for future research. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
24 pages, 655 KiB  
Article
Automatic Identification of Similar Pull-Requests in GitHub’s Repositories Using Machine Learning
by Hamzeh Eyal Salman, Zakarea Alshara and Abdelhak-Djamel Seriai
Information 2022, 13(2), 73; https://doi.org/10.3390/info13020073 - 3 Feb 2022
Cited by 6 | Viewed by 3029
Abstract
Context: In a social coding platform such as GitHub, a pull-request mechanism is frequently used by contributors to submit their code changes to the reviewers of a given repository. In general, these code changes either add a new feature or fix an existing bug. However, this mechanism is distributed and allows different contributors to unintentionally submit similar pull-requests that perform similar development activities. Similar pull-requests may be submitted for review in parallel to different reviewers. This causes redundant reviewing time and effort. Moreover, it complicates the collaboration process. Objective: Therefore, it is useful to assign similar pull-requests to the same reviewer, who can then decide which pull-request to choose with effective time and effort. In this article, we propose to group similar pull-requests together into clusters so that each cluster is assigned to the same reviewer or the same reviewing team. This proposal saves reviewing effort and time. Method: To do so, we first extract descriptive textual information from the pull-requests’ content to link similar pull-requests together. Then, we employ the extracted information to find similarities among pull-requests. Finally, machine learning algorithms (the K-Means clustering and agglomerative hierarchical clustering algorithms) are used to group similar pull-requests together. Results: To validate our proposal, we applied it to twenty popular repositories from a public dataset. The experimental results show that the proposed approach achieved promising results according to the well-known metrics in this subject: precision and recall. Furthermore, it helps to save the reviewers’ time and effort. 
Conclusion: According to the obtained results, the K-Means algorithm achieves 94% and 91% average precision and recall values over all considered repositories, respectively, while agglomerative hierarchical clustering achieves 93% and 98% average precision and recall values over all considered repositories, respectively. Moreover, the proposed approach saves reviewing time and effort by, on average, between 67% and 91% with the K-Means algorithm and between 67% and 83% with the agglomerative hierarchical clustering algorithm. Full article
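As a much-simplified illustration of the idea (not the paper's feature extraction, K-Means, or agglomerative pipeline), similar pull-requests can be grouped greedily by bag-of-words cosine similarity; the titles and threshold below are invented:

```python
from collections import Counter
import math

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def group_similar(prs, threshold=0.5):
    """Greedy grouping: attach each pull-request to the first existing
    cluster whose representative (first member) is similar enough."""
    clusters = []
    for title in prs:
        v = bow(title)
        for cluster in clusters:
            if cosine(v, bow(cluster[0])) >= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return clusters

prs = ["fix crash in login form",
       "fix login form crash on submit",
       "add dark mode theme",
       "add dark mode toggle"]
groups = group_similar(prs)
assert len(groups) == 2   # the two crash fixes and the two dark-mode PRs
```

Each resulting cluster would then be routed to a single reviewer or reviewing team, which is the time-saving mechanism the paper evaluates.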
13 pages, 3761 KiB  
Article
DOA Estimation Algorithm for Reconfigurable Intelligent Surface Co-Prime Linear Array Based on Multiple Signal Classification Approach
by Tianyu Lan, Kaizhi Huang, Liang Jin, Xiaoming Xu, Xiaoli Sun and Zhou Zhong
Information 2022, 13(2), 72; https://doi.org/10.3390/info13020072 - 3 Feb 2022
Cited by 4 | Viewed by 2361
Abstract
Co-prime linear arrays (CLAs) provide additional degrees of freedom (DOF) with a limited number of physical sensors, and thus help to improve the resolution of direction of arrival (DOA) estimation algorithms. However, the DOF of a traditional CLA is restrained by the structure of the array, which cannot be adjusted after deployment. In this paper, we propose a DOA estimation algorithm for a reconfigurable intelligent surface co-prime linear array (RIS CLA) based on the multiple signal classification approach. Specifically, an RIS CLA is first constructed on the basis of an RIS antenna, by turning specific elements on/off at different times. Then, the covariance matrix of the received signal is vectorized so as to construct a virtual difference array whose aperture is considerably expanded. Finally, a spectral peak search on the noise subspace of the received signal of the difference array is conducted to obtain the DOA estimation result. Simulations verify the improvement of the proposed algorithm in terms of DOF and resolution. To be specific, the DOF provided by the RIS CLA exceeds that of the CLA by more than 30%, and the resolution of the proposed DOA estimation algorithm is effectively improved, with its accuracy increased by up to 70% in the low signal-to-noise ratio (SNR) scenario, compared with existing algorithms. Full article
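Leaving aside the RIS-specific virtual difference array, the multiple signal classification (MUSIC) step the algorithm builds on can be sketched for a plain half-wavelength uniform linear array as follows (array size, source angle, and search grid are illustrative):

```python
import numpy as np

def music_spectrum(R, n_sensors, n_sources, grid):
    """MUSIC pseudo-spectrum for a half-wavelength ULA: project steering
    vectors onto the noise subspace and invert the residual power."""
    _, vecs = np.linalg.eigh(R)                  # eigenvalues ascending
    En = vecs[:, : n_sensors - n_sources]        # noise subspace
    spectrum = []
    for theta in grid:
        a = np.exp(1j * np.pi * np.arange(n_sensors)
                   * np.sin(np.radians(theta)))  # steering vector
        p = (np.abs(En.conj().T @ a) ** 2).sum()
        spectrum.append(1.0 / max(p, 1e-12))     # peak where a _|_ noise space
    return np.array(spectrum)

n, theta_true = 8, 20.0
a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.radians(theta_true)))
R = np.outer(a, a.conj()) + 1e-6 * np.eye(n)     # one source, tiny noise floor
grid = np.arange(-90.0, 90.0, 0.5)
est = grid[np.argmax(music_spectrum(R, n, 1, grid))]
assert abs(est - theta_true) < 1.0
```

In the paper's scheme, R is replaced by the covariance of the vectorized difference-array signal, which enlarges the effective aperture and hence the number of resolvable sources.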