Journal Description
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for Information Studies (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18 days after submission; the time from acceptance to publication is 2.9 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.1 (2022); 5-Year Impact Factor: 2.9 (2022)
Latest Articles
The Era of Artificial Intelligence Deception: Unraveling the Complexities of False Realities and Emerging Threats of Misinformation
Information 2024, 15(6), 299; https://doi.org/10.3390/info15060299 - 23 May 2024
Abstract
This study delves into the dual nature of artificial intelligence (AI), illuminating its transformative potential to revolutionize various aspects of our lives. We examine critical issues such as AI hallucinations, misinformation, and unpredictable behavior, particularly in large language models (LLMs) and AI-powered chatbots. These technologies, while capable of manipulating human decisions and exploiting cognitive vulnerabilities, also hold the key to unlocking unprecedented opportunities for innovation and progress. Our research underscores the need for robust, ethical AI development and deployment frameworks, advocating a balance between technological advancement and societal values. We emphasize the importance of collaboration among researchers, developers, policymakers, and end users to steer AI development toward maximizing benefits while minimizing potential harms. This study highlights the critical role of responsible AI practices, including regular training, engagement, and the sharing of experiences among AI users, to mitigate risks and develop best practices. We call for updated legal and regulatory frameworks to keep pace with AI advancements and ensure their alignment with ethical principles and societal values. By fostering open dialog, sharing knowledge, and prioritizing ethical considerations, we can harness AI’s transformative potential to drive human advancement while managing its inherent risks and challenges.
Full article
(This article belongs to the Section Information Applications)
Open Access Article
Unmasking Banking Fraud: Unleashing the Power of Machine Learning and Explainable AI (XAI) on Imbalanced Data
by
S. M. Nuruzzaman Nobel, Shirin Sultana, Sondip Poul Singha, Sudipto Chaki, Md. Julkar Nayeen Mahi, Tony Jan, Alistair Barros and Md Whaiduzzaman
Information 2024, 15(6), 298; https://doi.org/10.3390/info15060298 - 23 May 2024
Abstract
Recognizing fraudulent activity in the banking system is essential due to the significant risks involved. When fraudulent transactions are vastly outnumbered by non-fraudulent ones, dealing with imbalanced datasets can be difficult. This study aims to determine the best model for detecting fraud by comparing four commonly used machine learning algorithms: Support Vector Machine (SVM), XGBoost, Decision Tree, and Logistic Regression. Additionally, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to address the issue of class imbalance. The XGBoost Classifier proved to be the most successful model for fraud detection, with an accuracy of 99.88%. We utilized SHAP and LIME analyses to provide greater clarity into the decision-making process of the XGBoost model and improve overall comprehension. This research shows that the XGBoost Classifier is highly effective in detecting banking fraud on imbalanced datasets, with an impressive accuracy score. The interpretability of the XGBoost Classifier model was further enhanced by applying SHAP and LIME analysis, which shed light on the significant features that contribute to fraud detection. The insights and findings presented here are valuable contributions to the ongoing efforts aimed at developing effective fraud detection systems for the banking industry.
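The class-rebalancing step named in the abstract, SMOTE, can be illustrated with a short sketch. This is a minimal numpy rendition of SMOTE's core idea (synthesizing minority samples by interpolating between a minority point and one of its k nearest minority neighbours), not the implementation the authors used; the toy data are invented for illustration.

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    a randomly chosen minority point and a random one of its k nearest
    minority-class neighbours (the core SMOTE idea)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]    # k nearest for each point
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                       # pick a minority sample
        j = rng.choice(neighbours[i])             # one of its neighbours
        lam = rng.random()                        # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synth)

# toy imbalanced data: 4 minority points in 2-D
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_sketch(X_min, n_new=6, k=2, rng=0)
print(X_new.shape)  # (6, 2)
```

Because each synthetic point lies on a segment between two real minority points, the oversampled set stays inside the minority class's convex hull.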
Full article
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)
Open Access Article
Advancing Medical Assistance: Developing an Effective Hungarian-Language Medical Chatbot with Artificial Intelligence
by
Barbara Simon, Ádám Hartveg, Lehel Dénes-Fazakas, György Eigner and László Szilágyi
Information 2024, 15(6), 297; https://doi.org/10.3390/info15060297 - 22 May 2024
Abstract
In recent times, the prevalence of chatbot technology has notably increased, particularly in the realm of medical assistants. However, there is a noticeable absence of medical chatbots that cater to the Hungarian language. Consequently, Hungarian-speaking people currently lack access to an automated system capable of providing assistance with their health-related inquiries or issues. Our research aims to establish a competent medical chatbot assistant that is accessible through both a website and a mobile app. It is crucial to highlight that the project’s objective extends beyond mere linguistic localization; our goal is to develop an official and effectively functioning Hungarian chatbot. The assistant’s task is to answer medical questions, provide health advice, and inform users about health problems and treatments. The chatbot should be able to recognize and interpret user-provided text input and offer accurate and relevant responses using specific algorithms. In our work, we put a lot of emphasis on having steady input so that it can detect all the diseases that the patient is dealing with. Our database consisted of sentences and phrases that a user would type into a chatbot. We assigned health problems to these and then assigned the categories to the corresponding cure. Within the research, we developed a website and mobile app, so that users can easily use the assistant. The app plays a particularly important role for users because it allows them to use the assistant anytime and anywhere, taking advantage of the portability of mobile devices. At the current stage of our research, the precision and validation accuracy of the system is greater than 90%, according to the selected test methods.
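The pipeline the abstract describes (user phrase → health-problem category → cure) can be sketched with a toy bag-of-words matcher. The phrases, categories, and cures below are invented English stand-ins for the Hungarian data, and the real system presumably uses a trained classifier rather than nearest-neighbour cosine matching:

```python
import math
from collections import Counter

def bow(text):
    """Lowercased bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# tiny illustrative training set: phrase -> health-problem category
training = {
    "my head hurts a lot": "headache",
    "i have a pounding headache": "headache",
    "my stomach aches after eating": "stomach ache",
    "sharp pain in my stomach": "stomach ache",
}
# category -> suggested cure (invented for the sketch)
category_to_cure = {"headache": "rest and hydration",
                    "stomach ache": "light diet"}

def classify(query):
    """Return the category of the most similar training phrase."""
    q = bow(query)
    best_phrase = max(training, key=lambda p: cosine(q, bow(p)))
    return training[best_phrase]

problem = classify("i woke up with a bad headache")
print(problem, "->", category_to_cure[problem])  # headache -> rest and hydration
```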
Full article
(This article belongs to the Special Issue Application of Machine Learning and Deep Learning in Pattern Recognition and Biometrics)
Open Access Article
Object Tracking Based on Optical Flow Reconstruction of Motion-Group Parameters
by
Simeon Karpuzov, George Petkov, Sylvia Ilieva, Alexander Petkov and Stiliyan Kalitzin
Information 2024, 15(6), 296; https://doi.org/10.3390/info15060296 - 22 May 2024
Abstract
Rationale. Object tracking has significance in many applications ranging from control of unmanned vehicles to autonomous monitoring of specific situations and events, especially when providing safety for patients with certain adverse conditions such as epileptic seizures. Conventional tracking methods face many challenges, such as the need for dedicated attached devices or tags, influence by high image noise, complex object movements, and intensive computational requirements. We have previously developed computationally efficient algorithms for global optical flow reconstruction of group velocities that provide means for convulsive seizure detection and have potential applications in fall and apnea detection. Here, we address the challenge of using the same calculated group velocities for object tracking in parallel. Methods. We propose a novel optical flow-based method for object tracking. It utilizes real-time image sequences from the camera and directly reconstructs global motion-group parameters of the content. These parameters can steer a rectangular region of interest surrounding the moving object to follow the target. The method successfully applies to multi-spectral data, further improving its effectiveness. Besides serving as a modular extension to clinical alerting applications, the novel technique, compared with other available approaches, may provide real-time computational advantages as well as improved stability to noisy inputs. Results. Experimental results on simulated tests and complex real-world data demonstrate the method’s capabilities. The proposed optical flow reconstruction can provide accurate, robust, and faster results compared to current state-of-the-art approaches.
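The ROI-steering idea can be illustrated for the simplest motion-group element, pure translation: shift the region of interest by the mean optical flow measured inside it. This is a hedged sketch with a synthetic flow field, not the paper's full group-parameter reconstruction:

```python
import numpy as np

def steer_roi(flow, roi):
    """Shift a rectangular ROI by the mean optical flow inside it.
    flow: (H, W, 2) array of per-pixel (dy, dx) displacements.
    roi:  (y0, x0, y1, x1) in pixel coordinates.
    Translation is only the simplest group parameter; the paper
    reconstructs richer motion-group parameters globally."""
    y0, x0, y1, x1 = roi
    dy, dx = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
    H, W = flow.shape[:2]
    # keep the shifted ROI inside the frame
    y0n = int(round(np.clip(y0 + dy, 0, H - (y1 - y0))))
    x0n = int(round(np.clip(x0 + dx, 0, W - (x1 - x0))))
    return (y0n, x0n, y0n + (y1 - y0), x0n + (x1 - x0))

# synthetic flow: everything moves 2 px down and 1 px right
flow = np.zeros((100, 100, 2))
flow[..., 0] = 2.0
flow[..., 1] = 1.0
roi = (10, 10, 30, 30)
print(steer_roi(flow, roi))  # (12, 11, 32, 31)
```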
Full article
(This article belongs to the Special Issue Emerging Research in Target Detection and Recognition in Remote Sensing Images)
Open Access Article
Advanced Machine Learning Techniques for Predictive Modeling of Property Prices
by
Kanchana Vishwanadee Mathotaarachchi, Raza Hasan and Salman Mahmood
Information 2024, 15(6), 295; https://doi.org/10.3390/info15060295 - 22 May 2024
Abstract
Real estate price prediction is crucial for informed decision making in the dynamic real estate sector. In recent years, machine learning (ML) techniques have emerged as powerful tools for enhancing prediction accuracy and data-driven decision making. However, the existing literature lacks a cohesive synthesis of methodologies, findings, and research gaps in ML-based real estate price prediction. This study addresses this gap through a comprehensive literature review, examining various ML approaches, including neural networks, ensemble methods, and advanced regression techniques. We identify key research gaps, such as the limited exploration of hybrid ML-econometric models and the interpretability of ML predictions. To validate the robustness of regression models, we conduct generalization testing on an independent dataset. Results demonstrate the applicability of regression models in predicting real estate prices across diverse markets. Our findings underscore the importance of addressing research gaps to advance the field and enhance the practical applicability of ML techniques in real estate price prediction. This study contributes to a deeper understanding of ML’s role in real estate forecasting and provides insights for future research and practical implementation in the real estate industry.
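As a concrete instance of the regression techniques the study surveys, a closed-form ridge regression on toy property features might look like the sketch below. The features, coefficients, and data are invented for illustration and are not the study's dataset:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: solve (X^T X + alpha*I) w = X^T y.
    A bias column is appended so the intercept is learned too."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    I = np.eye(Xb.shape[1])
    I[-1, -1] = 0.0                      # do not penalize the intercept
    return np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)

def ridge_predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# toy "property" features: [floor area m^2, rooms]; price in arbitrary units
rng = np.random.default_rng(0)
X = rng.uniform([40, 1], [200, 6], size=(200, 2))
y = 3.0 * X[:, 0] + 15.0 * X[:, 1] + 50.0 + rng.normal(0, 5, 200)
w = ridge_fit(X, y, alpha=1e-3)
print(np.round(w, 1))  # coefficients should land close to the true [3, 15, 50]
```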
Full article
(This article belongs to the Special Issue Second Edition of Predictive Analytics and Data Science)
Open Access Article
Understanding the Impact of Perceived Challenge on Narrative Immersion in Video Games: The Role-Playing Game Genre as a Case Study
by
José Miguel Domingues, Vítor Filipe, André Carita and Vítor Carvalho
Information 2024, 15(6), 294; https://doi.org/10.3390/info15060294 - 22 May 2024
Abstract
This paper explores the intricate interplay between perceived challenge and narrative immersion within role-playing game (RPG) video games, motivated by the escalating influence of game difficulty on player choices. A quantitative methodology was employed, utilizing three specific questionnaires for data collection on player habits and experiences, perceived challenge, and narrative immersion. The study consisted of two interconnected stages: an initial research phase to identify and understand player habits, followed by an in-person intervention involving the playing of three distinct RPG video games. During this intervention, selected players engaged with the chosen RPG video games separately, and after each session, responded to two surveys assessing narrative immersion and perceived challenge. The study concludes that a meticulous adjustment of perceived challenge by video game studios moderately influences narrative immersion, reinforcing the enduring prominence of the RPG genre as a distinctive choice in narrative.
Full article
Open Access Article
Task-Adaptive Multi-Source Representations for Few-Shot Image Recognition
by
Ge Liu, Zhongqiang Zhang and Xiangzhong Fang
Information 2024, 15(6), 293; https://doi.org/10.3390/info15060293 - 21 May 2024
Abstract
Conventional few-shot learning (FSL) mainly focuses on knowledge transfer from a single source dataset to a recognition scenario with only a few training samples available but still similar to the source domain. In this paper, we consider a more practical FSL setting where multiple semantically different datasets are available to address a wide range of FSL tasks, especially for some recognition scenarios beyond natural images, such as remote sensing and medical imagery. It can be referred to as multi-source cross-domain FSL. To tackle the problem, we propose a two-stage learning scheme, termed learning and adapting multi-source representations (LAMR). In the first stage, we propose a multi-head network to obtain efficient multi-domain representations, where all source domains share the same backbone except for the last parallel projection layers for domain specialization. We train the representations in a multi-task setting where each in-domain classification task is taken by a cosine classifier. In the second stage, considering that instance discrimination and class discrimination are crucial for robust recognition, we propose two contrastive objectives for adapting the pre-trained representations to be task-specialized on the few-shot data. Careful ablation studies verify that LAMR significantly improves representation transferability, showing consistent performance boosts. We also extend LAMR to single-source FSL by introducing a dataset-splitting strategy that equally splits one source dataset into sub-domains. The empirical results show that LAMR can achieve SOTA performance on the BSCD-FSL benchmark and competitive performance on mini-ImageNet, highlighting its versatility and effectiveness for FSL of both natural and specific imaging.
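The cosine classifier named in the abstract scores a query embedding against class prototypes by cosine similarity. A minimal sketch on toy embeddings follows; the temperature tau, the synthetic data, and the prototype-as-support-mean choice are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def cosine_classify(prototypes, query, tau=10.0):
    """Cosine classifier, common in few-shot pipelines: score each class
    by the cosine similarity between the query embedding and the class
    prototype, scaled by a temperature tau, then apply softmax."""
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    logits = tau * (P @ q)
    e = np.exp(logits - logits.max())
    return e / e.sum()                 # class probabilities

# 3 well-separated classes, 5-shot, 4-dim toy embeddings
rng = np.random.default_rng(1)
centers = np.eye(3, 4)                 # orthogonal class directions
support = centers[:, None, :] + 0.05 * rng.normal(size=(3, 5, 4))
prototypes = support.mean(axis=1)      # prototype = mean of support set
query = centers[2] + 0.05 * rng.normal(size=4)
probs = cosine_classify(prototypes, query)
print(probs.argmax())  # 2
```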
Full article
(This article belongs to the Special Issue Few-Shot Learning for Knowledge Engineering and Intellectual System)
Open Access Article
Designing Gestures for Data Exploration with Public Displays via Identification Studies
by
Adina Friedman and Francesco Cafaro
Information 2024, 15(6), 292; https://doi.org/10.3390/info15060292 - 21 May 2024
Abstract
In-lab elicitation studies inform the design of gestures by having the participants suggest actions to activate the system functions. Conversely, crowd-sourced identification studies follow the opposite path, asking the users to associate the control actions with functions. Identification studies have been used to validate the gestures produced by elicitation studies, but not to design interactive systems. In this paper, we show that identification studies can be combined with in situ observations to design the gestures for data exploration with public displays. To illustrate this method, we developed two versions of a gesture-controlled system for data exploration with 368 users: one designed through an elicitation study, and one designed through in situ observations followed by an identification study. Our results show that the users discovered the majority of the gestures with similar accuracy across the two prototypes. Additionally, the in situ approach enabled the direct recruitment of target users, and the crowd-sourced approach typical of identification studies expedited the design process.
Full article
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)
Open Access Article
Predictions from Generative Artificial Intelligence Models: Towards a New Benchmark in Forecasting Practice
by
Hossein Hassani and Emmanuel Sirimal Silva
Information 2024, 15(6), 291; https://doi.org/10.3390/info15060291 - 21 May 2024
Abstract
This paper aims to determine whether there is a case for promoting a new benchmark for forecasting practice via the innovative application of generative artificial intelligence (Gen-AI) for predicting the future. Today, forecasts can be generated via Gen-AI models without the need for an in-depth understanding of forecasting theory, practice, or coding. Therefore, using three datasets, we present a comparative analysis of forecasts from Gen-AI models against forecasts from seven univariate and automated models from the forecast package in R, covering both parametric and non-parametric forecasting techniques. In some cases, we find statistically significant evidence to conclude that forecasts from Gen-AI models can outperform forecasts from popular benchmarks like seasonal ARIMA, seasonal naïve, exponential smoothing, and Theta forecasts (to name a few). Our findings also indicate that the accuracy of forecasts from Gen-AI models can vary not only based on the underlying data structure but also on the quality of prompt engineering (thus highlighting the continued importance of forecasting education), with the forecast accuracy appearing to improve at longer horizons. Therefore, we find some evidence towards promoting forecasts from Gen-AI models as benchmarks in future forecasting practice. However, at present, users are cautioned against reliability issues and Gen-AI being a black box in some cases.
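One of the benchmarks named in the abstract, the seasonal naïve forecast, is simple enough to sketch directly. The toy monthly series below is invented, and the paper's actual comparison uses automated models from the R forecast package rather than this hand-rolled version:

```python
import numpy as np

def seasonal_naive(y, m, h):
    """Seasonal naive forecast: repeat the last observed seasonal cycle.
    y: history, m: season length, h: forecast horizon."""
    last_cycle = y[-m:]
    return np.array([last_cycle[i % m] for i in range(h)])

def mae(actual, forecast):
    """Mean absolute error between actuals and forecasts."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

# toy monthly series: linear trend plus a clean yearly pattern
t = np.arange(48)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
train, test = y[:36], y[36:]
fc = seasonal_naive(train, m=12, h=12)
print(round(mae(test, fc), 2))  # 6.0 -- exactly the 12-month trend increment the naive method misses
```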
Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
Open Access Article
A Lightweight Face Detector via Bi-Stream Convolutional Neural Network and Vision Transformer
by
Zekun Zhang, Qingqing Chao, Shijie Wang and Teng Yu
Information 2024, 15(5), 290; https://doi.org/10.3390/info15050290 - 20 May 2024
Abstract
Lightweight convolutional neural networks are widely used for face detection due to their ability to learn local representations through spatial inductive bias and translational invariance. However, convolutional face detectors have limitations in detecting faces under challenging conditions like occlusion, blurring, or changes in facial poses, primarily attributed to fixed-size receptive fields and a lack of global modeling. Transformer-based models are better suited to learning global representations but less effective at capturing local patterns. To address these limitations, we propose an efficient face detector that combines convolutional neural network and transformer architectures. We introduce a bi-stream structure that integrates convolutional neural network and transformer blocks within the backbone network, enabling the preservation of local pattern features and the extraction of global context. To further preserve the local details captured by convolutional neural networks, we propose a feature enhancement convolution block in a hierarchical backbone structure. Additionally, we devise a multiscale feature aggregation module to enhance obscured and blurred facial features. Experimental results demonstrate that our method has achieved improved lightweight face detection accuracy with an average precision of 95.30%, 94.20%, and 87.56% across the easy, medium, and hard subdatasets of WIDER FACE, respectively. Therefore, we believe our method will be a useful supplement to the collection of current artificial intelligence models and benefit the engineering applications of face detection.
Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
Open Access Article
NDNOTA: NDN One-Time Authentication
by
Manar Aldaoud, Dawood Al-Abri, Firdous Kausar and Medhat Awadalla
Information 2024, 15(5), 289; https://doi.org/10.3390/info15050289 - 20 May 2024
Abstract
Named Data Networking (NDN) stands out as a prominent architectural framework for the future Internet, aiming to address deficiencies present in IP networks, specifically in the domain of security. Although NDN packets containing requested content are signed with the publisher’s signature which establishes data provenance for content, the NDN domain still requires more holistic frameworks that address consumers’ identity verification while accessing protected contents or services using producer/publisher-preapproved authentication servers. In response, this paper introduces the NDN One-Time Authentication (NDNOTA) framework, designed to authenticate NDN online services, applications, and data in real time. NDNOTA comprises three fundamental elements: the consumer, producer, and authentication server. Employing a variety of security measures such as single sign-on (SSO), token credentials, certified asymmetric keys, and signed NDN packets, NDNOTA aims to reinforce the security of NDN-based interactions. To assess the effectiveness of the proposed framework, we validate and evaluate its impact on the three core elements in terms of time performance. For example, when accessing authenticated content through the entire NDNOTA process, consumers experience an additional time overhead of 70 milliseconds, making the total process take 83 milliseconds. In contrast, accessing normal content that does not require authentication does not incur this delay. The additional NDNOTA delay is mitigated once the authentication token is generated and stored, resulting in a comparable time frame to unauthenticated content requests. Additionally, obtaining private content through the authentication process requires 10 messages, whereas acquiring public data only requires two messages.
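The token-credential idea can be illustrated generically with an HMAC-signed, expiring, single-use token. This sketch is not NDNOTA's actual message format (which involves signed NDN packets and certified asymmetric keys); every name, field layout, and parameter below is a placeholder invented for illustration:

```python
import hmac, hashlib, secrets

SERVER_KEY = secrets.token_bytes(32)   # authentication server's secret
_used_nonces = set()                   # replay cache giving one-time semantics

def issue_token(consumer_id, ttl=60, now=0):
    """Token = payload + HMAC(server_key, payload); payload carries
    the consumer id, an expiry timestamp, and a fresh nonce."""
    nonce = secrets.token_hex(8)
    payload = f"{consumer_id}|{now + ttl}|{nonce}"
    mac = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{mac}"

def verify_token(token, now=0):
    """Accept only an unexpired, never-before-seen token with a valid MAC."""
    payload, _, mac = token.rpartition("|")
    good = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, good):
        return False
    _consumer_id, expiry, nonce = payload.split("|")
    if now > int(expiry) or nonce in _used_nonces:
        return False
    _used_nonces.add(nonce)            # one-time: burn the nonce
    return True

tok = issue_token("consumer-1", ttl=60, now=1000)
print(verify_token(tok, now=1010))     # True
print(verify_token(tok, now=1020))     # False: the token was already used
```

Once verified, a real deployment would cache the session state, which is the step that removes the extra round-trip delay the evaluation measures.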
Full article
(This article belongs to the Section Information Security and Privacy)
Open Access Article
Technoeconomic Analysis for Deployment of Gait-Oriented Wearable Medical Internet-of-Things Platform in Catalonia
by
Marc Codina, David Castells-Rufas, Maria-Jesus Torrelles and Jordi Carrabina
Information 2024, 15(5), 288; https://doi.org/10.3390/info15050288 - 18 May 2024
Abstract
The Internet of Medical Things (IoMT) extends the concept of eHealth and mHealth for patients with continuous monitoring requirements. This research concentrates on wearable devices based on inertial measurement units (IMUs) that enable gait analysis in three health use cases (equilibrium evaluation, fall prevention, and surgery recovery) that affect a large elderly population. We also analyze two different scenarios for data capture: supervised by clinicians and unsupervised during activities of daily life (ADLs). The continuous monitoring of patients produces large amounts of data that are analyzed in specific IoMT platforms that must be connected to the health system platforms containing the health records of the patients. The aim of this study is to evaluate the factors that impact the cost of the deployment of such an IoMT solution. We use population data from Catalonia together with an IoMT deployment cost model derived from the current deployment of connected devices for monitoring diabetic patients. Our study reveals the critical dependencies of the proposed IoMT platforms: device and cloud costs, the size of the population using these services, and the savings relative to the current model under key parameters such as fall reduction or rehabilitation duration. Future research should investigate the benefit of continuous monitoring in improving the quality of life of patients.
Full article
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)
Open Access Article
Principle of Information Increase: An Operational Perspective on Information Gain in the Foundations of Quantum Theory
by
Yang Yu and Philip Goyal
Information 2024, 15(5), 287; https://doi.org/10.3390/info15050287 - 17 May 2024
Abstract
A measurement performed on a quantum system is an act of gaining information about its state. However, in the foundations of quantum theory, the concept of information is multiply defined, particularly in the area of quantum reconstruction, and its conceptual foundations remain surprisingly under-explored. In this paper, we investigate the gain of information in quantum measurements from an operational viewpoint in the special case of a two-outcome probabilistic source. We show that the continuous extension of the Shannon entropy naturally admits two distinct measures of information gain, differential information gain and relative information gain, and that these have radically different characteristics. In particular, while differential information gain can increase or decrease as additional data are acquired, relative information gain consistently grows and, moreover, exhibits asymptotic indifference to the data or choice of Bayesian prior. In order to make a principled choice between these measures, we articulate a Principle of Information Increase, which incorporates a proposal due to Summhammer that more data from measurements leads to more knowledge about the system, and also takes into consideration black swan events. This principle favours differential information gain as the more relevant metric and guides the selection of priors for these information measures. Finally, we show that, of the symmetric beta distribution priors, the Jeffreys binomial prior is the prior that ensures maximal robustness of information gain for the particular data sequence obtained in a run of experiments.
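The flavour of information gain shrinking as evidence accumulates can be illustrated for the two-outcome source with the Jeffreys binomial prior Beta(1/2, 1/2) mentioned in the abstract. The toy computation below tracks the Shannon entropy of the posterior predictive distribution; it is not the paper's exact definition of differential or relative information gain:

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a two-outcome distribution (p, 1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def predictive_p(n1, n0, a=0.5, b=0.5):
    """Posterior predictive P(next outcome = 1) under a Beta(a, b) prior
    after observing n1 ones and n0 zeros; a = b = 0.5 is the Jeffreys
    binomial prior."""
    return (a + n1) / (a + b + n1 + n0)

# entropy of the predictive distribution as a lopsided sequence grows:
# it shrinks toward 0, i.e. each further datum teaches us less
for n in (1, 10, 100):
    p = predictive_p(n1=n, n0=0)
    print(n, round(binary_entropy(p), 3))
```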
Full article
Open Access Article
Telehealth-Based Information Retrieval and Extraction for Analysis of Clinical Characteristics and Symptom Patterns in Mild COVID-19 Patients
by
Edison Jahaj, Parisis Gallos, Melina Tziomaka, Athanasios Kallipolitis, Apostolos Pasias, Christos Panagopoulos, Andreas Menychtas, Ioanna Dimopoulou, Anastasia Kotanidou, Ilias Maglogiannis and Alice Georgia Vassiliou
Information 2024, 15(5), 286; https://doi.org/10.3390/info15050286 - 17 May 2024
Abstract
Clinical characteristics of COVID-19 patients have been mostly described in hospitalised patients, yet most are managed in an outpatient setting. The COVID-19 pandemic transformed healthcare delivery models and accelerated the implementation and adoption of telemedicine solutions. We employed a modular remote monitoring system with multi-modal data collection, aggregation, and analytics features to monitor mild COVID-19 patients and report their characteristics and symptoms. At enrolment, the patients were equipped with wearables, which were associated with their accounts, provided the respective in-system consents, and, in parallel, reported the demographics and patient characteristics. The patients monitored their vitals and symptoms daily during a 14-day monitoring period. Vital signs were entered either manually or automatically through wearables. We enrolled 162 patients from February to May 2022. The median age was 51 (42–60) years; 44% were male, 22% had at least one comorbidity, and 73.5% were fully vaccinated. The vitals of the patients were within normal range throughout the monitoring period. Thirteen patients were asymptomatic, while the rest had at least one symptom for a median of 11 (7–16) days. Fatigue was the most common symptom, followed by fever and cough. Loss of taste and smell was the longest-lasting symptom. Age positively correlated with the duration of fatigue, anorexia, and low-grade fever. Comorbidities, the number of administered doses, the days since the last dose, and the days since the positive test did not seem to affect the number of sick days or symptomatology. The i-COVID platform allowed us to provide remote monitoring and reporting of COVID-19 outpatients. We were able to report their clinical characteristics while simultaneously helping reduce the spread of the virus through hospitals by minimising hospital visits. 
The monitoring platform also offered advanced knowledge extraction and analytic capabilities to detect health condition deterioration and automatically trigger personalised support workflows.
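The age–symptom-duration association reported above is a rank-correlation question, which can be sketched with a self-contained Spearman computation. The patient records below are hypothetical placeholders for illustration only, not data from the study:

```python
from statistics import mean

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Ties receive the average of the 1-based ranks they span."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank across a tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: (age, days of fatigue reported during monitoring).
records = [(42, 4), (51, 7), (60, 9), (35, 3), (58, 8), (47, 6)]
ages = [a for a, _ in records]
fatigue_days = [d for _, d in records]
rho = spearman(ages, fatigue_days)  # positive: older patients, longer fatigue
```

A rank-based coefficient is a natural choice here because symptom durations are skewed counts, so a monotone rather than linear association is the relevant question.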
Full article
(This article belongs to the Special Issue Health Data Information Retrieval)
Open Access Article
MCF-YOLOv5: A Small Target Detection Algorithm Based on Multi-Scale Feature Fusion Improved YOLOv5
by Song Gao, Mingwang Gao and Zhihui Wei
Information 2024, 15(5), 285; https://doi.org/10.3390/info15050285 - 17 May 2024
Abstract
In recent years, many deep learning-based object detection methods have performed well in a variety of applications, especially in large-scale object detection. When detecting small targets, however, previous object detection algorithms struggle because of the characteristics of the targets themselves. To address this, we propose MCF-YOLOv5, a small-object detection model that makes three improvements to YOLOv5. First, a data augmentation strategy combining Mixup and Mosaic is used to increase the number of small targets in each image and reduce the interference of noise and scale changes during detection. Second, to localise small targets more accurately and suppress irrelevant image information, a coordinate attention mechanism is introduced into YOLOv5's neck network. Finally, we improve the Feature Pyramid Network (FPN) structure and add a dedicated small-object detection layer to strengthen feature extraction for small objects and improve their detection accuracy. The experimental results show that, with a small increase in computational complexity, the proposed MCF-YOLOv5 outperforms the baseline on both the VisDrone2021 and Tsinghua Tencent100K datasets, improving AP_small over YOLOv5 by 3.3% and 3.6%, respectively.
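Of the two augmentations the abstract combines, Mixup is the simpler to illustrate: two images are blended pixel-wise with a weight drawn from a Beta distribution, and the labels of both source images are kept. This is a minimal sketch of the general technique, not the paper's implementation; the function name, list-based images, and (box, weight) label format are illustrative assumptions:

```python
import random

def mixup(img_a, img_b, labels_a, labels_b, alpha=1.5):
    """Mixup: blend two same-shape images with weight lam ~ Beta(alpha, alpha).

    Images are nested lists of pixel intensities. For detection, the boxes of
    both images are retained, each tagged with its mixing weight so the loss
    can be scaled accordingly."""
    lam = random.betavariate(alpha, alpha)
    blended = [
        [lam * pa + (1 - lam) * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]
    labels = [(box, lam) for box in labels_a] + [(box, 1 - lam) for box in labels_b]
    return blended, labels
```

For small-target detection, the appeal is that a blended image can carry small objects from two scenes at once, increasing how often the detector sees them per batch.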
Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning)
Open Access Article
Resonating with the World: Thinking Critically about Brain Criticality in Consciousness and Cognition
by Gerry Leisman and Paul Koch
Information 2024, 15(5), 284; https://doi.org/10.3390/info15050284 - 17 May 2024
Abstract
Aim: Biofields span many physiological levels, both spatially and temporally, and reflect naturally resonant forms of synaptic energy expressed in growing and spreading waves of brain activity. This study aims to develop a better theoretical understanding of how resonant continuum waves may reflect consciousness, cognition, memory, and thought. Background: The metabolic processes that maintain animal cellular and physiological functions are enhanced by physiological coherence. Internal coordination of biological systems and sensitivity to particular stimuli and signal frequencies are two aspects of coherent physiology. There is significant support for the notion that exogenous energy, whether biologically generated or not, entrains human physiological systems. All living things have resonant frequencies that are either comparable or coherent; therefore, eventually, all species will have a shared resonance. An organism's biofield activity and resonance are what support its life and allow it to react to stimuli. Methods: As the naturally resonant forms of synaptic energy grow and spread waves of brain activity, the temporal and spatial frequency of the waves are effectively regulated by a time delay (T) in inter-layer signals within a layered structure that mimics the mammalian cortex. From ubiquitous noise, two different types of waves can arise as a function of T. One is coherent, and as T rises, so does its resonant spatial frequency. Results: Continued growth eventually causes both the wavelength and the temporal frequency to increase abruptly. As a result, in a region of T values, two waves expand simultaneously and interfere randomly. Conclusion: We suggest that because of this extraordinary dualism, rooted in the phase relationships of amplified waves, coherent waves are essential for memory retrieval, whereas random waves represent original cognition.
Full article
(This article belongs to the Special Issue The Resonant Brain: A Themed Issue Dedicated to Professor Stephen Grossberg)
Open Access Article
Optimizing Energy Efficiency in Opportunistic Networks: A Heuristic Approach to Adaptive Cluster-Based Routing Protocol
by Meisam Sharifi Sani, Saeid Iranmanesh, Hamidreza Salarian, Faisel Tubbal and Raad Raad
Information 2024, 15(5), 283; https://doi.org/10.3390/info15050283 - 16 May 2024
Abstract
Opportunistic Networks (OppNets) are characterized by intermittently connected nodes with fluctuating performance. Their dynamic topology, caused by node movement, activation, and deactivation, often relies on controlled flooding for routing, leading to significant resource consumption and network congestion. To address this challenge, we propose the Adaptive Clustering-based Routing Protocol (ACRP). ACRP uses a common member-based adaptive dynamic clustering approach to produce optimal clusters, converting the OppNet into a TCP/IP network. By adaptively creating dynamic clusters, the protocol turns a disjointed network into a connected one and thereby facilitates routing. This strategy creates a persistent connection between nodes, resulting in more effective routing and enhanced network performance. ACRP is scalable and applicable to a variety of applications and scenarios, including smart cities, disaster management, military networks, and remote areas with inadequate infrastructure. Simulation findings demonstrate that ACRP outperforms alternative clustering approaches such as kRop, QoS-OLSR, LBC, and CBVRP: it can boost packet delivery by 28% and improve average end-to-end delay, throughput, hop count, and reachability by 42%, 45%, 44%, and 80%, respectively.
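The abstract names a "common member-based" clustering step but does not spell out the algorithm. One plausible reading is that candidate clusters (e.g. nodes' contact neighbourhoods) are merged whenever they share enough common members, which can be sketched as a greedy merge; everything below, including the threshold parameter, is a hypothetical illustration of that assumption rather than the ACRP specification:

```python
def merge_by_common_members(clusters, min_common=2):
    """Greedily merge clusters that share at least `min_common` members.

    `clusters` is a list of sets of node ids, e.g. each node's current
    contact neighbourhood. Merging repeats until no pair of clusters
    overlaps by `min_common` or more nodes."""
    merged = [set(c) for c in clusters]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if len(merged[i] & merged[j]) >= min_common:
                    merged[i] |= merged[j]   # absorb cluster j into cluster i
                    del merged[j]
                    changed = True
                    break
            if changed:
                break       # restart scan after any merge
    return merged

# Example: neighbourhoods {1,2,3} and {2,3,6} share two members and merge.
result = merge_by_common_members([{1, 2, 3}, {3, 4, 5}, {2, 3, 6}, {7, 8}])
```

Requiring multiple shared members (rather than a single bridging node) is one way to keep clusters stable under the node churn the abstract describes.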
Full article
(This article belongs to the Special Issue Advances in Communication Systems and Networks)
Open Access Article
Cost-Effective Signcryption for Securing IoT: A Novel Signcryption Algorithm Based on Hyperelliptic Curves
by Junaid Khan, Congxu Zhu, Wajid Ali, Muhammad Asim and Sadique Ahmad
Information 2024, 15(5), 282; https://doi.org/10.3390/info15050282 - 15 May 2024
Abstract
Security and efficiency remain serious concerns for Internet of Things (IoT) environments because of their resource-constrained nature and reliance on wireless communication. Traditional schemes are built on costly mathematical operations, including bilinear pairing, pairing-based scalar multiplication, exponentiation, elliptic curve scalar multiplication, and point multiplication. These operations demand high computing power and bandwidth, which hurts efficiency and makes such approaches unsuitable for resource-limited IoT devices. Furthermore, the lack of essential security attributes in traditional schemes, such as unforgeability, public verifiability, non-repudiation, forward secrecy, and resistance to denial-of-service attacks, puts data security at high risk. To overcome these challenges, we introduce a novel signcryption algorithm based on hyperelliptic curve divisor multiplication, which is much faster than the traditional operations above. Because it is built on a hyperelliptic curve, the proposed scheme achieves enhanced security with smaller key sizes, reducing computational complexity by 38.16% and communication complexity by 62.5%; it thus offers a well-balanced solution that uses few resources while meeting the security and efficiency requirements of resource-constrained devices. The proposal also includes formal security validation, which provides confidence for practical implementations.
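Divisor multiplication, like elliptic-curve scalar multiplication, is conventionally computed with the double-and-add pattern, which needs only O(log k) group operations. The sketch below shows that generic pattern with the group law abstracted away; integers under addition modulo 97 stand in as a toy group, since a real Jacobian divisor-addition law is far beyond a listing like this:

```python
def scalar_mul(k, d, add, identity):
    """Double-and-add: compute k*d using O(log k) applications of `add`.

    `add` is the group law (divisor addition on the Jacobian in a
    hyperelliptic-curve setting); any associative operation with the
    given identity works. Bits of k are consumed least-significant first."""
    result, base = identity, d
    while k > 0:
        if k & 1:
            result = add(result, base)  # bit set: accumulate current power
        base = add(base, base)          # doubling step
        k >>= 1
    return result

# Toy stand-in group: integers under addition modulo 97.
mod_add = lambda a, b: (a + b) % 97
demo = scalar_mul(13, 41, mod_add, 0)  # 13 * 41 mod 97 == 48
```

The performance argument in the abstract is about the cost of each `add`: hyperelliptic Jacobians reach a comparable group size with smaller operands than pairing-based constructions, so each step in this loop is cheaper.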
Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
Open Access Editorial
Preface to the Special Issue on Computational Linguistics and Natural Language Processing
by Peter Z. Revesz
Information 2024, 15(5), 281; https://doi.org/10.3390/info15050281 - 15 May 2024
Abstract
Computational linguistics and natural language processing are at the heart of the AI revolution that is currently transforming our lives [...]
Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)
Open Access Article
Fuzzy Integrated Delphi-ISM-MICMAC Hybrid Multi-Criteria Approach to Optimize the Artificial Intelligence (AI) Factors Influencing Cost Management in Civil Engineering
by Hongxia Hu, Shouguo Jiang, Shankha Shubhra Goswami and Yafei Zhao
Information 2024, 15(5), 280; https://doi.org/10.3390/info15050280 - 14 May 2024
Abstract
This research paper presents a comprehensive study on optimizing the critical artificial intelligence (AI) factors influencing cost management in civil engineering projects using a multi-criteria decision-making (MCDM) approach. The problem addressed revolves around the need to effectively manage costs in civil engineering endeavors amidst the growing complexity of projects and the increasing integration of AI technologies. The methodology employed involves the utilization of three MCDM tools, specifically Delphi, interpretive structural modeling (ISM), and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC). A total of 17 AI factors, categorized into eight broad groups, were identified and analyzed. Through the application of different MCDM techniques, the relative importance and interrelationships among these factors were determined. The key findings reveal the critical role of certain AI factors, such as risk mitigation and cost components, in optimizing the cost management processes. Moreover, the hierarchical structure generated through ISM and the influential factors identified via MICMAC provide insights for prioritizing strategic interventions. The implications of this study extend to informing decision-makers in the civil engineering domain about effective strategies for leveraging AI in their cost management practices. By adopting a systematic MCDM approach, stakeholders can enhance project outcomes while optimizing resource allocation and mitigating financial risks.
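The ISM stage conventionally turns the factors' binary direct-influence matrix into a reachability matrix via transitive closure, and MICMAC then classifies factors by reading driving power and dependence off its row and column sums. A minimal sketch of that standard step (a generic illustration using Warshall's algorithm, not the paper's own computation):

```python
def reachability(adj):
    """ISM step: final reachability matrix from a binary direct-influence
    matrix, via Warshall's transitive closure. The diagonal is set to 1,
    since every factor trivially reaches itself."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j if it already does, or it reaches j through k
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return [[int(v) for v in row] for row in r]

# Toy 3-factor chain: factor 0 influences 1, factor 1 influences 2.
R = reachability([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])
driving = [sum(row) for row in R]           # MICMAC driving power (row sums)
dependence = [sum(col) for col in zip(*R)]  # MICMAC dependence (column sums)
```

In the toy chain, factor 0 acquires indirect reach to factor 2, which is exactly the information ISM levels and the MICMAC driver/dependent quadrants are built from.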
Full article
(This article belongs to the Special Issue AI Applications in Construction and Infrastructure)
Journal Menu
- Information Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Society Collaborations
- Conferences
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Algorithms, Diagnostics, Entropy, Information, J. Imaging
Application of Machine Learning in Molecular Imaging
Topic Editors: Allegra Conti, Nicola Toschi, Marianna Inglese, Andrea Duggento, Matthew Grech-Sollars, Serena Monti, Giancarlo Sportelli, Pietro Carra
Deadline: 31 May 2024
Topic in
Drones, Electronics, Future Internet, Information, Mathematics
Future Internet Architecture: Difficulties and Opportunities
Topic Editors: Peiying Zhang, Haotong Cao, Keping Yu
Deadline: 30 June 2024
Topic in
Algorithms, Computation, Information, Mathematics
Complex Networks and Social Networks
Topic Editors: Jie Meng, Xiaowei Huang, Minghui Qian, Zhixuan Xu
Deadline: 31 July 2024
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin
Deadline: 31 August 2024
Conferences
Special Issues
Special Issue in
Information
Health Data Information Retrieval
Guest Editors: Mario Ciampi, Mario Sicuranza
Deadline: 31 May 2024
Special Issue in
Information
Text Mining: Challenges, Algorithms, Tools and Applications
Guest Editor: Fei Liu
Deadline: 15 June 2024
Special Issue in
Information
Systems Engineering and Knowledge Management
Guest Editor: Vladimír Bureš
Deadline: 30 June 2024
Special Issue in
Information
New Generation of Intelligent Transit Systems: Theory and Applications
Guest Editor: Antonio Comi
Deadline: 15 July 2024
Topical Collections
Topical Collection in
Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero
Topical Collection in
Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in
Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez