Search Results (26)

Search Parameters:
Keywords = shareable data

16 pages, 3406 KB  
Article
A Solution for the Health Data Sharing Dilemma: Data-Less and Identity-Less Model Sharing Through Federated Learning and Digital Twin-Assisted Clinical Decision Making
by Nilmini Wickramasinghe and Nalika Ulapane
Electronics 2025, 14(4), 682; https://doi.org/10.3390/electronics14040682 - 10 Feb 2025
Cited by 4 | Viewed by 1593
Abstract
Digital twins are essentially digital replicas of physical entities. Their usage is becoming more common across various industries, including healthcare. However, implementing digital twins in healthcare is uniquely challenging, partly because of the sensitive nature of health data and the privacy concerns that limit health data accessibility and shareability. This paper addresses this challenge of health data sharing. We propose a novel approach that leverages federated learning, model sharing, and digital twin-assisted clinical decision making. Our approach ensures that health data remain federated with healthcare providers: providers train machine learning models on their own data, and then, instead of sharing the data, they share the trained models. This is enabled via an arrangement like a private blockchain that is accessible to subscribed healthcare providers. The approach allows healthcare providers to access and use machine learning models for clinical decision support without compromising sensitive patient data. Certain information about the models is shared, including indicators such as the sample size on which a model has been trained, validation metrics, and model accuracy; such information assists other healthcare providers in selecting the most effective models. We demonstrate the efficacy of this approach through a case study on chronic disease management (e.g., cancer) using Liquid Neural Networks. Our results show how federated learning and model sharing can enhance clinical decision making and improve patient outcomes while ensuring the privacy of data.
(This article belongs to the Section Computer Science & Engineering)
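The model-sharing arrangement summarized in this abstract can be sketched in a few lines of Python: each provider trains on its own records and publishes only the trained model plus the selection metadata the authors mention (sample size, validation accuracy). The toy nearest-mean classifier, the provider names, and the registry structure are illustrative assumptions, not the paper's implementation.

```python
import random

def train_local_model(samples):
    """Train a toy nearest-mean classifier on a provider's private data.
    samples: list of (feature, label) pairs with label in {0, 1}."""
    means = {}
    for label in (0, 1):
        values = [x for x, y in samples if y == label]
        means[label] = sum(values) / len(values)
    return means

def predict(model, x):
    # Assign the label whose class mean is closest to x.
    return min(model, key=lambda label: abs(x - model[label]))

def accuracy(model, samples):
    return sum(predict(model, x) == y for x, y in samples) / len(samples)

# Each provider trains on its own data and publishes only the model plus
# selection metadata (sample size, validation accuracy); the raw patient
# records never leave the provider.
random.seed(0)
registry = []
for provider in ("hospital_a", "hospital_b"):
    data = [(random.gauss(y * 2.0, 1.0), y) for y in [0, 1] * 50]
    random.shuffle(data)
    train, val = data[:80], data[80:]
    model = train_local_model(train)
    registry.append({
        "provider": provider,
        "model": model,
        "n_train": len(train),
        "val_accuracy": accuracy(model, val),
    })

# A subscribing provider selects the best-performing shared model.
best = max(registry, key=lambda e: e["val_accuracy"])
print(best["provider"], round(best["val_accuracy"], 2))
```

The metadata dictionary plays the role the abstract describes: it lets a consumer rank shared models without ever seeing the data they were trained on.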

25 pages, 8282 KB  
Article
A Secure Data Publishing and Access Service for Sensitive Data from Living Labs: Enabling Collaboration with External Researchers via Shareable Data
by Mikel Hernandez, Evdokimos Konstantinidis, Gorka Epelde, Francisco Londoño, Despoina Petsani, Michalis Timoleon, Vasiliki Fiska, Lampros Mpaltadoros, Christoniki Maga-Nteve, Ilias Machairas and Panagiotis D. Bamidis
Big Data Cogn. Comput. 2024, 8(6), 55; https://doi.org/10.3390/bdcc8060055 - 28 May 2024
Cited by 4 | Viewed by 2186
Abstract
Intending to enable broader collaboration with the scientific community while maintaining the privacy of the data stored and generated in Living Labs, this paper presents the Shareable Data Publishing and Access Service for Living Labs, implemented within the framework of the H2020 VITALISE project. Building upon previous work, significant enhancements and improvements are presented in the architecture, enabling Living Labs to securely publish collected data in an internal and isolated node for external use. External researchers can access a portal to discover and download shareable data versions (anonymised or synthetic data) derived from the data stored across different Living Labs, which they can use to develop, test, and debug their processing scripts locally while adhering to legal and ethical data handling practices. Subsequently, they may request remote execution of the same algorithms against the real internal data in Living Lab nodes, comparing the outcomes with those obtained using shareable data. The paper details the architecture, data flows, and technical implementation of the service and validates it with real-world usage examples, demonstrating its efficacy in promoting data-driven research in digital health while preserving privacy. The presented service can act as an intermediary between Living Labs and external researchers for secure data exchange, accelerating research on data analytics paradigms in digital health while ensuring compliance with data protection laws.
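The develop-locally-then-execute-remotely flow described in this abstract can be illustrated with a minimal sketch, assuming a single numeric field: a shareable (here, synthetic) version of the sensitive records is derived inside the node, the external researcher debugs an analysis script against it, and the same script is then run against the real internal data for comparison. All names and the synthesis step are hypothetical simplifications of the service.

```python
import random
import statistics

def make_shareable(real_records, seed=42):
    """Derive a synthetic, shareable version of sensitive records by
    sampling from the real data's distribution -- a stand-in for the
    anonymisation/synthesis step run inside a Living Lab node."""
    rng = random.Random(seed)
    mu = statistics.mean(real_records)
    sigma = statistics.stdev(real_records)
    return [rng.gauss(mu, sigma) for _ in real_records]

def analysis_script(records):
    """The external researcher's processing script: debugged locally
    against shareable data, then submitted for remote execution."""
    return round(statistics.mean(records), 1)

real = [72.0, 68.5, 80.2, 75.1, 69.9, 77.3]   # stays inside the lab node
shareable = make_shareable(real)               # the only data that leaves

local_result = analysis_script(shareable)      # run on researcher's machine
remote_result = analysis_script(real)          # remote execution in the node
print(local_result, remote_result)             # researcher compares outcomes
```

The comparison of `local_result` and `remote_result` mirrors the abstract's final step: checking how conclusions drawn from shareable data hold up against the real data, without the real data ever leaving the node.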

40 pages, 9776 KB  
Article
Framework to Create Inventory Dataset for Disaster Behavior Analysis Using Google Earth Engine: A Case Study in Peninsular Malaysia for Historical Forest Fire Behavior Analysis
by Yee Jian Chew, Shih Yin Ooi, Ying Han Pang and Zheng You Lim
Forests 2024, 15(6), 923; https://doi.org/10.3390/f15060923 - 26 May 2024
Cited by 5 | Viewed by 2232
Abstract
This study developed a comprehensive framework using Google Earth Engine to efficiently generate a forest fire inventory dataset, enhancing data accessibility without requiring specialized knowledge or access to private datasets. The framework is applicable globally, and the datasets generated are freely accessible and shareable. By implementing the framework in Peninsular Malaysia, significant forest fire factors were successfully extracted, including the Keetch–Byram Drought Index (KBDI), soil moisture, temperature, windspeed, land surface temperature (LST), Palmer Drought Severity Index (PDSI), Normalized Difference Vegetation Index (NDVI), landcover, and precipitation, among others. Additionally, this study adopted large language models, specifically GPT-4 with the Noteable plugin, for preliminary data analysis to assess the dataset’s validity. Although the plugin effectively performed basic statistical analyses and visualizations, it demonstrated limitations, such as selectively dropping or choosing only relevant columns for tests and automatically modifying scales. These behaviors underscore the need for users to perform additional checks on the generated code to ensure that it accurately reflects the intended analyses. The initial findings indicate that factors such as KBDI, LST, climate water deficit, and precipitation significantly impact forest fire occurrences in Peninsular Malaysia. Future research should explore extending the framework’s application to various regions and further refine it to accommodate a broader range of factors. Embracing and rigorously validating large language model technologies, alongside developing new tools and plugins, are essential for advancing the field of data analysis.

18 pages, 7197 KB  
Article
Research on the Digital Preservation of Architectural Heritage Based on Virtual Reality Technology
by Haohua Zheng, Leyang Chen, Hui Hu, Yihan Wang and Yangyang Wei
Buildings 2024, 14(5), 1436; https://doi.org/10.3390/buildings14051436 - 16 May 2024
Cited by 13 | Viewed by 5146
Abstract
As a representative scientific and technological achievement of the new era, virtual reality (VR) technology has become increasingly refined, providing new development ideas and technical support in the field of ancient building restoration and architectural heritage preservation. In this context, the digital conservation and practice of architectural heritage have become important focuses of application in the industry. This paper starts from the core concept of VR technology, analyzes the value of applying VR technology to the protection of ancient architecture, puts forward relevant suggestions and technical application methods, and takes Red Pagoda in Fuliang County as an example. Accordingly, VR technology is used to restore and protect the buildings, forming a digital heritage of ancient architecture. This study first uses a three-dimensional laser scanner to collect point cloud data, from which plan drawings are produced by measurement. An Architectural Heritage Building Information Model is then created, integrating comprehensive information on the historical building. Finally, VR technology is used for digital display and preservation. This study transforms architectural cultural heritage into a shareable and renewable digital form through restoration and reproduction, interpreting and utilizing it from a new perspective and providing new ideas and methods for architectural heritage conservation.

24 pages, 2592 KB  
Article
Delving into Causal Discovery in Health-Related Quality of Life Questionnaires
by Maria Ganopoulou, Efstratios Kontopoulos, Konstantinos Fokianos, Dimitris Koparanis, Lefteris Angelis, Ioannis Kotsianidis and Theodoros Moysiadis
Algorithms 2024, 17(4), 138; https://doi.org/10.3390/a17040138 - 27 Mar 2024
Cited by 1 | Viewed by 2085
Abstract
Questionnaires on health-related quality of life (HRQoL) play a crucial role in managing patients by revealing insights into physical, psychological, lifestyle, and social factors affecting well-being. A methodological aspect that has not yet been adequately explored, and that holds considerable potential, is causal discovery. This study explored causal discovery techniques within HRQoL, assessed various considerations for reliable estimation, and proposed means for interpreting outcomes. Five causal structure learning algorithms were employed to examine different aspects of structure estimation based on simulated data derived from HRQoL-related directed acyclic graphs. The performance of the algorithms was assessed based on various measures related to the differences between the true and estimated structures. Moreover, the Resource Description Framework was adopted to represent the responses to the HRQoL questionnaires and the detected cause–effect relationships among the questions, resulting in semantic knowledge graphs, which are structured representations of interconnected information. It was found that structure estimation was impacted negatively by the structure’s complexity and favorably by increasing the sample size. The performance of the algorithms over increasing sample size exhibited a similar pattern, with distinct differences observed for small samples. This study illustrates the dynamics of causal discovery in HRQoL-related research, highlights aspects that should be addressed in estimation, and fosters the shareability and interoperability of the output based on globally established standards. It thus provides critical insights in this context, further promoting the critical role of HRQoL questionnaires in advancing patient-centered care and management.
(This article belongs to the Special Issue Bayesian Networks and Causal Reasoning)
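One standard way to score an estimated structure against the true directed acyclic graph, in the spirit of the difference measures the abstract refers to, is the structural Hamming distance; the sketch below assumes that metric (the paper may use others) and uses hypothetical HRQoL item names.

```python
def shd(true_edges, est_edges):
    """Structural Hamming distance between two DAGs over the same nodes:
    counts the node pairs whose edge status differs, so each missing,
    extra, or reversed edge contributes exactly one to the distance."""
    true_edges, est_edges = set(true_edges), set(est_edges)
    distance, seen_pairs = 0, set()
    for a, b in true_edges | est_edges:
        pair = frozenset((a, b))
        if pair in seen_pairs:
            continue                      # each node pair is judged once
        seen_pairs.add(pair)
        in_true = ((a, b) in true_edges, (b, a) in true_edges)
        in_est = ((a, b) in est_edges, (b, a) in est_edges)
        if in_true != in_est:
            distance += 1                 # added, deleted, or reversed edge
    return distance

# Hypothetical HRQoL items: the estimate reverses one edge and misses one.
true_dag = {("stress", "sleep"), ("sleep", "fatigue"), ("stress", "fatigue")}
estimated = {("sleep", "stress"), ("sleep", "fatigue")}
print(shd(true_dag, estimated))           # -> 2
```

A perfect recovery scores zero; the abstract's finding that denser true structures are harder to estimate would show up here as distances growing with the number of true edges.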

15 pages, 1051 KB  
Article
Digital Redesign of Problem-Based Learning (PBL) from Face-to-Face to Synchronous Online in Biomedical Sciences MSc Courses and the Student Perspective
by Stella A. Nicolaou and Ioanna Petrou
Educ. Sci. 2023, 13(8), 850; https://doi.org/10.3390/educsci13080850 - 20 Aug 2023
Cited by 7 | Viewed by 2463
Abstract
PBL is a widely used teaching approach that is increasingly incorporating digital components. Although face-to-face delivery is, by its nature, the preferred mode, its digital counterpart is gaining ground. The current paper discusses the digital redesign of PBL in an MSc in Biomedical Sciences. Face-to-face and online PBL followed the seven steps of the PBL process, and each case was completed in three sessions. For the delivery of online PBL, collaborative tools were utilized: CiscoWebex as the online platform for synchronous meetings, and OneDrive, shareable PPT files, and Moodle for synchronous and asynchronous self-directed learning. Three cohorts were followed, and students had both face-to-face and online PBL experiences. Student feedback was obtained using focus groups, and data analysis utilized a deductive and inductive approach. Our data indicate that CiscoWebex is a suitable and user-friendly platform for synchronous online PBL. The students enjoyed both formats and stated that online PBL is an effective teaching approach for promoting student learning. With regard to student interaction, the face-to-face mode was preferred, while online PBL was perceived as more organized. The redesign allowed for effective student learning and could pave the way forward for a fully online MSc program in Biomedical Sciences.

16 pages, 1490 KB  
Article
Identification of Key Elements in Prostate Cancer for Ontology Building via a Multidisciplinary Consensus Agreement
by Amy Moreno, Abhishek A. Solanki, Tianlin Xu, Ruitao Lin, Jatinder Palta, Emily Daugherty, David Hong, Julian Hong, Sophia C. Kamran, Evangelia Katsoulakis, Kristy Brock, Mary Feng, Clifton Fuller, Charles Mayo and BDSC Prostate Cancer
Cancers 2023, 15(12), 3121; https://doi.org/10.3390/cancers15123121 - 8 Jun 2023
Viewed by 2401
Abstract
Background: Clinical data collection related to prostate cancer (PCa) care is often unstructured or heterogeneous among providers, resulting in a high risk of ambiguity in its meaning when sharing or analyzing data. Ontologies, which are shareable formal (i.e., computable) representations of knowledge, can address these challenges by enabling machine-readable semantic interoperability. The purpose of this study was to identify PCa-specific key data elements (KDEs) for standardization in clinical care and research. Methods: A modified Delphi method using iterative online surveys was performed to reach a consensus agreement on KDEs by a multidisciplinary panel of 39 PCa specialists. Data elements were divided into three themes in PCa: (1) treatment-related toxicities (TRT), (2) patient-reported outcome measures (PROM), and (3) disease control metrics (DCM). Results: The panel reached consensus on a thirty-item, two-tiered list of KDEs focusing mainly on urinary and rectal symptoms. The Expanded Prostate Cancer Index Composite (EPIC-26) questionnaire was considered most robust for PROM multi-domain monitoring, and granular KDEs were defined for DCM. Conclusions: This expert consensus on PCa-specific KDEs has served as a foundation for a professional society-endorsed, publicly available operational ontology developed by the American Association of Physicists in Medicine (AAPM) Big Data Sub Committee (BDSC).
(This article belongs to the Section Cancer Informatics and Big Data)

28 pages, 29575 KB  
Article
Digital Twinning for 20th Century Concrete Heritage: HBIM Cognitive Model for Torino Esposizioni Halls
by Antonia Spanò, Giacomo Patrucco, Giulia Sammartano, Stefano Perri, Marco Avena, Edoardo Fillia and Stefano Milan
Sensors 2023, 23(10), 4791; https://doi.org/10.3390/s23104791 - 16 May 2023
Cited by 16 | Viewed by 3907
Abstract
In the wide scenario of heritage documentation and conservation, multi-scale digital models are able to twin the real object, as well as to store information and record investigation results, in order to detect and analyse deformation and material deterioration, especially from a structural point of view. The contribution proposes an integrated approach for the generation of an n-D enriched model, also called a digital twin, able to support the interdisciplinary investigation process conducted on the site and the subsequent processing of the collected data. Particularly for 20th Century concrete heritage, an integrated approach is required in order to adapt the more consolidated approaches to a new conception of the spaces, where structure and architecture often coincide. The research presents the documentation process for the halls of Torino Esposizioni (Turin, Italy), built in the mid-twentieth century and designed by Pier Luigi Nervi. The HBIM paradigm is explored and expanded in order to fulfil the multi-source data requirements and adapt the consolidated reverse modelling processes based on scan-to-BIM solutions. The most relevant contributions of the research reside in studying how the characteristics of the IFC (Industry Foundation Classes) standard can be used and adapted to the archiving needs of diagnostic investigation results, so that the digital twin model can meet the requirements of replicability in the context of architectural heritage and interoperability with respect to the subsequent intervention phases envisaged by the conservation plan. Another crucial innovation is the proposal of a scan-to-BIM process improved by an automated approach based on VPL (Visual Programming Languages). Finally, an online visualisation tool enables the HBIM cognitive system to be accessible and shareable by the stakeholders involved in the general conservation process.

20 pages, 4253 KB  
Article
Compliance with HIPAA and GDPR in Certificateless-Based Authenticated Key Agreement Using Extended Chaotic Maps
by Tian-Fu Lee, I-Pin Chang and Guo-Jun Su
Electronics 2023, 12(5), 1108; https://doi.org/10.3390/electronics12051108 - 23 Feb 2023
Cited by 13 | Viewed by 4252
Abstract
Electronically protected health information is held in computerized healthcare records that contain complete healthcare information and are easily shared or retrieved by various health care providers via the Internet. The two most important concerns regarding their use are the security of the Internet and the privacy of patients. To protect the privacy of patients, various regions of the world maintain privacy standards. These are set, for example, by the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in Europe. Most recently developed authenticated key agreement schemes for HIPAA and GDPR privacy/security involve modular exponentiation or scalar multiplication on elliptic curves to provide higher security, but these operations are computationally heavy and therefore costly to implement. Recent studies have shown that cryptosystems that use modular exponentiation and scalar multiplication on elliptic curves are less efficient than those based on Chebyshev chaotic maps. Therefore, this investigation develops a secure and efficient certificateless authenticated key agreement scheme that uses lightweight operations, including Chebyshev chaotic maps and hash operations. The proposed scheme overcomes the limitations of alternative schemes, is computationally more efficient, and provides more functionality. It complies with the privacy principles of HIPAA and GDPR.
(This article belongs to the Special Issue Emerging Trends and Approaches to Cyber Security)
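The efficiency claim rests on the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)), which yields a Diffie-Hellman-style key agreement without modular exponentiation. A minimal numerical sketch follows, with toy parameters that are illustrative only, not a secure instantiation of the paper's scheme:

```python
import math

def chebyshev(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) on [-1, 1] via the
    identity T_n(x) = cos(n * arccos(x))."""
    return math.cos(n * math.acos(x))

# Semigroup property underpinning Chebyshev-based key agreement:
# T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)), so two parties derive the same
# session key from the exchanged public values.
x = 0.53                       # public seed value
a, b = 7, 11                   # private keys of the two parties (toy sizes)
pub_a = chebyshev(a, x)        # public value sent by party A
pub_b = chebyshev(b, x)        # public value sent by party B
key_a = chebyshev(a, pub_b)    # A's derived session key
key_b = chebyshev(b, pub_a)    # B's derived session key
print(abs(key_a - key_b) < 1e-9)   # -> True
```

Real schemes work over large integer parameters with hash operations layered on top; the point of the sketch is only that both derivations commute to T_ab(x), the property that replaces modular exponentiation.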

20 pages, 947 KB  
Article
Digital-Twin-Based Security Analytics for the Internet of Things
by Philip Empl and Günther Pernul
Information 2023, 14(2), 95; https://doi.org/10.3390/info14020095 - 4 Feb 2023
Cited by 29 | Viewed by 6579
Abstract
Although the IoT offers numerous advantages in industrial use, it also raises security problems, such as insecure supply chains or vulnerabilities, which lead to a threatening security posture in organizations. Security analytics is a collection of capabilities and technologies that systematically process and analyze data to detect or predict threats and imminent incidents. As digital twins improve knowledge generation and sharing, they are an ideal foundation for security analytics in the IoT. Digital twins map physical assets to their respective virtual counterparts along the lifecycle. They leverage the connection between the physical and virtual environments and manage semantics, i.e., ontologies, functional relationships, and behavioral models. This paper presents the DT2SA model, which aligns security analytics with digital twins to generate shareable cybersecurity knowledge. The model is grounded in a formal model derived from previously defined requirements. We validated the DT2SA model with a microservice architecture called Twinsight, which is publicly available, open-source, and based on a real industry project. The results highlight challenges and strategies for leveraging cybersecurity knowledge in the IoT using digital twins.
(This article belongs to the Special Issue Secure and Trustworthy Cyber–Physical Systems)
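The alignment of security analytics with a digital twin can be sketched as a twin object that holds a behavioural model next to live telemetry and flags deviations as findings. The class, field names, and simple range-based rule below are illustrative assumptions, not the DT2SA formal model:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Virtual counterpart of a physical asset, holding a behavioural
    model (expected operating ranges) alongside live telemetry."""
    asset_id: str
    expected: dict                        # metric -> (low, high) range
    telemetry: dict = field(default_factory=dict)

    def update(self, metric, value):
        self.telemetry[metric] = value    # mirror a reading into the twin

    def analyze(self):
        """Security analytics over the twin: flag telemetry outside the
        behavioural model as potential indicators of compromise."""
        findings = []
        for metric, value in self.telemetry.items():
            low, high = self.expected[metric]
            if not low <= value <= high:
                findings.append({"asset": self.asset_id,
                                 "metric": metric,
                                 "value": value})
        return findings

twin = DigitalTwin("pump-01", {"rpm": (900, 1100), "temp_c": (20, 60)})
twin.update("rpm", 1020)
twin.update("temp_c", 85)     # anomalous reading
print(twin.analyze())
```

Because the findings are plain records detached from the raw telemetry stream, they are the kind of artifact that can be shared across organizations, which is the "shareable cybersecurity knowledge" goal the abstract describes.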

10 pages, 899 KB  
Article
Macro-moth (Lepidoptera) Diversity of a Newly Shaped Ecological Corridor and the Surrounding Forest Area in the Western Italian Alps
by Irene Piccini, Marta Depetris, Federica Paradiso, Francesca Cochis, Michela Audisio, Patrick Artioli, Stefania Smargiassi, Marco Bonifacino, Davide Giuliano, Sara La Cava, Giuseppe Rijllo, Simona Bonelli and Stefano Scalercio
Diversity 2023, 15(1), 95; https://doi.org/10.3390/d15010095 - 11 Jan 2023
Cited by 10 | Viewed by 4311
Abstract
In addition to the compilation of biodiversity inventories, checklists, especially when combined with abundance data, are important tools to understand species distribution, habitat use, and community composition over time. Their importance is even higher when ecological indicator taxa are considered, as in the case of moths. In this work, we investigated macro-moth diversity in a forest area (30 ha) in the Western Italian Alps recently subjected to intense management activities. Indeed, an ecological corridor, which includes 10 clearings, has been shaped thanks to forest compensation related to the construction site of the Turin–Lyon High-Speed Railway. Here, we identified 17 patches (9 clearings and 8 forests), and we conducted moth surveys using UV–LED light traps. A total of 15,614 individuals belonging to 442 species were collected in 2020 and 2021. Two and fifteen species are new records for Piedmont and for Susa Valley, respectively. In addition to the faunistic interest of the data, this study—using a standardized method—provides geo-referenced occurrences, species-richness, and abundance values useful for compiling a baseline dataset for future comparisons. Indeed, the replicable and easily shareable method allows comparisons with other research and thus an assessment of the impact of environmental changes.
(This article belongs to the Special Issue Zoological Checklists: From Natural History Museums to Ecosystems)

15 pages, 2442 KB  
Review
Optimisation of Knowledge Management (KM) with Machine Learning (ML) Enabled
by Muhammad Anshari, Muhammad Syafrudin, Abby Tan, Norma Latif Fitriyani and Yabit Alas
Information 2023, 14(1), 35; https://doi.org/10.3390/info14010035 - 6 Jan 2023
Cited by 29 | Viewed by 9413
Abstract
The emergence of artificial intelligence (AI) and its derivative technologies, such as machine learning (ML) and deep learning (DL), heralds a new era of knowledge management (KM) presentation and discovery. KM benefits from ML for improved organisational experiences, particularly in making knowledge more discoverable and shareable. ML requires new tools and techniques to acquire, store, and analyse data, and is used to improve decision-making and to make more accurate predictions of future outcomes. ML draws on big data to automate the construction of analytical models and thereby improve organisational knowledge. Knowledge, as an organisation’s most valuable asset, must be managed automatically to support decision-making, which can only be accomplished by activating ML in knowledge management systems (KMS). The main objective of this study is to investigate the extent to which machine learning applications are used in knowledge management; this is important because ML with AI capabilities will become central to managing knowledge for business survival. This research used a literature review and thematic analysis of recent studies to acquire its data. The results provide an overview of the relationship between big data, machine learning, and knowledge management, and show that only 10% of the published research concerns machine learning and knowledge management in business and management applications. The study therefore highlights a knowledge gap in investigating how ML can be used in KM for business applications in organisations.
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)

11 pages, 1409 KB  
Article
Catching the Wave: Detecting Strain-Specific SARS-CoV-2 Peptides in Clinical Samples Collected during Infection Waves from Diverse Geographical Locations
by Subina Mehta, Valdemir M. Carvalho, Andrew T. Rajczewski, Olivier Pible, Björn A. Grüning, James E. Johnson, Reid Wagner, Jean Armengaud, Timothy J. Griffin and Pratik D. Jagtap
Viruses 2022, 14(10), 2205; https://doi.org/10.3390/v14102205 - 7 Oct 2022
Cited by 1 | Viewed by 2439
Abstract
The Coronavirus disease 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) resulted in a major health crisis worldwide, with continuously emerging strains producing new viral variants that drive “waves” of infection. PCR or antigen detection assays have been routinely used to detect clinical infections; however, the emergence of these newer strains has presented challenges in detection. One of the alternatives has been to detect and characterize variant-specific peptide sequences from viral proteins using mass spectrometry (MS)-based methods. MS methods can potentially help in both diagnostics and vaccine development by revealing the dynamic changes in the viral proteome associated with specific strains and infection waves. In this study, we developed an accessible, flexible, and shareable bioinformatics workflow, implemented in the Galaxy Platform, to detect variant-specific peptide sequences in MS data derived from clinical samples. We demonstrated the utility of the workflow by characterizing published clinical data from across the world during various pandemic waves. Our analysis identified six SARS-CoV-2 variant-specific peptides suitable for confident detection by MS in commonly collected clinical samples.
(This article belongs to the Section General Virology)
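The core idea of finding variant-specific peptides can be sketched without MS data: digest the reference and variant proteins in silico with the standard trypsin rule and keep the peptides unique to the variant. The sequences below are hypothetical toy fragments, not real SARS-CoV-2 data, and the actual Galaxy workflow involves much more (spectrum matching, false-discovery control):

```python
def tryptic_digest(protein):
    """Split a protein sequence into tryptic peptides: cleave after
    K or R, except when the next residue is P (the standard trypsin rule)."""
    peptides, current = [], ""
    for i, residue in enumerate(protein):
        current += residue
        following = protein[i + 1] if i + 1 < len(protein) else ""
        if residue in "KR" and following != "P":
            peptides.append(current)
            current = ""
    if current:
        peptides.append(current)
    return peptides

# Toy protein fragments (hypothetical, illustrative sequences): a single
# substitution creates a peptide unique to the variant, which MS can then
# look for in clinical samples.
reference = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTR"
variant   = "MFVFLVLLPLVSSQCVNLITRTQLPPAYTNSFTR"   # T -> I substitution

unique_to_variant = set(tryptic_digest(variant)) - set(tryptic_digest(reference))
print(sorted(unique_to_variant))
```

Any peptide in `unique_to_variant` is a candidate marker: observing it in a clinical sample's spectra is evidence for that variant, which is the selection logic behind the six variant-specific peptides the abstract reports.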

26 pages, 990 KB  
Article
PF-ClusterCache: Popularity and Freshness-Aware Collaborative Cache Clustering for Named Data Networking of Things
by Samar Alduayji, Abdelfettah Belghith, Achraf Gazdar and Saad Al-Ahmadi
Appl. Sci. 2022, 12(13), 6706; https://doi.org/10.3390/app12136706 - 2 Jul 2022
Cited by 14 | Viewed by 2577
Abstract
Named Data Networking (NDN) has been recognized as the most promising information-centric networking architecture fitting the application model of IoT systems. In-network caching is one of NDN’s most fundamental features for improving data availability and diversity and reducing the content retrieval delay and network traffic load. Several caching decision algorithms have been proposed; however, retrieving and delivering data content with minimal resource usage, reduced communication overhead, and a short retrieval time remains a great challenge. In this article, we propose an efficient popularity- and freshness-aware caching approach named PF-ClusterCache that aggregates the storage of the different nodes within a given cluster as global shareable storage, so that zero redundancy is obtained in any cluster of nodes. This increases the storage capacity available for caching with no additional storage resources. PF-ClusterCache ensures that only the newest, most frequent data content is cached, and caching is only performed at the edge of the network, resulting in a wide diversity of cached data content across the entire network and much better overall performance. In-depth simulations with the ndnSIM simulator are performed using a large transit-stub topology and various networking scenarios. The results show the effectiveness of PF-ClusterCache in sharing and controlling the local global storage and in accounting for the popularity and freshness of data content. PF-ClusterCache clearly outperforms the benchmark caching schemes considered, especially in terms of significantly greater server access reduction and much lower content retrieval time, while efficiently conserving network resources.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
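The popularity and freshness logic can be sketched for a single node: admit content only once it has proved popular, evict entries past a freshness lifetime, and replace the least popular entry when the store is full. This is an illustrative stand-in for per-node behaviour under assumed thresholds, not the clustered PF-ClusterCache protocol:

```python
import time

class PFCache:
    """Toy popularity- and freshness-aware content store."""

    def __init__(self, capacity, min_hits=2, lifetime=60.0):
        self.capacity, self.min_hits, self.lifetime = capacity, min_hits, lifetime
        self.requests = {}                 # content name -> request count
        self.store = {}                    # content name -> (data, cached_at)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry and now - entry[1] <= self.lifetime:
            return entry[0]                # fresh cache hit
        self.store.pop(name, None)         # stale entry: evict
        return None

    def put(self, name, data, now=None):
        now = time.time() if now is None else now
        self.requests[name] = self.requests.get(name, 0) + 1
        if self.requests[name] < self.min_hits:
            return                         # not yet popular enough to cache
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.requests.get(n, 0))
            del self.store[victim]         # replace least popular entry
        self.store[name] = (data, now)

cache = PFCache(capacity=2, min_hits=2, lifetime=60.0)
cache.put("/sensor/1", "v1", now=0.0)      # first request: counted only
cache.put("/sensor/1", "v1", now=1.0)      # second request: now cached
print(cache.get("/sensor/1", now=10.0))    # fresh hit -> "v1"
print(cache.get("/sensor/1", now=100.0))   # past lifetime -> None
```

The cluster-level idea in the paper extends this by treating the stores of all cluster members as one shared pool, so the same name is never cached twice inside a cluster; modelling that coordination is beyond this sketch.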

10 pages, 1700 KB  
Proceeding Paper
Synthetic Subject Generation with Coupled Coherent Time Series Data
by Xabat Larrea, Mikel Hernandez, Gorka Epelde, Andoni Beristain, Cristina Molina, Ane Alberdi, Debbie Rankin, Panagiotis Bamidis and Evdokimos Konstantinidis
Eng. Proc. 2022, 18(1), 7; https://doi.org/10.3390/engproc2022018007 - 21 Jun 2022
Cited by 7 | Viewed by 2698
Abstract
A large amount of health and well-being data is collected daily, but little of it reaches its research potential because personal data privacy needs to be protected as an individual’s right, as reflected in the data protection regulations. Moreover, the data that do reach the public domain will typically have undergone anonymization, a process that can result in a loss of information and, consequently, research potential. Lately, synthetic data generation, which mimics the statistics and patterns of the original, real data on which it is based, has been presented as an alternative to data anonymization. As the data collected from health and well-being activities often have a temporal nature, these data tend to be time series data. The synthetic generation of this type of data has already been analyzed in different studies. However, in the healthcare context, time series data have reduced research potential without the subjects’ metadata, which are essential to explain the temporal data. Therefore, in this work, the option to generate synthetic subjects using both time series data and subject metadata has been analyzed. Two approaches for generating synthetic subjects are proposed: real time series data are used in the first approach, while in the second approach, time series data are synthetically generated. Furthermore, the first proposed approach is implemented and evaluated. The generation of synthetic subjects with real time series data has been demonstrated to be functional, whilst the generation of synthetic subjects with synthetic time series data requires further improvements to demonstrate its viability.
(This article belongs to the Proceedings of The 8th International Conference on Time Series and Forecasting)
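The first approach (synthetic subjects carrying real time series) can be sketched as: sample synthetic metadata from the real cohort's distributions, then attach a real series drawn from the cohort. The field names and the simple Gaussian sampling scheme below are illustrative assumptions, not the paper's method:

```python
import random
import statistics

def synthesize_subjects(real_subjects, n, seed=7):
    """Generate n synthetic subjects: metadata sampled from the real
    cohort's distributions, each paired with a real time series reused
    from a randomly chosen donor subject."""
    rng = random.Random(seed)
    ages = [s["age"] for s in real_subjects]
    sexes = [s["sex"] for s in real_subjects]
    synthetic = []
    for i in range(n):
        donor = rng.choice(real_subjects)
        synthetic.append({
            "id": f"synthetic-{i}",
            "age": round(rng.gauss(statistics.mean(ages), statistics.stdev(ages))),
            "sex": rng.choice(sexes),
            "heart_rate_series": donor["heart_rate_series"],  # real series reused
        })
    return synthetic

cohort = [
    {"id": "s1", "age": 71, "sex": "F", "heart_rate_series": [62, 64, 63, 65]},
    {"id": "s2", "age": 68, "sex": "M", "heart_rate_series": [70, 72, 69, 71]},
    {"id": "s3", "age": 75, "sex": "F", "heart_rate_series": [58, 60, 59, 61]},
]
subjects = synthesize_subjects(cohort, n=5)
print(len(subjects))
```

The second approach in the paper would replace the `donor` series with a synthetically generated one, which is the part the authors report as still needing further work; keeping the two steps separate, as here, makes that substitution straightforward.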
