1. Introduction
Data sovereignty is a concept with multiple interpretations. It generally refers to the self-determination of individuals and organizations over the use of their data [1]. It is also understood as a legal concept, meaning that data is subject to the laws of the country in which it is stored [2]. Data sovereignty is a component of the broader concept of digital sovereignty [3], which also includes software sovereignty and operational sovereignty.
In essence, data sovereignty is about empowering individuals and organizations to have meaningful control over their data, enabling them to protect their interests, comply with regulations, and participate in the data economy with confidence [2].
To enforce data sovereignty in public cloud and edge environments, a range of solutions are currently employed. Geofencing and data localization techniques are implemented to control the physical storage of data and restrict it to specific geographical boundaries. Another technology supporting data sovereignty is confidential computing, which processes data in encrypted form using Trusted Execution Environments (TEEs) based on secure enclaves (e.g., Intel SGX, AMD SEV).
Cloud service providers offer sovereign cloud architectures that deploy physically isolated data centers, co-designed with national or regional authorities, to ensure compliance with local sovereignty requirements.
Data Spaces offer an alternative approach to ensuring data sovereignty. They create trusted, distributed ecosystems that implement usage control mechanisms. As there is no centralized data storage, data remains with the data owner within their trusted infrastructure and is shared only under strict conditions, as defined and negotiated in a signed contract.
Secure data sharing and exchange is one of the most important aspects of daily operations for any modern company that wants to grow and stay competitive in the market. To achieve this goal, organizations can utilize managed cloud storage services from the broad portfolio offered by cloud service providers (CSPs). However, to ensure the data sovereignty of their assets, an additional layer of control and a policy enforcement platform are required to guarantee that usage constraints defined in contracts are adhered to by data consumers. This is the primary value added by the Data Space architecture, which has been described in a comprehensive publication [4]. Secondly, unlike public cloud services, Data Spaces support heterogeneous architectures and Data Space Connectors; that is, data providers can be hosted in various locations, including the cloud, on-premises environments, and the edge. Data sovereignty features, such as data usage control protocols, allow data usage to be controlled and enforced exactly according to the restrictions stated in the signed contract. But how much overhead does this cost? How large is the performance impact of implementing data sovereignty on data sharing services? A Data Space environment (including IDS) must provide additional services and policy enforcement components, which may issue extra HTTP requests to decide whether access to a particular dataset should be granted. This process introduces additional delays in data access and processing for consumers.
In [5], an analysis of a Data Sovereignty Model is presented; the model is divided into functional layers and groups the software components that make up overall data sovereignty solutions in the public cloud. That work extends the review of the current state of knowledge on data sovereignty solutions presented in [6]. Experiments were also conducted to compare the performance of the Dataspace Connector hosted in an edge environment and on two public cloud platforms, namely Google Cloud Platform and Microsoft Azure, alongside an extensive systematic literature review exploring the topic of data sovereignty in the public cloud. A research gap was identified: although several papers, such as [7,8,9], compare cloud data sharing services, none of them analyze the performance overhead introduced by Data Space architectures. The selection of data sharing services is crucial for industries that design solutions for data exchange between parties (both internal and external), because these solutions must be optimized for performance and security, which depend on the choice of provider, hosting option, and geographical location.
This paper presents a set of experiments that compare the data sovereignty overhead against cloud-native data sharing managed services from three major cloud service providers: Amazon, Microsoft, and Google. The performance of data sharing is compared using the Dataspace Connector hosted on the edge and in the public cloud (AWS, Azure, and GCP). Secondly, a comparison of two implementations of IDS Connectors is presented: the Dataspace Connector [10] and the Prometheus-X Dataspace Connector [11]. The IDS Connectors are key components in the IDS architecture, providing access to Data Spaces for both data publishers and consumers and acting as gateways that control access and data usage. Since all traffic flows through this component, its performance has a major impact on the overall performance of the system. For details on the IDS reference architecture, see [5]. The novelty of this study lies in its comprehensive comparison of the performance of data sharing services, based on a series of experiments designed to evaluate the performance impact of hosting Data Space Connectors in different environments, namely three major public cloud service providers and edge computing. Furthermore, it compares the performance of various implementations of Data Space Connectors to determine the extent to which any overhead is attributable to the Data Space architecture itself versus the specific implementations provided by connector vendors.
Data Spaces are being implemented across various domains, including health, manufacturing, mobility, energy, and education. Examples of Data Space implementations and use cases in different sectors include Medical Data Spaces [12], the Smart Manufacturing Data Space [13], the Mobility Data Space [14], and Catena-X in the automotive industry [15], among others. Designing a Data Space requires making important architectural decisions, such as selecting infrastructure providers and connector vendors. This makes the present research an important guideline in this domain.
While several studies emphasize the importance of data sovereignty and Data Spaces as key factors for maintaining competitiveness [16] and introduce secure ecosystems for data exchange [17], others, such as [18], provide guidelines for practitioners and industry experts engaged in data-driven systems. However, none of these studies offer a comprehensive comparison of data sovereignty features in Data Spaces or a comparison with cloud-native data sharing managed services.
2. Materials and Methods
In this study, several experiments were conducted to evaluate the impact of the data sovereignty features of Data Spaces on the performance of data sharing. This study compares the performance of two implementations of the IDS Connector and the performance impact of hosting connectors on the edge and on three public cloud platforms. The expectation was that managed services on public clouds would deliver high-performance data sharing capabilities, with only slight differences in performance when the geographical location is the same. It was also expected that different implementations of IDS Connectors would provide similar performance but, when compared to cloud-native services, would introduce significant overhead due to the execution of policy enforcement mechanisms.
To evaluate the performance of various API endpoints, a Python 3.11.0 script was used, leveraging the requests library (v2.32.2) to programmatically send HTTP requests. Each API endpoint was tested by issuing 100 sequential HTTPS GET or POST requests, capturing the response time of each request. The average response time per endpoint was then computed to assess performance. The average response time metric was chosen because it is the most critical parameter from a practical standpoint when designing production-grade solutions.
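The original measurement script is not reproduced in the paper; the following minimal sketch illustrates the approach under stated assumptions (the endpoint URL is a placeholder, and each request is timed around a blocking call of the requests library):

```python
import statistics
import time

import requests


def benchmark(url: str, n: int = 100, method: str = "GET", **kwargs) -> float:
    """Send n sequential HTTP requests and return the mean response time in ms."""
    session = requests.Session()
    times_ms = []
    for _ in range(n):
        start = time.perf_counter()
        response = session.request(method, url, timeout=30, **kwargs)
        response.raise_for_status()  # abort the run on any HTTP error
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(times_ms)


if __name__ == "__main__":
    # Placeholder object URL; in the experiments, the target was a 1 MB test file.
    avg = benchmark("https://example-bucket.s3.eu-central-1.amazonaws.com/test-1mb.bin")
    print(f"average response time: {avg:.0f} ms")
```

Note that reusing a single Session, as done here, keeps TCP/TLS connections alive across the 100 requests; whether the original script did the same is not specified and would affect the absolute latencies.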
The following Data Space Connector implementations were tested (the latest versions available at the time of the experiments):
- Dataspace Connector (DSC), version 8.0.2 [10];
- Prometheus-X Dataspace Connector (PDC), version 1.9.2 [11].
All connectors and related services were containerized and deployed using the Docker runtime to ensure consistency across platforms. Performance metrics were recorded programmatically within the Python script, and network latency and infrastructure differences were taken into consideration by testing across four distinct deployment environments representing edge and cloud deployments: a local edge server, an AWS EC2 instance, an Azure virtual machine, and a GCP virtual machine.
To set up the testing environment for the cloud-native storage services, the cloud consoles (web interfaces provided by the CSPs) were used, with the following parameters used during setup:
Amazon S3 object store:
- Class: Standard;
- Location: Europe (Frankfurt);
- Encryption type: Amazon S3 managed keys (SSE-S3).
Azure Blob Storage:
- Access Tier: Hot;
- Location: West Europe (Amsterdam);
- Encryption type: Microsoft-managed keys.
Google Cloud Storage:
- Class: Standard;
- Location: West Europe (Amsterdam);
- Data encryption: Google-managed encryption key.
Note: In all tests, this study used the closest available region in order to evaluate the actual performance of the cloud services rather than network latency. This choice was made because network latency has already been identified as the primary factor degrading overall service performance, as indicated in [5].
Limitations: The experiment was designed to measure performance in sharing a test file of 1 MB across different services, including cloud-native solutions and Data Space Connectors. The same file was used consistently across all testing environments. However, it should be noted that this comparison may not be fully representative of real-world applications, where data sharing typically involves files of varying sizes.
Generative Artificial Intelligence (GenAI) was not used in the preparation of this paper.
3. Results
3.1. Cloud-Native Data Sharing Managed Services
The following three subsections describe the tested data sharing managed services from three global cloud providers, using identical or, where necessary, closely matched configurations.
3.1.1. Amazon S3
The first test scenario covered the Amazon cloud and the S3 object store.
The experiment was performed using the Amazon S3 environment and settings described in Section 2.
For this provider, an average response time of 153 ms was obtained, which is a very good result. See Figure 1 for detailed results.
3.1.2. Azure Blob Storage
The experiment was performed using the Azure Blob Storage environment and settings described in Section 2.
For Azure Blob Storage, the average response time was slightly higher, at 175 ms (14% above Amazon S3), which is considered acceptable. See Figure 2 for detailed results.
3.1.3. Google Cloud Storage
The experiment was performed using the Google Cloud Storage environment and settings described in Section 2.
For Google Cloud Storage, the average response time was substantially higher, at 439 ms, which is unexpectedly high compared to the results of the previous two tests. This could be due to specific network routing configurations or performance issues within this particular data center; the result should be validated in future experiments.
3.2. IDS Connectors
The above tests serve as a baseline for testing IDS Connectors, which act as gateways to Data Spaces. The main difference in this use case is that Data Spaces provide a distributed data sharing environment, so each connector can be hosted in a different location and in a different environment, such as the cloud, on-premises, or edge. Additionally, data is not stored on a central server. The second main difference is that a data provider can attach a data usage policy to the artifact they are sharing, and the consumer connector will enforce it. This research used International Data Spaces Connectors, which support 21 predefined usage policy classes.
Examples of data usage policy classes (see the full list in the IDSA position paper on Usage Control [19]) include the following; an illustrative policy payload is sketched after this list:
- Restricting data usage to a specific time interval, time duration, or a limited number of uses;
- Notifying a party about data usage;
- Deleting data after usage.
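As an illustration, the sketch below expresses a time-interval restriction of this kind. It is modeled on the ODRL-based IDS Usage Policy Language; the exact identifiers and structure are simplified assumptions rather than payloads taken verbatim from either connector's documentation.

```python
# Illustrative usage policy: permit USE only within a given time interval.
# Identifiers follow the IDSA code vocabulary; the overall structure is a
# simplified assumption, not a verbatim connector payload.
usage_during_interval = {
    "@type": "ids:Permission",
    "ids:action": [{"@id": "https://w3id.org/idsa/code/USE"}],
    "ids:constraint": [
        {
            "@type": "ids:Constraint",
            "ids:leftOperand": {"@id": "https://w3id.org/idsa/code/POLICY_EVALUATION_TIME"},
            "ids:operator": {"@id": "https://w3id.org/idsa/code/AFTER"},
            "ids:rightOperand": {"@value": "2025-01-01T00:00:00Z",
                                 "@type": "xsd:dateTimeStamp"},
        },
        {
            "@type": "ids:Constraint",
            "ids:leftOperand": {"@id": "https://w3id.org/idsa/code/POLICY_EVALUATION_TIME"},
            "ids:operator": {"@id": "https://w3id.org/idsa/code/BEFORE"},
            "ids:rightOperand": {"@value": "2025-12-31T23:59:59Z",
                                 "@type": "xsd:dateTimeStamp"},
        },
    ],
}
```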
In the test environment, the Dataspace Connector was deployed on a local server when testing in the edge environment and on a virtual server when testing in the cloud. The basic “Allow Use” data policy was attached to the same file used in the previous tests, and a GET HTTP request was sent to the following URI: /api/artifacts/<id>/data.
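Using the benchmark helper sketched in Section 2, such a measurement could look as follows; the host, artifact identifier, and credentials are placeholders for the test deployment.

```python
# Hypothetical invocation against the artifact data endpoint; the host,
# artifact id, and credentials are placeholders.
avg_ms = benchmark(
    "https://connector.example.org/api/artifacts/2f2f8a2e/data",
    auth=("admin", "password"),  # the DSC uses basic authentication
    verify=False,                # assumes a self-signed TLS certificate
)
print(f"artifact data endpoint: {avg_ms:.0f} ms")
```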
3.2.1. IDS Dataspace Connector Hosted on Edge
For the connector hosted on the edge server, an average response time of 224 ms was obtained, which was an expected result. See Figure 4 for detailed results.
3.2.2. IDS Dataspace Connector Hosted on AWS EC2
In the test executed on an AWS EC2 virtual server, an average response time of 711 ms was recorded. This result is the highest among all experiments and more than three times greater than in the edge environment. Higher latency was expected, since the edge tests do not involve wide-area network latency; however, the observed value is unexpectedly high.
3.2.3. IDS Dataspace Connector Hosted on Azure VM
When hosted on an Azure virtual machine, the Dataspace Connector showed an average response time of 606 ms. In this environment, the response time remains very high, 246% greater than that of Azure’s native Blob Storage service.
3.2.4. IDS Dataspace Connector Hosted on GCP VM
When hosted on the Google Cloud Platform, the connector achieved an average response time of 313 ms. This value is consistent with expectations, being only moderately higher than that of the connector deployed at the edge.
3.3. Cloud-Native vs. Data Space Data Sharing
Our experiments demonstrated that performance varies significantly between providers. Azure Blob Storage was 14% slower than Amazon S3, while Google Cloud Storage was 187% slower than Amazon’s managed service.
Overall, the IDS Connector exhibited lower performance than cloud-native services. However, when results are compared on a per-platform basis, it is observed that while the connector was slower on AWS and Azure, it actually outperformed the native service on Google Cloud Platform by 28%; see Figure 8 for details.
3.4. IDS Connectors–Catalog API Tests
In the next set of experiments, the performance of another API endpoint, /api/catalog, is compared; this endpoint is used to create or retrieve catalogs from the IDS Connector.
The testing methodology was the same as in the previous tests: 100 HTTP requests were sent using both the GET and POST methods, and the average response times were measured across the edge, AWS, Azure, and GCP environments; a sketch of this measurement is shown below.
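Reusing the benchmark helper from Section 2, the catalog measurements could be performed as follows; the host, credentials, and catalog payload are placeholders, and the endpoint path follows the description above.

```python
# Hypothetical Catalog API measurement; the host, credentials, and catalog
# payload are placeholders.
base = "https://connector.example.org"
auth = ("admin", "password")

post_avg = benchmark(f"{base}/api/catalog", method="POST",
                     json={"title": "benchmark-catalog"}, auth=auth, verify=False)
get_avg = benchmark(f"{base}/api/catalog", method="GET", auth=auth, verify=False)
print(f"POST: {post_avg:.0f} ms, GET: {get_avg:.0f} ms")
```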
3.4.1. Dataspace Connector
Figure 9 depicts the average request response times of the Catalog API endpoint of the Dataspace Connector.
The response times for the connector hosted in the edge environment were the fastest for both GET and POST requests, at 104 ms and 87 ms, respectively. It was also observed that, in general (with the exception of GCP), POST requests were faster than GET requests, which was also observed in previous research. The second fastest environment was GCP, with an average response time of 161 ms for GET and 194 ms for POST requests, followed by AWS (241 ms average response time for POST and 280 ms for GET) and finally Azure (248 ms average response time for POST and 310 ms for GET). For details, see Figure 10, Figure 11, Figure 12 and Figure 13, which present the response times of individual requests.
3.4.2. Prometheus-X Dataspace Connector
The subsequent phase of our experimental study focused on evaluating different implementations of IDS Connectors. For comparative analysis, it was necessary to select an alternative connector in addition to the Dataspace Connector.
The Dataspace Connector represents the first implementation of the IDS Connector specification to achieve IDS Ready certification and is included in the IDSA Reference Testbed [20]. From the 38 available implementations (listed in the IDSA Data Connector Report [21]), a set of mandatory selection criteria was applied, including availability under an open-source license, support for a usage control protocol, and the ability to be deployed both on-premises and in cloud environments. Additional evaluation criteria encompassed the maturity of the implementation and the quality of its documentation. Based on these criteria, the Prometheus-X Dataspace Connector was selected as the second candidate for comparison. An identical series of tests was executed for the Prometheus-X Dataspace Connector on the Catalog API across the four deployment environments. The results, visualized in Figure 14, revealed substantial differences in performance between the two connectors.
The Prometheus-X Dataspace Connector consistently outperformed the Dataspace Connector across all environments. When average response times were computed across the edge, AWS, Azure, and GCP platforms, the Dataspace Connector exhibited an average latency of 203 ms, whereas the Prometheus-X Dataspace Connector achieved a significantly lower average of 89 ms, meaning the Prometheus-X implementation reduces average latency by approximately 56%.
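As a quick arithmetic check, averaging the eight per-environment DSC measurements reported in Section 3.4.1 reproduces the 203 ms figure, and comparing it with the 89 ms PDC average yields the stated 56% improvement:

```python
# Reproducing the averages reported above from the per-request-type results
# in Section 3.4.1 (values in ms).
dsc = [104, 87, 161, 194, 241, 280, 248, 310]  # GET/POST on edge, GCP, AWS, Azure
dsc_avg = sum(dsc) / len(dsc)                  # 203.1 ms
pdc_avg = 89                                   # reported PDC average
print(f"DSC avg: {dsc_avg:.0f} ms; PDC is {(dsc_avg - pdc_avg) / dsc_avg:.0%} faster")
```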
4. Discussion
This research clearly highlights significant differences in the performance of data sharing services offered by various providers. This observation applies not only to cloud service providers (CSPs) but also to vendors of IDS Connector implementations. It remains inconclusive whether the data usage policy protocols employed in Data Spaces introduce meaningful performance overhead, as results varied across different testing environments.
Among the cloud-native storage services, Amazon S3 exhibited the highest performance. Azure ranked second, with performance approximately 14% lower, while Google Cloud Storage performed the worst, with latency up to 187% higher compared to Amazon S3. In comparing the Dataspace Connector to native services, it was observed that the Connector outperformed Google Cloud’s native storage by 28%. Conversely, on the Azure platform, the Dataspace Connector was significantly less efficient, exhibiting a 246% increase in latency compared to Azure’s native storage solution. An even greater performance difference was observed on the AWS platform, where the Dataspace Connector was 364% slower. Since public cloud storage services are proprietary and their internal architectures are not publicly disclosed, it can only be speculated that these performance differences may stem from variations in the underlying platform implementations. Additionally, slight differences in server locations (e.g., Frankfurt versus Amsterdam) may also have influenced the test results.
The performance comparison between the Prometheus-X Dataspace Connector and the standard Dataspace Connector revealed significant differences depending on the hosting environment. The best performance was observed with the Prometheus-X Dataspace Connector hosted in an edge environment, achieving a response time of 42 ms—a notably strong result. In contrast, the standard Dataspace Connector on the same edge environment exhibited substantially worse performance: 107% slower for POST requests and 147% slower for GET requests, which is a surprisingly large disparity. The poorest performance was recorded for the standard Dataspace Connector hosted on an Azure virtual server, with response times of 310 ms for GET requests and 248 ms for POST requests—representing degradations of 89% and 161%, respectively, compared to the Prometheus-X Dataspace Connector.
The results above provide important guidance for platform selection by architects designing data sovereignty solutions. While AWS and Azure cloud-native storage services demonstrate higher performance, the GCP platform performs significantly better when hosting Data Space Connectors (i.e., for data sharing using DSC).
This research reveals significant differences, sometimes surprisingly large, between similar services offered by different providers. While the latency of service responses to HTTP requests can be measured and compared, the performance characteristics of proprietary CSP services can only be observed externally, since providers do not publish details of their internal architectures or the software components they employ.
In contrast, when comparing open-source implementations of IDS Connectors, namely the Dataspace Connector (version 8.0.2) and the Prometheus-X Dataspace Connector (version 1.9.2), the situation differs because their source code is publicly available in GitHub repositories. This accessibility allows for direct analysis. We observed that the connectors employ different underlying databases, PostgreSQL and MongoDB, which are optimized for different types of operations. PostgreSQL, as a relational SQL database, is particularly suited for complex queries, whereas MongoDB, as a NoSQL document store, is optimized for high-speed read and write operations.
5. Conclusions
5.1. Key Findings
This study revealed notable performance differences between IDS Connector implementations. On average, the Prometheus-X Dataspace Connector demonstrated 56% better performance than the standard Dataspace Connector. This performance disparity may be attributed to differences in authorization mechanisms (e.g., basic authentication versus bearer token) and backend database technologies (PostgreSQL versus MongoDB).
Another interesting observation is the very low performance of the Google Cloud Storage managed service, which showed significantly worse results than the other two hyperscalers, AWS and Azure. It would be worthwhile to repeat this test in a different Google data center to confirm whether similar performance is observed across other locations.
The key guidelines for practitioners derived from the experiments are as follows. When system requirements do not mandate sovereignty features, the most suitable option for hosting data is Amazon S3, which delivered the best performance among the major hyperscalers. In general, hosting data sharing services on an edge platform proves to be the most efficient approach. However, if data sovereignty features, such as usage control, are essential and the service must be hosted in the cloud, then, provided the designer has flexibility in selecting the IDS Connector implementation, the recommended option is the PDC hosted on the GCP platform.
5.2. Highlights
In this research, a comparison was made between cloud-native data sharing services from three major public cloud providers and two implementations of Data Space Connectors hosted in four environments to reveal the impact of data sovereignty features, such as usage control policies, on data sharing performance. A significant difference in performance was demonstrated between connector implementations, at 56% on average, which provides an important guideline for companies building data sovereignty solutions in cloud environments. Future research will involve testing actual data exchange between IDS Connectors from different vendors to verify interoperability and assess cross-connector performance. Another research direction will examine the latency impact of varying data usage policy classes, such as remote notifications and data encryption when stored or transferred to third parties.
Author Contributions
Conceptualization, S.G. (Stanisław Galij) and G.P.; methodology, G.P.; software, S.G. (Stanisław Galij); validation, G.P. and S.G. (Sławomir Grzyb); formal analysis, G.P.; investigation, S.G. (Stanisław Galij); resources, S.G. (Stanisław Galij); data curation, S.G. (Stanisław Galij); writing—original draft preparation, S.G. (Stanisław Galij); writing—review and editing, S.G. (Stanisław Galij), G.P. and S.G. (Sławomir Grzyb); visualization, S.G. (Stanisław Galij); supervision, G.P. and S.G. (Sławomir Grzyb); project administration, G.P.; funding acquisition, S.G. (Stanisław Galij). All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially funded by MEiN Program ‘Implementation Doctorate’, agreement No. (PP) RU00023881, dated: 30 September 2022, and statutory funds of Poznan University of Technology and West Pomeranian University of Technology in Szczecin.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors on request.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| GCP | Google Cloud Platform |
| AWS | Amazon Web Services |
| IDSA | International Data Spaces Association |
| CSP | Cloud Service Provider |
| VM | Virtual Machine |
| DSC | Dataspace Connector |
| PDC | Prometheus-X Dataspace Connector |
References
1. Hellmeier, M.; Pampus, J.; Qarawlus, H.; Howar, F. Implementing data sovereignty: Requirements & challenges from practice. In Proceedings of the 18th International Conference on Availability, Reliability and Security, New York, NY, USA, 29 August 2023.
2. Hummel, P.; Braun, M.; Tretter, M.; Dabrock, P. Data sovereignty: A review. Big Data Soc. 2021, 8, 1–17.
3. Feth, D.; Jung, C.; Eitel, A. Concepts for data sovereignty in digital value chains: Data cockpits—data usage control—data trustees. In New Digital Work II: Digital Sovereignty of Companies and Organizations; Springer Nature: Cham, Switzerland, 2025; pp. 75–92.
4. Otto, B.; ten Hompel, M.; Wrobel, S. Designing Data Spaces: The Ecosystem Approach to Competitive Advantage; Springer Nature: Cham, Switzerland, 2022.
5. Galij, S.; Pawlak, G.; Grzyb, S. Modeling Data Sovereignty in Public Cloud—A Comparison of Existing Solutions. Appl. Sci. 2024, 14, 10803.
6. Ernstberger, J.; Lauinger, J.; Elsheimy, F.; Zhou, L.; Steinhorst, S.; Canetti, R.; Miller, A.; Gervais, A.; Song, D. SoK: Data Sovereignty. In Proceedings of the 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), Delft, The Netherlands, 3–7 July 2023.
7. Bocchi, E.; Drago, I.; Mellia, M. Personal cloud storage benchmarks and comparison. IEEE Trans. Cloud Comput. 2015, 5, 751–764.
8. Drago, I.; Bocchi, E.; Mellia, M.; Slatman, H.; Pras, A. Benchmarking personal cloud storage. In Proceedings of the 2013 Conference on Internet Measurement Conference, New York, NY, USA, 23 October 2013.
9. Bocchi, E.; Mellia, M.; Sarni, S. Cloud storage service benchmarking: Methodologies and experimentations. In Proceedings of the 2014 IEEE 3rd International Conference on Cloud Networking (CloudNet), Luxembourg, 8–10 October 2014.
10. Dataspace Connector (GitHub Repository). Available online: https://github.com/International-Data-Spaces-Association/DataspaceConnector (accessed on 20 May 2025).
11. Prometheus-X Dataspace Connector (GitHub Repository). Available online: https://github.com/Prometheus-X-association/dataspace-connector (accessed on 20 May 2025).
12. Berlage, T.; Claussen, C.; Geisler, S.; Velasco, C.A.; Decker, S. Medical Data Spaces in Healthcare Data Ecosystems; Springer: Berlin/Heidelberg, Germany, 2022; pp. 291–311.
13. Curry, E.; Tuikka, T.; Metzger, A.; Zillner, S.; Bertels, N.; Ducuing, C.; Carbonare, D.D.; Gusmeroli, S.; Scerri, S.; de Vallejo, I.L.; et al. Data sharing spaces: The BDVA perspective. In Designing Data Spaces: The Ecosystem Approach to Competitive Advantage; Springer International Publishing: Cham, Switzerland, 2022; pp. 365–382.
14. Sebastian, P.; Drees, H.; Rittershaus, L. Mobility data space. In Designing Data Spaces; Springer: Berlin/Heidelberg, Germany, 2022; Volume 343.
15. Langdon, C.S.; Schweichhart, K. Data Spaces: First Applications in Mobility and Industry. In Designing Data Spaces; Springer: Berlin/Heidelberg, Germany, 2022; pp. 493–511.
16. Jarke, M.; Otto, B.; Ram, S. Data sovereignty and data space ecosystems. Bus. Inf. Syst. Eng. 2019, 61, 549–550.
17. Bader, S.; Pullmann, J.; Mader, C.; Tramp, S.; Quix, C.; Müller, A.W.; Akyürek, H.; Böckmann, M.; Imbusch, B.T.; Lipp, J.; et al. The international data spaces information model–an ontology for sovereign exchange of digital content. In International Semantic Web Conference; Springer International Publishing: Cham, Switzerland, 2020.
18. Mertens, C.; Alonso, J.; Lázaro, O.; Palansuriya, C.; Böge, G.; Nizamis, A.; Rousopoulou, V.; Ioannidis, D.; Tzovaras, D.; Touma, R.; et al. A framework for big data sovereignty: The European industrial data space (EIDS). In Data Spaces: Design, Deployment and Future Directions; Springer International Publishing: Cham, Switzerland, 2022; pp. 201–226.
19. Usage Control in the International Data Spaces. Available online: https://internationaldataspaces.org/wp-content/uploads/dlm_uploads/IDSA-Position-Paper-Usage-Control-in-the-IDS-V3..pdf (accessed on 3 June 2025).
20. IDS Test Bed. Available online: https://github.com/International-Data-Spaces-Association/IDS-testbed (accessed on 20 May 2025).
21. Data Connector Report. Available online: https://internationaldataspaces.org/wp-content/uploads/dlm_uploads/IDSA-Data-Connector-Report-84-No-16-September-2024-1.pdf (accessed on 20 May 2025).