Article

Sharing Research Data in Collaborative Material Science and Engineering Projects

1 Institute of Mechatronic Engineering (IMD), Technische Universität Dresden, 01069 Dresden, Germany
2 Fraunhofer Institute for Machine Tools and Forming Technology (IWU), 01187 Dresden, Germany
* Author to whom correspondence should be addressed.
Data 2025, 10(4), 53; https://doi.org/10.3390/data10040053
Submission received: 21 February 2025 / Revised: 28 March 2025 / Accepted: 10 April 2025 / Published: 15 April 2025
(This article belongs to the Section Information Systems and Data Management)

Abstract

The objective of this paper was to examine the potential for the automated release of research data within the context of material science consortium projects, with adherence to the stipulated rules and contractual agreements, while also considering all relevant framework conditions, including those pertaining to protection and confidentiality. The investigation further aimed to explore the utilisation of the release process as a means of ensuring the quality of research data, employing an integrated review procedure. The study commenced with an examination of the regulations governing the sharing and reusing of research data, and the associated benefits. The first phase of the study involved an evaluation of current release processes in common research data infrastructures. This was followed by the development of a methodological approach to the release of research data according to the needs of researchers in collaborative projects, covering aspects such as the automation of the release process, quality queries, and the reusability of data. The implementation of the main functions of this methodological approach was then undertaken in Kadi4Mat, an open-source research data infrastructure originally developed in the context of materials science. This implementation took the form of a prototypical and modular plugin.

1. Introduction

1.1. Motivation

In the domain of material and engineering sciences, collaborative projects frequently encompass multiple partners from academic and industrial sectors, working in concert to accomplish shared research objectives. In order to meet the objectives of the project, it is usually necessary for the participating partners to engage in continuous exchange of data, since access to data from other project members is often crucial. However, this exchange is not without its challenges.
A significant issue that arises from the need to protect research data generated under special conditions, such as those requiring intellectual property safeguards, is the challenge of ensuring compliance with collaboration terms. This is particularly salient in applied research projects with a private sector background, where multifarious responsibilities and legal requirements render compliance with collaboration terms both necessary and difficult. Consequently, the process of scientific and administrative release of data, encompassing legal and qualitative aspects, assumes paramount importance for researchers, who frequently operate within project environments characterised by multiple stakeholders.
The exchange of data in research projects involving industrial partners is a recurrent challenge in the domain of research data management. Manual, verbal agreements for data release are time-consuming and lack consistency, highlighting the clear need for improved solutions within the community. The (partially) automated implementation of data release processes in collaborative projects has the potential to enhance quality and reduce time expenditure in comparison with existing practices.
Notwithstanding the challenges encountered, the data release process remains pivotal to the success of collaborative endeavours. It facilitates the aggregation of data from multiple sources, thereby promoting cross-domain knowledge generation. Corresponding challenges are encountered in the context of data publication, where the release process is imperative to ensure data accessibility and usability across different disciplines.
In summary, the processes of data release and publication are highly relevant in joint material science and engineering research projects. To better understand these processes, the typical workflow and main challenges faced by researchers in sharing data within collaborative research environments will be described in the following sections.
Data sharing can be categorised into several forms; see [1]:
  • Sharing on request: Data are made available upon request.
  • Clique sharing: Data are made available after individual agreements are made between colleagues or project partners.
  • Controlled-access sharing: Data are made available to the public after certain criteria have been met, such as verification of terms of use or ethical commitments.
  • Public data sharing: Data are made available and accessible to the public with very few barriers through a public repository. This, in combination with methods to make the data more findable, interoperable and reusable, is often referred to as ‘Open Data’.
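To make this taxonomy concrete in code, the four sharing categories could be modelled as a small enumeration. This is a minimal sketch; the numeric barrier ranking shown here is an illustrative assumption for demonstration purposes, not a set of values taken from Table 1.

```python
from enum import Enum


class SharingMode(Enum):
    """Illustrative model of the four data-sharing categories from [1]."""
    ON_REQUEST = "sharing on request"
    CLIQUE = "clique sharing"
    CONTROLLED_ACCESS = "controlled-access sharing"
    PUBLIC = "public data sharing"


# Hypothetical barrier ranking from the data user's perspective
# (lower value = fewer barriers to access); purely illustrative.
USER_BARRIERS = {
    SharingMode.ON_REQUEST: 3,
    SharingMode.CLIQUE: 2,
    SharingMode.CONTROLLED_ACCESS: 1,
    SharingMode.PUBLIC: 0,
}
```

Such an enumeration could, for instance, serve as the key type in a release template that maps each sharing mode to its access conditions.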
Table 1 summarises the different sharing options, including the number of barriers from the perspectives of both creators and users of the release workflow, as well as the persons who can access the shared data.
In the context of collaborative research endeavours, clique sharing has emerged as the predominant release method. Typically, cooperation agreements are established between all project partners prior to the initiation of the project, and confidentiality agreements, such as non-disclosure agreements (NDAs), are customarily executed when industrial partners are involved. The utilisation of alternative methods is frequently precluded by the associated costs of enabling data sharing on request, as well as the requirement for controlled access. Furthermore, public sharing and controlled-access sharing are generally discouraged due to concerns regarding confidentiality and the potential for incomplete or untested data. Consequently, the following study will focus exclusively on the challenges and requirements of the data sharing process in the context of clique sharing among project partners, rather than the sharing of Open Data. Nevertheless, alternative release methods, such as data sharing on request or public sharing, will be considered in the proposed workflow concept in order to ensure that the conceptual solution is not limited in scope.

1.2. Challenges and Requirements

In the following, the typical challenges and the resulting requirements in releasing data via clique sharing will be presented. These challenges and requirements are based on empirical values derived from the authors’ participation in collaborative joint research projects in the field of mechanical engineering.
  • Challenge: Release workflow. Researchers frequently lack awareness of the correct steps required for releasing data and, as a consequence, complete only parts of the workflow or execute the steps in an incorrect order.
    ⟶ Requirement: Methodological procedure and user-friendliness. The release process for research data has to include a generally valid, clearly defined procedure and be as independent as possible of specific applications so that it can be used in a variety of ways. For optimal acceptance, the approval process must be user-friendly, i.e., the process should require minimal user intervention and the user interfaces should be intuitive.
  • Challenge: Data documentation. Documentation for data or data sets is necessary to enable other researchers to understand and reuse the data.
    ⟶ Requirement: Adjusting metadata. Metadata can be defined as information that describes and documents data. It is therefore essential that metadata can be appended to data and adapted in a structured way throughout the release process.
  • Challenge: Protection of personal data. The protection of personal data is governed by national legislation and confidentiality agreements. In some countries, any information relating to an identified or identifiable natural person must be anonymised (removed) or pseudonymised (replaced). An individual is deemed identifiable if they can be identified either directly or indirectly through the utilisation of additional data and attributes.
    ⟶ Requirement: Automated anonymisation and pseudonymisation. These manual processes should be automated to make them less time-consuming and error-prone.
  • Challenge: Protection of sensitive data. To prevent the disclosure of sensitive data or metadata, they have to be transformed manually by the creator before being shared with project partners. Such transformation could encompass the exclusion of files, specific rows or columns, or the obfuscation of text. The extent of the data transformation is often defined in confidentiality agreements.
    ⟶ Requirement: Automated transformation of data. This manual process should be automated to make it less time-consuming and error-prone.
  • Challenge: Access permissions. The collection of data from multiple project partners is a prerequisite for its subsequent utilisation. However, this process cannot be undertaken without due consideration, as sharing the data may no longer allow for the tracking of copies made or the tracing of the dissemination of the data. The management and monitoring of access permission rights can present a significant challenge for researchers, as these rights must be manually checked and are often agreed verbally in advance, if at all.
    ⟶ Requirement: Role rights management. Different roles with suitable access rights to the research data have to be considered in fine-grained role rights management. Fundamental roles for administration (the admin), editing (the editor), reading (the reader), and approval from the perspectives of data quality and data security (the data quality officer and the data security officer) must be established. The role management has to be further designed in such a way that typical roles in collaborative projects can be implemented, including roles like collaborator, researcher, and project leader. A fine-grained, graded permissions system for accessing data must be provided in order to accommodate the wide range of intellectual property rights occurring in collaborative engineering projects. The role rights management must also be compliant with laws and contracts, i.e., protection and confidentiality agreements have to be adhered to.
  • Challenge: Authorisations. The data set that has been produced must be approved by all individuals who are responsible for it, including the data authors and the project partners involved. Approval should encompass both the completeness and the quality of the data and metadata, with this approval granted by a designated data quality officer. Furthermore, access rights and the protection of personal data must be approved by a designated data security officer, who is bound by confidentiality agreements. In addition, the dissemination of data within the scope of the project or the publication of results is contingent upon the procurement of consent from both the relevant supervisors and project partners. This is typically achieved through manual requests made via email or telephone. This approach underscores the complexity of ensuring compliance with collaboration agreements, particularly in light of the diverse areas of responsibility involved.
    ⟶ Requirement: Automated requests. An automated request for permission rights and a query regarding compliance with data protection should be sent to all responsible parties each time data are shared with additional individuals. In addition, granted access permissions should be automatically compared with the confidentiality agreements. An automatic request concerning data quality should be performed at least once after data are shared with the project group or parts of it. The dual control principle, i.e., the approval of at least two individuals, has to be applied whenever sensitive data are to be released.
  • Challenge: Simultaneous working. Often several researchers, even from different institutions, need to work on the same data at the same time. Currently, this is often achieved by sharing new versions of the data as an additional file.
    ⟶ Requirement: File version management. As a solution to this challenge, a data repository should have a file version management that makes the storage of new versions as additional files redundant.
  • Challenge: Reading regulations. Collaborative projects very often include confidentiality agreements stipulating that the research data collected in the project are subject to certain release restrictions, especially in collaborative projects with industry participation. For example, research data may be subject to a blocking (embargo) period, i.e., the data may not be released until a certain period of time has elapsed. Analogously, research data may also be subject to a deletion period, i.e., the data must be deleted after a certain period of time (e.g., immediately after the end of the project). Currently, the confidentiality agreements must be read and checked manually by each researcher who wants to share data, and the time periods must be checked regularly and manually.
    ⟶ Requirement: Automated regulation checks. Blocking and embargo periods should be checked automatically and access permissions should be automatically adjusted afterwards.
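As a minimal sketch of the automated anonymisation and pseudonymisation requirement above, the following function replaces personal fields in a metadata record with stable pseudonyms, so that the same person always maps to the same token. The field names, the salt parameter, and the hashing scheme are illustrative assumptions, not part of the requirements themselves.

```python
import hashlib

# Metadata fields assumed to contain personal data (illustrative choice).
PERSONAL_FIELDS = {"operator", "author", "contact_email"}


def pseudonymise(metadata: dict, salt: str = "project-secret") -> dict:
    """Return a copy of the metadata in which personal fields are replaced
    by stable pseudonyms; non-personal fields pass through unchanged."""
    result = {}
    for key, value in metadata.items():
        if key in PERSONAL_FIELDS:
            # Salted hash yields a deterministic, non-reversible token.
            token = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:8]
            result[key] = f"person-{token}"
        else:
            result[key] = value
    return result
```

Because the pseudonyms are deterministic for a given salt, records by the same operator remain linkable within the project, while the name itself is not disclosed; full anonymisation would instead delete the fields outright.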

1.3. Aim, Main Research Question and Research Hypothesis

The objective of this contribution is twofold: firstly, to automate and digitise research data release processes as much as possible; and secondly, to enhance the efficiency and efficacy of researchers’ working methods. This is to be achieved in accordance with data privacy laws, as well as the protection and confidentiality contracts of collaborative material science and engineering projects. In accordance with this aim, the primary research question is to what extent it is possible to establish an automated data release workflow that satisfies the aforementioned requirements. The underlying research hypothesis posits that such a workflow can take into account all the previously defined requirements, while its implementation requires no special hardware or software.

1.4. Contribution

The authors propose a solution concept for the semi-automated process of releasing research data by combining state-of-the-art release methods with existing confidentiality agreements. The proposed concept is tailored to the specific requirements of material and engineering research projects and the anonymisation process of this concept is implemented in an open-source research data management tool to prove low-requirement realisability of this concept.

1.5. Paper Organisation

The remainder of this paper is structured as follows. Section 2 provides an overview of existing work related to the regulations for publishing and sharing data. A description of the concept is presented in Section 3. Section 4 demonstrates the feasibility of the concept. An exemplary use-case to measure the benefits of the concept is provided in Section 5. Discussions based on the results constitute the content of Section 6. Section 7 summarises this contribution and draws perspectives for future work.

2. State of the Art and Related Work

In the following paragraphs, the state of the art in relation to existing release and approval workflows, research data infrastructures for collaborative research projects, and regulations for research data will be examined.

2.1. Release and Approval Workflows

Although many publications and guidelines for publishing data exist [2,3], only a few consider the data release process with regard to collaborative projects. With software tools like AristaFlow [4], for example, partially automated, user-defined approval processes can be created and also guidelines for approval processes exist [5,6]. However, none of the research data infrastructures mentioned in Section 2.2 have an integrated release workflow.

2.2. Research Data Infrastructures

A plethora of research data infrastructures (RDIs) and electronic lab notebooks (ELNs) have been developed for collaborative projects, including Kadi4Mat [7] (Karlsruhe Digital Infrastructure for Materials Science), eLabFTW [8,9], MS SharePoint [10], RSpace [11], SciNote [12], Labfolder [13], and numerous others. In addition, evaluations of the impact of ELNs on the daily work of researchers [14,15] and comparisons of specific ELN tools [16] have been conducted. Of particular note are eLabFTW, RSpace and Kadi4Mat, which are open-source software tools that include numerous functions for data management and collaboration. In addition, Kadi4Mat’s basic functionality can be extended by plugins. MS SharePoint is used at many German universities (e.g., Dresden [17], Hamburg [18], Heidelberg [19], Göttingen [20], and Tübingen [21]) as a platform for collaborative work. The software contains a number of features that enable data management in collaborative projects (e.g., web-based access, fine-grained role rights management, and creation of reproducible workflows) and can thus be used as an electronic laboratory notebook. The functionality can also be extended by plugins (‘SharePoint apps’). Despite the large number of providers, none of the aforementioned RDIs have an integrated data release workflow that is adapted to confidentiality agreements and aggregates data and metadata automatically. In addition, of the aforementioned RDIs, only MS SharePoint and Labfolder (for an additional fee [22]) offer solutions for automatic queries for content and security checks on data. Conversely, most RDIs provide various user roles for access rights, such as reading, editing, and managing permissions, and they can collect users in groups to simplify role rights management. Furthermore, RDIs commonly offer features for research data management, including a comment function, collaborative editing, an integrated publication platform and the storage of metadata.
These features enable data to be released within the repository or into the public domain in accordance with the FAIR principles [23,24], which aim to improve findability, accessibility, interoperability, and reusability of data.

2.3. Regulations for Research Data

Data sharing is an essential building block in collaborative research projects. It is therefore important to discuss the legal aspects of data sharing in order to clarify who can share data, under what conditions, and how data that have been shared can be further modified or copied by other researchers. To this end, the rules governing intellectual property in general and research data in particular are outlined below.
It should be noted that the authors’ main research focus is on the domain of material science and engineering, with a particular emphasis on collaborative research projects with German research organisations and industrial partners. Consequently, the discussion of legal aspects pertaining to data sharing is constrained to the specific context of Germany. However, given Germany’s membership in the European Union (EU), it is notable that many of the principles could be applied directly or in a slightly modified form to research projects in other European countries. Additionally, it should be noted that none of the authors have a legal background. Consequently, the authors cannot guarantee the correctness of the interpretation of the laws. Instead, readers are strongly advised to check the legal requirements for their specific use cases and country before applying the proposed solution.
The German law ‘Urheberrechtsgesetz’ [25] protects personal intellectual creations, including written works or computer programs, provided there is a sufficient level of creativity [26]. The law takes effect automatically, with no application or registration required, unlike patents [26]. In the following paragraphs, we refer to paragraph X of the law as ‘UrhG § X’.

2.3.1. Regulations for Sharing Research Data

The owner of a data set of measurement data is, in accordance with German law, the person who spent time, money or other investments on the procurement, verification or presentation of the data set (UrhG § 87a). The database producer has exclusive rights to distribute and publish the data set (UrhG § 87b), with the exception that if the data and metadata result from actions dependent on instructions, the employer or principal is generally granted rights of use (UrhG § 43). Metadata, i.e., information describing research data, on the other hand, cannot be protected, as they provide only a brief description of the data, unless they contain, for example, larger text passages or photos of experimental setups (UrhG § 87a). It should also be noted that data often have to be made available on a contractual basis, for example, for partners in collaborative research projects. These contracts can also include terms of use and restrictions on the use of the data, as well as confidentiality agreements that overrule the copyright [27]. A further regulation for sharing data pertains to personal data, i.e., “any information relating to an identified or identifiable natural person […]”, namely the General Data Protection Regulation (GDPR) in European legislation. A natural person is considered identifiable if they can be recognised either directly or indirectly through the utilisation of additional data and attributes (Art. 4 § 1 GDPR [28]). The publishing of personal data, which is often not necessary in the domain of engineering or material science, necessitates legally effective consent according to national or regional laws (Art. 6 § 1 GDPR [28]). Like sensitive data, personal data can be protected by removal (anonymisation) or by pseudonymisation.

2.3.2. Regulations for Reusing Research Data

Regardless of the agreements, German law requires that the source of the data used, reproduced or distributed must be indicated (UrhG § 63). In addition, any agreement terms given by the owner must be observed.

2.4. Novelty/Degree of Innovation

In summary, the practice of sharing data is already commonplace among researchers. The graduated assignment of access rights is not a novel concept. However, the conceptualisation of a semi-automated solution tailored to the specific requirements of material science and engineering collaborative projects represents a significant innovation. The primary innovation resides in the integration of existing components, such as established release workflows and regulatory frameworks. To the best of the authors’ knowledge, a concept for semi-automated releases of research data that takes into account cooperation agreements and assured data quality does not yet exist or, if it does, is not implemented in a research data management (RDM) tool.

3. Conception of a Research Data Release Process

In the following, the concept of a research data release process will be presented. This will include a detailed description of the workflow.
The conceptual solution provides a predetermined data release process, which is integrated into the data flow. Releases are executed in stages, contingent upon the organisational and contractual framework of the collaborative project.

3.1. Requirements

The following conditions must be met for this process to be implemented:
  • The release of data is conducted in a staged manner at predetermined release levels or security classes:
    (a) ‘Private’: data are released only to the researchers who generated the data.
    (b) ‘Internal’: data are released only to colleagues and members of the same workgroup.
    (c) ‘Project’: data are released to all or selected project partners in the joint project.
    (d) ‘World’: data are released worldwide and are publicly accessible as open data.
  • All release levels are provided with appropriate rights, taking into account the organisational and the project structure.
  • Information related to release and access permissions is available for all data sets and stored as additional metadata in the dataset documentation.
  • A virtual contract displays all arrangements made in the confidentiality agreement as well as responsibilities for data security and data quality.
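The staged release levels and the associated role rights management can be sketched as an ordered enumeration with a simple access check. The role names follow the requirements listed above, but the concrete role-to-clearance mapping is an illustrative assumption, not part of the concept's specification.

```python
from enum import IntEnum


class ReleaseLevel(IntEnum):
    """Staged release levels (security classes) from Section 3.1,
    ordered from most restricted to most widely shared."""
    PRIVATE = 0   # only the researchers who generated the data
    INTERNAL = 1  # colleagues and members of the same workgroup
    PROJECT = 2   # all or selected project partners
    WORLD = 3     # publicly accessible as open data


# Illustrative mapping of roles to the audience tier they belong to.
ROLE_CLEARANCE = {
    "data_creator": ReleaseLevel.PRIVATE,
    "workgroup_member": ReleaseLevel.INTERNAL,
    "project_partner": ReleaseLevel.PROJECT,
    "public": ReleaseLevel.WORLD,
}


def may_read(role: str, data_level: ReleaseLevel) -> bool:
    """A role may read a record only if the record has been released at
    least as far out as the audience tier that role belongs to."""
    return data_level >= ROLE_CLEARANCE[role]
```

Under this model, a record released at level ‘Project’ is readable by workgroup members and project partners, but not by the public, matching the staged semantics described above.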

3.2. Workflow

The concept of an automated release process is illustrated in Figure 1 as a data flow chart. Orange boxes represent individuals or groups of individuals, brown boxes denote data storage, and grey objects signify either actions or decisions. Green boxes represent the contractual agreements that are formed within the context of collaborative projects.
The workflow for releasing research data is as follows:
  • Creation of release templates: Firstly, the creator of a data set (referred to as the ‘data creator’, indicated by the orange box positioned at the top left in Figure 1) is able to create a release template when storing the data in a repository. Within this template, the creator specifies who should have access to the data, who holds editing rights, and which data users at each release level are permitted to see. The release levels are categorised as ‘Private’, ‘Internal’, ‘Project’ and ‘World’, each of which imposes distinct requirements on data protection and the confidentiality of sensitive data. The process of determining which individuals or groups are granted which rights, such as access to the data, editing rights, or the right to assign rights, is referred to as role rights management. It is also recommended that the data creator be able to assign a data record to a release template, including role rights management, created in advance of the project and defined in the virtual non-disclosure contract, thus avoiding the need for an additional work step.
  • Anonymisation and transformation of data on internal level: The creation of the template is followed, if necessary, by the initial anonymisation and transformation of data for the data creator’s internal working group. Sensitive data or personal data that could allow conclusions to be drawn about working hours, for example, can be redacted. Following the completion of the transformation process, designated members of the workgroup are granted access to the data.
  • Anonymisation and transformation of data on project level: The process of anonymisation and data transformation is initiated each time a new release state is reached until the highest release state, as defined in the contract or the release template, is arrived at. This denotes that the data are accessible to a broader range of individuals, accompanied by an augmented level of anonymisation and confidentiality for the additional individuals, in this instance encompassing all or selected project members and hierarchically superior persons within the company of the data creator.
  • Verification of access rights: Following an automated comparison of the released data with the specifications of the non-disclosure contract, and an additional manual verification of access rights by the group leader following an automated notification, access to the transformed data is granted to selected members of the project group. If editing rights for data are to be conferred upon specific members of the project team, a file version management system will ensure comprehensive oversight of all alterations made to the data as well as the metadata. The necessity for this additional verification can be revoked in the contract. If either the virtual contract, which reflects the confidentiality agreements, or the group leader refuses this release, the data record is stored in secure, internal workgroup data storage.
  • Review of project’s internal deletion period: Following the approval of the designated members of the project group, the software will automatically initiate a verification process to ascertain whether an internal deletion deadline for the data has been reached. This verification is conducted if such a deadline has been stipulated within the release template or the virtual non-disclosure contract. If the stipulated deletion deadline has been reached, members of the project group will no longer have access to the data, which will be stored in secure, internal working group storage.
  • Quality check by data quality officer: The data quality officer, who has been appointed in accordance with the virtual non-disclosure contract, is automatically requested to confirm the completeness and consistency of the data and the completeness and accuracy of the metadata. In the event of deficiencies in the data quality or an absence of complete metadata, the project’s internal data quality officer is authorised to request a revision of the data set from the data creator. This process is intended to guarantee the integrity of the data and to ensure comprehensive documentation within the project.
  • Review of confidentiality agreement: In instances where a request for rework and a release at release level ‘World’ are not forthcoming, the automated query of the project’s internal data security officer specified in the virtual non-disclosure contract is initiated. This query also occurs in instances of requests from external scientists or project members who did not originally receive approval. At this point, the conceptual solution is expanded from the clique sharing method by the addition of sharing on request. If the data security officer does not grant approval, the data are stored in a secure internal project memory. The data security officer is assisted by the virtual non-disclosure contract, which assigns the data to the specifications of the confidentiality agreement.
  • Review of blocking period: If the data are released by the data security officer, an automated check of assigned blocking periods is performed, provided that these have been set in the release template or in the virtual contract.
  • Anonymisation and transformation of data on worldwide level: If the specified blocking period has elapsed or has not been established, the final, more stringent anonymisation and transformation of the data record will be executed in accordance with the parameters delineated in the ‘World’ release level from the release template. Subsequent to the completion of this process, the data will be disseminated to individuals who have submitted external requests for data records or, if stipulated by the release template, made publicly accessible. It is imperative that the data undergo transformation and anonymisation for external requests in the same manner as required for worldwide access, as once the data are accessible to an individual external to the project, there is no longer any control over the usage of these data.
  • Worldwide access via provider: As the publication process for open data is already sufficiently implemented in many RDM tools by linking to a publication platform, such as Zenodo, and taking into account the FAIR data principles (Findable, Accessible, Interoperable and Reusable), this process is not addressed further in this methodological approach to research data release.
  • Review of deletion period:
    The software will automatically initiate a verification process to ascertain whether a stipulated deletion deadline for public access to the data has been reached. This verification is conducted if such a stipulated deadline has been included within the release template or the virtual non-disclosure contract. If the stipulated deletion deadline is reached, members of the public will no longer be permitted to access the data, which will be stored in a secure internal workgroup repository for use by project members only.
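The automated blocking- and deletion-period checks in the workflow above can be sketched as two small predicates. The function and parameter names are illustrative assumptions; in the concept, the underlying dates come from the release template or the virtual non-disclosure contract.

```python
from datetime import date
from typing import Optional


def blocking_period_elapsed(release_date: Optional[date], today: date) -> bool:
    """True if no blocking period was set, or the agreed release date has
    been reached, so the data may proceed to the next release stage."""
    return release_date is None or today >= release_date


def deletion_due(delete_after: Optional[date], today: date) -> bool:
    """True if a deletion deadline was set and has been reached, i.e.,
    access must be revoked and the data moved to secure internal storage."""
    return delete_after is not None and today >= delete_after
```

In an implementation, these checks would run periodically (e.g., as a scheduled task), with access permissions adjusted automatically whenever a predicate changes its result.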
In conclusion, the data release concept proposed within this study fulfils all the aforementioned requirements by incorporating the automation of requests and data transformation processes. Consequently, it effectively addresses the researchers’ requirements and challenges outlined in Section 1.2.

4. Implementation Details

In this section, the low-requirement realisability of the proposed concept is demonstrated through the implementation of the transformation and aggregation of metadata in an established data repository and electronic lab notebook (ELN). In more detail, the automated anonymisation and transformation of metadata and the creation of release templates, which determine the extent of these processes, have been implemented in the open-source software Kadi4Mat version 1.4.0 [7] as a plugin.
Kadi4Mat is a generic research data infrastructure that combines the ability to store, structure and exchange research data and metadata with the capacity to analyse, visualise and edit data, with the focus on data that have not been published yet. The source code of the created release plugin is released under an open-source licence [29] and an active version is released in a public Git repository [30].
The typical user interface of Kadi4Mat with records is shown in Figure 2. It is important to note that records represent any kind of digital or digitised object, including, but not limited to, any data, samples, experimental devices, or processing steps. Each record comprises metadata, and these can be grouped into collections or linked to other records.

4.1. Requirements of the Implementation

One of the main requirements for the new release plugin was that it should not affect the source code of the underlying framework. This is one reason why the implementation was limited to the main feature of the concept, namely the automated anonymisation and transformation of data and metadata. Nevertheless, the specification of the new release plugin required some targeted modifications to the framework's source code, which raised concerns about the stability of the production environment. A detailed analysis was therefore carried out in collaboration with the maintainers of Kadi4Mat: the sections of code that needed to be changed were precisely identified, and the extent of these changes was carefully limited in order to minimise any potential impact on existing functionality.

4.2. Software Architecture

The implementation was achieved by developing a novel plugin for Kadi4Mat, which already contains a number of other plugins, including the Zenodo plugin for obtaining a Digital Object Identifier (DOI) and subsequently publishing data on Zenodo. Kadi4Mat works as a modular system, integrating plugins via Pluggy 1.5.0, a Python-based plugin management system. Pluggy enables seamless interaction between software components by providing hooks that extend functionality without modifying the core code. Within this framework, requests related to data transformation are routed through Kadi Core, which delegates execution to the plugin’s blueprints. All plugin-related data, including templates and operations, are stored in PostgreSQL tables alongside other Kadi4Mat entities. During editing or viewing operations, the core system communicates with the plugin via hooks, ensuring smooth integration. The plugin extends Kadi4Mat by introducing anonymisation features while using Redis for caching, Celery for background tasks, and Elasticsearch for search operations. It seamlessly integrates with the system’s database, application programming interface (API), and user interface (UI), demonstrating the flexibility of Kadi4Mat’s plugin architecture. This architecture allows the development of feature-rich extensions without compromising modularity. The plugin workflow adheres to the following steps:
  • First, the user sends a request to view or manipulate the stored data table to the Kadi4Mat core web framework.
  • The framework identifies the command registered by the plugin and delegates the execution to the plugin and its appropriate logic (e.g., anonymisation).
  • The subsequent steps follow the big picture of the plugin architecture in Figure 3. The plugin processes the data according to the command using Kadi4Mat’s shared infrastructure (SQLAlchemy, Redis, and Celery) via hooks and returns the results to the web framework application.
  • The user further interacts with the anonymisation user interface of the plugin created with Vue.js.
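The hook-based delegation described above can be sketched directly with Pluggy. The hook name `transform_metadata` and the rule semantics below are illustrative assumptions for this sketch, not Kadi4Mat's actual hook specification:

```python
import pluggy

hookspec = pluggy.HookspecMarker("kadi")
hookimpl = pluggy.HookimplMarker("kadi")

class TransformSpec:
    """Hook specification the core framework would expose."""
    @hookspec
    def transform_metadata(self, metadata, template):
        """Transform record metadata according to a release template."""

class ReleasePlugin:
    """Plugin implementation registered with the core."""
    @hookimpl
    def transform_metadata(self, metadata, template):
        out = {}
        for key, value in metadata.items():
            rule = template.get(key)
            if rule == "hide":
                continue  # drop the field entirely
            out[key] = "ANONYMISED" if rule == "replace" else value
        return out

pm = pluggy.PluginManager("kadi")
pm.add_hookspecs(TransformSpec)
pm.register(ReleasePlugin())

# The core routes a transformation request through the hook; results come
# back as a list, with one entry per registered plugin.
results = pm.hook.transform_metadata(
    metadata={"operator": "Jane Doe", "material": "CFRP"},
    template={"operator": "hide"},
)
print(results)  # [{'material': 'CFRP'}]
```

Because the core only calls the hook and never imports the plugin directly, additional release features can be added or removed without touching the framework's source code, which matches the requirement stated in Section 4.1.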

4.3. User Interface Overview and Process Workflow

Figure 4 illustrates the process of defining the anonymisation and aggregation of metadata. At the top of the interface, the Template section displays a JSON structure containing rules for anonymising or transforming various metadata fields. Below this section is the ‘Add Field’ block, where users specify the field name (‘Field Name’) and select the type of processing (‘Field Type’), for example, hiding (‘Hide’) or replacing (‘Replace’). Clicking the ‘Add Field’ button appends the corresponding rule to the JSON structure.
Once the rules are configured, the system applies them and updates the metadata upon activation of the ‘Update Preview’ button. As shown in Figure 5 and Figure 6, the ‘Original Resource’ and ‘Anonymized Resource’ are distinct: specific fields are concealed or substituted in accordance with the stipulated rules. This mechanism functions as a ‘filter’ by sequentially processing each indicated field and either granting access to selected data or substituting them if the user does not have the requisite permissions.
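The sequential ‘filter’ behaviour can be approximated as follows. The concrete JSON schema (`rules`, `field`, `type`, `value`) and the example field names are simplified assumptions for illustration, not the plugin's exact template format:

```python
import json

# Hypothetical release template, mirroring the JSON structure shown in
# Figure 4: each entry maps a metadata field to a processing rule.
template = json.loads("""
{
  "rules": [
    {"field": "operator_name", "type": "hide"},
    {"field": "machine_serial", "type": "replace", "value": "REDACTED"}
  ]
}
""")

def apply_template(metadata, template):
    """Sequentially apply each rule, hiding or replacing the named fields."""
    result = dict(metadata)
    for rule in template["rules"]:
        if rule["field"] not in result:
            continue
        if rule["type"] == "hide":
            del result[rule["field"]]
        elif rule["type"] == "replace":
            result[rule["field"]] = rule.get("value", "ANONYMISED")
    return result

original = {"operator_name": "J. Doe", "machine_serial": "XJ-42", "cure_temp_C": 180}
print(apply_template(original, template))
# {'machine_serial': 'REDACTED', 'cure_temp_C': 180}
```

Fields without a rule pass through unchanged, so a template only needs to enumerate the sensitive fields for a given release level.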

5. Exemplary Use Case

Next, the potential savings of the proposed methodological approach will be estimated for an exemplary use case. A typical collaborative research project in the engineering domain has been chosen as the use case. The collaborative project consists of four academic partners and two industrial partners in the field of materials science and manufacturing and lasts three years. The overall objective of the project is to develop a novel carbon fibre-reinforced polymer (CFRP) composite. This will require a continuous exchange of experimental data on material properties, manufacturing parameters, and failure analysis between all project partners. The confidentiality agreement stipulates that only the companies’ raw data will remain private for a six-month embargo period, and that the remainder will be shared directly with the project partners after anonymisation of personal data (to comply with data protection laws). In addition, public release of the data will require anonymisation of proprietary manufacturing details and a 12-month embargo. Two datasets per organisation and per year will be generated. The following stages show how the use of the proposed solution approach could improve the project:
  • Stage: Educating data release management
    Because of the defined methodological approach, researchers do not need to learn about data release management. This approach has the potential to be very cost-effective in terms of time and can have a significant impact on the outcomes of such projects. According to the results of a survey, only 33% of researchers claim to be proficient in RDM [31]. The researchers surveyed were from the neurosciences, but it can be assumed that the figure would be similar for materials and engineering scientists, as RDM is often not part of their education. Consequently, for the exemplary project, there is a potential for four researchers to breach confidentiality agreements, personal data security, or protection of sensitive data, or to fail to share expected data.
  • Stage: Data creation and initial release (‘Private’ to ‘Internal’)
    An employee of one of the companies uploads manufacturing data to a repository and shares it only internally so that colleagues can further optimise the manufacturing parameters, but it remains proprietary. To achieve this, the employee selects a pre-configured GDPR-compliant sharing template that masks the time the employee spent working on the machine. All members of the company are immediately given access to the anonymised dataset. A manual review of the non-disclosure agreement to identify what needs to be anonymised and a manual anonymisation of the data are not required.
  • Stage: Project-level release (‘Internal’ to ‘Project’)
    After 6 months, the retention period for the companies’ raw data expires. The repository, which fully implements the concept, automatically further aggregates the dataset by anonymising the names of the performing persons and the machine serial numbers according to a pre-configured release template in line with the GDPR and the confidentiality agreement. In addition, the repository triggers a notification to the involved group leader to verify the data transfer. This process ensures compliance with the confidentiality agreement and is less error-prone than manually scheduling the end of the retention period. After verification, the manufacturing company’s employees can still edit the data, while all other project members have read-only permissions via role-based access. In addition, the project-wide data quality officer is automatically notified and asked to check the data for metadata completeness and data consistency. For non-manufacturing data, this process starts immediately after the data are released to the internal level. These automated requests are less prone to error.
  • Stage: Public release (‘Project’ to ‘World’)
    In the meantime, until the end of the 12-month blocking period for publishing the project data, the project-wide data security officer can check whether the data are suitable for publication. In this example, the data security officer requests that a particular polymer mixing ratio should not be made publicly available and asks the relevant data creator to modify the release template for the ‘World’ release level. At the end of the 12-month blocking period, the system flags the datasets that do not contain manufacturing details for public release and applies the final anonymisation of the specific mixing ratio to the corresponding dataset. Following this process, the updated dataset can be published on data repositories like Zenodo, assigned a DOI, and tagged with FAIR-compliant metadata.
Based on their experiences in such collaborative projects, the authors have estimated the time savings for each operation and for each dataset within the project term if the proposed solution concept is utilised. These estimates are given in Table 2. In total, the project can save approximately 13.5 h of administrative work. More importantly, the workflow is far less error-prone than a fully manual one, owing to its full compliance with blocking and embargo periods, its automated anonymisation process, and its standardised, intuitive workflow.
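The totals reported for this use case can be checked directly: each per-operation saving in Table 2 is the per-dataset saving multiplied by the number of affected datasets.

```python
# Per-operation time savings from Table 2:
# (savings per dataset in minutes, number of affected datasets)
operations = {
    "Educating release management": (20, 24),
    "Anonymising for level 'Internal'": (3, 36),
    "Anonymising for level 'Project'": (3, 36),
    "Requesting group leader": (1, 36),
    "Requesting data quality officer": (1, 36),
    "Requesting data security officer": (1, 36),
    "Anonymising for level 'World'": (3, 1),
}

total_min = sum(per * n for per, n in operations.values())
print(total_min)  # 807 minutes, i.e. roughly 13.5 hours
```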

6. Discussion

The proposed conceptual solution entails a predefined release process along the data flow, wherein releases are executed in stages in accordance with the organisational and contractual framework of collaborative research projects.
As illustrated in Table 3, all data release requirements delineated in Section 1.2 are incorporated within the proposed release workflow. Additionally, the exemplary use case in Section 5 demonstrates how the presented workflow could enforce contractual and regulatory requirements, integrate quality assurance through reviews within the release process, and reduce manual intervention to save time and prevent errors. These features align with the research question from Section 1.3.
Nevertheless, a user-friendly methodological procedure cannot be used as a substitute for the comprehensive documentation of data for researchers and the drawing up of non-disclosure agreements in advance of the project. The documentation is of particular importance and must be comprehensive to ensure the interoperability and reusability of data by other project members or the worldwide research community.
Furthermore, it was demonstrated that the components of the methodological concept implemented in the plugin to date (release template, anonymisation) impose no specific hardware or software requirements. The authors estimate that the remaining implementation would also not require any special hardware or software, because the outstanding features of the concept consist of extensions of existing Kadi4Mat features, user requests, date comparisons, or repetitions of the anonymisation and transformation processes. This ensures that all potential users have straightforward access.
By centring on projects with industrial partners, the outlined solution should be especially well suited to the domains of material development and engineering. However, it is largely independent of domain-specific issues and therefore readily transferable to other disciplines. The approach is innovative in its efforts to release the research data generated in a quality-assured and timely manner.
The implementation of the methodological approach was confined to the creation of release templates and the automated anonymisation and transformation of metadata. This limitation was due to the time constraints of the project and the fact that modifying roles and data requires alterations to the source code, a process that should be undertaken exclusively by the developer of the research data infrastructure. Consequently, at the present implementation stage, the remaining processes, excluding anonymisation, must be conducted manually.
It is evident that when the aforementioned benefits of this approach are taken into consideration, this work could serve as a foundational starting point for research data infrastructures to automate their release processes. Furthermore, it could assist research institutions in conducting a more thorough evaluation of researchers’ needs within the release process.

7. Conclusions

This contribution presents an organisational and strategic solution to standardise, automate, and simplify the release process of research data with different levels of confidentiality. The result is a methodological concept (procedure model, reference to guidelines and standards, suggestions for implementation) for the release of research data. The focus of the proposed concept is on linking existing methods, workflows, and rules to facilitate and standardise the process of sharing research data. In addition, a modular software solution was developed, in which the transformation and aggregation of metadata as part of the methodological concept was implemented as a prototype to prove low-requirement realisability.

7.1. Benefits/Added Value

  • The release process can be significantly accelerated by a defined, methodical approach with a high degree of automation. Such a process is much more robust against errors (e.g., unauthenticated releases and content errors) than a release process without a defined methodology or agreements made verbally. Moreover, this workflow enhances user-friendliness by providing researchers with clear guidance on subsequent actions concerning the management of research data.
  • With this concept and the plugin, anonymisation and pseudonymisation are possible, which is necessary to comply with laws on the protection of personal data, such as the General Data Protection Regulation (GDPR) in European legislation. It also allows the data creator to choose which data are shared and with whom, according to the German Copyright Act, though researchers can be required to share data according to confidentiality agreements.
  • The quality of the research data that are released can be assured by the release process itself.
  • After release, the research data are accessible and can be reused by other researchers in the project group. If the data are published by research groups worldwide in a citable form (e.g., via DOI), the data are in accordance with the FAIR Data principles.
  • The technical linking of contractual documents in collaborative projects with the IT functionality of rights management of data sets ensures compliance with contractual conditions for the handling of data. In particular, this promotes the development of trust between research partners.
  • The prototypical implementation of the transformation and aggregation of metadata in Kadi4Mat demonstrates the viability of the proposed procedure. In conjunction with the documentation of the release process, the method can also be transferred to other research data management frontends/electronic lab notebooks (ELNs).
  • As the proposed concept contains many automated steps, it can substantially relieve scientists of administrative tasks and make data management more user-friendly for researchers.
  • The proposed solution is largely generic in nature, with the potential to be utilised across disciplinary boundaries with minor adaptations.

7.2. Limitations

Since the concept has not been fully implemented in a research data repository and has yet to be applied in a real-world collaborative research project, the limitations are yet to be determined by user surveys, which should be the scope of future research.

7.3. Relevance

As the material science and engineering community often carries out research projects in cooperation with industry, the proposed concept is of high relevance to this community. Compliant data exchange is an essential prerequisite for such cooperation, and assuring it helps to build trust between the partners. The integration of a component of the proposed solution into the open-source software Kadi4Mat is of direct benefit to both the NFDI4Ing consortium and the Kadi4Mat community.

7.4. Future Work

Whilst the present study provides a foundational methodological concept and a prototype for facilitating the release of research data, there are several areas that require further exploration in order to enhance and extend its applicability.
In the current state of implementation, the arrangements from the non-disclosure agreement must be transferred manually to the release templates. Subsequent development should first aim to assist this transfer and, at a later stage, implement it fully automatically. Once the concept is fully implemented in a research data infrastructure, it should be evaluated through case studies or by applying it in real-world collaborative research projects and conducting user surveys, in order to provide evidence of its effectiveness, identify potential challenges, and refine the solution. A further key avenue for exploration is a deeper investigation into the standardisation and reuse aspects of data management practices. This should include an evaluation of the potential of existing vocabularies and taxonomies to enhance the efficiency of the data documentation step, and a determination of their capacity to facilitate consistent metadata generation and interoperability; a comprehensive understanding of the limitations and opportunities presented by these vocabularies will inform refinements in documentation workflows. A further automation of the process could be an automated data quality check by artificial intelligence and a chatbot for discussing the modifications with the researcher.
A further critical aspect involves examining the integration of the proposed infrastructure with existing services and platforms. Future research should assess whether repositories and other data-holding entities can effectively adopt this methodological concept. This involves identifying the necessary precautions, such as compliance with institutional policies and technical constraints, to ensure seamless adoption while maintaining data integrity and security. While implementing this concept, data repositories have to ensure user-friendliness throughout the entire workflow with an intuitive user interface and help services.
By addressing these questions, future efforts can build upon this work to create a more comprehensive, scalable, and adaptable solution for research data management and release.

Author Contributions

Conceptualization, K.F. and H.W.; Data curation, T.O. and K.F.; Formal analysis, K.F.; Funding acquisition, K.F., H.W. and S.I.; Investigation, T.O. and K.F.; Methodology, T.O. and K.F.; Project administration, T.O., K.F. and H.W.; Resources, T.O. and K.F.; Software, T.O. and K.F.; Supervision, H.W. and S.I.; Validation, T.O. and K.F.; Visualization, T.O.; Writing—original draft preparation, T.O. and K.F.; Writing—review and editing, T.O., K.F., H.W. and S.I.; All authors have read and agreed to the published version of the manuscript.

Funding

The German Federal Ministry for Economic Affairs and Climate Protection (BMWK) has provided funding for this research project based on decisions made by the German Bundestag within the joint research projects ‘SWaT’ (grant number 20M2112F) and ‘LaSt’ (grant number 20M2118F). Additionally, the BMWK has provided funding for the project through the funding guideline ‘Digitization of the vehicle manufacturers and supplier industry’ within the funding framework ‘Future investments in vehicle manufacturers and the supplier industry’, which is financed by the European Union and supervised by the project sponsor VDI Technologiezentrum GmbH within the joint research project ‘Werk 4.0’ (grant number 13IK022K). Furthermore, the authors would like to thank the Federal Government and the Heads of Government of the Länder, as well as the Joint Science Conference (GWK), and the German Research Foundation (DFG) for their funding and support within the framework of the NFDI4Ing consortium (grant number 442146713).

Data Availability Statement

The source code of the created release plugin is released under an open-source licence [29] and an active version is available in a public Git repository [30].

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Byrd, J.B.; Greene, A.C.; Prasad, D.V.; Jiang, X.; Greene, C.S. Responsible, practical genomic data sharing that accelerates research. Nat. Rev. Genet. 2020, 21, 615–629. [Google Scholar] [CrossRef] [PubMed]
  2. United States Geological Survey. The Data Release Process. Available online: https://www.usgs.gov/sciencebase-instructions-and-documentation/data-release-process (accessed on 20 February 2025).
  3. forschungsdaten.info. RDM Guidelines and Policies: What Are They Good for? Available online: https://forschungsdaten.info/praxis-kompakt/english-pages/guidelines-and-policies/ (accessed on 20 February 2025).
  4. AristaFlow GmbH. Process Template Editor. Available online: https://docs.aristaflow.com/14/process-template-editor/ (accessed on 20 February 2025).
  5. Algorri, M.; Cauchon, N.S.; Abernathy, M.J. Transitioning Chemistry, Manufacturing, and Controls Content with a Structured Data Management Solution: Streamlining Regulatory Submissions. J. Pharm. Sci. 2020, 109, 1427–1438. [Google Scholar] [CrossRef] [PubMed]
  6. Weinmeister, P. Building Effective Approval Processes. In Practical Salesforce Development Without Code; Apress: Berkeley, CA, USA, 2019; pp. 201–227. [Google Scholar] [CrossRef]
  7. Brandt, N.; Griem, L.; Herrmann, C.; Schoof, E.; Tosato, G.; Zhao, Y.; Zschumme, P.; Selzer, M. Kadi4Mat: A Research Data Infrastructure for Materials Science. Data Sci. J. 2021, 20, 8. [Google Scholar] [CrossRef]
  8. O’Brian, T. eLabFTW—A Free and Open Source Electronic Lab Notebook. Available online: https://www.elabftw.net/ (accessed on 20 February 2025).
  9. Carpi, N.; Minges, A.; Piel, M. eLabFTW: An open source laboratory notebook for research labs. J. Open Source Softw. 2017, 2, 146. [Google Scholar] [CrossRef]
  10. Microsoft Corporation. Microsoft SharePoint—Securely Collaborate, Sync, and Share Content. Available online: https://www.microsoft.com/en/microsoft-365/sharepoint/collaboration?market=af (accessed on 20 February 2025).
  11. Research Space. RSpace is an Open-Source Platform that Orchestrates Research Workflows into FAIR Data Management Ecosystems. Available online: https://www.researchspace.com/ (accessed on 20 February 2025).
  12. SciNote LLC. Electronic Lab Notebook & Lab Inventory Management Software. Available online: https://www.scinote.net/ (accessed on 20 February 2025).
  13. labforward GmbH. The Intuitive Way to Record Your Findings. Available online: https://labfolder.com/ (accessed on 20 February 2025).
  14. Kanza, S.; Willoughby, C.; Gibbins, N.; Whitby, R.; Frey, J.G.; Erjavec, J.; Zupančič, K.; Hren, M.; Kovač, K. Electronic lab notebooks: Can they replace paper? J. Cheminform. 2017, 9, 31. [Google Scholar] [CrossRef] [PubMed]
  15. Adam, B.; Lindstädt, B. ELN Guide: Electronic Laboratory Notebooks in the Context of Research Data Management and Good Research Practice—A Guide for the Life Sciences. Available online: https://repository.publisso.de/resource/frl:6425772 (accessed on 20 February 2025).
  16. Rubacha, M.; Rattan, A.K.; Hosselet, S.C. A Review of Electronic Laboratory Notebooks Available in the Market Today. J. Assoc. Lab. Autom. 2011, 16, 90–98. [Google Scholar] [CrossRef] [PubMed]
  17. Technische Universität Dresden. MS SharePoint. Available online: https://tu-dresden.de/zih/dienste/service-katalog/zusammenarbeiten-und-forschen/groupware/sharepoint?set_language=en (accessed on 21 March 2025).
  18. Reinberg, S. SharePoint Universität Hamburg. Available online: https://www.rrz.uni-hamburg.de/services/kollaboration/sharepoint.html (accessed on 21 March 2025).
  19. Universität Heidelberg. SharePoint—A Web-Based Collaboration Platform for Internal Cooperation. Available online: https://www.urz.uni-heidelberg.de/en/service-catalogue/collaboration-and-digital-teaching/sharepoint (accessed on 21 March 2025).
  20. Georg-August-Universität Göttingen. Instructions for Intranet Access (SharePoint). Available online: https://www.uni-goettingen.de/en/199549.html (accessed on 21 March 2025).
  21. Eberhard Karls Universität Tübingen. Relaunch der Website des E-Learning-Portals (ELP)—Neue Plattform und Neues Layout. Available online: https://uni-tuebingen.de/en/einrichtungen/zentrum-fuer-datenverarbeitung/aktuelles/archiv-aktuelles/archivfullview-aktuelles/article/relaunch-der-website-des-e-learning-portals-elp/ (accessed on 21 March 2025).
  22. labforward GmbH. Signature Workflows. Available online: https://labfolder.com/signature-workflows/ (accessed on 20 February 2025).
  23. Boeckhout, M.; Zielhuis, G.A.; Bredenoord, A.L. The FAIR guiding principles for data stewardship: Fair enough? Eur. J. Hum. Genet. 2018, 26, 931–936. [Google Scholar] [CrossRef] [PubMed]
  24. Dunning, A.; De Smaele, M.; Boehmer, J. Are the FAIR Data Principles fair? Int. J. Digit. Curation 2018, 12, 177–195. [Google Scholar] [CrossRef]
  25. Bundesamt für Justiz. Urheberrechtsgesetz vom 9. September 1965 (BGBl. I S. 1273), das Zuletzt Durch Artikel 25 des Gesetzes vom 23. Juni 2021 (BGBl. I S. 1858) Geändert Worden ist. Available online: https://www.gesetze-im-internet.de/urhg/index.html#BJNR012730965BJNE010306360 (accessed on 20 February 2025).
  26. Eckert, K. Handout: Rechtliche Fragen bei der Bereitstellung von Forschungsdaten. Available online: https://www.forschungsdaten.uni-mainz.de/files/2019/02/Handreichung-rechtliche-Fragen-Forschungsdatenbereitstellung.pdf (accessed on 20 February 2025).
  27. Putnings, M.; Neuroth, H.; Neumann, J. (Eds.) Praxishandbuch Forschungsdatenmanagement; De Gruyter: Berlin, Germany, 2021. [Google Scholar] [CrossRef]
  28. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Available online: https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed on 20 February 2025).
  29. Opatz, T.; Feldhoff, K.; Wiemer, H.; Ihlenfeldt, S. Kadi4Mat Data Release Process Plugin. Available online: https://zenodo.org/doi/10.5281/zenodo.15100062 (accessed on 28 March 2025).
  30. Opatz, T. kadi-Data-Release-Process-Plugin. Available online: https://gitlab.com/opatztim/kadi-data-release-process-plugin (accessed on 28 March 2025).
  31. Klingner, C.M.; Denker, M.; Grün, S.; Hanke, M.; Oeltze-Jafra, S.; Ohl, F.W.; Radny, J.; Rotter, S.; Scherberger, H.; Stein, A.; et al. Research Data Management and Data Sharing for Reproducible Research—Results of a Community Survey of the German National Research Data Infrastructure Initiative Neuroscience. eNeuro 2023, 10. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Data flow chart of the data release process. Arrows of the main strand of the data flow are highlighted.
Figure 2. User interface of the research data infrastructure Kadi4Mat.
Figure 3. Big picture of the plugin architecture used in Kadi4Mat upon invocation of the plugin.
Figure 4. Input screen of the plugin for metadata transformation.
Figure 5. Metadata before transformation.
Figure 6. Metadata after transformation.
Table 1. Sharing options with an estimation of the utilisation barriers and individuals with access to the shared data.
Release Method              Barriers for Creators   Barriers for Users   Release to
Sharing on request          very much               much                 inquiring individuals
Clique sharing              moderate                few                  authorised individuals
Controlled-access sharing   moderate                moderate             everybody
Public sharing              few                     few                  everybody
Table 2. Estimated time savings in the exemplary use case by utilising the data release concept during project time.
Operation                          Time Savings Per   Affected   Time Savings Per
                                   Dataset [min]      Datasets   Operation [min]
Educating release management       20                 24         480
Anonymising for level ‘Internal’   3                  36         108
Anonymising for level ‘Project’    3                  36         108
Requesting group leader            1                  36         36
Requesting data quality officer    1                  36         36
Requesting data security officer   1                  36         36
Anonymising for level ‘World’      3                  1          3
Total time savings                                               807
Table 3. Comparison of requirements and their implementation in the concept.
Id   Requirement                                      Implementation Method
1    Methodological procedure and user-friendliness   Created data flow chart with highly automated work steps, compare Section 3
2    Adjusting metadata                               Already included in most data infrastructures with permissions to edit data
3    Automated anonymisation and pseudonymisation     Implemented before each release to the next release level
4    Automated transformation of data                 Implemented before each release to the next release level
5    Role rights management                           Supplemented by a data quality officer and a data security officer
6    Automated requests                               Implemented for quality control and data security control requests
7    File version management                          Already part of most data infrastructures
8    Automated regulation checks                      Implemented for deletion and blocking periods for publication
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Opatz, T.; Feldhoff, K.; Wiemer, H.; Ihlenfeldt, S. Sharing Research Data in Collaborative Material Science and Engineering Projects. Data 2025, 10, 53. https://doi.org/10.3390/data10040053

