Brief Report

Docker Vectorization, a Cloud-Native Privacy Agent—The Analysis of Demand and Feasibility for Era of Developing Complexity of Privacy Management

1 Center for Data-Driven Science and Artificial Intelligence, Tohoku University, Aoba-ku, Sendai 980-8576, Japan
2 Information and Society Research Division, National Institute of Informatics, Chiyoda-ku, Tokyo 101-0003, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 3235; https://doi.org/10.3390/app13053235
Submission received: 15 January 2023 / Revised: 22 February 2023 / Accepted: 1 March 2023 / Published: 2 March 2023


Featured Application

A cloud-based, large-scale genome and health database that supports dynamic consent and digital protection of derived analysis results, securing the privacy of data subjects.

Abstract

Large amounts of biological information are now accumulating: genome sequences, high-precision biometric data stored in wearable terminals, and growing databases of health, medication, and medical information. The development of AI (artificial intelligence) and machine learning has increased analytical power dramatically, making it harder to guard against leakage of personal information and to assess privacy risks in advance. In this paper, we review these problems and propose a new method of managing private data. We examine the concepts of dynamic consent and privacy agents, which are attracting growing interest; however, efficient and broadly applicable technical means to support these concepts have yet to be established. We consider current cloud platforms an effective basis for such support. We designed an architecture named Docker Vectorization and carried out a comprehensive analysis of the demand for and feasibility of such a system in an era of increasing privacy management complexity. We believe we have provided sufficient grounds for why Docker Vectorization of privacy agents in the cloud can be a powerful tool for providing sustainable and scalable privacy controls for data subjects.

1. Introduction

Big data and their use in AI and machine learning are expected to grow further, but they also raise complex issues in the protection of personal information. Wrist-type wearable sensors can acquire a large amount of biological information with high accuracy; on the other hand, it is known that health conditions and daily habits can be estimated by machine learning from heart rate and acceleration [1]. While the accumulation of such a large amount of personal data is expected to yield information useful for medical care and the public interest, anxiety about privacy when providing data is difficult to rule out completely.
Doubts about whether smart speakers (voice command devices) collect personal information were somewhat of a barrier to their adoption. While various devices can collect human body data, and their value increases with the use of that big data, concerns over the opaque use of personal data are growing, delaying the effective use of such devices and data. Given the value of the information derived from sharing and analyzing data, it is desirable to increase the amount of shared data by minimizing such concerns and risks.
Especially in genome information processing, the technologies are developing rapidly and several standardization activities are ongoing [2,3,4]. A national EHR (nation-scale electronic health records) system is also being considered in Japan [5]. However, there is also a strong need for privacy protection, which can be seen in the development of regulations [6,7,8,9,10]. There is substantial concern about the privacy of genomic information [11,12,13], and there are related studies on privacy risks and protection methods [14,15,16]. As a method of handling the uncertainty of privacy risk, the concept of dynamic consent was proposed and tested in previous studies [17,18]. These activities and reports show an urgent and strong need to use genome data with an optimal balance of risks and benefits, as we reported in several related papers [19,20].
In this paper, we primarily focus on support for dynamic consent. In the field of medical information, the need for dynamic consent, which can dynamically change permissions in step with analysis technology that advances day by day, has been recognized.
Nakagawa therefore proposed a personal AI agent that collects, manages, and protects personal data [21]. Personal AI agents manage the personal data of the data subject and its terms of use on behalf of the subject. By allowing personal AI agents to make advanced judgments on behalf of data subjects, the conditions for the use of personal data can be specified more flexibly than in the conventional framework, which makes the proposal attractive.
However, these ideas concern the goals of privacy management; the technical means to protect such privacy management have not been discussed so far. We therefore propose Docker vectorization.
Docker vectorization approaches the same idea from the side of the concrete implementation method and consists of the following three main points:
  • Realized by cloud and Docker container mechanism;
  • Include dynamic consent in the provision of the analyzed result;
  • Provide multilateral security (zero trust security).
In this report, the feasibility, merits, and issues of these three points are discussed in more detail.

2. Adaptation to the Cloud and Docker Technologies

Many information processing systems today realize distributed sharing of computing resources through a configuration of cloud platforms and Docker containers [22].
The Docker container is overwhelmingly lighter than the conventional sharing of computing resources using virtual machines (VMs) because containerized applications can share kernel processes. Starting a Docker containerized application merely launches a process and does not require allocating dedicated memory, so the startup overhead is very small [23].
Docker container technology is already implemented as a standard feature of cloud platforms and can be used in common across various cloud systems. Docker containers also come with basic security mechanisms, and we believe that multilateral security, which is briefly discussed in this paper and is useful for privacy management, can be introduced rapidly if required.
We therefore consider the cloud environment and Docker containers a highly attractive and well-suited environment for implementing advanced privacy protection.

3. Need for Dynamic Consent of Personal Data

3.1. A Concept of Sherlock Threat

In this section, as a means of protecting personal data that should be provided to a data processor, we explain a “dynamic right of self-determination of data provision” that is one step stronger than the “static right of self-determination of data provision”. It is also referred to as dynamic consent. Additionally, we explain how it differs from static self-determination and why it is desirable.
In the case of personal data collected by wearable devices, it is difficult to predict what kind of statistical analysis will become possible in the future. The situation resembles a scene from Sherlock Holmes.
Sherlock Holmes, created by the writer Conan Doyle, told his client: “Beyond the obvious facts that he has at some time done manual labor, that he takes snuff, that he is a Freemason, that he has been in China, and that he has done a considerable amount of writing lately, I can deduce nothing else”. Those present were amazed and could not gauge how much Holmes might be able to deduce.
In the same way, uncertainty arises because the estimation ability of AI and machine learning will continue to advance rapidly. Data subjects never know when AI might reveal something about them, and so they cannot help but feel exactly the same anxiety as Sherlock Holmes’ client. With the rapid expansion of big data and the development of AI and machine learning, this “Sherlock threat” is becoming a challenge for personal data protection.

3.2. Linear Model of Sherlock Threat

By what mechanism can better-trained machine learning threaten the privacy of previously provided personal data? A simple linear model can explain this phenomenon: it shows why a trained neural network can extract more of a person’s identity and characteristics from previously provided personal data.
If we obtain one attribute of a person with infinite precision, we can always identify that person by the attribute alone. In information-theoretic terms, anonymization can therefore be explained as limiting the information obtainable from the attributes of the record corresponding to an individual. To identify an individual in a population of size N, the necessary entropy H1 is given by expression (1); if the population size is N and we wish to ensure k-anonymity, the entropy H2 of the attributes must satisfy expression (2). The entropy of each individual’s attributes is then less than the entropy required to distinguish individuals within groups of granularity smaller than k.
H1 = log2 N  [bit]      (1)
H2 < log2 N − log2 k  [bit]      (2)
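Expressions (1) and (2) can be checked numerically; the following sketch (the population size and k below are arbitrary example values) computes the identification entropy and the k-anonymity bound:

```python
import math

def identification_entropy(population: int) -> float:
    """H1 = log2(N): bits needed to single out one individual among N."""
    return math.log2(population)

def max_attribute_entropy(population: int, k: int) -> float:
    """Upper bound H2 < log2(N) - log2(k): the most entropy released
    attributes may carry while still guaranteeing k-anonymity."""
    return math.log2(population) - math.log2(k)

# Example: a population of 1,000,000 with 10-anonymity.
h1 = identification_entropy(1_000_000)           # bits needed to identify one person
h2_bound = max_attribute_entropy(1_000_000, 10)  # released attributes must stay below this
```

For instance, guaranteeing 10-anonymity in a population of one million leaves at most about 16.6 bits of attribute entropy that may safely be released.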
However, there is a possibility of an unexpected way to obtain the ability to identify an individual by the support of external statistical data.
In Figure 1, attributes x1 and x2 are given, and the associated attributes y1 and y2 can be calculated from given statistical data, as shown in (a) and (b). In this case, the distribution of (y1, y2) given by (a) and (b) is broad, and individuals p, q, r, and s (in the graph) cannot be identified.
However, if some external statistics become available, as shown in Figure 2a,b, the statistical distribution of (y1, y2) is narrowed, and in this case the person can be identified.
(y1, …, yK)^T = ABv = AB(x1, …, xL, n1, …, nM)^T      (3)
We can build a mathematical model of the Sherlock threat by analyzing the information flow between the information sources and the database.
Figure 2c illustrates that a certain entropy H is necessary to identify a person, and that identification becomes possible once that entropy is available.
Expression (3) describes the relationship between the information sources x1 … xL and the derived attributes y1 … yK. The additive noise terms n1 … nM exist to make identification difficult by reducing the information available in ABv, where A and B are matrices and v is a vector consisting of the individual’s information x and the noise n. On simple observation, the information in Bv looks small because Bv includes noise; in other words, the signal is covered by noise. We therefore assume that Bv alone does not provide enough information to identify an individual, i.e., the entropy of Bv is small enough to satisfy k-anonymity. However, if matrix B is known, the noise can be removed by applying an appropriate A to derive y; that is, there remains the possibility of finding a matrix A that removes the noise and identifies a person by exploiting the statistical structure of B.
Therefore, even when personal data are noisy enough that they do not appear to contain sensitive information or sufficient statistical information to reveal an individual’s identity, it may be possible to extract hidden personal information with less noise by using additional statistical data provided by a third party to reduce the noise.
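The noise-removal step behind expression (3) can be illustrated with a toy numerical example (the matrices and values below are invented for illustration): each raw observation in Bv looks noise-dominated, yet an analyst who knows the structure of B can choose A so that AB cancels the noise exactly.

```python
import numpy as np

# Toy instance of expression (3): y = A B v, with v = (x; n).
x = np.array([3.0, 7.0])      # sensitive attributes of one individual
n = np.array([10.0, -6.0])    # large masking noise
v = np.concatenate([x, n])

# B mixes signal and noise; every entry of B @ v is noise-dominated.
B = np.array([[1, 0,  1,  0],
              [0, 1,  0,  1],
              [1, 0, -1,  0],
              [0, 1,  0, -1]], dtype=float)
observed = B @ v              # noisy observations, x is buried in n

# Knowing B's structure (the "external statistics"), the analyst picks
# A so that A @ B keeps the signal columns and cancels the noise columns.
A = 0.5 * np.array([[1, 0, 1, 0],
                    [0, 1, 0, 1]], dtype=float)
recovered = A @ observed      # equals x exactly: the noise is removed
```

Averaging the two observation pairs cancels the ±n terms; this is precisely the "finding matrix A" step that the k-anonymity assumption on Bv alone does not protect against.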

3.3. Dynamic Consent and Privacy Agent for Use of Analyzed Results

By the mechanism explained above, if the information that can be extracted from personal data can expand due to machine learning or subsidiary statistics, consent to provide personal data must also change dynamically; this is why “dynamic consent” has already been proposed. However, the range of consent choices is also growing and rapidly becoming complex, so a “privacy agent” is needed to assist the data subject in managing such consent.
Nakagawa’s personal AI agent may help the data subject manage personal data; such agents do not actually need to be AI as long as they represent the interests of the data subject. However, managing only the provision of personal data is not sufficient. Even if agents manage data provision on behalf of the subject, once the results of an analysis become public for permanent use, the uncertainty about private data being extracted from that public information does not go away.
If, however, licensing waits until the derivative information obtained from the provided data is actually used, rather than being granted at the moment of data provision, then “dynamic consent” can be given when it is clear what derivative information is to be provided, to whom, and from what kind of analysis. This minimizes uncertainty about how the results will be used; moreover, if data subjects become uncomfortable with providing such information, they can stop providing the data at any time. Data subjects can thus worry less about unexpected future uses of analysis results.

4. Configuring Docker Vectorized Personal Agents

4.1. Technical Means of Dynamic Consent and Privacy Agent

A system satisfying the requirements in Section 3, including appropriate technical means, can be designed. A block diagram of the technical safeguards governing dynamic consent and privacy agents is shown in Figure 3. The data subject specifies the permitted scope of use of personal information to the privacy agent by means of dynamic consent. The privacy agent manages whether or not each piece of personal information can be used, so the central database always holds only permitted data. The database is analyzed by various analyzers. Access to analysis results is likewise managed by digital rights management (DRM; the system digitally controls access using cryptographic technologies). The data subject therefore controls which analysis results are permitted to be used and can stop their use at any time.
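As an illustration of this control flow (a minimal sketch with hypothetical names; in the actual design these roles would run as separate Docker containerized services), the privacy agent can be modeled as the sole gatekeeper between consent records and the analyzers:

```python
class PrivacyAgent:
    """Sketch of the privacy agent in Figure 3: it keeps the central
    database filtered to consented data and gates what analyzers see."""

    def __init__(self):
        self._records = {}          # record_id -> personal data
        self._consented_uses = {}   # record_id -> set of permitted purposes

    def provide(self, record_id, data, purposes):
        # Data subject provides data together with a dynamic-consent scope.
        self._records[record_id] = data
        self._consented_uses[record_id] = set(purposes)

    def revoke(self, record_id, purpose=None):
        # Dynamic consent: the subject can narrow or withdraw at any time.
        if purpose is None:
            self._consented_uses.pop(record_id, None)
            self._records.pop(record_id, None)
        else:
            self._consented_uses.get(record_id, set()).discard(purpose)

    def permitted_data(self, purpose):
        # Only records consented for this purpose reach the analyzers.
        return {rid: d for rid, d in self._records.items()
                if purpose in self._consented_uses.get(rid, set())}

agent = PrivacyAgent()
agent.provide("subj-1", {"hr": 62}, ["research"])
agent.provide("subj-2", {"hr": 75}, ["research", "marketing"])
agent.revoke("subj-1")                         # subject 1 withdraws entirely
visible = agent.permitted_data("research")     # only subject 2 remains visible
```

The key property sketched here is that revocation takes effect before the next analysis pass, which is what makes the re-aggregation model of Section 4.2 sufficient for enforcement.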

4.2. Modularization with Docker Containers

A block diagram of the Docker containerized system is shown in Figure 4.
The privacy agent, analyzers, and DRM are Docker containerized. Personal data and their analysis results are therefore always managed “inside” the Docker containerized service and are provided to the outside only according to a defined authentication procedure. In this way, personal data and their analysis results are always managed by the data subject and hidden from the outside. They can also be viewed as securely encapsulated objects that are circulated and exchanged.
Intuitively, such a mechanism seems to require an unnecessarily large processing overhead and cost, but in fact it does not.
Current data processing is almost always conducted online in real time. For example, the number of people infected with COVID-19 fluctuates constantly, and the various aggregate analysis systems do not process the data only once when they are provided; they re-aggregate and re-analyze every time the data are updated or a query occurs, or at regular intervals based on the latest data.
Periodic or on-demand re-aggregation can therefore be based on an up-to-date database of personal data and does not constitute new overhead in most cases; it can be performed economically and efficiently for most data processing applications.
In this recalculation, the Docker vectorized privacy agent aggregates and analyzes only the data whose provision has been agreed to at that time, based on the current permissions.
Moreover, a Docker container image is not very large (only a few hundred megabytes), and the Docker host does not reload images it already holds in memory; containers can share the same image. Even if a huge number of Docker containers composed of the same set of images are started and run, their memory and computing costs remain very small.
Therefore, although attaching a Docker container configuration to each piece of personal data may seem redundant unless the structure is well understood, in practice it can be executed very efficiently.

4.3. Docker Vectorization

Although the Docker container has basic characteristics preferable for dynamic privacy protection, it is not a panacea. A Docker container image can still be a few hundred megabytes even when reasonably compact. As descriptive data for a genome information body of at least several gigabytes, a container of a few hundred megabytes is not too heavy. However, the data obtained from wearable devices can be tiny; hourly body temperatures, for example, amount to only a few kilobytes, and an overhead of hundreds of megabytes for a few kilobytes of private data is far too heavy. Therefore, we propose Docker vectorization.
Docker vectorization is shown in Figure 5. In this scheme, instead of handling the Docker container itself, a descriptor called the PPPD (Privacy Protection Policy Descriptor) is used. A PPPD is attached to each piece of personal data; a simple JSON (JavaScript Object Notation) format might be sufficient to describe it. A PPPD does not include Docker containers; it contains a list of specifications of Docker containers and the parameters given to them. Through these, each PPPD specifies all privacy and security requirements.
One might question whether such a mechanism, requiring a different configuration for each piece of personal data, makes processing massive amounts of personal data costly. In fact, the processing can be carried out quite efficiently. The Docker repository contains all necessary Docker images; on the Docker platform, an image needs to be loaded only once, and instantiation of a container is lightweight. Only a limited number of Docker images are needed to process all personal data, and most personal data are assumed to share the same configurations. By aggregating such personal data and processing them at once, the overhead of configuring Docker containers for different personal data is minimized.
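A PPPD might look like the following sketch (all field names and image names are illustrative assumptions, not part of any standard); because byte-identical descriptors can be grouped, records sharing a configuration can be processed by one set of container instances:

```python
import json

# Hypothetical PPPD: it references Docker images and parameters rather than
# embedding containers, keeping the per-record overhead to a small JSON document.
pppd = {
    "version": 1,
    "subject": "subj-0001",
    "privacy_agent": {"image": "registry.example/privacy-agent:1.2",
                      "params": {"policy": "dynamic-consent"}},
    "analyzers": [{"image": "registry.example/hr-analyzer:0.9",
                   "params": {"window": "1h"}}],
    "drm": {"image": "registry.example/drm-gate:2.0",
            "params": {"algo": "AES-256-GCM"}},
}
encoded = json.dumps(pppd, sort_keys=True)

def config_key(descriptor: dict) -> str:
    """Canonical encoding: records with equal keys share one container setup,
    so they can be batched and processed together."""
    return json.dumps(descriptor, sort_keys=True)
```

Grouping records by `config_key` before dispatch is one concrete way to realize the batching argument above.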
If only a small number of PPPD variations are expected, the PPPD can be an index into a dictionary of actual PPPDs. In that case, the PPPD can be a fixed-length field acting as an identifier of the Docker container configuration that realizes the privacy agent, which greatly improves its encoding efficiency. If we reserve 128 bits for the PPPD, one new type of privacy agent could be defined every second for every person on Earth, and we would not have to worry about the code space being exhausted at all.
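The 128-bit claim is easy to verify arithmetically (taking a world population of 8 billion as an assumption):

```python
# If every person on Earth minted one new PPPD identifier per second,
# how long until a 128-bit code space runs out?
CODE_SPACE = 2 ** 128
POPULATION = 8_000_000_000            # assumed world population
SECONDS_PER_YEAR = 365 * 24 * 3600

ids_per_year = POPULATION * SECONDS_PER_YEAR
years_to_exhaust = CODE_SPACE // ids_per_year   # astronomically many years
```

The exhaustion horizon exceeds 10^20 years, so a fixed 128-bit PPPD index is comfortably large.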

4.4. Multilateral Security and Zero Trust Security

Kaneko (one of the authors) showed a method for evaluating the reliability of the components that make up a system by means of a multilateral security framework [24].
Figure 6 shows the framework, including service provider S, secure components SC#1 and SC#2, evaluator A, system providers #1 and #2, and the service user. The solid arrows from the service provider to the SCs denote the use of a security module. System providers #1 and #2 actually implement the secure components and are thus the implementers of the SCs’ security. The evaluator assesses the security of the SCs as a third party, and the service user uses the services the SCs provide.
We do not explain the details here because they are given in the referenced paper, but within this framework the reliability of SCs depends strongly on their switching cost. In general, it is therefore very important that the system use replaceable SCs that are evaluated by a third party and can be replaced whenever necessary. This is a result of a game-theoretic analysis of the multilateral security framework.
Mutual authentication allows the verifiers to check each other’s security. Verifiers are incentivized to improve their verification performance by being rewarded according to it. Game theory shows that, given such an incentive and a verifier exchange cost of zero due to verifier compatibility, it is a Nash equilibrium for verifiers to do their best to detect each other’s security failures. In other words, since each verifier does its best to detect security flaws, the probability of a security flaw being overlooked can be reduced; indeed, only under these conditions can that probability be reduced.
Next, we consider what mechanisms should be added to realize multilateral security in cloud and Docker environments.
Such a component execution environment did not exist in the past as a commonly available function on popular platforms, and a very large initial investment would have been required to develop it; without solving this problem, multilateral security would remain only a theoretical idea. Cloud environments and Docker container technology, however, are available on multiple platforms, so platforms and Docker containers can be replaced at essentially zero cost, as the multilateral security framework requires.
Manu et al. attempted to provide a unified security and privacy multilateral security architecture for the cloud services stack, specifically using Docker containers [25]. That report did not address the dynamic consent we describe here, but it showed well how Docker containers can be used for multilateral security.
Therefore, the execution environment necessary for realizing multilateral security is already provided in cloud services. For the Docker vectorized privacy agent to provide multilateral security, it is only necessary to have multiple certification authorities and to designate them in the PPPD.

5. Realization of the Subscription Model

Finally, we point out that a subscription model using personal data can be realized by Docker vectorization, and we describe its advantages.
In the proposed method, the Docker vectorized privacy agent functions as a web service that provides the use of personal data, and it is conceivable to distribute the proceeds of that service directly to the providers of the personal data.
In the past, a party that incurred costs to acquire personal data had to bear those costs at the time of collection. If the provided personal data are expected to have great value in the future, however, it becomes possible to collect data at a lower upfront cost by compensating providers out of future distributions.
If the receiving party values the provision of the personal data more highly than the data subject does, some consideration must be paid at the time of provision, as before; if the data subject values it highly, the data can be obtained in anticipation of future payment. Compensation for the provision of personal data can thus be arranged flexibly.

6. Summary

The feasibility, merits, and issues of the Docker vectorized privacy agent were summarized based on its characteristics. We explained that the proposed method allows various cloud platforms to be used as execution environments without special additional systems, and that its major advantage is that it can be introduced quickly and executed efficiently on the cloud platforms popular today. We then explained the necessity of delayed licensing of personal data, described the configuration of Docker vectorized personal agents, showed how to realize it in a container execution environment, and showed that it can be realized on cloud platforms without problems.
However, we are currently at the level of conceptual design and basic performance evaluation. We are continuing discussions in the academic field as well as with standardization organizations. It is not yet the phase in which to develop a real system for a major application field: the system is only useful with the support of a critical mass in its application field, and the first step toward such a critical mass would be a standardization proposal and activity encouraging other members to support the architecture.
On our side, it will probably be necessary to develop a PoC (proof-of-concept) implementation, which would help people understand the actual design and confirm its basic performance.
In this paper, we did not show concretely how privacy agents decide how to manage the private data of data subjects. Such a decision is a very complex problem, whether it is made by AI, as Nakagawa proposed, or by human experts. It remains an open question whether it is practical to let the privacy agent decide for each data subject.
We are currently also disseminating the concept to standardization bodies and pushing for it to be taken a step further in development.

Author Contributions

Conceptualization, I.K., E.Y. and H.O.; methodology, I.K.; software, I.K.; validation, E.Y. and H.O.; formal analysis, I.K.; investigation, I.K.; resources, I.K.; data curation, I.K.; writing—original draft preparation, I.K.; writing—review and editing, I.K. and E.Y.; visualization, I.K.; supervision, E.Y.; project administration, E.Y.; funding acquisition, H.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the 2022 National Institute of Informatics Joint Research (Strategic Research Open Call Research) and ROIS NII Open Collaborative Research 2023 (22S0101).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by JST-Mirai Program (JPMJMI19B4).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yuda, E.; Furukawa, Y.; Yoshida, Y.; Hayano. Association between Regional Difference in Heart Rate Variability and Inter-prefecture Ranking of Healthy Life Expectancy: ALLSTAR Big Data Project in Japan. In Proceedings of the 7th EAI International Conference on Big Data Technologies and Applications (BDTA), Chung-ang University Seoul, Seoul, Republic of Korea, 17–18 November 2017. [Google Scholar]
  2. MPEG-G Group MPEG-G. Available online: https://mpeg-g.org/ (accessed on 30 May 2020).
  3. ISO/IEC JTC 1/SC 29/WG 11 MPEG-G. Available online: https://www.mpeg.org/standards/MPEG-G/ (accessed on 7 February 2023).
  4. European Commission. Project Information GenCoder Grant Agreement ID: 827840. The First MPEG-G Compliant Software Tools for Efficient Compression, Storage, Transport and Analysis of Genomic Data Enabling Systems Interoperability. Available online: https://cordis.europa.eu/project/rcn/218180/factsheet/es (accessed on 30 May 2020).
  5. Yoshihara, H. gEHR Project: Nation—Wide EHR Implementation in Japan. Kyoto Smart City Expo. Available online: https://expo.smartcity.kyoto/2016/doc/ksce2016_doc_yoshihara.pdf (accessed on 30 May 2020).
  6. Wikipedia, General Data Protection Regulation. Available online: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation (accessed on 30 May 2020).
  7. Shabani, M.; Borry, P. Rules for processing genetic data for research purposes in view of the new EU General Data Protection Regulation. Eur. J. Hum. Genet. 2018, 26, 149–156. [Google Scholar] [CrossRef] [PubMed]
  8. Itakura, Y.; Fujimura, A.; Kameishi, K.; Magata, F. Legal position of Secure multi-party computation in foreign countries. Inf. Process. Soc. Jpn. 2019, 2019-EIP-83(1), 2188–8647. [Google Scholar]
  9. Itakura, Y.; Terada, M. Present Situation and Prospect of the EU’s “Adequacy Decision”—A Study Based on the Opinions of the European Data Protection Board (EDPB). Inf. Process. Soc. Jpn. 2019, 2019-EIP-83(2), 1–4. [Google Scholar]
  10. Terada, M.; Itakura, Y. Current Situation and Issues over the Draft e-Privacy Regulation in the EU―Based on its relation to the GDPR (General Data Protection Regulation). Inf. Process. Soc. Jpn. 2019, 2019-EIP-83(3), 1–6. [Google Scholar]
  11. NIH. Privacy in Genomics. 2019. Available online: https://www.genome.gov/27561246/privacy-in-genomics/ (accessed on 19 March 2019).
  12. NIH. Privacy in Genomics. 2019. Available online: https://www.genome.gov/about-genomics/policy-issues/Privacy (accessed on 23 April 2019).
  13. Gymrek, M.; McGuire, A.L.; Golan, D.; Halperin, E.; Erlich, Y. Identifying Personal Genomes by Surname Inference. Science 2013, 339, 321–324. Available online: https://science.sciencemag.org/content/339/6117/321 (accessed on 30 May 2020). [CrossRef] [Green Version]
  14. ISO/IEC, ISO/IEC 20889:2018—Privacy Enhancing Data de-Identification Terminology and Classification of Techniques. Available online: https://www.iso.org/obp/ui/#iso:std:iso-iec:20889:ed-1:v1:en (accessed on 30 May 2020).
  15. Samarati, P.; Sweeney, L. 2017 Protecting Privacy When Disclosing Information: K-Anonymity and Its Enforcement through Generalization and Suppression. Harvard Data Privacy Lab. Available online: https://dataprivacylab.org/dataprivacy/projects/kanonymity/paper3.pdf (accessed on 12 April 2017).
  16. ISO. ISO 25237:2017 Health Informatics—Pseudonymization. Available online: https://www.iso.org/standard/63553.html (accessed on 14 January 2023).
  17. Pattaro, C.; Gögele, M.; Mascalzoni, D.; Melotti, R.; Schwienbacher, C.; De Grandi, A.; Foco, L.; D’Elia, Y.; Linder, B.; Fuchsberger, C.; et al. The Cooperative Health Research in South Tyrol (CHRIS) study: Rationale, objectives, and preliminary results. J. Transl. Med. 2015, 13, 348. [Google Scholar] [CrossRef] [PubMed]
  18. Mascalzoni, D.; Melotti, R.; Pattaro, P.; Pramstaller, P.; Gögele, M.; De Grandi, A.; Biasiotto, R. Ten years of dynamic consent in the CHRIS study: Informed consent as a dynamic process. Eur. J. Hum. Genet. 2022, 30, 1391–1397. [Google Scholar] [CrossRef] [PubMed]
  19. Kaneko, I.; Yuda, E.; Okada, H. Docker, a cloud-native privacy agent Vectorization the feasibility, benefits and challenges of the subscription model. Inf. Process. Soc. Jpn. 2022, 2022-EIP-97, 1–4. [Google Scholar]
  20. Kaneko, I. On the genomic information processing in the privacy regulation of various countries and security API of MPEG genomic coding. Inf. Process. Soc. Jpn. 2019, 2019-EIP-84, 1–4. [Google Scholar]
  21. Nakagawa, H. Trends in AI Ethics Guidelines and Personal AI Agents. J. Inf. Commun. Policy 2020, 3, 1–23. [Google Scholar]
  22. Docker Overview. Available online: https://docs.docker.com/get-started/overview/ (accessed on 7 February 2023).
  23. Wilson, B.; Khandelwal, S. How to Reduce Docker Image Size: 6 Optimization Methods. Available online: https://devopscube.com/reduce-docker-image-size/ (accessed on 7 February 2023).
  24. Kaneko, I. Probabilistic multi-lateral security model for ubiquitous multimedia services. In Proceedings of the IEEE 24th International Conference on Distributed Computing Systems Workshops, Tokyo, Japan, 24 August 2004. [Google Scholar]
  25. Manu, A.R.; Patel, J.K.; Akhtar, S.; Agrawal, V.K.; Murthy, K.N.B. Docker container security via heuristics-based multilateral security-conceptual and pragmatic study. In Proceedings of the IEEE 2016 International Conference on Circuit, Power and Computing Technologies, Nagercoil, India, 18–19 March 2016. [Google Scholar]
Figure 1. Identification of individuals without the Sherlock threat: attributes x1 and x2 are given, and the associated attributes y1 and y2 can be calculated from given statistical data, as shown in (a,b); the mapped distribution (y1, y2) is shown in (c). In this case, the distribution of (y1, y2) given by (a,b) is broad, and individuals p, q, r, and s (in graph (c)) cannot be identified.
Figure 2. Identification of individuals with the Sherlock threat: in this case, with statistical distributions (a,b), the statistical distribution of (y1, y2) in the mapped space (c) is narrowed for the corresponding attributes x1 and x2. Individuals p, q, r, and s (in graph (c)) can be identified in this case.
Figure 3. Technical measure for dynamic consent and privacy agent.
Figure 4. Modularization using Docker Container.
Figure 5. Docker vectorization.
Figure 6. Mutual authentication of the multilateral security model.
