Article

Cloud Security Assessment: A Taxonomy-Based and Stakeholder-Driven Approach

1. Information Systems, Herberger Business School, St. Cloud State University, St. Cloud, MN 56301, USA
2. College of Computer Science and Engineering, University of Jeddah, Jeddah 23890, Saudi Arabia
3. Department of Computer Science, Bowie State University, Bowie, MD 20715, USA
4. Department of Computer Science, University of Idaho, Moscow, ID 83844, USA
5. Department of Computer Science, The University of Memphis, Memphis, TN 38152, USA
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 291; https://doi.org/10.3390/info16040291
Submission received: 27 January 2025 / Revised: 31 March 2025 / Accepted: 1 April 2025 / Published: 4 April 2025
(This article belongs to the Special Issue Internet of Things (IoT) and Cloud/Edge Computing)

Abstract
Cloud adoption necessitates relinquishing data control to cloud service providers (CSPs), involving diverse stakeholders with varying security and privacy (S&P) needs and responsibilities. Building upon previously published work, this paper addresses the persistent challenge of a lack of standardized, transparent methods for consumers to select and quantify appropriate S&P measures. This work introduces a stakeholder-centric methodology to identify and address S&P challenges, enabling stakeholders to assess their cloud service protection capabilities. The primary contribution lies in the development of new classifications and updated considerations, along with tailored S&P features designed to accommodate specific service models, deployment models, and stakeholder roles. This novel approach shifts from data or infrastructure perspectives to comprehensively account for S&P issues arising from stakeholder interactions and conflicts. A prototype framework, utilizing a rule-based taxonomy and the Goal–Question–Metric (GQM) method, recommends essential S&P attributes. Multi-criteria decision-making (MCDM) is employed to measure protection levels and facilitate benchmarking. The evaluation of the implemented prototype demonstrates the framework’s effectiveness in recommending and consistently measuring security features. This work aims to reduce consumer apprehension regarding cloud migration, improve transparency between consumers and CSPs, and foster competitive transparency among CSPs.

1. Introduction

The emerging success of cloud computing (CC) in the commercial internet landscape has opened new doors for attackers to exploit businesses and industries. This vulnerability is largely due to the ubiquitous connectivity provided by CC platforms, wherein an attacker can inflict damage from almost any geographical location, independent of the cloud service location. Notable incidents at Amazon AWS, Adobe, Dropbox, Google Drive, and iCloud indicate that even the leaders in the CC market are not immune to severe security failures [1,2,3,4,5,6,7]. These and similar incidents have inflicted significant damage on governments, private enterprises, and the general public through financial loss, data confidentiality breaches, and reputational harm.
Despite the efforts of service providers, researchers, and industry to protect cloud services, CC stakeholders have yet to adopt security approaches that maintain availability, elasticity, and expandability while, at the same time, improving security and privacy (S&P). The following subsections explore the practical reasons for this gap.

1.1. Cloud Computing Landscape

The complex cloud computing landscape necessitates sophisticated security measures. The National Institute of Standards and Technology (NIST) [8] offers widely accepted definitions of cloud computing [9], its stakeholders [10], and security concerns [11]. NIST defines CC as a model enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned with minimal management effort.
NIST categorizes cloud services into three types based on the service model: Software as a Service (SaaS), like Facebook and Twitter, where consumers access cloud applications hosted on a provider’s infrastructure; Platform as a Service (PaaS), like Google AppEngine, allowing customers to develop and deploy applications using provided tools; and Infrastructure as a Service (IaaS), such as Amazon EC2, offering an infrastructure for customers to deploy and build applications.
According to NIST, cloud computing comprises four deployment models: Private, for single organizations; Public, intended for the general populace; Community, for groups with common interests; and Hybrid, combining multiple models for flexibility.
NIST identifies five major CC stakeholders [10]: cloud consumers, service providers, auditors, brokers, and carriers. It highlights their roles and interactions in the ecosystem.
The extensive CC landscape complicates S&P issues [11], attributed to the diversity of services and deployment options, the dynamic internet environment, and varying S&P features from cloud service providers (CSPs). These complexities underline the need for a structured, expandable approach to update solutions. This approach should highlight the diverse and complex S&P solutions available, considering the inherent costs and risks of each [12]. A quantitative method is valuable for navigating these challenges, aiming to address market complexities and diversified S&P solutions.

1.2. The Stakeholder Conundrum

Cloud computing’s complexity extends beyond simple user-centric security, requiring a stakeholder-centered approach due to the varied roles involved (providers, customers, developers, etc.) [13]. Responsibility for security is often unclear across different service models (SaaS, PaaS, IaaS) and multiple providers, diluting stakeholder control and raising liability questions [11]. Issues like “insufficient due diligence” [13], accidental data exposure [4], and stakeholder apathy exacerbate these problems. The need for a dynamic, comprehensive cloud model with clear responsibilities and quantifiable security and privacy (S&P) metrics is crucial [14]. This paper introduces a framework for cloud security assessment, aiming to provide measurable S&P attributes for informed decision-making.

1.3. Motivation and Scope

Cloud computing services have a frustrating history of failure in various forms (e.g., service outage, data loss, exposure to the public, etc.). The cloud technology sector has been plagued throughout the past decade by catastrophic S&P incidents, ensnaring even the industry’s leading companies [3,5,6,7,15].
To date, efforts to mitigate the negative impact of cloud S&P issues have focused on two main strategies: (1) safeguarding consumer data and (2) fortifying cloud infrastructure. Yet, there has been a lack of consideration for stakeholders in proactively recommending and evaluating the security measures of cloud services.
Despite numerous efforts to enhance cloud security, many potential customers remain reluctant to migrate their systems to the cloud. This hesitation often stems from the complexities of selecting appropriate S&P features, a task for which they may not have adequate experience or knowledge. Despite significant investments in deploying cloud services and advancing security technologies, solutions to security challenges have been inadequate. For instance, a recent survey revealed that 75% of businesses consider cloud security their top concern. Additionally, 39% of businesses experienced a data breach in their cloud environment in the past year, up from 35% the previous year. Alarmingly, only 22% of IT professionals reported encrypting more than 60% of their sensitive cloud data. Furthermore, over 77% of respondents feel unprepared to deal with security threats in their cloud environments [16,17,18,19].
Existing CC security solutions lack methods that quantitatively support decision-making, specifically regarding the necessary and sufficient S&P features and the reasons for their necessity. Consumers of different CC services are interested in security solutions not only for their security features but also for the degree of security goals they provide. Because “to measure is to know” [14], quantitatively assessing S&P qualities enables consumers of cloud services to make well-informed decisions and compare options. Similarly, CSPs need these quantitative assessments of the offered security features to develop their offerings and better compete with other providers. They can also use these assessments to calculate the price of their security features.
Furthermore, CC-related crimes have surged in the past decade, prompting the rise of digital forensics as a crucial tool for gathering evidence in criminal investigations. This evidence relies on quantitative assessments for thorough auditing and analysis. Given the inherent complexity of cloud environments, researchers in CC digital forensics require detailed quantitative insights encompassing CC security architectures, components, and support for analytical processes.
Finally, organizations have diverse CC security goals and priorities, meaning they vary in their requirements, assets, public exposure, and tolerance for security risks. The extent to which an organization invests in detecting, preventing, and responding to S&P issues in CC depends on the value of the assets involved, the organization’s budget, and the criticality and sensitivity of the S&P concerns. Organizations must, therefore, determine the level of security necessary to meet their specific needs and objectives. This determination, among others, requires quantitative assessments to facilitate informed decision-making.
This paper tackles the cloud security problem by addressing the stakeholder conundrum and proposing a cloud security recommender and assessment solution. The introduction provides background on cloud computing, outlines security concerns, and defines the motivation and scope of the research. The related works section reviews existing cloud security assessment methods and recommendation models, identifying gaps in the literature. The proposed solution is then introduced in Sections 4–6, detailing its role in assessing and recommending security measures based on catalog-based criteria. The analytical evaluation section provides quantitative and qualitative assessments of the framework's effectiveness. Finally, the conclusion summarizes key findings, discusses limitations, and outlines future research directions.
Cloud adoption necessitates relinquishing data control to CSPs, involving diverse stakeholders with varying S&P needs and responsibilities. Building upon previously published work [1,20], this paper addresses the persistent challenge of a lack of standardized, transparent methods for consumers to select and quantify appropriate S&P measures. The primary contributions of this work are as follows:
  • Development of new classifications and updated considerations, along with tailored S&P features designed to accommodate specific service models, deployment models, and stakeholder roles.
  • Refinement of the framework to differentiate S&P attribute considerations based on stakeholder type, recognizing that attributes like authentication and identity management have distinct requirements for SaaS users versus PaaS or IaaS developers.
  • Implementation of a prototype that provides a granular assessment of the degree of protection offered by the cloud consumption scenario, considering attribute functionality (threat detection, prevention, response) and the protected security domains (client, interface, network, virtualization, governance, compliance, legal, data). The prototype also shows the degree of deterrence the S&P attributes have.
This novel approach shifts from data or infrastructure perspectives to comprehensively account for S&P issues arising from stakeholder interactions and conflicts. A prototype framework, utilizing a rule-based taxonomy and multi-criteria decision-making (MCDM), is employed to measure protection levels and facilitate benchmarking. Evaluation of the implemented prototype demonstrates the framework's effectiveness in recommending and consistently measuring security features.

2. Related Work

Despite the widespread adoption of cyber and physical security assessments, there is an ongoing effort among academic and industry professionals to develop new and improved techniques for evaluating security. These advancements aim to address a range of uncertainties critical to both research and practical applications. The literature review summarized in Table 1 provides an insightful overview of the diverse strategies and significant findings within cloud computing security and service selection, illustrating the breadth of research and practical applications addressed by scholars and industry experts.
The summary of existing works in S&P evaluation within cloud computing reveals they have developed effective methodologies, yet they face limited practicality, reliability, and scope. Key areas of concern include the following:
  • Generalizability: Current research predominantly focuses on specific aspects of cloud security, such as individual deployment models, service models, cloud components (e.g., hypervisors, networking), or particular cloud applications (e.g., multimedia, storage services). This specificity limits the broader applicability of findings across the diverse landscape of cloud computing.
  • Comprehensiveness and Consistency: Many studies concentrate on a limited set of S&P attributes, often neglecting the comprehensive spectrum of security needs. Additionally, reliance on general-purpose security standards, like ISO/IEC 27001:2022 [39] or the Common Vulnerability Scoring System (CVSS), to evaluate cloud services does not fully accommodate the unique challenges presented by cloud computing. The inconsistency in prioritizing S&P qualities, with an over-reliance on performance metrics or user feedback, further complicates a holistic assessment.
  • Extensibility and Expandability: A significant gap in current methodologies is their lack of adaptability to emerging technologies or evolving S&P issues. The rigidity of existing frameworks hinders their ability to incorporate new developments or phase out obsolete practices.

3. Catalog-Based Cloud Security Recommender and Assessment

In CC models, scenarios serve as the foundation for interactions, encompassing diverse stakeholders such as consumers, providers, legal experts, insurance companies, brokers, hardware and software manufacturers, auditors, service partners, and carriers. These stakeholders engage with various service models—SaaS, PaaS, or IaaS—deployed across multiple cloud environments, including public, private, hybrid, and community configurations, as in Figure 1. We refer to this model as a scenario.
A CC model consists of three core elements: stakeholders, who interact with and utilize the service; the service model, which defines the type of cloud services provided; and the deployment model, which categorizes the cloud environment based on accessibility, ownership, and scale. These three elements collectively determine how cloud services are delivered and consumed.
While cloud computing provides numerous benefits, including scalability, flexibility, and cost efficiency, each CC model also introduces significant S&P challenges. These challenges range from traditional issues such as user access control, network security, and authentication to more complex concerns related to the trustworthiness and accountability of cloud stakeholders. The shared and multi-tenant nature of cloud environments further complicates these issues, exposing stakeholders to many threats, as discussed in the introduction.
Addressing these challenges requires a detailed and nuanced understanding of the roles, responsibilities, and requirements of each stakeholder involved in cloud service interactions. Stakeholders’ S&P concerns vary depending on their roles, making it critical to develop a tailored cloud security strategy for each. This strategy involves educating stakeholders about potential S&P risks and helping them identify the essential S&P attributes relevant to their specific scenarios. For example, a system administrator might prioritize strong access controls, while a data analyst might focus on encryption to ensure data confidentiality.
A thorough evaluation of available cloud services is vital to this strategy. By comparing services based on their security features and aligning them with stakeholder needs, stakeholders can make informed decisions that enhance the overall security posture of the cloud environment. This approach clarifies the distribution of security responsibilities between providers and consumers and establishes well-defined S&P objectives for all parties.
Ultimately, this perspective emphasizes the importance of viewing cloud security from the stakeholder’s point of view. By focusing on their unique requirements and responsibilities, cloud computing ecosystems address S&P challenges more effectively, ensuring secure and trustworthy interactions within the cloud.

3.1. Scenario-Based Cloud Security from a Stakeholders’ Perspective

Historically, security measures for single-tenant computer systems had a single user in mind, encompassing infrastructure, platform, and software under the exclusive control of one entity. These systems ensured centralized management and security tailored to the needs of an individual organization. However, the advent of cloud computing has fundamentally transformed this landscape. In cloud environments, computational resources are owned by cloud service providers (CSPs) but shared among a diverse set of consumers. These consumers, including application developers, system administrators, and third-party software providers, engage with the cloud differently based on their roles and objectives.
After defining the scenario, the next step identifies the vulnerabilities and their corresponding S&P features. The framework systematically analyzes the scenario by evaluating the extent of the stakeholder’s interaction with the cloud environment. It identifies potential S&P risks with this analysis and recommends appropriate S&P features to mitigate them.
The framework ensures that it addresses the specific vulnerabilities inherent in a stakeholder’s cloud usage scenario with tailored security measures. This structured approach integrates the components seamlessly to deliver a comprehensive, scenario-specific cloud security solution.
For example, developers may leverage cloud platforms for the entire application development lifecycle, from design to deployment. In contrast, system administrators focus on creating and managing Infrastructure-as-a-Service (IaaS) offerings to ensure scalability and operational efficiency. This diversity extends to their interaction with cloud components such as hardware, software, and configurations. Each interaction brings unique security objectives, challenges, and potential solutions, highlighting the need for tailored S&P measures.
This paper explores the S&P of cloud services through the lens of these varied stakeholders, aiming to secure their specific interaction scenarios. It addresses a broad spectrum of cloud consumers. Each stakeholder type interacts differently with the cloud environment, requiring distinct S&P strategies to address their specific needs.
In a CC model, the service inherently includes S&P attributes, categorized as follows:
  • Default Attributes: Built-in S&P features, such as access control and authentication, safeguard cloud services by default. These provide basic protection without requiring additional user configuration.
  • Non-default Attributes: Optional, enhanced S&P features available upon request, often at an additional cost. Examples include advanced encryption, backup services, or increased restoration bandwidth, which offer superior security measures beyond the standard provisions.
Cloud consumers can enhance their security posture by requesting non-default attributes directly from their CSPs or third-party providers. These attributes often include multiple versions with distinct properties and associated costs. For instance, disaster recovery plans might vary in backup speed, storage volume, and redundancy levels, offering consumers flexibility in choosing a solution that aligns with their security needs and budget.
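To make the default/non-default distinction concrete, the following minimal Python sketch models an attribute and its selectable versions; the attribute names, option tiers, and costs are hypothetical illustrations rather than values from any CSP catalog.

```python
from dataclasses import dataclass, field

@dataclass
class AttributeOption:
    """One selectable version of an S&P attribute (e.g., a backup tier)."""
    name: str
    monthly_cost: float  # 0.0 for options bundled with the service

@dataclass
class SPAttribute:
    """An S&P attribute offered with a cloud service."""
    name: str
    default: bool  # True: built in, no configuration needed
    options: list[AttributeOption] = field(default_factory=list)

# Hypothetical examples of a default and a non-default attribute.
access_control = SPAttribute("Access Control", default=True)
disaster_recovery = SPAttribute(
    "Disaster Recovery",
    default=False,
    options=[
        AttributeOption("Daily backup, single region", 50.0),
        AttributeOption("Hourly backup, multi-region redundancy", 200.0),
    ],
)

# A consumer weighs option cost against the redundancy level they need.
cheapest = min(disaster_recovery.options, key=lambda o: o.monthly_cost)
print(f"{disaster_recovery.name}: cheapest option is '{cheapest.name}'")
```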
This framework employs a strategic approach to address S&P challenges that identifies distinct cloud usage scenarios, pinpoints relevant S&P challenges for each, and tailors S&P attribute recommendations accordingly. This approach emphasizes the need for a nuanced understanding of the complex interplay between various S&P attributes and their implications for the overall security posture of a CC model. The framework focuses on custom solutions for specific scenarios to ensure security strategies align with stakeholder requirements.

3.2. Mathematical Definition of the Model

Cloud computing introduces diverse operational scenarios where stakeholders interact with various service models (SaaS, PaaS, or IaaS) across multiple deployment types (public, private, hybrid, or community). These scenarios, collectively represented as $SC = \{SC_1, SC_2, SC_3, \ldots, SC_s\}$, where $s \geq 1$, each define how a specific combination of a stakeholder, a service model, and a deployment type interact.
For example, a scenario such as SC = (Developer, IaaS, Public) represents a developer utilizing IaaS in a public cloud, necessitating distinct S&P attributes based on the unique risks involved.
Each stakeholder interaction within a given scenario is potentially impacted by a set of S&P challenges, denoted as $ISSUE = \{issue_1, issue_2, issue_3, \ldots, issue_P\}$, where $P \geq 1$. Specific scenarios involve subsets of these challenges, represented as $ISSUE^{SC} = \{issue_1^{SC}, issue_2^{SC}, issue_3^{SC}, \ldots, issue_p^{SC}\}$, such that $ISSUE^{SC} \subseteq ISSUE$. These issues encompass a variety of risks, such as data breaches, unauthorized access, and system vulnerabilities, depending on the operational context. For example, vulnerabilities like buffer overflows or SQL injection may pose significant threats in some deployment models.
The framework addresses these issues by employing tailored S&P attributes, mathematically defined as $ATT = \{att_1, att_2, att_3, \ldots, att_Q\}$, where $Q \geq 1$.
For a given scenario, the relevant attributes are selected as $ATT^{SC} = \{att_1^{SC}, att_2^{SC}, att_3^{SC}, \ldots, att_q^{SC}\}$, such that $ATT^{SC} \subseteq ATT$.
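The following minimal rule-based sketch in Python illustrates these definitions; the scenario tuple, issue names, and attribute lists are hypothetical placeholders standing in for the framework's full taxonomy.

```python
# Scenario SC is a (stakeholder, service model, deployment model) tuple.
SC = ("Developer", "IaaS", "Public")

# ISSUE^SC: rules mapping a scenario to its potential S&P issues.
ISSUES = {
    ("Developer", "IaaS", "Public"): ["data breach", "unauthorized access"],
    ("End-user", "SaaS", "Public"): ["phishing", "account hijacking"],
}

# ATT: rules mapping each issue to mitigating S&P attributes.
ATTRIBUTES = {
    "data breach": ["encryption", "backup"],
    "unauthorized access": ["authentication", "access control"],
    "phishing": ["client-side protection", "security awareness"],
    "account hijacking": ["multi-factor authentication"],
}

def recommend(scenario):
    """Return the attribute set ATT^SC for a scenario (empty if unknown)."""
    issues = ISSUES.get(scenario, [])
    return sorted({att for issue in issues for att in ATTRIBUTES[issue]})

print(recommend(SC))
# ['access control', 'authentication', 'backup', 'encryption']
```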
Examples of attributes include encryption for data confidentiality, network security to prevent unauthorized access, and service monitoring for continuous threat detection. The framework, illustrated in Figure 2, operates through three interconnected components: the Cloud Service Security Recommender (CSSR), the Cloud Service Catalog (CSC), and the Cloud Service Security Assessor (CSSA). The CSSR identifies the possible S&P issues within a given scenario and recommends a set of attributes to address these issues. For instance, in the SC = (Developer, IaaS, Public) scenario, the CSSR might prioritize attributes like encryption and disaster recovery. The process ensures that it matches each scenario’s unique challenges with appropriate security measures.
The CSC maps the recommended attributes to the features offered by various cloud service providers (CSPs). It represents the CSPs suitable for a scenario as $CSP^{SC} = \{CSP_1^{SC}, CSP_2^{SC}, CSP_3^{SC}, \ldots, CSP_i^{SC}\}$, where $i \geq 1$. For instance, Amazon EC2 with public IaaS aligns with SC = (Developer, IaaS, Public). The CSC categorizes CSP features into default attributes (e.g., built-in access control) and non-default attributes (e.g., advanced encryption) and allows stakeholders to customize their security options.
The CSSA quantitatively assesses the effectiveness of the recommended attributes in addressing the identified issues. Predefined considerations guide this evaluation, represented as $CON_q = \{con_{q1}, con_{q2}, \ldots, con_{qC}\}$, where $C \geq 1$ is the number of considerations for the $q$th attribute. For example, it might evaluate the effectiveness of an encryption attribute based on compliance with standards like AES-256, data coverage (at rest and in transit), and key management protocols.
These components (as in Figure 2) together form a cohesive workflow. The CSSR identifies the challenges and proposes attributes, the CSC links these attributes to CSP offerings, and the CSSA evaluates the adequacy of these offerings. This integrated process ensures stakeholders can confidently select and implement security solutions tailored to their scenarios, effectively addressing their S&P challenges.

4. CSSR Architecture and Implementation

Cloud security involves traditional concerns (e.g., network security, authentication) and emerging issues, like VM attacks, requiring integrated countermeasures. While commercial solutions (e.g., CloudPassage Halo [40], CipherCloud [41], and CloudLock [42]) and standards (e.g., CSA Cloud Control Matrix [43]) offer some safeguards, they have limitations. Thus, adopters often rely on CSPs or brokers due to the complexity of overlapping controls [44]. Existing cloud service recommenders lack a focus on security, prompting the development of a security-specific recommender tool, inspired by rule-based expert systems [45,46], to educate stakeholders, streamline feature selection, and address these gaps.

4.1. CSSR Architecture

CSSR is a web-based tool designed to assist stakeholders in understanding their CC models and identifying potential S&P issues by analyzing possible attack surfaces. CSSR provides stakeholders insights into security challenges by listing operational and informational impacts and recommending defensive actions aligned with corresponding security attributes.
CSSR incorporates four features to achieve its objectives. First, it employs a scenario-based approach, using hypothetical scenarios to help stakeholders address complex problems and evaluate usage patterns and operational contexts. This method ensures stakeholders focus on CC models matching their operational profiles. It promotes informed decision-making and enhances cloud security accountability. Second, it adopts a taxonomical framework, systematically organizing components of CC models to improve understanding of use cases and support future extensions and upgrades. Third, CSSR aligns with NIST standards [9], adhering to established practices in the CC domain to ensure reliability and standardization. Finally, CSSR is stakeholder-oriented, prioritizing the role of stakeholders in defining and addressing S&P issues. It shifts the focus from traditional data ownership approaches to a stakeholder-driven methodology, addressing the “lack-of-trust” dilemma in cloud environments by promoting accountability and tailored security recommendations.
CSSR introduces a unified process for securing CC environments by emphasizing stakeholder involvement in defining security needs, enhancing transparency and accountability, and encouraging collaboration between consumers and providers. CSSR clearly defines the security responsibilities of each stakeholder to foster trust and competitiveness among cloud service providers.
The primary objectives of CSSR are twofold: (1) to identify potential S&P issues within a given scenario—comprising a consumer, a service, and a deployment model—and (2) to recommend essential S&P attributes tailored to stakeholders’ goals, such as maximizing security gains or minimizing costs. CSSR achieves these objectives using three taxonomies, as in Figure 3, which provide detailed classifications and granularity to its analysis.
These taxonomies allow CSSR to systematically examine attack surfaces, operational impacts, and security attributes. They offer stakeholders a comprehensive and fine-grained understanding of their cloud computing environments. This detailed approach makes CSSR a robust framework for improving S&P in CC.
CSSR leverages the concept of consumption scenarios to address S&P issues in cloud computing environments. A consumption scenario represents the interaction between a stakeholder (e.g., application developer, system administrator, or end-user), a service model (e.g., SaaS, PaaS, or IaaS), and a deployment model (e.g., public, private, hybrid, or community). For a service to require protection, it must first be consumed. This foundational concept enables CSSR to identify and address S&P challenges effectively.
Each consumption scenario is prone to specific vulnerabilities. CSSR identifies these vulnerabilities and maps them to appropriate security attributes. These attributes may be default, such as authentication and access control, or non-default, such as encryption and backup, which require configuration or additional costs. For example, an application developer creating a SaaS application on a public cloud interacts with multiple service and deployment models. This situation comprises three distinct scenarios: consuming IaaS to build the application, consuming PaaS to deploy it, and offering the SaaS to end-users. CSSR analyzes each scenario independently to recommend tailored security measures.
To achieve this result, CSSR employs a taxonomy-driven framework comprising three layers of analysis. The Scenario Taxonomy divides the consumption scenario into smaller, manageable components, enabling a detailed examination of potential S&P issues and corresponding security attributes. The Attack Surface Taxonomy [47] further refines the analysis by identifying six attack surfaces within the cloud ecosystem, including interactions such as (1) service-to-user, (2) cloud-to-user, (3) user-to-service, (4) user-to-cloud, (5) cloud-to-service, and (6) service-to-cloud. Each attack surface highlights specific vulnerabilities, such as SQL injection or phishing, that may arise from user, cloud, or service interactions. Lastly, the Attack Taxonomy, inspired by the AVOIDIT framework [48], classifies attacks using a structured approach. This taxonomy identifies potential attack vectors, assesses operational and informational impacts, and maps defensive actions to real-world S&P attributes. It extends its scope to include cyber and physical threats, ensuring comprehensive coverage.
CSSR’s methodology culminates in two outputs for each consumption scenario: a detailed list of potential S&P issues and a corresponding set of security attributes required to mitigate these risks. This approach enhances visibility into vulnerabilities and equips stakeholders with actionable insights for securing cloud environments. By systematically analyzing scenarios, dividing them into components, and addressing vulnerabilities at each level, CSSR offers a robust framework for safeguarding cloud computing services while fostering accountability and informed decision-making.

4.2. Scenario Information Extraction

This section describes the methodology employed in CSSR to extract and identify S&P issues, operational and informational impacts, and defensive methods using reports from regulatory bodies such as ENISA, CSA Top Cloud Threats [49], and NIST. The analysis initially considers 40 possible consumption scenarios, combining stakeholders, service models (SaaS, PaaS, IaaS), and deployment models (public, private, hybrid, or community).
To streamline the process, CSSR assumes community clouds are equivalent to private clouds due to their limited and identifiable user base, which reduces vulnerabilities and attack surfaces. Conversely, it treats hybrid clouds as public clouds because they involve a broader, anonymous user base and larger attack surfaces. These assumptions reduce the total number of scenarios to 20, simplifying the analysis.
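This reduction can be expressed as a small normalization step. The sketch below assumes ten hypothetical stakeholder/service-model combinations (stand-ins for the paper's actual pairings) and folds the four deployment models into two equivalence classes:

```python
from itertools import product

# The analysis starts from 40 scenarios: 10 stakeholder/service-model
# combinations (hypothetical stand-ins below) crossed with 4 deployment models.
STAKEHOLDER_SERVICE = [(f"role-{i}", svc)
                       for i, svc in enumerate(["SaaS", "PaaS", "IaaS"] * 4)][:10]
DEPLOYMENTS = ["public", "private", "hybrid", "community"]

# CSSR's simplifying assumptions: community clouds behave like private ones,
# and hybrid clouds like public ones.
CANONICAL = {"community": "private", "hybrid": "public",
             "private": "private", "public": "public"}

raw = {(role, svc, dep)
       for (role, svc), dep in product(STAKEHOLDER_SERVICE, DEPLOYMENTS)}
reduced = {(role, svc, CANONICAL[dep]) for role, svc, dep in raw}
print(len(raw), "->", len(reduced))  # 40 -> 20
```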
CSSR uses a stakeholder-driven approach to analyze these scenarios by addressing key questions, such as the stakeholder role, the parts of the cloud they interact with, their deployment model, and their S&P challenges and goals. For example, PaaS stakeholders like developers interact with operating systems, virtual machines, and networking tools. They face challenges such as intellectual property protection, scalability, and securing development tools. Their S&P goals might include secure software procurement, disaster recovery, encryption, and regulation compliance.
The CSSR tool, shown in Figure 4a, accepts user input in the form of a consumption scenario, including the service model, deployment model, and stakeholder type. Based on the input, the tool retrieves a list of potential S&P issues and recommends corresponding security attributes to address these issues, as shown in Figure 4b.
By leveraging these features and integrating pre-extracted information, CSSR offers a practical, stakeholder-focused solution for identifying and addressing S&P issues in diverse cloud computing scenarios. The tool provides tailored recommendations to enhance the security of cloud environments.

5. CSC Architecture and Implementation

This section introduces a framework component focused on identifying and categorizing S&P attributes [50] in CC services. It explores the use of these attributes to evaluate and compare cloud services from stakeholders' perspectives.
This paper identifies 25 attributes for inclusion in the CSP Catalog. We accompanied each attribute with considerations (i.e., polar questions designed to help consumers evaluate the level of S&P provided). These considerations support informed decision-making when comparing cloud service providers. Table 2 lists these attributes, and the catalog is designed so that additional attributes can be identified and included over time.
A cloud service offered by a cloud service provider (CSP) consists of a set of S&P attributes designed to secure services and mitigate threats. These attributes define the S&P measures available to consumers. CSPs may provide multiple options for the same attribute (e.g., single-factor or multi-factor authentication) or allow consumers to integrate third-party solutions. When selecting cloud services, consumers often face the challenge of determining which S&P attributes are essential and evaluating the level of security each attribute provides.
This work identifies 25 key S&P attributes relevant to the three main cloud service models—SaaS, PaaS, and IaaS. It outlines critical aspects of each attribute, referred to as “considerations,” that consumers should evaluate when researching CSPs. These considerations include over 200 polar (Yes/No) questions using the Goal–Question–Metric (GQM) approach [51]. This methodology helps consumers to assess whether their S&P goals align with the services offered by a CSP. Consumers can consult CSP websites to gather and log information about security, privacy, and service-level policies to answer these questions and determine if they meet their S&P requirements.
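As a rough illustration of the GQM structure, the sketch below pairs one attribute (the goal) with a few polar questions; the question wording is paraphrased for illustration rather than quoted from the catalog's 200-plus questions.

```python
from dataclasses import dataclass, field

@dataclass
class GQMAttribute:
    """GQM mapping: the attribute is the Goal, its considerations are the
    Questions, and the logged Yes/No answers later yield the Metric."""
    goal: str
    considerations: list[str] = field(default_factory=list)

# Hypothetical paraphrases of catalog considerations for one attribute.
encryption = GQMAttribute(
    goal="Encryption",
    considerations=[
        "Does the CSP encrypt data at rest?",
        "Does the CSP encrypt data in transit?",
        "Does the CSP support customer-managed keys?",
    ],
)

# Answers collected from a CSP's published policies feed the metric step
# (see the scoring sketch in the Empirical Evaluation of CSC subsection).
```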
We streamlined this process by creating an online tool (developed using PHP/MySQL) [20,52]. The tool includes a list of attributes and corresponding considerations, allowing users to save their evaluations, view results through informative charts, and create a comprehensive service catalog. This catalog benefits future consumers by enabling them to compare and select CSPs based on their specific needs. The tool also supports evaluation across multiple CSPs to facilitate informed decision-making. Figure 5 demonstrates the tool’s main interface and functionality, showcasing how users can interact with attribute details and view scored results.
The tool classifies the identified attributes to reveal additional information to stakeholders. These classification criteria, also encoded in the sketch that follows the list, are defined as follows:
  • Tangibility: shows the attribute’s tangible nature. Tangible attributes are composed of an algorithm, instrument, etc., measured in terms of use and cost (e.g., backup, encryption, etc.). Intangible ones deal with organizational and behavioral measures (e.g., insider trust).
  • Service Model: shows the attribute’s applicability to a cloud service model (i.e., SaaS, PaaS, or IaaS attributes).
  • Functionality: shows the attribute’s functionality as detection (D), prevention (P), and incident response (IR).
  • Protectability: shows the type of S&P issue(s) from which an attribute can protect the cloud service. Protectability classes are client, interface, network, virtualization, governance, compliance, legal aspects, and data S&P issues. Refer to [8] for examples of S&P issues and types.
  • Default: by default, cloud services have monitors for service health. However, consumers can purchase advanced monitors at their own expense. This classification shows whether an attribute is included by default.
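One compact way to encode these classifications, which also supports the functionality- and protectability-specific scores introduced in Section 6, is sketched below; the field values shown for the sample attribute are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AttributeClassification:
    name: str
    tangible: bool            # tangible (algorithm/instrument) vs. organizational
    service_models: set[str]  # subset of {"SaaS", "PaaS", "IaaS"}
    functionality: set[str]   # subset of {"D", "P", "IR"}
    protectability: set[str]  # e.g., {"network", "data", "virtualization"}
    default: bool             # included with the service by default?

# Illustrative classification of one attribute (values are assumptions).
backup = AttributeClassification(
    name="Backup",
    tangible=True,
    service_models={"SaaS", "PaaS", "IaaS"},
    functionality={"IR"},
    protectability={"data"},
    default=False,
)
```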
Table 3. Attribute-based S&P assessment for CSP1 and CSP2: weights and scores. Scenario: (IaaS, System Admin, Public). SAS1 columns use system-assigned weights; SAS2 columns reflect user-defined weights.

| Attribute | $W_i^U$ | $W_i^S$ | CSP1 Score | CSP2 Score | SAS1 CSP1 | SAS1 CSP2 | SAS2 CSP1 | SAS2 CSP2 |
|---|---|---|---|---|---|---|---|---|
| (1) Encryption | 0.042 | 0.04 | 5.00 | 4.00 | 0.200 | 0.160 | 0.210 | 0.168 |
| (2) Backup | 0.050 | 0.04 | 5.83 | 4.17 | 0.233 | 0.167 | 0.292 | 0.209 |
| (3) Authentication and Identity Management | 0.042 | 0.04 | 4.28 | 5.71 | 0.171 | 0.228 | 0.180 | 0.240 |
| (4) Dedicated Hardware | 0.033 | 0.04 | 4.00 | 2.00 | 0.160 | 0.080 | 0.132 | 0.066 |
| (5) Data Isolation | 0.033 | 0.04 | 4.00 | 6.00 | 0.160 | 0.240 | 0.132 | 0.198 |
| (6) Disaster Recovery | 0.050 | 0.04 | 6.25 | 5.00 | 0.250 | 0.200 | 0.313 | 0.250 |
| (7) Hypervisor Security | 0.033 | 0.04 | 6.92 | 4.61 | 0.277 | 0.184 | 0.228 | 0.152 |
| (8) Client-Side Protection | 0.042 | 0.04 | 7.14 | 5.71 | 0.286 | 0.228 | 0.300 | 0.240 |
| (9) Service Monitoring | 0.042 | 0.04 | 6.25 | 8.75 | 0.250 | 0.350 | 0.263 | 0.368 |
| (10) Access Control and Customizable Security Profiles | 0.042 | 0.04 | 5.83 | 4.16 | 0.233 | 0.166 | 0.245 | 0.175 |
| (11) Secure Data Center Location | 0.033 | 0.04 | 5.71 | 8.57 | 0.228 | 0.343 | 0.188 | 0.283 |
| (12) Standards and Certifications | 0.042 | 0.04 | 10.00 | 8.89 | 0.400 | 0.356 | 0.420 | 0.373 |
| (13) Data Sanitization | 0.033 | 0.04 | 7.14 | 4.29 | 0.286 | 0.172 | 0.236 | 0.142 |
| (14) SLA Guarantee and Conformity | 0.042 | 0.04 | 4.00 | 3.33 | 0.160 | 0.133 | 0.168 | 0.140 |
| (15) Secure Scalability | 0.033 | 0.04 | 4.00 | 4.00 | 0.160 | 0.160 | 0.132 | 0.132 |
| (16) Secure Service Composition | 0.033 | 0.04 | 5.00 | 5.00 | 0.200 | 0.200 | 0.165 | 0.165 |
| (17) Software and Hardware Procurement | 0.033 | 0.04 | 3.33 | 5.00 | 0.133 | 0.200 | 0.110 | 0.165 |
| (18) Insider Trust | 0.033 | 0.04 | 3.75 | 3.12 | 0.150 | 0.125 | 0.124 | 0.103 |
| (19) Technology Change | 0.042 | 0.04 | 6.00 | 6.00 | 0.240 | 0.240 | 0.252 | 0.252 |
| (20) Service Self-healing | 0.042 | 0.04 | 6.00 | 6.00 | 0.240 | 0.240 | 0.252 | 0.252 |
| (21) Service Availability | 0.042 | 0.04 | 8.18 | 6.36 | 0.327 | 0.254 | 0.344 | 0.267 |
| (22) Risk Management | 0.042 | 0.04 | 6.00 | 3.00 | 0.240 | 0.120 | 0.252 | 0.126 |
| (23) Security Awareness | 0.042 | 0.04 | 8.88 | 5.55 | 0.355 | 0.222 | 0.373 | 0.233 |
| (24) Secure Networking | 0.042 | 0.04 | 8.75 | 8.50 | 0.350 | 0.340 | 0.368 | 0.357 |
| (25) Security Insurance | 0.050 | 0.04 | 3.33 | 3.33 | 0.133 | 0.133 | 0.167 | 0.167 |
| Average (SAS total) | | | | | 5.8228 | 5.242 | 5.843 | 5.221 |

Empirical Evaluation of CSC

We tested the CSP Catalog by evaluating publicly available information from two IaaS providers to assess their S&P features. Data were gathered from FAQ pages and other accessible sources, with each S&P attribute evaluated using a scoring system: 1 for a "Yes", −1 for a "No", and 0 if the information was unavailable. We normalized these scores to a scale of 1–10 to calculate an overall attribute score. The subsequent component introduced in this paper, CSSA, facilitates a systematic assessment and comparison of multiple CSPs across various dimensions. It enables a comprehensive evaluation of their S&P attributes from diverse perspectives.
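The scoring step can be sketched as follows; a linear rescaling of the mean raw score from [−1, 1] to [1, 10] is one plausible reading of the normalization, and the exact formula used by the tool may differ.

```python
def attribute_score(answers):
    """Normalize polar answers to a 1-10 attribute score.

    answers: list of +1 ("Yes"), -1 ("No"), or 0 (information unavailable).
    The mean raw score in [-1, 1] is rescaled linearly to [1, 10]; this
    particular mapping is an assumption, not the paper's exact formula.
    """
    if not answers:
        raise ValueError("an attribute needs at least one consideration")
    mean = sum(answers) / len(answers)
    return 1 + (mean + 1) / 2 * 9

# Example: 3 "Yes", 1 "No", and 1 unanswered consideration.
print(round(attribute_score([1, 1, 1, -1, 0]), 2))  # 7.3
```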
The CSP Catalog provides a structured tool to help consumers assess and compare CSPs based on their security needs. While CSPs may not voluntarily disclose detailed S&P data, tools like the CSP Catalog encourage transparency and competitiveness in the cloud industry.
The CSP Catalog provides a comprehensive list of S&P attributes designed to help cloud service consumers enhance the security of their cloud environments. To the best of our knowledge, it is the only published tool specifically developed to evaluate the degree of S&P offered by cloud services. Its primary objective is to equip current and prospective cloud consumers with clear and structured evaluation criteria, addressing their uncertainties and enabling informed decision-making.

6. CSSA Architecture and Implementation

The primary purpose of the Cloud Service Security Assessor (CSSA) is to provide a structured evaluation of the S&P features applied to a CC model. Each CC model consists of a service model, a stakeholder, and a deployment model, so every stakeholder requires a tailored list of recommended S&P attributes, resulting in distinct assessments for each scenario. Before conducting these assessments, it is essential to define the goals of the evaluation.
Using the GQM approach, metrics in the proposed framework are derived from S&P attributes. GQM is a widely used method for identifying metrics linked to system goals and is particularly effective in assessing security. For instance, a cloud service consumer seeking to evaluate the adequacy of S&P features in a cloud environment would pose specific questions to CSPs. Each S&P attribute serves as a goal, supported by questions designed to validate and measure the attribute. The answers help to quantify the S&P of the service provided by the CSP.
The CSSA process begins with inputs, such as a list of recommended S&P attributes generated by the CSSR component, serving as a baseline for creating security metrics for CSP services, as in Figure 6. This input provides a unified view of the S&P features available across providers. Additionally, CSSA incorporates another input from the CSP Catalog, a repository of service descriptions detailing the S&P attributes and service-level policies offered by CSPs. Users can collect the required information directly from the provider’s website if the CSP Catalog lacks specific service details. The collected data help evaluate the degree of S&P for each attribute, summing the results to quantify the overall S&P. This process is widely used and known as multiple criteria decision-making (MCDM). Figure 7 depicts how considerations guide the assessment of S&P attributes across providers, ensuring a comprehensive evaluation process.
Stakeholders assess considerations for each attribute, and the CSSA calculates the cumulative security level as the Service Assessment Score (SAS). The SAS formula is as follows:
$$SAS = \sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot Score_i$$
where $W_i^U$ is the user-defined weight, $W_i^S$ is the system-defined weight, and $Score_i$ is the normalized value of the $i$th attribute based on stakeholder input.
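One way to transcribe this formula into Python is sketched below; the weights and scores are hypothetical inputs, not the Table 3 values.

```python
def sas(user_weights, system_weights, scores):
    """Service Assessment Score: SAS = sum_i W_i^U * W_i^S * Score_i."""
    if not (len(user_weights) == len(system_weights) == len(scores)):
        raise ValueError("weight and score lists must align per attribute")
    return sum(wu * ws * s
               for wu, ws, s in zip(user_weights, system_weights, scores))

# Hypothetical three-attribute example (weights and scores are assumptions).
W_U = [0.4, 0.35, 0.25]   # user-defined importance
W_S = [0.3, 0.3, 0.4]     # system-defined importance
SCORES = [7.3, 5.0, 8.2]  # normalized attribute scores (1-10)
print(round(sas(W_U, W_S, SCORES), 3))  # 2.221
```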
Derived metrics provide more granular assessments. The Service Assessment Score by Functionality, $SAS_{Func}$, evaluates attributes classified by their functionality:
$$SAS_{Func} = \sum_{i=1}^{n} W_{Func_i}^U \cdot W_{Func_i}^S \cdot Score_{Func_i}$$
Similarly, the Service Assessment Score by Protectability, $SAS_{Prot}$, measures protection against specific categories of security issues:
$$SAS_{Prot} = \sum_{i=1}^{n} W_{Prot_i}^U \cdot W_{Prot_i}^S \cdot Score_{Prot_i}$$
These classifications allow stakeholders to evaluate security attributes by their functionality and protectability against defined categories of security threats. This structured, quantitative approach ensures a comprehensive understanding of cloud service security, enabling stakeholders to prioritize and address critical aspects effectively.
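The derived metrics are the same weighted sum restricted to one class of attributes. A sketch follows, assuming each attribute record carries the functionality and protectability tags from the classification sketch in Section 5; all records shown are hypothetical.

```python
def sas_by_class(attributes, key, label):
    """Sum W^U * W^S * Score over attributes tagged with the given class.

    attributes: list of dicts with 'wu', 'ws', 'score', and tag-set fields
    such as 'functionality' or 'protectability' (hypothetical schema).
    """
    return sum(a["wu"] * a["ws"] * a["score"]
               for a in attributes if label in a[key])

# Hypothetical records combining weights, scores, and classification tags.
attrs = [
    {"name": "Encryption", "wu": 0.4, "ws": 0.3, "score": 7.3,
     "functionality": {"P"}, "protectability": {"data"}},
    {"name": "Service Monitoring", "wu": 0.35, "ws": 0.3, "score": 5.0,
     "functionality": {"D"}, "protectability": {"network"}},
    {"name": "Backup", "wu": 0.25, "ws": 0.4, "score": 8.2,
     "functionality": {"IR"}, "protectability": {"data"}},
]

print(round(sas_by_class(attrs, "functionality", "P"), 3))      # 0.876
print(round(sas_by_class(attrs, "protectability", "data"), 3))  # 1.696
```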

Empirical Evaluation of CSSA

To evaluate the CSSA framework, we selected the CC consumption model (IaaS, system administrator, and public) previously used. S&P attributes necessary for this scenario appear in the first column of Table 3. We calculated the Service Assessment Score (SAS) for two cloud service providers (CSPs) in two iterations. In the first iteration, we assumed that users had no specific preferences regarding the importance (weights) of attributes, and we used the system-assigned weights W i   S . This approach treats all attributes as equally important. Looking forward, CSSA could be enhanced to assign weights based on rankings from published S&P incident reports and guidelines, such as those by ENISA, CSA’s Notorious Nine, and NIST. In the second iteration, we computed SAS by incorporating user-defined weights, reflecting the user’s preferences for attribute importance.
After identifying the required S&P attributes using the CSSR tool, we selected potential cloud services, with two real CSPs chosen for this study. Information from their websites, service-level agreements (SLAs), and FAQs helped answer attribute-specific questions within the CSP Catalog, as described in Section 5. We calculated SAS values for CSP1 and CSP2 using the SAS equation above and present them in Table 3. SAS1 scores use system-assigned weights $W_i^S$, while SAS2 scores reflect user-defined weights.
In addition to computing SAS, CSSA generates visual aids to support decision-making. Figure 8a compares S&P assessments for individual attributes across the two CSPs, highlighting that CSP2 outperforms CSP1 in five key areas: authentication and identity management, data isolation, service monitoring, access control with customized profiles, and insider trust. Figure 8b allows CSSA users to compare CSP1 and CSP2 based on the functionality of their S&P attributes, such as detection, prevention, and incident response. In this comparison, CSP1 demonstrates superior performance over CSP2. Also, CSSA offers various perspectives to support decision-making, including classifications based on attribute-associated fees, attribute functionality, and default versus non-default attributes.

7. Analytical Evaluation of the Framework

This section comprehensively evaluates the framework components, including CSSR, CSC, and CSSA. This evaluation validates the framework’s correctness, consistency, and applicability using real-world examples, empirical testing, and formal analytical methods.

7.1. Validation of CSSR Completeness and Coverage

The correctness of the CSSR is validated through real-world examples of S&P incidents, demonstrating how CSSR could have recommended essential S&P attributes to prevent such events.
In 2014, Code Spaces, a code-hosting and software collaboration platform, suffered a DDoS attack that served as a diversion for system infiltration, ultimately leading to the company’s shutdown [53]. The absence of disaster recovery and backup mechanisms was a critical vulnerability. CSSR would have identified Code Spaces as a public IaaS consumer and recommended implementing disaster recovery and backup attributes to meet minimum S&P requirements.
In 2016, a large-scale DDoS attack using the Mirai malware targeted Dyn, a DNS provider, disrupting numerous cloud services, including Netflix [54]. The attack exploited vulnerabilities in service availability and network infrastructure. CSSR would have advised stakeholders to adopt attributes such as service availability, self-healing capabilities, disaster recovery, and secure networking infrastructure to mitigate such risks.
In 2024, researchers identified a vulnerability in AWS’s Application Load Balancer (ALB) from customer implementation issues [55]. This misconfiguration allowed attackers to potentially bypass access controls and compromise web applications. The root cause was the inadequate configuration of authentication mechanisms within the ALB. CSSR would have recommended thorough security configuration reviews, proper implementation of authentication and authorization controls, and regular security assessments to prevent such vulnerabilities.
These scenarios underscore CSSR’s capability to recommend appropriate S&P features tailored to stakeholder needs, thereby reducing the complexity of selecting security attributes and preventing future incidents. By analyzing past failures, CSSR guides stakeholders toward more resilient and robust security strategies.

7.2. CSSR Correctness Validation

To measure the accuracy of CSSR, we utilized well-established metrics from artificial intelligence and expert rule-based systems. These metrics assess the system's ability to produce results close to the true value. Eighteen professional cybersecurity experts evaluated CSSR. Participants were split into six groups of three, and each group was assigned the following:
  • A specific cloud consumption scenario;
  • A list of potential S&P risks relevant to their scenario;
  • A list of 25 S&P attributes used by CSSR, along with detailed descriptions of each attribute.
Each group analyzed one stakeholder per service model, covering six distinct scenarios, as detailed in Table 4.
The groups followed a systematic approach to evaluate CSSR’s recommendations. First, each group identified two commercial cloud service providers (CSPs) offering free trials for their assigned scenario. They subscribed to the CSPs’ services and analyzed their S&P features using resources such as FAQ pages, live chat, and customer support. Next, the groups mapped the CSP-provided S&P features to the 25 attributes defined by CSSR, ensuring comprehensive analysis.
The results from this stage were categorized into four groups: (a) relevant attributes recommended by both CSSR and the study group, (b) irrelevant attributes recommended by CSSR but not by the group, (c) relevant attributes identified by the group but missed by CSSR, and (d) attributes not recommended by either CSSR or the study group. Table 5 summarizes these classifications.
These classifications help calculate accuracy, recall, and precision metrics to assess CSSR’s performance. Accuracy measured the proportion of correct recommendations to total recommendations, recall represented the proportion of relevant attributes retrieved by CSSR to the total relevant attributes, and precision reflected the proportion of relevant attributes retrieved to total retrieved attributes. The equations are as follows:
$$Accuracy = \frac{\text{number of successful recommendations}}{\text{total number of recommendations}} = \frac{a + d}{a + b + c + d}$$
$$Recall = \frac{a}{a + c}$$
$$Precision = \frac{a}{a + b}$$
These metrics provided valuable insights into CSSR’s ability to identify relevant attributes while minimizing irrelevant recommendations, demonstrating its overall effectiveness in the study.
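For concreteness, the step from the four result categories to the three metrics can be written directly; the tallies below are hypothetical, not the study groups' actual counts.

```python
def evaluation_metrics(a, b, c, d):
    """Accuracy, recall, and precision from the four result categories.

    a: relevant attributes recommended by both CSSR and the group
    b: irrelevant attributes recommended by CSSR only
    c: relevant attributes the group found but CSSR missed
    d: attributes recommended by neither
    """
    accuracy = (a + d) / (a + b + c + d)  # correct calls over all attributes
    recall = a / (a + c)
    precision = a / (a + b)
    return accuracy, recall, precision

# Hypothetical tallies over the 25-attribute list.
acc, rec, prec = evaluation_metrics(a=12, b=1, c=1, d=11)
print(f"accuracy={acc:.3f} recall={rec:.3f} precision={prec:.3f}")
# accuracy=0.920 recall=0.923 precision=0.923
```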
The results in Table 6 demonstrate CSSR’s high accuracy, averaging 96.9%, with recall at 96.7% and precision at 97.3%.
While CSSR performed strongly, there were discrepancies. For instance, Group 2 missed one relevant attribute and recommended one irrelevant attribute, reducing accuracy to 90%. Group 4 faced the most significant issue, missing two relevant attributes, resulting in a recall of 90.4%. Such issues highlight areas for improvement, which can be addressed by updating CSSR’s attribute database.
The study faced several limitations. Using graduate students as participants may introduce bias due to their limited expertise compared to seasoned professionals (a threat to conclusion validity). Inconsistencies in participants' effort during the experiment present an instrumentation threat. Also, the study covered only six scenarios, which may not generalize to all 20 potential scenarios. Variations in SaaS recommendations based on application types and domains were also not considered but will be addressed in future research. These results underscore CSSR's strong potential as a reliable recommender system for S&P attributes while identifying opportunities for refinement and expansion.

7.3. Weyuker Properties Analysis for CSC and CSSA Evaluation

Weyuker properties [56] are a set of nine axioms commonly used for validating and evaluating software complexity metrics, each representing a desired characteristic of effective metrics. These properties serve as a formal analytical approach and are applied to evaluate the CSSA. The framework ensures the reliability and usefulness of its security evaluations by demonstrating that CSSA assessments satisfy these properties.
CSSA's assessments are mapped to the Weyuker properties by treating service security assessment analogously to software complexity measurement. The decision to use Weyuker properties stems from their established rigor and widespread recognition, making them ideal for analyzing S&P assessments in cloud computing. The primary goal is to validate that CSSA assessments are consistent, accurate, and dependable.
Applying Weyuker properties ensures that CSSA consistently collects accurate data, alternate evaluation methods yield identical results, repeated assessments using the same method produce consistent scores, and there is no overcounting or undercounting of metrics. Ultimately, this approach verifies the high reliability of S&P measurements generated by CSSA. It inspires confidence in its ability to assess cloud service security effectively. The following sections thoroughly explore these desirable properties to affirm CSSA’s robustness and applicability.
This section begins our examination of the desirable properties for evaluating the consistency of the CSSA:
  • Language Property A: In any given scenario involving a service model, deployment model, and a stakeholder, non-zero coefficients will be assigned to the same attribute within a predefined set across all service configurations.
  • Language Property B: The size of the recommended attribute set for a compound service will be at least equal to the total size of the attribute sets recommended for each component service.
The following properties further define the evaluation criteria:
  • Property 1: Different configurations for satisfying a scenario will result in varying assessment values.
  • Property 2: An assessment score corresponds to a subset of configurations with values equal to or exceeding that score.
  • Property 3: Two scenario configurations may have the same assessment value.
  • Property 4: Even if two scenario configurations share the same attributes, their assessment values do not necessarily need to be identical.
For compound services:
  • Property 5: The attribute set of the compound service configuration is greater than or equal to the combined attribute sets of its component configurations.
  • Property 6: The assessment value of the compound scenario configuration cannot exceed the highest assessment value among its component configurations.
  • Property 7: Assessment values for a configuration will change if attribute priorities (e.g., usability, security, or both) are weighted differently.
  • Property 8: The assessment value will remain consistent for the same scenario configuration with identical priorities.
  • Property 9: In compound services where a scenario configuration spans multiple CSPs, one must consider an additional attribute for network security between CSPs, resulting in one more parameter in the assessment value.
These properties ensure that CSSA delivers consistent, reliable, and adaptable evaluations of S&P attributes for a wide range of scenarios and configurations.
The following shows the proof of each property as applied to the CSSA assessment of S&P:
(1)
Language Property A Proof:
What it means: The property asserts that the given scenario will be related to a unique set SA of attributes. Each of the ith service configuration’s options will have a set of attributes S A i for which there will be a non-zero score, as given by the stakeholder’s evaluation. Then, S A i S A .
Ideally, a given service configuration must have a score of “1” for all the attributes in S A . However, in practice, the configurations may lack one or more attributes, and the security configurations for those present may not be up to the mark.
Why it is true: Let the set of attributes prescribed = S A [as provided by the recommendation system] S a = a 0 , a 1 , a 2 , , a n , where n is the number of attributes prescribed, and m is the total list of attributes in our recommendation system, which is 25 at present.
n   m
a i is the ith attribute that is relevant for the security of the service in the given scenario.
For each of the jth service configurations, the stakeholders will evaluate by checking the scores for the attributes in S A in terms of the considerations. The boundary conditions for each attribute will obtain a value greater than zero and ideally equal to 1.
In the upper bound, every $a_j \in S_A$ also belongs to $S_{A_i}$ with $\mathrm{score}(a_j) > 0$, so $S_{A_i} = S_A$ and the inclusion holds. In the lower bound, a totally insecure service configuration has none of the relevant prescribed attributes scoring above zero, so $S_{A_i}$ is the empty set and $\emptyset \subseteq S_A$. In every case, therefore, $S_{A_i} \subseteq S_A$ is satisfied.
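To make the subset relation concrete, the following minimal Python sketch (with hypothetical attribute names and scores, not CSSA data) checks that a configuration’s non-zero-scored attributes form a subset of the prescribed set:

```python
# A minimal sketch of Language Property A: the attributes that score above
# zero in any configuration must be drawn from the prescribed set S_A.
# Attribute names and scores below are illustrative, not CSSA data.

prescribed = {"backup", "encryption", "authentication", "data_isolation"}  # S_A

def nonzero_attributes(scores: dict[str, float]) -> set[str]:
    """Return S_Ai: the attributes of a configuration with a score > 0."""
    return {attr for attr, value in scores.items() if value > 0}

config_scores = {"backup": 1.0, "encryption": 0.8, "authentication": 0.0}
s_ai = nonzero_attributes(config_scores)

# Upper bound: s_ai equals prescribed; lower bound: s_ai is the empty set.
assert s_ai <= prescribed, "Language Property A violated"
print(sorted(s_ai))  # ['backup', 'encryption']
```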
(2) Language Property B (Additive Property) Proof:
What it means: The property asserts that when a compound service is configured from two or more component services, the size of the attribute set recommended for the compound service is related to the sizes of the sets recommended for its components.
Let $S_{A_i}$ be the set of attributes recommended for the compound service, and let $S_{A_{i_0}}, S_{A_{i_1}}, S_{A_{i_2}}, \ldots, S_{A_{i_n}}$ be the attribute sets of the component services that make it up. Then $S_{A_i} \supseteq S_{A_{i_0}} \cup S_{A_{i_1}} \cup S_{A_{i_2}} \cup \cdots \cup S_{A_{i_n}}$, where $n$ is the number of component services making up the compound service. Compounding services does not reduce complexity: the attributes essential for securing each component service remain necessary to secure the compound service.
Why it is true: Let $S_{A_{i_j}} = \{a_{j_0}, a_{j_1}, a_{j_2}, \ldots, a_{j_k}\}$ be the attribute set of the $j$th component service, where $k+1$ is the number of attributes in $S_{A_{i_j}}$. Every attribute of every component service is also an attribute of the compound service; that is, each $a_{j_l} \in S_{A_{i_j}}$ satisfies $a_{j_l} \in S_{A_i}$.
When conjoining the component services to construct the compound service, intermediate communication and other previously unconsidered attributes may become relevant; the set of relevant attributes, however, will not shrink. Some attributes may be relevant to more than one component service; such repetitions merge during the union operation.
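The additive property can be sketched with hypothetical attribute sets: the compound service’s recommended set contains the union of its components’ sets and may grow when conjoining introduces new concerns:

```python
# A minimal sketch of Language Property B with illustrative attribute sets.
component_sets = [
    {"backup", "encryption"},          # component service 0
    {"encryption", "authentication"},  # component service 1 (overlap merges)
]
union_of_components = set().union(*component_sets)

# Conjoining may surface new attributes (e.g., secure service composition),
# but never removes any component's attributes.
compound_set = union_of_components | {"secure_service_composition"}

assert compound_set >= union_of_components                       # S_Ai contains the union
assert len(compound_set) >= max(len(s) for s in component_sets)  # size never shrinks
print(sorted(compound_set))
```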
(3) Property 1 Proof:
What it means: The assessment will not deliver the same value for all of the different service configurations. Although ties are theoretically possible, the service features offered by different CSPs provide varying degrees of security, which leads to different assessment scores across configurations; otherwise, the purpose of relative measurement would be defeated.
Why it is true: Each service configuration has one value for each consideration of each attribute, as populated by stakeholders. The cumulative scores will differ unless the evaluation values, combined with the user’s and the system’s priority weights, happen to yield an identical total, which is highly unlikely given the differences in CSP histories and user preferences. The property therefore holds except in rare ties, which can be broken using non-security considerations, such as economics, to make the selection.
(4) Property 2 Proof:
What it means: Given a particular assessment value, only a subset of service configurations will have values equal to or exceeding it. This crucial property has a practical use: by raising the threshold value, we can shortlist the service configurations and choose a preferred one.
Once we identify an acceptable subset of configurations satisfying the assessment value, other non-security considerations, like cost and extra functionality, can help us choose a service configuration.
Why it is true: Let $Score_i$ be the assessment score of the $i$th configuration, and let the scores of all configurations form the sequence $Score_0, Score_1, Score_2, \ldots, Score_n$. This sequence can be ordered in descending order as $Score_{i_0} \geq Score_{i_1} \geq Score_{i_2} \geq \cdots \geq Score_{i_n}$. Since the highest possible score is 1 and the lowest is 0, raising the threshold toward 1 (while keeping the resulting subsequence non-empty) shrinks the subsequence, and the result is always a subsequence of the original sequence. Ignoring order, the result is always a subset of the original set, as no new element (i.e., configuration) can be added.
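The shortlist operation reduces to a simple filter, sketched below with hypothetical configuration names and scores:

```python
# A minimal sketch of Property 2: raising the threshold shrinks the shortlist,
# and every shortlist is a subset of the original set of configurations.
scores = {"config_a": 0.92, "config_b": 0.78, "config_c": 0.92, "config_d": 0.61}

def shortlist(scores: dict[str, float], threshold: float) -> set[str]:
    """Return the configurations whose assessment value meets the threshold."""
    return {name for name, value in scores.items() if value >= threshold}

previous = set(scores)
for threshold in (0.6, 0.8, 0.9):
    current = shortlist(scores, threshold)
    assert current <= previous  # monotone shrinkage as the threshold rises
    previous = current
    print(threshold, sorted(current))
# 0.6 ['config_a', 'config_b', 'config_c', 'config_d']
# 0.8 ['config_a', 'config_c']
# 0.9 ['config_a', 'config_c']
```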
(5) Property 3 Proof:
What it means: Since the assessment score is determined by the evaluation scores together with the user’s and the system’s weights, two service configurations can have the same assessment score even when their evaluation scores differ.
Why it is true: Refer to the equation $SAS = \sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot Score_i(j)$, where $n$ is the number of attributes, $W_i^U$ is the user-provided weight for the $i$th attribute, $W_i^S$ is the system-provided weight for the $i$th attribute, and $Score_i(j)$ is the evaluation score, derived from the considerations, for the $i$th attribute in the $j$th service configuration.
Consider two service configurations, 0 and 1, with different evaluation scores but the same user and system weights. Their assessment scores are $\sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot Score_i(0)$ and $\sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot Score_i(1)$, and their difference is $\sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot (Score_i(0) - Score_i(1))$. Since the weights are non-zero, equality of the two scores requires $\sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot (Score_i(0) - Score_i(1)) = 0$. This holds in two cases: Case 1, when the difference in the score is zero for every attribute; and Case 2, when the non-zero weighted differences sum to zero. Since the evaluation scores are assumed to differ, Case 2 must apply. This result is plausible because configuration 0 may have a higher score for some attributes while configuration 1 has a higher score for others; when the positive and negative weighted differences cancel, the sum is 0.
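The cancellation case can be demonstrated numerically; the weights and scores below are hypothetical:

```python
# A minimal sketch of Property 3: two configurations with different evaluation
# scores can yield the same assessment value under identical weights.
def sas(user_w: list[float], sys_w: list[float], scores: list[float]) -> float:
    """Assessment score: SAS = sum_i W_i^U * W_i^S * Score_i(j)."""
    return sum(u * s * score for u, s, score in zip(user_w, sys_w, scores))

user_w = [0.5, 0.3, 0.2]     # user weights W^U (illustrative)
sys_w = [1.0, 1.0, 1.0]      # system weights W^S (illustrative)

scores_0 = [0.9, 0.5, 0.7]   # configuration 0
scores_1 = [0.8, 0.6, 0.8]   # configuration 1: differs on every attribute

# The weighted per-attribute differences (+0.05, -0.03, -0.02) cancel out.
assert abs(sas(user_w, sys_w, scores_0) - sas(user_w, sys_w, scores_1)) < 1e-9
print(sas(user_w, sys_w, scores_0))  # ~0.74 for both configurations
```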
(6) Property 4 Proof:
What it means: The assessment values of two configurations quantitatively differentiate them.
Why it is true: This fails only when Property 3 applies to the case under consideration; otherwise, it holds automatically.
(7) Property 5 Proof:
The size of the set of attributes of the compound service configuration is greater than or equal to the size of the set of attributes of each component service configuration.
What it means: When creating a compound service configuration, no attributes are removed, because each component service must still operate independently and securely.
Why it is true: Based on Language Property B, $S_{A_i} \supseteq S_{A_{i_0}} \cup S_{A_{i_1}} \cup S_{A_{i_2}} \cup \cdots \cup S_{A_{i_n}}$. During the union operation, no elements are removed from any set $S_{A_{i_j}}$, so $|S_{A_i}| \geq |S_{A_{i_j}}|$ for all $0 \leq j \leq n$; this is hence proved.
(8) Property 6 Proof:
What it means: “The strength of the chain is not greater than the strength of its weakest link.” For any attribute, the compound service configuration cannot score higher than the lowest assessment value that attribute receives among the component services. Security can still be compromised through attacks on a specific weak or vulnerable component, just as during its standalone operation.
Why it is true: Although several component services may share an attribute, the union operation that determines the compound service’s score for that attribute uses the lowest of the component scores.
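The weakest-link aggregation can be sketched as follows, with hypothetical component scores:

```python
# A minimal sketch of Property 6: for an attribute shared by several component
# services, the compound configuration takes the lowest (weakest-link) score.
component_scores = [
    {"encryption": 0.9, "backup": 0.7},              # component service 0
    {"encryption": 0.6, "service_monitoring": 0.8},  # component service 1
]

compound: dict[str, float] = {}
for scores in component_scores:
    for attr, value in scores.items():
        # Take the minimum where attributes overlap; adopt the value otherwise.
        compound[attr] = min(value, compound.get(attr, value))

print(compound)
# {'encryption': 0.6, 'backup': 0.7, 'service_monitoring': 0.8}
```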
(9) Property 7 Proof:
What it means: The final score for a particular service configuration depends on the weights assigned by the user and the system.
Case 1: When two users weight the attributes differently, the resulting values will differ. Consider a configuration with weak backup security used by two users, one with critical data and the other with less critical data. Data loss due to backup failure would affect each of them differently, and their scores reflect this difference through unequal user weights for this attribute.
Why it is true: The proof of Property 3 applies here, with the weights and the evaluation scores exchanging roles.
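The weight sensitivity can be illustrated with two hypothetical users evaluating the same configuration:

```python
# A minimal sketch of Property 7: identical evaluation scores produce different
# assessment values under different user priorities. Values are illustrative.
def sas(user_w: list[float], sys_w: list[float], scores: list[float]) -> float:
    return sum(u * s * score for u, s, score in zip(user_w, sys_w, scores))

sys_w = [1.0, 0.8]
scores = [0.4, 0.9]              # e.g., [backup, encryption] evaluation scores

critical_data_user = [0.7, 0.3]  # prioritizes backup heavily
casual_user = [0.2, 0.8]         # prioritizes backup lightly

print(round(sas(critical_data_user, sys_w, scores), 3))  # 0.496
print(round(sas(casual_user, sys_w, scores), 3))         # 0.656
```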
(10) Property 8 Proof:
What it means: If two stakeholders consider the same scenario, and the weights given by the user and the system are the same, then the resulting assessment scores will be the same. Over time, as the security features of a service and the exploitation of its vulnerabilities vary, the system’s weights vary; likewise, users with different intended operations evaluate attributes differently, yielding different user weights. When these parameters coincide, the assessment scores coincide as well.
Why it is true: Refer to the equation $SAS = \sum_{i=1}^{n} W_i^U \cdot W_i^S \cdot Score_i(j)$, where $j$ indexes the scenario configuration. When the parameters $W_i^U$, $W_i^S$, and $Score_i(j)$ are the same, the assessment score must be the same, as it is completely defined by these three parameters.
(11) Property 9 Proof:
What it means: Conjoining component services into a compound service configuration may introduce a new attribute arising from the communication between the components.
Why it is true: Let $S_{A_{i_0}}$ and $S_{A_{i_1}}$ be the attribute sets of two component services. As Language Property B states, the compound service’s attribute set $S_{A_i}$ is at least the union of $S_{A_{i_0}}$ and $S_{A_{i_1}}$; any new functionality facilitating the conjoining, such as network security between CSPs, is reflected in new attributes and hence in one additional parameter in the assessment.
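The extra parameter introduced by a cross-CSP composition can be sketched with hypothetical attributes:

```python
# A minimal sketch of Property 9: a compound configuration spanning two CSPs
# gains one extra attribute for inter-CSP network security, so the assessment
# sums over one more parameter. Names and values are illustrative.
csp1_attrs = {"encryption": 0.9, "backup": 0.8}
csp2_attrs = {"encryption": 0.7, "service_monitoring": 0.85}

compound = {**csp1_attrs, **csp2_attrs}
compound["encryption"] = min(csp1_attrs["encryption"], csp2_attrs["encryption"])

# The cross-provider link requires an attribute neither component needed alone.
compound["inter_csp_network_security"] = 0.75

assert len(compound) == len(set(csp1_attrs) | set(csp2_attrs)) + 1
print(sorted(compound))
```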
This Service Assessment Score approach, used by CSSA to evaluate the S&P of cloud services, thus satisfies the nine Weyuker properties, demonstrating the consistency and fairness of the approach in extracting the data for the assessment and computing the assessment score.

8. Conclusions

Cloud computing offers significant benefits such as cost reductions, improved scalability, and enhanced flexibility. However, security concerns remain a critical barrier to broader adoption. This research introduces a comprehensive framework to address these challenges by enabling stakeholders to identify and assess their security needs. The framework comprises three essential components: the Cloud Service Security Recommender (CSSR), the Cloud Service Catalog (CSC), and the Cloud Service Security Assessor (CSSA). Together, these tools educate users about security challenges, recommend tailored S&P attributes, and support informed decision-making when selecting cloud services.
CSSR employs a stakeholder-driven approach, shifting the focus from infrastructure-centric concerns to user-specific interactions and effectively recommending security attributes to mitigate risks. The Cloud Service Catalog (CSC) organizes 25 fundamental security attributes into an accessible format, aiding users in evaluating cloud services and assessing their security readiness. Finally, CSSA integrates these components into a user-friendly tool, enabling stakeholders to compare services based on goals such as maximizing security or minimizing costs. This framework enhances transparency and fosters healthy competition among CSPs, driving improvements in service quality.
While the framework effectively addresses critical security challenges, limitations remain. The diversity of SaaS applications, reliance on publicly available CSP data, and inconsistencies in attribute naming create challenges. Additionally, CSSR’s reliance on post-incident updates to address new attack taxonomies highlights areas for enhancement. Future work will focus on refining CSSR’s taxonomy, incorporating emerging security attributes, and expanding CSSA’s evaluation parameters.
In conclusion, this research supplies stakeholders with practical tools to navigate cloud security complexities, fostering trust and enabling more informed decision-making. By addressing key security concerns, the proposed framework contributes to a more robust cloud computing ecosystem and supports a broader adoption of cloud services.

Author Contributions

Conceptualization, A.A. and S.S.; methodology, A.A., S.S. and F.A.; software, A.A.; validation, A.A. and V.S.; formal analysis A.A. and V.S.; investigation, A.A., F.A., V.S. and S.S.; resources, A.A.; data curation, A.A.; writing—original draft preparation, A.A.; writing—review and editing, A.A., F.A. and F.S.; visualization, A.A.; supervision, S.S. and F.S.; project administration, A.A.; funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Institutional Review Board approval was waived for this study as it involved minimal risk and did not include any physical interaction or intervention with participants. The study solely involved voluntary participation in a questionnaire-based survey.

Informed Consent Statement

Participant consent was waived due to the anonymous nature of the survey and the minimal risk involved. Participants were informed about the purpose of the study and their voluntary participation. No identifiable personal data were collected.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors thank the anonymous reviewers for their insightful comments that helped to improve the quality of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hussein, A.E.A. Pragmatic Framework for Cloud Security Assessment: A Stakeholder-Oriented and Taxonomical Approach. Ph.D. Thesis, University of Memphis, Memphis, TN, USA, 2017. [Google Scholar]
  2. Biggest Data Breaches in US History (Updated 2025)|UpGuard. Available online: https://www.upguard.com/blog/biggest-data-breaches-us (accessed on 26 January 2025).
  3. Dyn Analysis Summary of Friday October 21 Attack|Dyn Blog. Available online: https://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/ (accessed on 26 January 2025).
  4. Arkin, B. Important Customer Security Announcement. Available online: https://blog.adobe.com/en/publish/2013/10/03/important-customer-security-announcement (accessed on 26 January 2025).
  5. Salcedo, H. Google Drive, Dropbox, Box and iCloud Reach the Top 5 Cloud Storage Security Breaches List. Available online: https://web.archive.org/web/20160304081904/https://psg.hitachi-solutions.com/credeon/blog/google-drive-dropbox-box-and-icloud-reach-the-top-5-cloud-storage-security-breaches-list (accessed on 26 January 2025).
  6. Yasani, R. Massive Cyber Attack on AWS Cloud Environment with 230 Million Unique Targets. Available online: https://cybersecuritynews.com/massive-aws-cyber-attack-230-million-environments/ (accessed on 23 February 2025).
  7. Ex-Amazon Employee Convicted Over Data Breach of 100 Million CapitalOne Customers|TechCrunch. Available online: https://techcrunch.com/2022/06/21/amazon-paige-thompson-capitalone-breach/ (accessed on 23 February 2025).
  8. NIST|National Institute of Standards and Technology. Available online: https://www.nist.gov/national-institute-standards-and-technology (accessed on 26 January 2025).
  9. Mell, P.; Grance, T. The NIST Definition of Cloud Computing; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011.
  10. Badger, L.; Bernstein, D.; Bohn, R.; Vaulx, F.D.; Hogan, M.; Mao, J.; Messina, J.; Mills, K.; Sokol, A.; Tong, J.; et al. High-Priority Requirements to Further USG Agency Cloud Computing Adoption; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011.
  11. Jansen, W.; Grance, T. Sp 800-144: Guidelines on Security and Privacy in Public Cloud Computing; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011.
  12. Wang, S.; Zheng, Z.; Sun, Q.; Zou, H.; Yang, F. Cloud model for service selection. In Proceedings of the 2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Shanghai, China, 10–15 April 2011; pp. 666–671. [Google Scholar]
  13. Top Threats to Cloud Computing 2024|CSA. Available online: https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-2024 (accessed on 23 February 2025).
  14. Lord Kelvin Quotations. Available online: http://zapatopi.net/kelvin/quotes/ (accessed on 26 January 2025).
  15. Encryption Can Make Cloud Computing Safer. Available online: https://www.usatoday.com/story/cybertruth/2013/05/31/cloud-security-hacking-encryption/2375689/ (accessed on 26 January 2025).
  16. Basu, S. 68 Cloud Security Statistics to Be Aware of in 2025. Available online: https://www.getastra.com/blog/security-audit/cloud-security-statistics/ (accessed on 23 February 2025).
  17. 2023 Cloud Security Report Shows Many Data Breaches—Press Release. Available online: https://cpl.thalesgroup.com/about-us/newsroom/2023-cloud-security-cyberattacks-data-breaches-press-release (accessed on 23 February 2025).
  18. The State of Cloud Data Security in 2023. Available online: https://www.paloaltonetworks.com/resources/research/data-security-2023-report (accessed on 23 February 2025).
  19. Cloud Security Alliance Survey Finds 77% of Respondents Feel. Available online: https://cloudsecurityalliance.org/press-releases/2024/02/14/cloud-security-alliance-survey-finds-77-of-respondents-feel-unprepared-to-deal-with-security-threats (accessed on 23 February 2025).
  20. Abuhussein, A.; Shiva, S.; Sheldon, F.T. CSSR: Cloud Services Security Recommender. In Proceedings of the 2016 IEEE World Congress on Services (SERVICES), San Francisco, CA, USA, 27 June–2 July 2016; pp. 48–55. [Google Scholar]
  21. Jouini, M.; Aissa, A.B.; Rabai, L.B.A.; Mili, A. Towards quantitative measures of Information Security: A Cloud Computing case study. Int. J. Cyber-Secur. Digit. Forensics IJCSDF 2012, 1, 248–262. [Google Scholar]
  22. Definition of METRIC. Available online: https://www.merriam-webster.com/dictionary/metric (accessed on 23 May 2017).
  23. Jaquith, A. Security Metrics: Replacing Fear, Uncertainty, and Doubt; Addison-Wesley: Upper Saddle River, NJ, USA, 2007; ISBN 978-0-321-34998-9. [Google Scholar]
  24. Radack, S. Security metrics: Measurements to support the continued development of information security technology. Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, White Paper 2010. Available online: https://csrc.nist.gov/files/pubs/shared/itlb/itlbul2010-01.pdf (accessed on 23 February 2025).
  25. Wong, C. Security Metrics, a Beginner’s Guide, 1st ed.; McGraw-Hill Education: New York, NY, USA, 2011; ISBN 978-0-07-174400-3. [Google Scholar]
  26. Pauley, W. Cloud Provider Transparency: An Empirical Evaluation. IEEE Secur. Priv. 2010, 8, 32–39. [Google Scholar] [CrossRef]
  27. Ristov, S.; Gusev, M.; Kostoska, M. A new methodology for security evaluation in cloud computing. In Proceedings of the 2012 35th International Convention MIPRO, Opatija, Croatia, 21–25 May 2012; pp. 1484–1489. [Google Scholar]
  28. TPC-Homepage V5. Available online: http://www.tpc.org/ (accessed on 24 May 2017).
  29. Kossmann, D.; Kraska, T.; Loesing, S. An Evaluation of Alternative Architectures for Transaction Processing in the Cloud. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, Indianapolis, IN, USA, 6–11 June 2010; ACM: New York, NY, USA, 2010; pp. 579–590. [Google Scholar]
  30. Barker, S.K.; Shenoy, P. Empirical Evaluation of Latency-sensitive Application Performance in the Cloud. In Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems, Phoenix, AZ, USA, 22–23 February 2010; ACM: New York, NY, USA, 2010; pp. 35–46. [Google Scholar]
  31. Zeng, W.; Zhao, Y.; Zeng, J. Cloud Service and Service Selection Algorithm Research. In Proceedings of the First ACM/SIGEVO Summit on Genetic and Evolutionary Computation, Shanghai, China, 12–14 June 2009; ACM: New York, NY, USA, 2009; pp. 1045–1048. [Google Scholar]
  32. Rehman, Z.U.; Hussain, F.K.; Hussain, O.K. Towards Multi-criteria Cloud Service Selection. In Proceedings of the 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Seoul, Republic of Korea, 30 June–2 July 2011; pp. 44–48. [Google Scholar]
  33. Han, S.-M.; Hassan, M.M.; Yoon, C.-W.; Huh, E.-N. Efficient Service Recommendation System for Cloud Computing Market. In Proceedings of the 2Nd International Conference on Interaction Sciences: Information Technology, Culture and Human, Seoul, Republic of Korea, 24–26 November 2009; ACM: New York, NY, USA, 2009; pp. 839–845. [Google Scholar]
  34. Ruiz-Alvarez, A.; Humphrey, M. An Automated Approach to Cloud Storage Service Selection. In Proceedings of the 2Nd International Workshop on Scientific Cloud Computing, San Jose, CA, USA, 8 June 2011; ACM: New York, NY, USA, 2011; pp. 39–48. [Google Scholar]
  35. WS-DREAM: Towards Open Datasets and Source Code for Web Service Research. Available online: http://wsdream.github.io/ (accessed on 24 May 2017).
  36. Alnemr, R.; Pearson, S.; Leenes, R.; Mhungu, R. COAT: Cloud Offerings Advisory Tool. In Proceedings of the 2014 IEEE 6th International Conference on Cloud Computing Technology and Science, Singapore, 15–18 December 2014; pp. 95–100. [Google Scholar]
  37. Lei, C.; Dai, H.; Yu, Z.; Li, R. A service recommendation algorithm with the transfer learning based matrix factorization to improve cloud security. Inf. Sci. 2020, 513, 98–111. [Google Scholar] [CrossRef]
  38. Modic, J.; Trapero, R.; Taha, A.; Luna, J.; Stopar, M.; Suri, N. Novel efficient techniques for real-time cloud security assessment. Comput. Secur. 2016, 62, 1–18. [Google Scholar] [CrossRef]
  39. ISO/IEC 27001:2022; Information Security, Cybersecurity and Privacy Protection—Information Security Management Systems—Requirements. International Organization for Standardization: Geneva, Switzerland, 2022.
  40. Automated Security and Compliance. Available online: https://www.cloudpassage.com/ (accessed on 6 June 2017).
  41. Your Business is in the Clouds. Protect what Matters with CipherCloud. Available online: https://cpl.thalesgroup.com/partners/ciphercloud (accessed on 2 April 2025).
  42. CASB and Cloud Cybersecurity Solutions|Cisco Cloudlock. Available online: https://www.cisco.com/site/us/en/products/security/cloudlock/index.html (accessed on 2 April 2025).
  43. Cloud Controls Matrix: Cloud Security Alliance. Available online: https://cloudsecurityalliance.org/group/cloud-controls-matrix/ (accessed on 26 January 2025).
  44. Morrill, D. CloudPassage Cloud Security Survey. Available online: https://web.archive.org/web/20220804124303/https://www.cloudave.com/25217/cloudpassage-cloud-security-survey/ (accessed on 26 January 2025).
  45. Bauer, D.S.; Koblentz, M.E. NIDX—An expert system for real-time network intrusion detection. In Proceedings of the 1988 Computer Networking Symposium, Washington, DC, USA, 1988; pp. 98–106. [Google Scholar]
  46. Jackson, K.; DuBois, D.; Stallings, C. An Expert System Application for Network Intrusion Detection. In Proceedings of the National Computer Security Conference, Washington, DC, USA, 1–4 October 1991. [Google Scholar]
  47. Gruschka, N.; Jensen, M. Attack Surfaces: A Taxonomy for Attacks on Cloud Services. In Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing, Miami, FL, USA, 5–10 July 2010; pp. 276–279. [Google Scholar]
  48. Simmons, C.; Ellis, C.; Shiva, S.; Dasgupta, D.; Wu, Q. AVOIDIT: A Cyber Attack Taxonomy; Technical Report; University of Memphis: Memphis, TN, USA, 2014. [Google Scholar]
  49. Cloud Security Alliance Releases Top Threats to Cloud. Available online: https://cloudsecurityalliance.org/press-releases/2024/08/06/cloud-security-alliance-releases-top-threats-to-cloud-computing-2024-report (accessed on 23 December 2024).
  50. Joint Task Force Transformation Initiative. SP 800-53 Rev. 3: Recommended Security Controls for Federal Information Systems and Organizations; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2009. Available online: https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-53r3.pdf (accessed on 2 April 2025).
  51. van Solingen, R.; Basili, V.; Caldiera, G.; Rombach, H.D. Goal Question Metric (GQM) Approach. In Encyclopedia of Software Engineering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2002; ISBN 978-0-471-02895-6. [Google Scholar]
  52. Abuhussein, A.; Alsubaei, F.; Shiva, S.; Sheldon, F.T. Evaluating Security and Privacy in Cloud Services. In Proceedings of the 2016 IEEE 40th Annual Computer Software and Applications Conference (COMPSAC), Atlanta, GA, USA, 10–14 June 2016; Volume 1, pp. 683–686. [Google Scholar]
  53. Code Spaces Forced to Close Its Doors After Security Incident|CSO Online. Available online: http://www.csoonline.com/article/2365062/disaster-recovery/code-spaces-forced-to-close-its-doors-after-security-incident.html (accessed on 26 January 2025).
  54. Moss, S. Major DDoS Attack on Dyn Disrupts AWS, Twitter, Spotify and More. Available online: http://www.datacenterdynamics.com/content-tracks/security-risk/major-ddos-attack-on-dyn-disrupts-aws-twitter-spotify-and-more/97176.fullarticle (accessed on 26 January 2025).
  55. The Hunt for ALBeast: A Technical Walkthrough|Miggo. Available online: https://www.miggo.io/resources/uncovering-auth-vulnerability-in-aws-alb-albeast (accessed on 24 February 2025).
  56. Weyuker, E.J. Evaluating Software Complexity Measures. IEEE Trans Softw. Eng. 1988, 14, 1357–1365. [Google Scholar] [CrossRef]
Figure 1. Taxonomy of cloud computing consumption scenarios.
Figure 2. Framework components.
Figure 3. CSSR’s three taxonomies: (A), (B), and (C).
Figure 4. CSSR sample screenshots: (a) CSSR landing page shows client input; (b) CSSR represents S&P issues (Attack Vector) and recommended attributes (Defense).
Figure 5. Cloud Service Catalog (CSC) screenshots. (a) CSC screenshot showing example evaluation of S&P attributes for IaaS CSP; (b) CSC showing encryption attribute considerations.
Figure 6. Framework illustrating CSSR, CSC, and CSP with user interaction, inputs, and outputs.
Figure 7. Framework for evaluating S&P attributes across CSPs using considerations, scores, and weights.
Figure 8. Sample CSC output screenshots: (a) Compares CSP1 and CSP2 by the degree of S&P; (b) CSP1 versus CSP2 Cloud Service Protectability.
Table 1. Summary of literature on security recommenders, service selection, and security assessment.

| Research/Area | Key Contributions | Metrics/Models Used | Objective | Citation |
|---|---|---|---|---|
| Cyber and Physical Security Assessment | Security model using cybersecurity metrics to mitigate cloud threats. | MTTF, MTBF, MTTE, MTTD, MFC | Enhance understanding and mitigation of cloud threats. | [21] |
| Transparency Evaluation | Empirical evaluation for cloud provider transparency. | Scorecard for security, privacy, SLAs | Help businesses assess cloud provider transparency. | [22] |
| Security Comparison Model | Compared security in on-premises vs. cloud solutions. | ISO 27001:2005 | Compare security across different deployments. | [23] |
| QoS and Cloud Service Prioritization | SMICloud framework for QoS measurement and service prioritization. | CSMIC SMI, AHP | Provide comparative evaluation and selection of cloud services. | [24] |
| Open Source Cloud Security Assessment | Security threat analysis in multi-tenant clouds, focusing on OpenStack. | Nessus 5, CVSS | Identify vulnerabilities and advocate for network segregation. | [25] |
| Cloud Service Metrics | NIST’s draft on developing cloud service metrics. | Service agreements, service measurement | Offer a framework for measuring cloud services. | [26] |
| Service Selection and Recommendation | Frameworks for cloud service selection. | Fuzzy system, performance scores | Guide customers through the service selection process. | [27] |
| Cloud Transaction Processing Evaluation | Evaluated transaction processing in the cloud. | TPC-W Benchmark [28] | Measure and compare the performance and cost of cloud services, aiding in the selection process. | [29] |
| Latency-sensitive Cloud Applications Evaluation | Efficiency evaluation for latency-sensitive applications on cloud platforms. | Performance interference analysis | Assess performance impact from shared cloud resources. | [30] |
| Cloud Service Selection Based on Cost | Two-step algorithm for service selection. | Two-step algorithm | Help consumers select the best service based on cost and gains. | [31] |
| Multi-criteria Cloud Service Selection | Multi-criteria methodology for service selection. | Multi-criteria methodology | Perform detailed comparison and selection process for services. | [32] |
| Cloud Service Recommendation Framework | Recommender system for matching services with user requirements. | Recommender system, QoS analysis | Assist users in selecting optimal cloud services. | [33] |
| Automated Cloud Storage Selection | Automated approach for selecting cloud storage services. | XML schema, performance and cost estimates | Automate cloud storage service selection for efficiency. | [34] |
| QoS-aware Service Selection | Efficient selection based on QoS using mixed integer programming. | Mixed integer programming, dataset | Optimize service selection based on QoS attributes. | [35] |
| Cloud Offerings Advisory Tool (COAT) | Cloud brokering system comparing service offerings. | Privacy and security requirements | Enable selection based on a comprehensive attribute set. | [36] |
| Transfer Learning for Recommendation Systems | Framework leveraging transfer learning and LDA for recommendations. | Transfer learning, LDA, word2vec | Evaluate transfer learning in addressing data scarcity in recommendations. | [37] |
| Cloud Security Assessment Methodologies | Developed two methodologies for real-time security assessment: fQHP and MIP. | fQHP, MIP | Enable rapid assessment of CSPs by CSCs. | [38] |
Table 2. Classification and sample considerations for CSC-identified S&P attributes. Each attribute is classified by service type (SaaS, PaaS, IaaS), by classification (Tangible, Default, Fee, Service, Function), and by protectability (Detect, Prevent, Response) across eight protection categories 1; a sample consideration question is shown for each attribute.

| No | Attribute | Sample Consideration Question |
|---|---|---|
| 1 | Backup | Are backups encrypted and stored in secure facilities? |
| 2 | Encryption | Is data encrypted by default during transfer, storage, and processing? |
| 3 | Authentication and Identity Management | Does the CSP enforce multi-factor authentication (MFA)? |
| 4 | Dedicated Hardware | Are physical servers dedicated and managed for optimal security and availability? |
| 5 | Data Isolation | Is tenant data isolated through encryption during transit and storage? |
| 6 | Disaster Recovery | Is the disaster recovery process automated and compliant with SLAs? |
| 7 | Virtualization Security | Does the hypervisor protect against side-channel and VM-jumping attacks? |
| 8 | Client-Side Protection | Does the CSP enforce secure communication protocols like HTTPS and VPN? |
| 9 | Service Monitoring | Are threat notifications provided in real time along with mitigation guidance? |
| 10 | Access Control and Customizable Profiles | Are access control lists (ACLs) fully configurable by customers? |
| 11 | Datacenter Location | Does the CSP obscure exact data center locations to prevent unauthorized discovery? |
| 12 | Security Standards and Certification | Does the CSP possess relevant certifications like ISO 27001 or SOC 2? |
| 13 | Media Sanitization | Is customer data permanently deleted upon service termination? |
| 14 | SLA Guarantee and Conformity | Is the SLA customizable to accommodate unique business requirements? |
| 15 | Secure Scalability | Does the CSP provide additional security measures during scaling operations? |
| 16 | Secure Service Composition | Does the CSP ensure secure integration of multi-vendor services? |
| 17 | Software and Hardware Procurement | Does the CSP verify that all procured components meet security standards? |
| 18 | Insider Trust | Are CSP employees required to sign strict confidentiality agreements? |
| 19 | Technology Change | Can the CSP adapt security measures to emerging technologies without disrupting services? |
| 20 | Service Self-Healing | Does the CSP provide automated self-healing mechanisms for service disruptions? |
| 21 | Service Availability | Does the CSP ensure equitable resource distribution among tenants? |
| 22 | Risk Management | Are risk management plans regularly updated and audited? |
| 23 | Security Awareness | Does the CSP provide detailed recommendations for improving customer security? |
| 24 | Secure Networking Infrastructure | Does the CSP prevent unauthorized access to virtual and physical networks? |
| 25 | Security Insurance | Does the CSP offer customizable insurance plans for different use cases? |

1 Protectability: protects the cloud environment from the following: 1 = Client Security, 2 = Interface Issues, 3 = Network Security, 4 = Virtualization Security, 5 = Governance Security, 6 = Compliance Security, 7 = Legal Issues, 8 = Data Security.
Table 4. Study group assignment.

| Group | Tasks |
|---|---|
| Group 1 | Scenario 1 (SaaS, End User, Public) |
| Group 2 | Scenario 2 (SaaS, End User, Private) |
| Group 3 | Scenario 3 (PaaS, App Developer, Public) |
| Group 4 | Scenario 4 (PaaS, App Developer, Private) |
| Group 5 | Scenario 5 (IaaS, System Admin, Public) |
| Group 6 | Scenario 6 (IaaS, System Admin, Private) |
Table 5. CSSR recommended attributes classification.

|  | Relevant | Irrelevant |
|---|---|---|
| Recommended | a | b |
| Not Recommended | c | d |
Table 6. Accuracy of CSSR recommendations.

| Group | a | b | c | d | Accuracy | Recall | Precision |
|---|---|---|---|---|---|---|---|
| Group 1 | 20 | 0 | 1 | 4 | 100% | 95.2% | 100% |
| Group 2 | 18 | 1 | 1 | 6 | 90% | 94.7% | 94.7% |
| Group 3 | 22 | 1 | 0 | 3 | 95.6% | 100% | 94.7% |
| Group 4 | 19 | 0 | 2 | 4 | 100% | 90.4% | 100% |
| Group 5 | 25 | 0 | 0 | 0 | 100% | 100% | 100% |
| Group 6 | 23 | 1 | 0 | 1 | 95.8% | 100% | 95.8% |
| Average | – | – | – | – | 96.9% | 96.7% | 97.3% |