Article

A Framework for Compliance with Regulation (EU) 2024/1689 for Small and Medium-Sized Enterprises

by Sotirios Stampernas and Costas Lambrinoudakis *
Department of Digital Systems, University of Piraeus, 18532 Piraeus, Greece
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(3), 40; https://doi.org/10.3390/jcp5030040
Submission received: 13 May 2025 / Revised: 22 June 2025 / Accepted: 25 June 2025 / Published: 1 July 2025

Abstract

The European Union’s Artificial Intelligence Act (EU AI Act) is expected to be a major legal breakthrough in the attempt to tame AI’s negative aspects by setting common rules and obligations for companies active in the EU Single Market. Globally, there is a surge in investments, originating from both governments and private firms, to encourage research, development and innovation in AI. The EU recognizes that the new Regulation (EU) 2024/1689 is difficult for start-ups and SMEs to cope with, and it has announced the release of tools in the near future to ease that difficulty. To facilitate the active participation of SMEs in the AI arena, we propose a framework that can assist them in complying with the challenging EU AI Act during the development life cycle of an AI system. We use the spiral SDLC model and map its phases and development tasks to the legal provisions of Regulation (EU) 2024/1689. Furthermore, the framework can be used to promote innovation, improve personnel expertise, reduce costs and help companies avoid the substantial fines described in the Act.

1. Introduction

In April 2018, the European Commission started the process for the European strategy on artificial intelligence (AI) in its initial communication titled “Artificial Intelligence for Europe” [1]. Its aim, as described in the document, was to make the European Union (EU) the global leader in trustworthy AI. In December 2018, this communication was followed by the “Coordinated Plan on Artificial Intelligence” [2], updated in 2021, providing a more detailed strategy. After a long period of negotiations between the EU institutions and other interested parties, the initial proposal of the Artificial Intelligence Act (EU AI Act or Act) reached its final draft [3] in May 2024. On 13 June 2024, the Act was signed; on 12 July 2024 it was published in the European Union’s Official Journal (Regulation (EU) 2024/1689), and it entered into force 20 days later, on 1 August 2024. Its purpose, as stated in Article 1 [4], is to ensure, among other things, that AI is safe for the public and respects fundamental rights and values, while simultaneously supporting the proper functioning of the Single Market and the promotion of innovation within it. It is the European reaction to global concerns regarding AI and its responsible development and use, setting at the same time common rules for companies in the EU area. As expected, the EU AI Act incorporates articles that refer to potential fines and others that impose specific requirements on the final product, according to its risk level, and on the whole supply chain. These can be challenging to comply with, especially for small and medium-sized enterprises (SMEs), due to their limited resources and expertise.
Based on the definition in the EU recommendation of 2003 [5], SMEs are companies that employ fewer than 250 staff and have an annual turnover not exceeding EUR 50 million. Our focus is on SMEs, as they represent the majority of active companies worldwide and 99% of the companies in the EU market, according to Eurostat [6], based on 2020 data. These companies face greater challenges when the EU demands compliance with a regulatory framework. Compliance is a key notion when introducing a new regulation such as the EU AI Act, which affects the development process of new AI systems, as well as their deployment and distribution, by imposing limitations and specifications. The results of compliance are twofold: additional supply-chain costs and possible fines for active or potentially active companies, and conformity of the product with certain safety standards for society. In the case of the EU AI Act, lack of compliance may lead to fines of up to EUR 35 million or 7% of the previous year’s worldwide turnover, whichever is higher.
According to the EU AI Watch Index of 2021 [7], the largest number of AI firms are US-based, followed by China, then the EU and the UK. In all these countries except China, most AI or AI-related companies focus on selling goods and services rather than filing patent applications; thus, they are not actively involved in the actual development cycle of AI or AI-related technology. The report observes that the largest firms in the field are established in the US (233 companies) and in China (226 companies), while the EU (43 companies) lags behind them. One possible interpretation of this observation is that, for several reasons, it is more challenging for SMEs in the EU Single Market to innovate in the AI industry. The AI Act introduces new obligations that could either discourage them or make it more difficult for them to engage in the development of AI systems and compete with companies around the globe.
This paper seeks to answer the following research question: How can SMEs that develop or plan to develop AI systems more effectively comply with the EU AI Act throughout the lifecycle of their products? By “more effectively,” we refer to minimizing the expenditure of time and resources—often limited for SMEs—during the compliance process. The purpose of this research is to provide an easy-to-use framework for SMEs that allows them to overcome their main and common constraints. These include:
  • Limited resources and expertise of personnel, as in most cases there is not enough specialized and adequate human capital.
  • Narrow financing capabilities.
These constraints, combined with a demanding legal framework such as Regulation (EU) 2024/1689 that imposes a significant workload in parallel with the development process, result in a discouraging business environment for SMEs. This creates a concurrent, three-fold problem for an SME to resolve: the limited resources that deter companies from investing in anything beyond their main functions; the complexity of Regulation (EU) 2024/1689, which demands a great deal of back-office work; and the concurrent management of the main development lifecycle of an AI system.
The proposed framework aims to address the compliance challenge by integrating the AI system’s development process and regulatory requirements into a single, unified model. It is built around two key attributes:
  • Compliance-Driven Design: A core focus on ensuring alignment with the provisions of Regulation (EU) 2024/1689.
  • Development Lifecycle Integration: The incorporation of a Software Development Life Cycle (SDLC) model that guides the AI system’s development stages.
Each phase of the SDLC is mapped to specific articles of the Regulation, linking every development task to its corresponding legal obligations. This parallel structuring allows compliance activities to be embedded directly into the development workflow. As a result, organizations—particularly SMEs—can manage compliance more efficiently, minimizing costs while maintaining momentum in AI development. A significant benefit of this approach is that it allows SMEs to concentrate their limited resources on their core business activities. Moreover, it supports the development of internal expertise by involving personnel directly in the compliance process.
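To make the mapping concrete, the sketch below (ours, in Python; the class names and task identifiers are illustrative, not prescribed by the Regulation or the published framework) shows how each development task can carry its linked provisions, so that the open legal obligations of a phase remain visible alongside the development work:

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceTask:
    """A development task tied to the EU AI Act provisions it must satisfy."""
    name: str
    articles: list[str] = field(default_factory=list)
    completed: bool = False

@dataclass
class Phase:
    """One spiral-model phase with its compliance tasks."""
    name: str
    tasks: list[ComplianceTask] = field(default_factory=list)

    def open_obligations(self) -> list[str]:
        """Articles whose linked tasks are not yet completed."""
        return [a for t in self.tasks if not t.completed for a in t.articles]

# Content mirrors Phase 1 of the framework (see Table 1).
planning = Phase("Planning", [
    ComplianceTask("Identification of AI systems",
                   ["Art. 2", "Art. 5", "Art. 7", "Art. 8"]),
    ComplianceTask("Data management", ["Art. 10"]),
])
print(planning.open_obligations())  # all obligations still open at the start
```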
The novelty of this framework lies in its systematic mapping of SDLC phases to the legal provisions of the EU AI Act—marking the first time such an alignment has been proposed. This offers a clear, practical pathway for simultaneously achieving regulatory compliance and technological development. It aims to guide and prepare SMEs that develop such systems to cope with the Act, reducing its complexity and the associated compliance costs. However, thanks to the framework’s general approach, companies active in other parts of the AI supply chain can use it as well, adapted to their particular needs. Thus, in each phase of the framework we include tasks and actions covering the obligations not only of providers but also of authorized representatives, importers, distributors and deployers.
This paper is structured as follows: The first section introduces the reader to the process that resulted in Regulation (EU) 2024/1689, including the research question this paper intends to answer and its main components. The second section presents a literature review and the related sources used to build the research question, focusing on the spiral model on which we based our framework. The third section describes the proposed framework and its components, addressing the original goal of this study. The fourth section discusses the work’s results, while the fifth section concludes the research, summarizes its findings and proposes possible future work.

2. Methods

The data, reports and papers used in this article were collected from various sources, including the official communications of the EU Commission, scientific journals, international databases and organizations such as the OECD, EU AI Watch and the European Statistics Office. The latter sources were used to understand the market size and other aspects of SMEs active in the AI industry. Some of the chosen journal papers deal with a variety of SDLC models and their features, in order to provide the reader with a brief overview of what we consider the most important ones. Others relate to existing technical and legal standards and are comparable to our research. The rest come from the communications of the EU Commission that resulted in the EU AI Act. Our framework consists of two main components: the EU Regulation and an SDLC model. To begin with, we used as a reference Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. The proposed compliance framework was built on an analysis of the obligations arising from Regulation (EU) 2024/1689, embedded in the process of the spiral model. This strategy, in our opinion, minimizes the complexity of the whole process and increases its chances of success. The first component was briefly analyzed above; in the next paragraphs we present the second one.
According to [8], as a generic definition, an SDLC is a process to create or alter a system. The National Institute of Standards and Technology (NIST), in NIST SP 800-218 [9], defines the SDLC as a methodology used to design, create and maintain software. The International Organization for Standardization (ISO) has developed an international standard for the SDLC, ISO/IEC 12207 [10]. According to [11], these models have characteristics that make them suitable for particular types of projects. We can divide the models and their variations into two main categories: the traditional ones and their evolution, the agile ones. The most popular in the first category are the waterfall, V-shaped, incremental/iterative and spiral models. The second category includes the agile methodology and its variations, such as extreme programming, Scrum and DevOps.
The waterfall model is the oldest of the above-mentioned SDLCs, as it was introduced in 1970. It was well suited to the large, well-defined projects of its era, and its success made it popular in the field of software development. Its main structure comprises five sequential steps: requirements analysis, design, implementation, integration and maintenance. The main idea behind it is that each step must be completed before moving to the next one [12,13,14].
The V-shaped model is often considered an extension of the waterfall model and pairs each development phase with a testing phase. All processes are arranged in a V-shape comprising the model’s three phases: validation, coding and verification. The left side of the V holds the validation phase, which consists of several independent tasks. Once completed, it is followed by the coding phase at the bottom of the V. The right side holds the steps of the verification phase. As in the waterfall model, each phase must be completed before moving to the next [12,13].
The incremental/iterative model gathers requirements in every phase. To utilize the results of each phase, the project is divided into smaller components. After each increment, new functionalities are added based on the requirements, until the final project is delivered [14].
The basic idea behind agile models is flexibility and time, as their focus is on how quickly a project adapts to change. This family of SDLC models is useful for small projects and teams where little documentation is needed and time is the critical factor [14]. These core characteristics make it unsuitable for AI-related projects, which are complex by nature and demand full documentation in the context of the EU AI Act.
XP, which stands for extreme programming, is one of the main implementations of agile, used to develop small-sized projects. It concentrates on development aspects rather than managerial ones, and it is designed so that companies can adopt all or part of its methodology [15].
Scrum, the second main implementation of agile, focuses on project management in cases where initial planning is difficult to complete, delivering small, independent features. Its framework consists of five main activities: requirements, analysis, design, evolution and delivery [13,16].
DevOps is an SDLC that combines, as its name states, development and operations to achieve improved delivery times for projects. Where quick releases are needed, DevOps promises to address the challenges and implement short release cycles [17]. It is a time-sensitive model, ideal for short-cycle projects but unsuitable for the development of AI systems, which by nature take time to develop.
The spiral model, first introduced by Boehm in 1988, is a risk-driven software development process that integrates principles from both linear and iterative methodologies. The model inherited some characteristics of the waterfall model, but it is based on iterative, cyclical development focused on risk assessment. According to [18], this model is suitable for large projects with high-risk factors, due to its risk-driven approach. The spiral model consists of four phases: planning, risk analysis, development and evaluation. Each cycle through these four phases produces a version of the prototype, until the final version, known as the operational prototype, is produced.
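As a minimal sketch of the spiral’s control flow (our illustration, with hypothetical phase stubs; Boehm’s model is of course richer than a simple loop), each cycle runs the four phases in order and yields a prototype version until evaluation accepts the operational prototype:

```python
# Hypothetical phase stubs; a real project would implement these properly.
def plan(cycle, prev):          return {"cycle": cycle, "based_on": prev}
def analyze_risks(objectives):  return ["data quality", "misuse"]
def develop(objectives, risks): return f"prototype-v{objectives['cycle']}"
def evaluate(prototype):        return prototype.endswith("v3")  # accept 3rd cycle

def spiral_development(max_cycles: int = 5):
    """Each cycle runs the four phases in order and produces a prototype
    version, until evaluation accepts the operational prototype."""
    prototype = None
    for cycle in range(1, max_cycles + 1):
        objectives = plan(cycle, prototype)       # Phase 1: planning
        risks = analyze_risks(objectives)         # Phase 2: risk analysis
        prototype = develop(objectives, risks)    # Phase 3: development
        if evaluate(prototype):                   # Phase 4: evaluation
            return prototype                      # operational prototype
    return prototype

print(spiral_development())  # -> prototype-v3
```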
Two main characteristics of the model make it suitable as the basis of our framework for compliance with the EU AI Act, which targets AI systems that fall within the Act’s scope—that is, those categorized as high-risk after the applied risk assessment process. Such systems involve complex and time-consuming projects that demand compliance throughout the system’s lifecycle. First, the spiral model is an ideal base upon which to build a framework for compliance with the Act, because risk assessment is its inherent characteristic. Second, the spiral model can be adapted and used for projects that do not end after the final product is delivered but extend throughout the life of the software [16]. These two characteristics make it ideal for the case of the EU AI Act, as the latter mandates that a high-risk AI system be validated for conformity throughout its lifetime.
Several technical and legal standards exist or are emerging as AI evolves. Countries, international organizations and national agencies are investing in research on AI development and on how it can be channeled and controlled in order to avoid or reduce its negative effects.
The NIST AI Risk Management Framework (AI RMF) [19] is designed to help organizations manage risks associated with artificial intelligence systems in a trustworthy, responsible and effective manner. Among other things, it provides voluntary guidance for the development of any AI system, whereas the proposed framework is EU AI Act-oriented and targets specific risk-classified AI systems.
ISO has developed standards with different orientations concerning the development of AI. The most relevant to this research are ISO 31000 and ISO/IEC 42001. ISO 31000 [20] is an international standard that provides principles, a framework and a process for risk management. Its primary purpose is to support organizations in developing a structured, systematic and enterprise-wide approach to managing risk that enhances decision-making, strengthens governance and improves operational performance. ISO/IEC 42001 [21] is the first international standard specifically developed for Artificial Intelligence Management Systems (AIMSs). It provides a framework to assist organizations in managing the risks and responsibilities of AI, while ensuring that AI technologies are developed and deployed in a transparent, ethical and accountable manner. Although it is an AI-specific governance and management standard focused on AI lifecycle governance, transparency, bias, explainability and oversight, unlike the proposed framework it does not refer to specific provisions of the EU AI Act or explain how they can be addressed during the development process, which requires a high level of expertise. This means that ISO/IEC 42001 needs adaptation to satisfy the provisions of the Act, while ISO 31000 covers only the risk-management concerns of specific articles.
Other frameworks, such as FAIR (Factor Analysis of Information Risk) [22], are more specific in purpose. FAIR is a quantitative framework for understanding, analyzing and measuring information risk. It enables organizations to express risk in financial units, making it easier to prioritize and justify decisions related to cybersecurity investments. It evaluates event loss exposure, transforming risk into monetary units, using threat event frequency, vulnerability and impact as factors. As such, it could be used in a complementary way during the development phase, when discussing the security and cybersecurity measures for the system. Our research is broader in application, as it addresses the AI Act’s provisions across the lifecycle, extending well beyond cybersecurity investment decisions.
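For instance, a back-of-the-envelope FAIR-style estimate (a sketch; the factor names follow the FAIR taxonomy loosely, and every number below is hypothetical) expresses annualized loss exposure in monetary units:

```python
# Hypothetical inputs for one threat scenario against an AI system.
threat_event_frequency = 4.0   # expected threat events per year
vulnerability = 0.25           # probability a threat event becomes a loss event
loss_magnitude_eur = 80_000    # expected loss per loss event (impact), in EUR

# FAIR-style decomposition: loss events per year times loss per event.
loss_event_frequency = threat_event_frequency * vulnerability
annualized_loss_exposure = loss_event_frequency * loss_magnitude_eur
print(f"Annualized loss exposure: EUR {annualized_loss_exposure:,.0f}")  # EUR 80,000
```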
Some EU-approved guidelines exist to help SMEs with specific issues, such as the EDPS guidelines on data protection and the AI Act Conformity Tool for evaluating whether a company must conform with the AI Act. All of these are useful, but they focus on specific articles of the AI Act and do not address the development life cycle and all related articles.
The EDPS, in June 2024, issued a guidance paper [23] providing practical advice to EU institutions, bodies, offices and agencies (EUIs) on the processing of personal data when using generative AI systems. The aim is to ensure that they comply with their data protection obligations as set out in Regulation (EU) 2018/1725. These guidelines can nevertheless benefit SMEs, which can use them when addressing the data protection tasks involved in adopting the AI Act. The scope of our framework is broader than that paper; however, the EDPS guidance could be a positive input when addressing the data protection requirements described in the articles of the AI Act.
The European DIGITAL SME Alliance’s “AI Act Conformity Tool” [24] is useful for evaluating whether the AI system being developed or used by a company needs to comply with the AI Act. It takes the form of a questionnaire that gives SMEs initial feedback on whether their product must conform with the AI Act. Unlike the proposed framework, it does not provide a pathway to conformity during development.
The OECD recently developed a methodology [25] to evaluate AI performance using a set of indicators compared to specific human attributes. Linking AI capabilities to human ones allows the assessment of AI’s potential role in education. Beyond education, the indicators provide a framework for discussing the implications of AI in other sectors: employment, civic participation, leisure activities and everyday life. This methodology is performance-related, as it focuses on impact rather than on the development of AI systems.
All the above-mentioned standards, methodologies, tools and guidelines address different aspects of AI, such as terminology, lifecycle processes, risk, bias, trustworthiness, impact assessment and management systems. Each is a useful tool for companies dealing with one or more aspects of how AI systems can be developed, but none is aligned with the EU AI Act. They cover parts of it, but they do not constitute a clear and concise guide to compliance with the AI Act for an SME.
Several approaches address issues ranging from organizational setup to general technical needs, discussing the same subject from different angles. In addition, these standards and guidelines are generic, both in terms of tasks and application, without focusing on the EU AI Act and the development process. The added value of this research is that it provides a unified framework that maps the development phases of a powerful SDLC model to the provisions of the new and complex Regulation (EU) 2024/1689. This allows a company to focus on its primary goal with a single framework, instead of studying and combining several methodologies and standards, and to achieve better compliance and deeper integration while the development process is in progress. Thus, the suggested approach reduces compliance costs and the risk of failure, provides a clear development pathway hand in hand with the AI Act and improves the expertise of the company’s human capital while reducing related costs.

3. The EU AI Act Compliance Framework for SMEs—Results

The proposed framework provides a structured and simplified approach, assisting SMEs that are, or intend to become, actively involved in the development process of AI systems falling within the scope of Regulation (EU) 2024/1689. It helps them comply with the Regulation during all phases and extends throughout the whole lifecycle of the system. Ideally, it should be used from the very beginning of the process, but it can work, if adapted properly, at any stage.
The phases and the tasks of the proposed framework are visualized in Figure 1, mapping them with certain articles of the EU AI Act.
Furthermore, we highlight its components in Table 1, where the first column represents the four iterative phases based on those of the spiral model: planning, risk analysis, development and evaluation. Each phase is divided into tasks or compliance areas, shown in the second column, which should be completed prior to the next phase according to the spiral model’s iterations. The third column gives a short description of each task. To complete a task and move on to the next one, the company must take a set of actions, depending on its business type, which are explained in the fourth column. The final column matches the most relevant recitals, articles and annexes of Regulation (EU) 2024/1689, published in the Official Journal of the European Union on 12 July 2024, to each task.
Below we provide a brief description of the phases, tasks and actions described in Table 1, referring to the articles of the AI Act.

3.1. Phase 1—Planning

This is the initial phase of the framework, focused on identifying the AI systems and on analyzing the data used. It begins by gathering the requirements needed to build the baseline spiral.

3.1.1. Identification of AI Systems

SMEs need to identify and register the AI systems they use or develop. The company should keep and update an inventory of these systems based on their purpose and functionality, ensuring that the datasets used are well documented. This task aligns with Article 2 (Scope), which outlines the scope of the AI Act, Article 5 (Prohibited AI practices), Article 7 (Amendments to Annex III) and Article 8 (Compliance with the requirements).
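A minimal sketch of one inventory entry follows (the field names are our assumptions; the Act does not prescribe a record format):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the SME's AI-system inventory (illustrative fields only)."""
    name: str
    purpose: str                 # intended purpose, per the provider's design
    functionality: str           # brief functional description
    datasets: list[str]          # documented datasets the system uses
    role: str                    # provider / deployer / importer / distributor
    last_reviewed: date

# Hypothetical example entry.
inventory = [
    AISystemRecord("cv-screener", "rank job applications", "NLP classifier",
                   ["applications-2024"], "provider", date(2025, 1, 15)),
]
```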

3.1.2. Data Management

This task aims to analyze the intended or already used data sources and to ensure their quality, their integrity and their compliance with relevant data protection laws such as the GDPR. It is important to document data sources and record-keeping requirements in accordance with the applicable laws. Effective data management practices are essential to ensure the availability of high-quality data. Article 10 (Data and data governance) covers the issues of data governance, data quality and specific-purpose requirements such as bias detection. All of the above aim to make the necessary documentation available to the competent authorities.
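As an illustration (a sketch; the fields, the sample bytes and the checks are all hypothetical), documenting a data source together with a basic integrity fingerprint might look like this:

```python
import hashlib

def document_data_source(name: str, data: bytes, description: str,
                         lawful_basis: str) -> dict:
    """Record provenance plus an integrity checksum for one data source.
    'lawful_basis' notes the GDPR ground for processing (illustrative)."""
    return {
        "name": name,
        "description": description,
        "lawful_basis": lawful_basis,
        "sha256": hashlib.sha256(data).hexdigest(),  # integrity fingerprint
        "row_count": data.count(b"\n"),              # crude quality indicator
    }

record = document_data_source("applications-2024",
                              b"id,age,outcome\n1,34,hired\n",
                              "historical job applications", "consent")
print(record["sha256"][:12], record["row_count"])
```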

3.2. Phase 2—Risk Analysis

This phase has only one task, which is central to the entire process, as its outcome defines the risk level and thereby categorizes the AI systems according to the EU AI Act. For this reason, the entity should conduct a risk analysis and assessment based on a solid risk methodology. The company should be able to identify the potential impact on fundamental rights and on the safety of the AI systems used or developed, as well as the other issues that the AI Act addresses. The result will affect the cost of compliance during the development process and the AI system’s time of entry into the market. Conducting a risk assessment is crucial for categorizing AI systems, as it is the classification tool prescribed by the Act. Article 9 (Risk management system) and Annexes III, VI and VII detail the classification and criteria for high-risk AI systems. Classification is important for the organization, as it will affect the development process and the organization’s exposure to potential fines.
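For illustration only (a sketch; the actual classification requires legal analysis against Article 5, Article 6 and Annex III rather than a lookup), the outcome of the assessment could be recorded as follows:

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Art. 6, Annex III)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "minimal risk"

def record_classification(system: str, level: RiskLevel, rationale: str) -> dict:
    """Store the assessed risk level with its written rationale for auditors."""
    return {"system": system, "risk_level": level.value, "rationale": rationale}

print(record_classification("cv-screener", RiskLevel.HIGH,
                            "employment use case listed in Annex III"))
```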

3.3. Phase 3—Development

This is the third phase of the framework, focused on designing and implementing the compliance strategy, the policies and certain organizational and technical measures mentioned in the EU AI Act.

3.3.1. Compliance Strategy

The aim of this task is to develop a compliance strategy tailored to the AI systems identified as high-risk. The key issue here is the establishment of roles and responsibilities for an AI governance model within the organization. Establishing a compliance strategy is important as the entity should have available the proper documentation and solid procedures required under the EU AI Act. Article 8 (Compliance with the requirements) points out what needs to be in place to comply with the legal requirements, as do Article 16 (Obligations of providers of high-risk AI systems) and Article 17 (Quality management system).

3.3.2. Documentation and Policies

In this task, the company creates or updates policies related to AI ethics, data usage, quality management systems and risk management. At this point, documentation templates can be introduced for the AI system descriptions, risk assessments and compliance reports. Establishing governance structures for AI system oversight, including roles and responsibilities, is mentioned in Article 9 (Risk management system). Maintaining comprehensive documentation is necessary, as stated in Article 11 (Technical documentation) and Article 13 (Transparency and provision of information to deployers). Article 18 (Documentation keeping), Article 19 (Automatically generated logs) and Article 20 (Corrective actions and duty of information) include actions that should be taken under certain conditions.
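A sketch of a documentation-template skeleton is shown below; the headings only paraphrase the kinds of content Annex IV calls for, and Annex IV itself remains the authoritative list:

```python
# Illustrative technical-documentation skeleton (headings are our paraphrase).
TECH_DOC_TEMPLATE = {
    "general_description": "",      # intended purpose, versions, interactions
    "development_process": "",      # design choices, data, training where relevant
    "risk_management": "",          # summary of the Art. 9 risk management system
    "monitoring_and_control": "",   # capabilities, limitations, human oversight
    "standards_applied": [],        # harmonised standards or other specs used
    "post_market_plan": "",         # Art. 72 post-market monitoring plan
}
```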

3.3.3. Transparency

This area is related to the implementation of measures intended to strengthen algorithmic transparency and explainability to interested parties. Article 13 (Transparency and provision of information to deployers) and Article 50 (Transparency obligations for providers and users of certain AI systems) provide guidance throughout the process for different types of entities in the whole supply chain.

3.3.4. Accountability

Companies must implement measures to ensure appropriate human oversight over AI systems. Assigning specific roles and rights to humans aims to minimize the risks related to safety and rights. Oversight needs to ensure that humans have specific roles and responsibilities and the necessary knowledge to cope with the system, as per Article 14 (Human oversight), Article 26 (Obligations of deployers of high-risk AI systems) and Article 27 (Fundamental rights impact assessment for high-risk AI systems).

3.3.5. Reporting

An organization should implement measures to ensure algorithmic transparency, explainability and fairness. For this reason, it is important to establish processes for regular auditing and evaluation of the AI systems to detect biases and inaccuracies, and to be able to report any incidents and malfunctions of the AI system to the authorities. Article 8 (Compliance with the requirements), Article 17 (Quality management system), Article 20 (Corrective actions and duty of information) and Article 73 (Reporting of serious incidents) refer to this obligation.
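The record below is a sketch of an incident-report stub (field names are our assumptions; Article 73 and the competent authorities define the actual content and deadlines):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    """Illustrative serious-incident record to support Art. 73 reporting."""
    system: str
    occurred_at: datetime
    became_aware_at: datetime
    description: str
    corrective_action: str        # per Art. 20, corrective actions taken
    reported_to_authority: bool = False

report = IncidentReport("cv-screener", datetime(2025, 3, 1, 9, 30),
                        datetime(2025, 3, 2, 8, 0),
                        "systematic misclassification of a protected group",
                        "model rolled back; retraining scheduled")
```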

3.3.6. User Rights

Organizations should develop procedures to ensure user rights obligations are met based on the applicable regulations. User rights protection is a priority in EU law, which has a set of regulations, such as the GDPR, that stress the importance of data protection. Article 5 (Prohibited AI practices) and Article 59 (Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox) outline the rights of data subjects and the prohibitions for various kinds of systems that use certain types of data. Article 60 (Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes) and Article 61 (Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes) explain how the organization must respect user rights even during testing procedures in all environments, inside or outside a regulatory sandbox.

3.3.7. Security Measures

Organizations need to implement safety measures to mitigate some of the identified risks. Article 15 (Accuracy, robustness and cybersecurity), Article 22 (Authorised representatives of providers of high-risk AI systems), Article 23 (Obligations of importers) and Article 42 (Presumption of conformity with certain requirements) outline the obligations of organizations active in the supply chain of an AI model and refer to the general security and cybersecurity requirements that should be in place throughout its life cycle.

3.3.8. System Integration and Testing

Organizations must integrate compliance measures into AI development and deployment workflows. Testing AI systems is important to make sure that they meet compliance requirements and do not pose unnecessary risks to the organization. Integrating compliance measures into AI development and conducting testing under certain rules is carried out either in controlled environments, as explained in Article 57 (AI regulatory sandboxes), or in the real world, as mentioned in Article 60 (Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes) and in Article 61 (Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes). These articles provide an overview of measures taken to encourage SMEs to innovate.
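In practice, some compliance checks can live in the ordinary test suite so that a build fails when an obligation is unmet; the sketch below uses hypothetical configuration keys and pytest-style tests:

```python
# Hypothetical deployment configuration under test; run with: pytest this_file.py
system_config = {
    "logging": {"enabled": True},
    "controls": ["human_override", "kill_switch"],
}

def test_logging_enabled():
    """Art. 19: automatically generated logs must be enabled."""
    assert system_config["logging"]["enabled"]

def test_human_oversight_hook():
    """Art. 14: a human override path must exist before deployment."""
    assert "human_override" in system_config["controls"]
```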

3.4. Phase 4—Evaluation

The final phase of the suggested framework focuses on the continuous training and awareness-raising of employees and on the continuous post-market monitoring and improvement of the AI systems identified in phase 1, including the evaluation of the product by various stakeholders.

3.4.1. Continuous Monitoring

The organization should establish a monitoring system to keep up with the performance and compliance status of the AI systems. It can create feedback loops to continuously improve the AI systems and address any compliance issues that might occur. Version control and impact assessment of add-ons and updates are important. Article 72 (Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems) and Article 73 (Reporting of serious incidents) explain how post-market surveillance should work.
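A sketch of such a periodic post-market check follows (metric names and thresholds are hypothetical):

```python
def post_market_check(metrics: dict, thresholds: dict) -> list[str]:
    """Compare live performance metrics against compliance thresholds and
    return the issues to feed back into the next improvement cycle."""
    issues = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None or value < minimum:
            issues.append(f"{name}: {value} below required {minimum}")
    return issues

# Hypothetical monthly run against agreed thresholds.
for issue in post_market_check({"accuracy": 0.87, "fairness_score": 0.96},
                               {"accuracy": 0.90, "fairness_score": 0.95}):
    print("compliance issue:", issue)  # -> accuracy: 0.87 below required 0.9
```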

3.4.2. Training and Awareness-Raising

The aim of this task is to provide training for company staff to better understand the AI models and the risks related to their development and use. Companies should make sure that their personnel and others related to the supply chain of an AI system are sufficiently informed and trained to be able to deal with it. The provider and the deployer should provide training to all related stakeholders, without forgetting awareness-raising of the end users as explained in Article 4 (AI literacy).
After phase 4 is finalized, the process restarts from phase 1 and iterates until the final version of the prototype is created and the relevant processes are completed.

4. Discussion

Regulation (EU) 2024/1689 poses a major compliance challenge for SMEs due to its complexity and their lack of resources and expertise. SMEs can overcome these constraints by using the proposed framework, since its attributes—simple structure, flexibility and circularity—keep it adaptive to their diverse needs.
Beyond its clear value in supporting compliance with the regulation, the proposed framework presents broader positive implications for businesses, the EU AI Act itself and the EU Single Market as a whole. When combined with collaborative initiatives—such as the regulatory sandboxes promoted by the Act—it has the potential to accelerate innovation, particularly among small and medium-sized enterprises (SMEs), which are globally recognized as key drivers of innovation.
The true success of the EU AI Act will not be measured solely by adherence to its obligations or the enforcement of penalties. Instead, we believe its impact will be evaluated by how effectively it enhances the competitiveness of European SMEs in the global marketplace. By aligning regulatory compliance with development processes and fostering a supportive innovation environment, the framework contributes meaningfully toward achieving that objective.
Regulatory sandboxes can assist in the validation of the proposed framework, as these can be proposed to the companies participating in the initiative and results can be drawn from their application.

5. Conclusions

This paper addresses a research problem related to how SMEs, in the field of AI systems, can better comply with the provisions of the EU AI Act during the development life cycle, given their constraints, limited financing and human capital.
The proposed framework intends to resolve this problem by incorporating two main attributes. The first one is its focus on compliance with the Regulation’s provisions, which are mapped to the second attribute, the development lifecycle phases of the spiral SDLC model. The novelty of this paper compared to existing work is that it is the first application of any SDLC to the EU AI Act and the first detailed mapping of the AI Act’s provisions to the phases of the spiral model.
Key contributions for SMEs using this framework include better management of their limited resources, addressing the back-office workload caused by the complexity of Regulation (EU) 2024/1689, and the concurrent management of the main development lifecycle of the AI system, through the mapping of the provisions of the Act to the phases of the model. We tried to map all relevant provisions of the AI Act to the spiral SDLC model phases to provide a balanced, unified guide. The framework remains as generic as possible, based on the AI Act, which is applicable to all EU member states.
However, the framework’s effectiveness remains to be evaluated through an empirical study that should include field testing in a set of diverse SMEs in the AI industry. The results would document the challenges and successes of actual implementation, and the feedback on the framework’s usability and comprehensiveness would allow for its refinement and improvement. Other limitations relate to possible differences in national laws among member states: some countries had already enacted AI-related laws to address their specific needs before the AI Act applied. Although the Regulation takes precedence over national laws, dealing with such national specificities is beyond the scope of the framework, which focuses on the provisions of the AI Act.
Future work may include an additional layer in this framework, dealing with auditing each task of the development phases vis-à-vis the corresponding provisions of the EU AI Act. A final thought about this paper is that it can lead to further research in performance metrics and to a solid framework for developing AI systems in accordance with the EU AI Act.

Author Contributions

Conceptualization, S.S. and C.L.; methodology, S.S.; writing—original draft preparation, S.S.; review and editing, C.L.; supervision, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially funded by the Research Center of the University of Piraeus.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. In Artificial Intelligence for Europe, 25.4.2018 COM(2018) 237 Final; European Union: Brussels, Belgium, 25 April 2018; Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0237 (accessed on 1 August 2024).
  2. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. In Fostering a European Approach to Artificial Intelligence, COM(2021) 205 Final; European Union: Brussels, Belgium, 21 April 2021; Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021DC0205 (accessed on 1 July 2024).
  3. Proposal of Regulation of the European Parliament and of the Council, Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act); PE-CONS 24/24. 14 May 2024. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CONSIL:PE_24_2024_INIT (accessed on 1 August 2024).
  4. Regulation (EU) 2024/1689 of the European Parliament and of the Council, of 13 June 2024, Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 1 August 2024).
  5. Commission Recommendation of 6 May 2003. Concerning the Definition of Micro, Small and Medium-Sized Enterprises. C(2003) 1422. Available online: https://eur-lex.europa.eu/eli/reco/2003/361/oj/eng (accessed on 10 September 2024).
  6. Di Bella, L.; Katsinis, A.; Lagüera-González, J.; Odenthal, L.; Hell, M.; Lozar, B. Annual Report on European SMEs 2022/2023; Publications Office of the European Union: Luxembourg, 2023. [Google Scholar] [CrossRef]
  7. Righi, R.; Pineda León, C.; Cardona, M.; Soler Garrido, J.; Papazoglou, M.; Samoili, S.; Vázquez-Prada Baillet, M. AI Watch Index 2021; Publications Office of the European Union: Luxembourg, 2022; ISBN 978-92-76-53602-4. [Google Scholar] [CrossRef]
  8. Kute, S.S.; Thorat, S.D. A review on various software development life cycle (SDLC) models. Int. J. Res. Comput. Commun. Technol. 2014, 3, 778–779. [Google Scholar]
  9. National Institute of Standards and Technology (NIST). Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities; NIST Special Publication 800-218; NIST: Gaithersburg, MD, USA, 2022. [Google Scholar] [CrossRef]
  10. ISO/IEC/IEEE 12207:2017; Systems and Software Engineering—Software Life Cycle Processes. International Organization for Standardization (ISO): Geneva, Switzerland, 2017. Available online: https://www.iso.org/obp/ui/en/#iso:std:63712:en (accessed on 20 June 2025).
  11. Alshamrani, A.; Bahattab, A. A comparison between three SDLC models waterfall model, spiral model, and Incremental/Iterative model. Int. J. Comput. Sci. Issues (IJCSI) 2015, 12, 106. [Google Scholar]
  12. Prabowo, H.; Gaol, F.; Hidayanto, A.N. Comparison of the System Development Life Cycle and Prototype Model for Software Engineering. Int. J. Emerg. Technol. Adv. Eng. 2022, 12, 155–162. [Google Scholar]
  13. Arora, R.; Arora, N. Analysis of SDLC models. Int. J. Curr. Eng. Technol. 2016, 6, 268–272. [Google Scholar]
  14. Gurung, G.; Shah, R.; Jaiswal, D.P. Software development life cycle models-A comparative study. Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. 2020, 5, 30–37. [Google Scholar] [CrossRef]
  15. Javed, M.; Ahmad, B.; Hussain, S.; Ahmad, S. Mapping the best practices of XP and project management: Well defined approach for project manager. arXiv 2010, arXiv:1003.4077. [Google Scholar]
  16. Pressman, R.S.; India, M.G. Software Engineering: A Practitioner’s Approach, 7th ed.; McGraw-Hill Education: New York, NY, USA, 2010. [Google Scholar]
  17. Wettinger, J.; Breitenbücher, U.; Leymann, F. Devopslang–bridging the gap between development and operations. In Service-Oriented and Cloud Computing: Third European Conference, Manchester, UK, 2–4 September 2014, Proceedings; Springer: Berlin/Heidelberg, Germany, 2014; pp. 108–122. [Google Scholar]
  18. Boehm, B.W. A Spiral Model of Software Development and Enhancement. IEEE Eng. Manag. Rev. 1995, 23, 69. [Google Scholar]
  19. National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework (AI RMF 1.0). 26 January 2023. Available online: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (accessed on 18 June 2025).
  20. ISO 31000:2018; Risk Management—Guidelines. International Organization for Standardization (ISO): Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/65694.html (accessed on 18 June 2025).
  21. ISO/IEC 42001:2023; Artificial Intelligence Management System. International Organization for Standardization (ISO): Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/81230.html (accessed on 19 June 2025).
  22. Wong, C. Investigation of Information Risk Management Frameworks. 2022. Available online: https://www.researchgate.net/publication/364293344 (accessed on 14 June 2025).
  23. European Data Protection Supervisor (EDPS). First EDPS Orientations for Ensuring Data Protection Compliance When Using Generative AI Systems. June 2024. Available online: https://www.edps.europa.eu/system/files/2024-05/24-05-29_genai_orientations_en_0.pdf (accessed on 18 June 2025).
  24. European Digital SME Alliance. AI Act Conformity Tool. Available online: https://www.digitalsme.eu/ai-act-conformity-tool/ (accessed on 19 June 2025).
  25. OECD. Introducing the OECD AI Capability Indicators. 2025. Available online: https://www.oecd.org/en/publications/introducing-the-oecd-ai-capability-indicators_be745f04-en.html (accessed on 19 June 2025).
Figure 1. Visualization of the framework.
Table 1. Components of the framework.
| Phase | Task-Compliance Area | Description | Action Needed | EU AI Act Reference |
|---|---|---|---|---|
| Phase 1—Planning | Identification of AI systems | Provide specific information about the AI systems involved in the entity’s activities. | Inventory all AI systems related to the entity’s activities. Classify the systems based on their purpose, functionality and type of data processed. | Recitals 7, 9, 10, 11, 13, 14, 15, 21–45, 47, 48, 49, 52, 63, 64, 69, 99, 100, 103; Articles 2, 5, 7, 8; Annexes III, VIII |
| Phase 1—Planning | Data management | Ensure availability of high-quality data. | Analyze quality and ensure integrity of data sources. Ensure compliance with the applicable data protection laws (for the EU, the GDPR and other related laws). Document data sources and storage practices. | Recitals 67, 68, 70; Article 10 |
| Phase 2—Risk Analysis | Risk analysis | Conduct risk assessments. | Conduct risk analysis to identify potential impact on fundamental rights, safety and other issues. Categorize AI systems into risk levels as defined in the EU AI Act. | Recitals 48, 54, 65, 66; Article 9; Annexes III, VI, VII |
| Phase 3—Development | Compliance strategy | Meet regulatory requirements. | Develop a compliance strategy for the AI systems identified in phase 1. Establish roles and responsibilities to support the AI governance model within the organization. Rigorously test AI systems to ensure they meet compliance requirements. | Recitals 36, 37; Articles 8, 16, 17; Annexes I, III, VIII, IX |
| Phase 3—Development | Documentation and policies | Maintain technical documentation. | Develop documentation templates for the description of AI systems, risk assessments and compliance reports. Set up and update policies related to AI ethics, data usage and risk management. | Recitals 65, 66, 71, 72, 81; Articles 9, 11, 13, 18, 19, 20; Annexes III, IV |
| Phase 3—Development | Transparency | Implement measures for transparency of the algorithm and explainability for interested parties. | Implement measures for transparency related to algorithms. Support mechanisms to provide explainability to interested parties and authorities. | Recitals 72, 132; Articles 13, 50 |
| Phase 3—Development | Accountability | Ensure oversight by humans with specific roles and rights. | Implement measures to ensure human oversight. Provide access control functions. | Recitals 73, 91–96; Articles 14, 26, 27; Annex III |
| Phase 3—Development | Reporting | Report incidents and malfunctions of the AI system. | Create reports for major incidents. Create reports for malfunctions. | Recitals 63, 64, 69, 81, 155, 157; Articles 8, 17, 20, 73; Annex IV |
| Phase 3—Development | User rights | Respect user rights and freedoms based on the EU legal framework. | Develop procedures to ensure user rights obligations are met. | Recitals 26–45, 140, 141; Articles 5, 59, 60, 61; Annex III |
| Phase 3—Development | Security measures | Implement robust security and cybersecurity measures. | Implement safety measures and mitigate certain identified risks. | Recitals 66, 74–78, 82, 83, 97, 122; Articles 15, 22, 23, 42; Annex IV |
| Phase 3—Development | System integration and testing | Integrate compliance measures into AI development and deployment workflows; test AI systems to make sure that they meet compliance requirements and reduce risks for the organization. | Integrate compliance measures into AI development and deployment workflows and organize regular testing. | Recitals 138, 139, 140, 141; Articles 57, 60, 61 |
| Phase 4—Evaluation | Continuous monitoring | Continuously monitor and evaluate the AI systems. | Establish a monitoring system to track the performance and compliance of AI systems. Create feedback processes to continuously improve the AI systems and address compliance issues. | Recitals 155, 157, 158; Articles 72, 73; Annexes III, IV |
| Phase 4—Evaluation | Training and awareness-raising | Sufficiently inform and train the personnel of the organization and others involved in the process of creating an AI system. | Organize training sessions for personnel, for others in the supply chain of an AI system, and for users. | Article 4 |
