Article

AI-Driven Innovations in Software Engineering: A Review of Current Practices and Future Directions

by Mamdouh Alenezi 1,† and Mohammed Akour 2,*,†
1 The Saudi Technology and Security Comprehensive Control Company (Tahakom), Riyadh 12435, Saudi Arabia
2 College of Computer and Information Sciences, Prince Sultan University, Riyadh 12435, Saudi Arabia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2025, 15(3), 1344; https://doi.org/10.3390/app15031344
Submission received: 21 December 2024 / Revised: 26 January 2025 / Accepted: 27 January 2025 / Published: 28 January 2025
(This article belongs to the Special Issue AI in Software Engineering: Challenges, Solutions and Applications)

Abstract

The software engineering landscape is undergoing a significant transformation with the advent of artificial intelligence (AI). AI technologies are poised to redefine traditional software development practices, offering innovative solutions to long-standing challenges. This paper explores the integration of AI into software engineering processes, aiming to identify its impacts, benefits, and the challenges that accompany this paradigm shift. A comprehensive analysis of current AI applications in software engineering is conducted, supported by case studies and theoretical models. The study examines various phases of software development to assess where AI contributes most effectively. The integration of AI enhances productivity, improves code quality, and accelerates development cycles. Key areas of impact include automated code generation, intelligent debugging, predictive maintenance, and enhanced decision-making processes. AI is revolutionizing software engineering by introducing automation and intelligence into the development lifecycle. Embracing AI-driven tools and methodologies is essential for staying competitive in the evolving technological landscape.

1. Introduction

The evolution of software engineering practices has been both rapid and transformative, shaped by technological advancements and the increasing complexity of software demands [1,2]. Since the early days of software development, methodologies have undergone continuous improvements, progressing from the traditional waterfall model to more iterative and agile practices [3]. This evolution has aimed to address the growing need for efficiency, flexibility, and quality in software products. Over time, the adoption of DevOps, continuous integration, and test automation has redefined how software is built, tested, and deployed [4,5]. Despite these advances, certain challenges, such as error-prone manual coding, delayed feedback loops, and inefficient resource allocation, have persisted, necessitating further innovation.
The emergence of artificial intelligence (AI) presents a new frontier in addressing some of the long-standing challenges in software engineering [6]. AI’s relevance to software development is particularly significant due to its ability to learn patterns, make predictions, and automate complex tasks [7,8]. Technologies such as machine learning, natural language processing, and neural networks are becoming instrumental in enhancing software engineering processes. AI-powered tools are enabling smarter automation, adaptive problem-solving, and decision-making capabilities, offering a promising avenue to optimize both the efficiency and effectiveness of software development. The integration of AI is not only transforming traditional workflows but also redefining the skill sets required for modern software engineering professionals.
Recent years have seen a surge in AI-driven techniques for various stages of the software development lifecycle. Automated code generation has made significant strides, facilitated by large language models (LLMs) that can generate or suggest code snippets with minimal human intervention [9,10]. Tools such as GitHub Copilot leverage transformer-based models to provide context-aware code completions [11], while DeepMind’s AlphaCode has demonstrated the capability to solve programming challenges by generating solutions that closely match human-written code [12]. Beyond code generation, defect prediction and detection have benefited greatly from machine learning algorithms trained on historical bug data, enabling the early identification of potential issues [13,14]. Furthermore, advancements in automated testing have introduced intelligent test case generation and prioritization using AI-based approaches, reducing both testing time and human effort [15]. In the maintenance and refactoring domain, AI techniques now assist in suggesting optimal refactoring strategies to enhance code quality and performance, while predictive analytics tools help estimate development timelines and resource allocations more accurately.
In tandem with these advances, explainable AI (XAI) has emerged as a crucial subfield, offering techniques to improve the transparency and interpretability of AI-driven decisions in software engineering [16]. This is particularly significant in safety-critical and compliance-heavy industries, where understanding the rationale behind AI recommendations is vital. Moreover, the integration of continuous learning frameworks is allowing AI-driven tools to evolve dynamically with the changing software codebase, reducing model drift and enhancing overall reliability [17].
Despite these notable advancements, several gaps remain. First, while AI excels at specialized tasks (e.g., automated code suggestions and bug prediction), there is a lack of holistic frameworks that integrate these capabilities across the entire software development lifecycle. Existing approaches often operate in silos: code generation tools may not seamlessly share insights with defect prediction models, and vice versa, leading to fragmented development workflows [18]. Second, the trustworthiness and reliability of AI recommendations are ongoing concerns. Even with explainable AI techniques, ensuring that developers can fully understand and validate the outputs of AI systems remains challenging [16]. Third, many current AI-based solutions are data-hungry and may struggle in domains where large datasets are unavailable or where data privacy is a concern [17]. Fourth, ethical considerations, such as bias in AI models or the risk of over-reliance on automation, have yet to be comprehensively addressed, potentially impacting both code quality and human skill development [19].
AI’s transformative potential in software engineering is evident from tools like GitHub Copilot (version 1.7.4421). In a study, developers using Copilot completed coding tasks approximately 55% faster than those relying on traditional methods. Furthermore, the tool contributed to enhanced code quality, with higher rates of test case success on the first attempt. This underscores how AI-driven tools can significantly improve developer productivity while reducing errors in software development [20].
Building on these observations, the specific gap that this study aims to address is the lack of a unified framework for integrating AI-driven tools and methodologies throughout the entire software development lifecycle in a way that balances automation benefits with human oversight and ethical considerations. While several studies have explored the individual applications of AI in areas such as code generation or testing, there is a need for a more comprehensive perspective that examines how these different AI applications can work in synergy—from planning and requirements analysis to coding, testing, deployment, and maintenance [9,13]. By taking a lifecycle-wide approach, we seek to uncover best practices for the seamless integration of AI, highlighting where automation can most effectively augment human capabilities and where human expertise remains indispensable.
Accordingly, the primary purpose of this study is to gain an in-depth understanding of the role of AI in modern software engineering and propose a holistic integration strategy. The research objectives are as follows:
  • To examine the application of AI across various phases of the software development lifecycle, identifying where AI-driven tools offer the greatest potential for impact.
  • To evaluate the benefits of AI integration, including reductions in manual errors, development cycle acceleration, and improved collaboration among cross-functional teams.
  • To identify challenges and risks associated with AI-driven software engineering, such as the reliability of AI-generated code, ethical implications, and the possibility of skill degradation among developers.
  • To propose actionable recommendations for seamlessly integrating AI into existing workflows, ensuring responsible use and maintaining a balance between automation and human insight.
By addressing these objectives, this study aims to provide a roadmap for successfully incorporating AI in software engineering practices, thereby offering both theoretical and practical contributions. On the theoretical side, this work adds to the academic discourse by synthesizing diverse AI applications into a cohesive framework. On the practical side, it offers guidelines that practitioners can use to navigate the complexities of AI adoption, ensuring that the technology delivers on its promise of efficiency and quality improvements while safeguarding against potential pitfalls.
Ultimately, this research underscores that the integration of AI in software engineering is not merely a technical issue but also a strategic, organizational, and ethical one. By taking a holistic view, we can harness AI’s transformative power responsibly, driving sustainable innovations in the field of software engineering.

2. Literature Review

The integration of artificial intelligence (AI) into software engineering (SE) has emerged as a transformative force, reshaping traditional paradigms and introducing novel methodologies. Table 1 shows the evolution of software engineering practices. AI-driven tools, particularly automated code generation (ACG) and large language models (LLMs), have significantly enhanced productivity and code quality while redefining developer roles within the industry [6,21]. Platforms such as ChatGPT and GitHub Copilot exemplify this trend, not only assisting in code generation but also influencing the skill sets required for modern developers. This shift emphasizes the need for proficiency in AI tools and a deep understanding of their underlying mechanisms, necessitating a reevaluation of educational curricula to include AI literacy [22,23].
The sources reviewed in this section were selected based on the following criteria:
  • Relevance: Studies that focus on the integration of AI in software engineering processes.
  • Recency: Priority was given to studies published within the past five years to ensure up-to-date analysis.
  • Impact: Research with a significant number of citations or from high-impact journals.
  • Diversity: A mix of case studies, theoretical analyses, and empirical evaluations to provide a holistic perspective.
Despite the promising advancements, the integration of AI in software development presents several challenges. Concerns surrounding algorithmic bias, legal compliance, and security vulnerabilities have been prominent [29]. These issues underscore the importance of ethical considerations and the development of robust frameworks to mitigate potential risks associated with AI adoption. Furthermore, current large language models, such as ChatGPT, exhibit limitations in software modeling tasks, including syntactic and semantic deficiencies, inconsistency, and scalability issues [30]. Addressing these limitations requires ongoing research and development to enhance the capabilities of AI tools in SE contexts.
Machine learning (ML) algorithms have demonstrated significant potential in addressing long-standing software engineering challenges. Applications such as software quality assessment, bug prediction, and test automation have benefited from ML techniques, with tools like the Naïve Bayes algorithm and WEKA becoming staples in both research and industry applications [31,32]. Since 2009, the synergy between SE and ML/deep learning has intensified, with researchers exploring the complexities of applying ML solutions to SE problems. Studies have focused on issues of reproducibility and replicability, which are critical for the advancement of research and practical applications [33]. Predictive models using algorithms like random forests have achieved high accuracy in identifying software refactoring opportunities, underscoring ML’s efficacy in enhancing code maintainability and evolution [34].
In the realm of software testing, ML techniques have been employed to automate and improve various processes, including test case generation, refinement, evaluation, and the construction of test oracles. Additionally, ML has been utilized for cost prediction, aiding in resource allocation and project management [35]. These applications streamline testing procedures and contribute to more reliable and efficient software development cycles.
The increasing integration of AI models into development tools is poised to fundamentally alter the industry. Developers’ roles are gradually shifting from traditional coding to supervising and assessing AI-generated suggestions, potentially leading to changes in job requirements and team dynamics [36]. Generative AI, particularly LLMs, is considered a major disruptor, offering the promise of enhanced productivity and quality. However, it also raises ethical concerns, particularly regarding data privacy, intellectual property, and the potential for misuse [6].
AI-driven development environments (AIDEs) offer the potential to automate routine programming tasks, yet they bring forth concerns related to bias in AI algorithms, legal implications of AI-generated code, and security vulnerabilities inherent in AI systems [29]. Addressing these concerns is crucial for the responsible adoption of AI in SE. Moreover, the unique characteristics of AI systems pose new challenges to traditional SE approaches, necessitating the development of novel methodologies for responsible AI system creation [37].
Education plays a pivotal role in this evolving landscape. AI virtual assistants combined with recommender systems can enhance learning experiences in SE capstone courses by leveraging collective knowledge and providing personalized support to students [38]. As AI continues to permeate SE practices, it is crucial to adapt and develop new educational approaches to harness its potential while addressing associated challenges.
As AI continues to evolve, its impact on software development practices is expected to grow substantially [39]. The adoption of AI-driven tools and methodologies will require significant adjustments in education and training. There is a pressing need for new approaches to address the challenges and limitations of AI in software development, including ethical guidelines, legal frameworks, and security protocols. Future research should focus on exploring the full potential of AI in software engineering, developing strategies to mitigate associated risks, and effectively integrating AI into existing development practices. This literature review underscores the current state of AI in software engineering and emphasizes the necessity for further investigation and innovation in this rapidly advancing field.
The synergy between AI and SE offers significant potential for improving software development processes and outcomes. AI techniques can enhance various SE phases, from requirements engineering to maintenance, leading to more efficient, automated, and adaptive solutions for complex software problems [39,40]. Specific applications include cost-sensitive transfer learning for cross-project defect prediction, Bayesian networks for software quality estimation, and machine learning for program comprehension [40].
However, the integration of AI and SE is not without its challenges. The unique characteristics of AI systems introduce complexities that traditional SE approaches must address. These include ensuring the reliability and robustness of AI-driven tools, managing the ethical implications of AI applications, and adapting SE methodologies to accommodate the dynamic nature of AI technologies [37,41]. Assessing the readiness of both fields to leverage their combined potential is crucial for the successful integration of AI into SE [42].
As AI continues to permeate SE practices, responsible development becomes paramount. This involves adapting SE practices to address ethical considerations and potential risks associated with AI systems. Responsible AI development requires the establishment of ethical guidelines, legal frameworks, and security protocols to ensure that AI technologies are used safely and ethically in software development [37]. Furthermore, educational initiatives must evolve to prepare future professionals to navigate the complexities of AI-integrated SE, fostering a workforce that is both technically proficient and ethically aware [38].
The evolving relationship between AI and SE necessitates significant adjustments in educational curricula. Incorporating AI literacy into SE education ensures that upcoming professionals are equipped to leverage AI technologies effectively. AI virtual assistants and recommender systems can enhance learning experiences by providing personalized support and leveraging collective knowledge [38]. By integrating AI-focused modules and practical applications into SE programs, educational institutions can prepare students to meet the demands of an AI-enhanced software development landscape [23].
The systematic analysis of the reviewed findings can be summarized as follows:
  • Automated Code Generation: Tools like GitHub Copilot and AlphaCode have demonstrated significant improvements in developer productivity and accuracy. However, concerns remain regarding the reliability and ethical implications of AI-generated code.
  • Defect Prediction: Machine learning algorithms trained on historical data enable the early identification of potential bugs, reducing testing time and enhancing software reliability.
  • Ethical and Legal Challenges: Bias in AI models and intellectual property concerns require robust frameworks and continuous oversight to ensure responsible adoption.
Future research should aim to explore the full potential of AI in software engineering, focusing on developing strategies to mitigate associated risks and integrating AI into existing development practices effectively. Key areas for future investigation include the following:
  • Enhancing the capabilities of AI-driven tools to address current limitations in software modeling tasks.
  • Developing robust ethical guidelines and legal frameworks to govern the use of AI in SE.
  • Creating secure AI systems that mitigate vulnerabilities and ensure data privacy.
  • Improving reproducibility and replicability in ML applications within SE to advance research and practical implementations.
  • Exploring the impact of AI integration on developer roles, team dynamics, and job requirements.
This literature review highlights the current advancements and challenges in the integration of AI and SE, emphasizing the need for continued exploration and innovation to fully harness the potential of AI in enhancing software development practices.

3. AI Applications in Software Engineering

The advent of artificial intelligence (AI) has precipitated transformative shifts across various facets of software development. The integration of AI-driven tools and methodologies has enhanced the efficiency, accuracy, and adaptability of software engineering processes. This section examines the principal applications of AI within software engineering, illustrating how they revolutionize conventional techniques and augment overall efficiency [7,39,43].
Selection Criteria for Tools and Datasets: The tools, datasets, and examples included in this study were selected based on the following criteria:
  • Relevance to Industry Applications: Tools and datasets that are widely used in real-world software engineering contexts, such as GitHub Copilot and IBM’s defect prediction tools, were prioritized.
  • Recency: Preference was given to tools and datasets published or actively used within the past five years to ensure that the study reflects the current state of the field.
  • Accessibility: Open-source datasets and tools with publicly available documentation were selected to facilitate reproducibility.
  • Coverage of Development Phases: Examples were chosen to cover diverse phases of the software development lifecycle, including coding, testing, and maintenance.
  • Impact and Adoption: The selection emphasized tools and datasets with demonstrated effectiveness, as reported in industry and academic studies.

3.1. Requirements Analysis

3.1.1. Natural Language Processing (NLP) for Requirements Gathering

Requirements gathering constitutes a critical phase in the software development lifecycle, wherein stakeholders’ needs are translated into precise technical specifications. Traditionally, this process has been predominantly manual, labor-intensive, and susceptible to misinterpretations and omissions. AI, particularly natural language processing (NLP), presents significant advancements in automating and enhancing requirements elicitation [44].
NLP algorithms can process vast amounts of unstructured data from stakeholders, including emails, meeting transcripts, and user feedback. By leveraging sophisticated language models, AI systems can extract meaningful requirements efficiently, discern underlying functional and non-functional requirements, and detect ambiguities or inconsistencies. Furthermore, these systems can proactively suggest clarifications, thereby mitigating the risk of misunderstandings that could propagate costly errors in later stages of development.
Recent advancements in NLP, such as transformer-based models like BERT and GPT, have enhanced the capability of AI systems to comprehend and process human language with higher accuracy [45,46]. These models enable the extraction of nuanced requirements from complex and diverse stakeholder communications, improving the fidelity of requirements documentation.
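To make this concrete, the sketch below uses an off-the-shelf zero-shot classifier to label stakeholder statements as functional or non-functional requirements and to flag vague wording for clarification; the model choice and the keyword list are illustrative assumptions rather than components of any specific tool discussed here.
    # Minimal sketch: classifying stakeholder statements with a pretrained NLP model.
    # Assumes the Hugging Face `transformers` library and an off-the-shelf
    # zero-shot model (facebook/bart-large-mnli); both are illustrative choices.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    VAGUE_TERMS = {"fast", "user-friendly", "scalable", "as soon as possible"}  # heuristic list

    def analyze_requirement(statement: str) -> dict:
        """Label a requirement and flag potentially ambiguous wording."""
        labels = ["functional requirement", "non-functional requirement"]
        result = classifier(statement, candidate_labels=labels)
        ambiguous = [term for term in VAGUE_TERMS if term in statement.lower()]
        return {
            "statement": statement,
            "predicted_type": result["labels"][0],
            "confidence": round(result["scores"][0], 2),
            "ambiguous_terms": ambiguous,  # candidates for stakeholder clarification
        }

    print(analyze_requirement("The dashboard must load user reports quickly."))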

3.1.2. AI-Driven User Stories

User stories are pivotal components in agile development methodologies, encapsulating user requirements in concise and accessible formats. AI technologies can augment the creation and management of user stories by automatically generating them based on stakeholder inputs or analyzing system usage data [47].
By employing machine learning models trained on extensive datasets from prior projects, AI systems can predict and formulate user stories that are congruent with business objectives and user expectations. Moreover, these systems can prioritize user stories by evaluating factors such as potential impact, feasibility, and alignment with strategic organizational goals.
For instance, let us consider a software development team tasked with implementing a new feature. To ensure comprehensive requirement capture, the team integrates AI into their requirements engineering process. After drafting initial requirements, the AI system assists in refining and expanding these into detailed requirements specifications and scenarios. Subsequently, the AI aids in prioritizing the requirements based on criteria such as stakeholder significance, development complexity, and anticipated impact. This application of AI not only streamlines the requirements engineering process but also enhances the accuracy and relevance of the specifications, thereby facilitating a more efficient development workflow.
Additionally, AI can assist in identifying inconsistencies or dependencies among user stories, thereby facilitating better planning and resource allocation. Machine learning models can analyze historical data to estimate the effort and time required for implementing each user story, enabling more accurate sprint planning and workload distribution.
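A minimal sketch of such effort estimation, assuming a backlog already exported to simple numeric features (the feature names and training values below are hypothetical), could look as follows.
    # Minimal sketch: estimating user-story effort from historical backlog data.
    # Feature names and training data are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Each row: [acceptance criteria count, files likely touched, author experience in years]
    X_train = np.array([[2, 3, 5], [6, 10, 2], [1, 1, 8], [4, 7, 3], [3, 4, 6]])
    y_train = np.array([4.0, 16.0, 2.0, 10.0, 6.0])  # recorded effort in hours

    model = GradientBoostingRegressor(random_state=42)
    model.fit(X_train, y_train)

    new_story = np.array([[3, 5, 4]])  # a newly drafted user story
    print(f"Estimated effort: {model.predict(new_story)[0]:.1f} hours")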

3.2. Design and Architecture

3.2.1. Automated Design Pattern Recognition

Design patterns represent reusable solutions to recurrent problems in software design, and their appropriate application is instrumental in developing robust and maintainable systems. AI technologies can assist developers by automatically recognizing suitable design patterns based on system requirements and the existing codebase [48].
By leveraging machine learning algorithms, AI systems can analyze code repositories and architectural diagrams to suggest optimal design patterns that enhance code efficiency, scalability, and maintainability. Such automation alleviates the cognitive burden on developers, ensures consistency in design decisions, and expedites the architectural planning phase.
For example, AI tools can scan the codebase to identify areas where specific design patterns, such as Singleton, Observer, or Factory, may be effectively applied. By recommending refactoring opportunities, these tools can improve the overall code structure and performance. Moreover, AI can detect anti-patterns or suboptimal design choices, providing suggestions for architectural improvements.
Advanced AI techniques, including graph neural networks and pattern recognition algorithms, have enhanced the capability of AI systems to understand complex software architectures and code structures. These techniques enable the identification of patterns at a higher level of abstraction, facilitating more sophisticated design recommendations.
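The sketch below illustrates the structural-analysis step that underlies such tools, using static metrics to flag a potential God-class anti-pattern; the thresholds are illustrative heuristics standing in for a learned recognizer, and the inspected file name is hypothetical.
    # Simplified sketch: flagging a potential "God class" anti-pattern from code metrics.
    # Real tools learn such rules from labeled repositories; the thresholds here are
    # illustrative heuristics standing in for a trained model.
    import ast

    def class_metrics(source: str):
        """Yield (class name, method count, attribute count) for each class in a module."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
                attrs = {t.attr for m in methods for t in ast.walk(m)
                         if isinstance(t, ast.Attribute) and isinstance(t.value, ast.Name)
                         and t.value.id == "self"}
                yield node.name, len(methods), len(attrs)

    def flag_god_classes(source: str, max_methods: int = 20, max_attrs: int = 15):
        return [name for name, m, a in class_metrics(source)
                if m > max_methods or a > max_attrs]

    with open("order_service.py") as f:  # hypothetical module under review
        print(flag_god_classes(f.read()))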

3.2.2. Intelligent System Modeling

System modeling is the process of creating abstract representations of software systems to comprehend and communicate their structure and behavior. AI augments system modeling by providing intelligent tools capable of generating models from requirements specifications or existing codebases.
Employing techniques such as graph analysis, machine learning, and pattern recognition, AI tools can automatically generate Unified Modeling Language (UML) diagrams or other architectural representations. These models assist in visualizing complex systems, identifying potential bottlenecks, and enhancing communication among development team members.
Automating system modeling through AI not only accelerates the modeling process but also ensures that documentation remains current, which is crucial for onboarding new team members and facilitating future system enhancements. Moreover, AI-driven modeling tools can simulate system behavior under various scenarios, enabling predictive analysis and the early detection of design flaws.
Advanced AI methodologies, such as model-driven engineering (MDE) augmented with AI, enable the generation of code directly from models, thereby bridging the gap between design and implementation [49]. This integration fosters a seamless transition from system models to executable code, enhancing development efficiency and consistency.
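As a simplified illustration of automated model extraction, the following sketch derives PlantUML class-diagram text from Python source; it covers only the structural-extraction step that such modeling tools automate, and the module name is hypothetical.
    # Minimal sketch: deriving a PlantUML class diagram from Python source.
    # Only inheritance relationships are extracted; the input file name is hypothetical.
    import ast

    def to_plantuml(source: str) -> str:
        tree = ast.parse(source)
        lines = ["@startuml"]
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                lines.append(f"class {node.name}")
                for base in node.bases:
                    if isinstance(base, ast.Name):
                        lines.append(f"{base.id} <|-- {node.name}")  # inheritance edge
        lines.append("@enduml")
        return "\n".join(lines)

    with open("billing.py") as f:  # hypothetical module
        print(to_plantuml(f.read()))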

3.3. Coding and Implementation

3.3.1. AI-Assisted Code Generation

AI-assisted code generation represents a significant advancement in software engineering. Tools such as GitHub Copilot, powered by OpenAI’s Codex model, have demonstrated the potential of AI to generate code snippets based on natural language prompts or partially written code.
Developers can utilize AI to automate the creation of repetitive and structurally simple code constructs, such as loops, conditionals, or even entire functions. By providing a description or an initial code fragment, AI systems can predict and generate the subsequent code, thereby accelerating development and reducing manual coding effort.
Although AI-generated code may not always be flawless, it serves as a valuable assistant by handling boilerplate code, enabling developers to concentrate on more complex and innovative aspects of the application. Furthermore, AI code generation can contribute to standardizing coding practices and reducing human errors.
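As a minimal sketch of programmatic code generation, the example below queries a small open-source code model through the Hugging Face transformers pipeline; the model and prompt are illustrative assumptions and are unrelated to how Copilot or Codex is implemented.
    # Minimal sketch: generating a code completion from a natural-language prompt.
    # The model (Salesforce/codegen-350M-mono) is an illustrative open-source choice,
    # not the model behind Copilot or any tool discussed above.
    from transformers import pipeline

    generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

    prompt = "# Python function that returns the n-th Fibonacci number\ndef fibonacci(n):"
    completion = generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
    print(completion)  # the suggestion still requires human review before use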
However, it is essential to consider the limitations and ethical implications associated with AI-assisted code generation. Issues such as code correctness, security vulnerabilities, and potential intellectual property concerns must be carefully managed [50].

3.3.2. Smart Code Completion and Suggestions

Beyond code generation, AI significantly enhances the coding experience through intelligent code completion and context-aware suggestions. Modern integrated development environments (IDEs) augmented with AI capabilities can predict subsequent lines of code, recommend optimal implementations, and detect potential errors in real time.
These intelligent suggestions are contextually aware, taking into account the project’s codebase, established coding standards, and industry best practices. By assisting developers in adhering to consistent coding styles and patterns, AI aids in reducing syntax errors, improving code quality, and expediting the coding process. The immediate feedback and guidance provided by AI-powered IDEs foster a more efficient and less error-prone development environment.
Moreover, AI can assist in identifying deprecated APIs or libraries and suggest modern alternatives, thereby enhancing code maintainability and future-proofing applications [51,52,53].

3.4. Testing and Quality Assurance

3.4.1. Automated Test Case Generation

Testing constitutes a critical aspect of software quality assurance, ensuring that applications perform as intended. AI can automate the generation of test cases by analyzing code changes and predicting potential failure points.
For example, when developers introduce new code into the codebase, AI models can suggest relevant tests for the changes and generate test scripts that seamlessly integrate with the existing testing framework. AI-driven tools can automatically create unit tests and other testing scripts, covering basic functionalities and edge cases. This automation reduces the manual effort associated with writing tests, enhances test coverage, and facilitates the early detection of defects.
By continuously updating test suites in response to code modifications, AI ensures that testing processes remain aligned with ongoing development activities, thereby maintaining high-quality standards throughout the project lifecycle.
AI techniques, such as model-based testing and symbolic execution, can analyze program behavior to generate comprehensive test cases that cover a wide range of scenarios [54]. Machine learning models can predict areas of the code that are more likely to contain defects, enabling targeted testing efforts.
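The following simplified sketch shows boundary-value test generation for a single function using parameterized pytest cases; a handcrafted heuristic stands in here for the learned test-selection models described above.
    # Simplified sketch: auto-generating boundary-value test cases for a function.
    # A heuristic generator stands in for the learned test-selection models discussed above.
    import pytest

    def discount(price: float, percent: float) -> float:
        """Function under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return round(price * (1 - percent / 100), 2)

    def boundary_cases():
        """Generate edge cases around the valid range of `percent`."""
        for percent in (0, 0.01, 50, 99.99, 100):
            yield 100.0, percent

    @pytest.mark.parametrize("price,percent", list(boundary_cases()))
    def test_discount_within_bounds(price, percent):
        result = discount(price, percent)
        assert 0 <= result <= price  # discounted price never exceeds the original

    def test_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            discount(100.0, 120)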

3.4.2. AI in Defect Prediction and Management

Defect prediction is a critical component of proactive quality assurance. AI models trained on historical project data can predict code segments that are likely to contain defects, enabling developers to focus their testing and review efforts more effectively [55].
Machine learning algorithms can analyze patterns in code complexity metrics, change histories, developer activities, and previous defect occurrences to identify high-risk components. This predictive capability allows teams to allocate resources efficiently, prioritize testing efforts, and address potential issues before they manifest into significant problems.
Additionally, AI can assist in defect management by automating the triage process and assigning bugs to the most appropriate developers based on factors such as expertise, workload, and historical performance. By automating prioritization and assignment, AI reduces bottlenecks in the defect resolution process and accelerates the delivery of fixes.
Techniques such as random forests, support vector machines, and deep learning models have been employed for defect prediction with varying degrees of success [56]. The integration of AI into defect management systems enhances the overall quality assurance process and contributes to the development of more reliable software systems.
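A minimal sketch of metric-based defect prediction with a random forest is shown below; the per-module metrics and labels are hypothetical, whereas real pipelines mine them from version control and issue trackers.
    # Minimal sketch: predicting defect-prone modules from code metrics.
    # Features and labels are hypothetical; real systems mine them from project history.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Columns: [cyclomatic complexity, lines changed last release, past bug fixes]
    X = np.array([[4, 20, 0], [35, 400, 7], [12, 60, 1], [50, 900, 12],
                  [8, 30, 0], [28, 250, 5], [15, 120, 2], [40, 600, 9]])
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = defect reported after release

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Rank new modules by predicted defect risk so reviews target the riskiest first.
    new_modules = np.array([[10, 45, 1], [33, 380, 6]])
    for metrics, risk in zip(new_modules, clf.predict_proba(new_modules)[:, 1]):
        print(f"module metrics {metrics.tolist()} -> defect risk {risk:.2f}")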

3.5. Deployment and Maintenance

3.5.1. Predictive Maintenance Using AI

In the deployment and maintenance phases, AI significantly contributes to ensuring system reliability and performance. Predictive maintenance employs AI algorithms to analyze operational data and predict potential system failures before they occur.
By continuously monitoring metrics such as system logs, resource utilization, network traffic, and user behavior patterns, AI models can detect anomalies and forecast issues. Machine learning techniques, such as time-series analysis and anomaly detection algorithms, enable the identification of patterns indicative of impending failures.
This proactive approach allows teams to address problems pre-emptively, thereby minimizing system downtime, reducing maintenance costs, and enhancing user satisfaction. Furthermore, predictive maintenance facilitated by AI supports the optimization of system performance over time by enabling data-driven decision-making in maintenance scheduling and resource allocation.
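The anomaly-detection step can be sketched as follows, assuming a short window of operational metrics (the CPU and latency values are fabricated for illustration) and an isolation-forest detector.
    # Minimal sketch: flagging anomalous operational samples with IsolationForest.
    # The metric values are fabricated; production systems stream them from monitoring.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Columns: [CPU utilization %, average request latency in ms]
    history = np.array([[42, 110], [45, 120], [40, 105], [47, 130], [44, 115],
                        [43, 118], [46, 125], [41, 108], [45, 122], [44, 117]])

    detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

    current = np.array([[46, 128], [88, 950]])  # the second sample hints at an impending failure
    for sample, label in zip(current, detector.predict(current)):
        status = "anomalous" if label == -1 else "normal"
        print(f"CPU={sample[0]}%, latency={sample[1]}ms -> {status}")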

3.5.2. Automated Monitoring and Alert Systems

AI enhances system monitoring by automating the detection of irregularities in deployed applications. Intelligent alert systems, powered by AI, can distinguish between normal operational fluctuations and significant issues, thereby reducing false alarms and enabling teams to focus on critical events.
Integrating AI into continuous integration/continuous deployment (CI/CD) pipelines can automate post-deployment monitoring. AI systems can assess system logs, identify anomalies, and promptly notify the development team, ensuring the swift resolution of production issues.
For instance, developers may enhance their CI/CD pipelines by incorporating AI-generated unit tests, which are created on the fly and added to the codebase during deployment. These tests run upon deployment and in subsequent deployments, improving test coverage and reliability. Additionally, AI can be employed to monitor production system logs for a defined period after deployment, enabling the early detection of anomalies. This integration of AI into the deployment process results in a more efficient and robust system, reducing the likelihood of undetected issues impacting end-users.
Advanced AI techniques, such as machine learning-based anomaly detection and natural language processing for log analysis, can process large volumes of system data to identify subtle patterns indicative of potential issues [57]. By automating these processes, AI reduces the manual effort required for monitoring and enables real-time responsiveness to operational problems.
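A minimal sketch of log-based anomaly detection, combining TF-IDF text features with a novelty detector, is given below; the baseline and post-deployment log lines are fabricated examples.
    # Minimal sketch: spotting unusual log messages after a deployment.
    # Baseline and post-deployment log lines are fabricated examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import LocalOutlierFactor

    baseline_logs = [
        "GET /orders 200 35ms",
        "GET /orders 200 41ms",
        "POST /checkout 201 120ms",
        "GET /health 200 2ms",
        "POST /checkout 201 110ms",
        "GET /orders 200 38ms",
    ]
    new_logs = [
        "GET /orders 200 36ms",
        "ERROR connection pool exhausted after 30s",
    ]

    vectorizer = TfidfVectorizer()
    X_base = vectorizer.fit_transform(baseline_logs).toarray()
    X_new = vectorizer.transform(new_logs).toarray()

    detector = LocalOutlierFactor(n_neighbors=3, novelty=True).fit(X_base)
    for line, label in zip(new_logs, detector.predict(X_new)):
        print(f"{'ALERT' if label == -1 else 'ok   '}  {line}")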

4. Case Studies

This section presents multiple case studies to illustrate how AI-driven solutions are transforming software engineering in real-world scenarios. Each case highlights distinct AI applications (e.g., generative code, automated defect detection, and intelligent coding assistants) implemented by leading organizations. The selection of case studies was based on their relevance to current AI applications in software engineering, focusing on tools widely adopted in industry, such as GitHub Copilot and IBM’s defect prediction tool. Theoretical models were chosen to ensure the coverage of diverse development phases, including requirements gathering, coding, and maintenance.
By analyzing these diverse implementations, we gain insights into both the benefits and challenges of adopting AI across various stages of the software development lifecycle.
The case studies included here were selected based on the following criteria:
  • Industry Impact: Organizations known for pioneering or significantly influencing AI-driven methodologies in software development (e.g., McKinsey, GitHub, IBM, Microsoft, Snyk, and Google).
  • Variety of AI Applications: Each case study addresses different facets of AI integration—such as code generation, defect prediction, and real-time code review—to present a holistic view of AI’s potential.
  • Empirical Evidence: Only initiatives with measurable outcomes (e.g., productivity gains or reduction in bugs) were chosen to ensure that concrete data support our analysis.
From a theoretical standpoint, our analysis aligns with socio-technical theory, examining how AI tools reshape interactions between developers (the social component) and software systems (the technical component). We also adopt principles from design science research (DSR) by exploring how AI solutions address specific software engineering challenges (e.g., high error rates or lengthy code reviews) and how they are iteratively refined over time. Additionally, we invoke activity theory to highlight the learning and adaptation processes that occur as human developers and AI tools collaborate.

4.1. McKinsey’s Integration of Generative AI in Software Development

McKinsey & Company, a global management consulting firm, has been at the forefront of integrating generative artificial intelligence (AI) into its software development processes. Recognizing the increasing demand for efficient and high-quality software solutions, McKinsey embarked on a comprehensive study to evaluate the impact of generative AI on software development productivity and quality [58].
The initiative involved deploying generative AI tools across multiple development teams. These AI tools assisted in automated code generation, routine coding tasks, and intelligent suggestions for code improvements. Trained on extensive code repositories, the AI models could produce code snippets aligned with both industry best practices and project-specific requirements.
The results demonstrated a notable increase in productivity: development teams using the generative AI tools completed coding tasks up to twice as fast as control groups. Code quality also improved modestly, evidenced by fewer bugs and enhanced maintainability. Crucially, McKinsey reported accelerated time to market for new software products.
Despite these positive outcomes, McKinsey identified challenges, particularly the variability in AI tool effectiveness depending on task complexity, and the need for developers to maintain sufficient expertise to leverage AI assistance responsibly. This underscores the importance of striking a balance between automation and skilled human oversight.

4.2. GitHub Copilot’s Impact on Developer Productivity and Code Quality

GitHub Copilot, an AI-powered coding assistant developed in collaboration with OpenAI, has transformed how developers approach coding tasks. By offering real-time code suggestions and completions, Copilot aims to enhance developer productivity and overall code quality [20].
In a large-scale study by GitHub, 95 professional developers were split into control and treatment groups. The treatment group utilized Copilot, while the control group relied on traditional coding methods. Metrics such as task completion time, success rates, and coding error frequency were evaluated.
The results indicated that developers using Copilot completed tasks approximately 55% faster than the control group. The Copilot-assisted code also exhibited higher rates of passing test cases on the first attempt, reflecting fewer syntactic and logical errors. The developers appreciated the reduced effort in maintaining consistent coding standards.
However, the study also highlighted potential risks, including the possibility of developer over-dependence on AI-generated suggestions and the occasional misalignment between Copilot’s contextual understanding and complex project requirements. Still, GitHub Copilot demonstrated a strong correlation between AI assistance and developer productivity, pointing to AI’s transformative capacity in modern software development.

4.3. IBM’s AI-Powered Defect Prediction Tool Enhances Software Quality

IBM’s recent integration of an AI-powered defect prediction tool reflects its long-standing commitment to technological innovation. Trained on a decade’s worth of code and defect data from large-scale projects, this tool aims to anticipate defect-prone areas, enabling proactive management throughout the software development lifecycle.
Upon deployment, development teams within IBM’s software division used the tool’s real-time insights to prioritize testing and debugging efforts. The resulting improvements were significant: post-release defects decreased by 20%, while defect detection rates during development rose by 15%. Furthermore, debugging time dropped by 30%, translating to faster project timelines and lower overall costs.
These metrics highlight the potential of AI-driven analytics to elevate software quality and developer productivity. By systematically identifying latent issues, the tool ensures more efficient allocation of resources, reinforcing IBM’s competitive edge in delivering robust, high-quality software.

4.4. Microsoft IntelliCode’s Enhancement of Developer Productivity and Code Quality

Microsoft IntelliCode, an AI-driven extension for Visual Studio and Visual Studio Code, leverages models trained on thousands of open-source projects to provide intelligent code completions and suggestions [59].
IntelliCode addresses common coding challenges such as inconsistent coding styles, manual code reviews, and the need for repetitive boilerplate code. By offering context-aware suggestions, IntelliCode accelerates the coding process while promoting best practices. Developers can further refine IntelliCode by training it on their own repositories, enhancing its project-specific recommendations.
Empirical studies indicate a 30% increase in coding speed and a 25% reduction in bugs when teams adopt IntelliCode’s recommendations. This not only expedites development cycles but also reduces technical debt, improving maintainability and easing collaborative workflows for distributed teams. Surveyed developers reported high satisfaction, citing IntelliCode’s positive impact on both productivity and code quality.

4.5. Snyk Code’s AI-Driven Enhancements to Code Quality and Developer Efficiency

Snyk Code—formerly known as DeepCode—applies advanced AI and natural language processing to automate code reviews and detect security vulnerabilities, bugs, and code quality issues [60].
By integrating with GitHub, GitLab, and Bitbucket, Snyk Code analyzes each commit and pull request, offering real-time feedback. This proactive approach allows teams to catch problems early and adopt consistent coding standards. Organizations employing Snyk Code reported a 40% increase in early bug detection, a 35% decrease in security vulnerabilities, and a 50% reduction in manual code review times.
The immediate feedback loop helps developers address vulnerabilities and bugs on the fly, mitigating the risk of introducing critical defects into production. Furthermore, the solution fosters developer satisfaction by reducing the repetitive and error-prone aspects of manual reviews, allowing senior developers to focus on more complex, high-value tasks.

4.6. Google’s DeepMind AlphaCode Transforms Code Generation and Problem Solving

DeepMind’s AlphaCode has garnered attention for its ability to generate high-quality code capable of solving intricate programming problems, often at or above human-competitive levels. Trained on a large corpus of competitive programming tasks and associated solutions, AlphaCode can interpret natural language problem statements and generate optimized, maintainable code.
Its performance in competitive programming contests is particularly notable, with a 65% success rate in solving attempted problems [12]. AlphaCode’s integration into development workflows enables rapid prototyping and brainstorming, cutting down the time from conceptualization to a working prototype by nearly 40%. This efficiency gain frees developers to concentrate on refining solutions, exploring multiple design approaches, and tackling more creative tasks.
Developer feedback indicates a 25% increase in productivity when leveraging AlphaCode for initial code generation. Moreover, it offers an excellent learning resource, exposing teams to diverse coding strategies and best practices. In essence, AlphaCode exemplifies AI’s capability not only to automate coding tasks but also to augment human problem-solving skills and foster innovation.

4.7. Underexplored Areas and Novelty

While these six case studies provide a broad perspective on AI’s role in software engineering, several underexplored areas remain:
  • Requirements Engineering: Existing research largely focuses on code-level AI applications; few studies address how AI can automate or improve the capture and analysis of complex, evolving software requirements.
  • Domain-Specific AI Tools: Many current solutions target general-purpose software. There is a gap in specialized domains—such as safety-critical, embedded, or high-assurance systems—where compliance and reliability are paramount.
  • Long-Term Human–AI Collaboration: Current insights offer snapshots of productivity gains, but long-term impacts on team skill development, morale, and reliance on AI-based suggestions are understudied.
By highlighting these gaps, we emphasize that the novelty in future research lies in exploring and bridging these underexamined domains. Addressing these topics will further demonstrate how AI can not only improve existing practices but also redefine the boundaries of what is achievable in software engineering.

5. The Impact of Artificial Intelligence on Software Engineering Practices: A Quantitative Analysis

To evaluate the impact of artificial intelligence (AI) on software engineering practices, a survey (Appendix A) was conducted targeting professionals across various roles, including software developers, project managers, and quality assurance specialists. The survey aimed to measure perceptions, experiences, and outcomes related to AI-driven tools in the software development lifecycle (SDLC).
A total of 250 respondents participated in the survey, drawn from Saudi Arabia and Jordan. All participants had a minimum of two years of experience in software engineering, ensuring that the insights gathered were from individuals with substantial industry experience (Figure 1).
The survey employed a structured questionnaire comprising 20 questions, which were segmented into the following sections:
  • Adoption Rates: Assessed the frequency of AI tool usage and the specific stages of the SDLC where they are implemented.
  • Perceived Benefits: Evaluated improvements in efficiency, productivity, and code quality attributed to AI tools.
  • Challenges: Identified barriers to implementation, trust in AI-generated suggestions, and necessary skill shifts.
  • Future Outlook: Gauged the willingness of professionals to integrate more AI technologies in the future.
Responses were measured using a 5-point Likert scale, ranging from Strongly Disagree (1) to Strongly Agree (5).
  • Usage of AI Tools: In total, 68% of participants reported using AI tools in at least one phase of the SDLC. The most common applications were coding and debugging.
  • Popular AI Tools: The tools frequently cited included GitHub Copilot (48%), IntelliCode (30%), and AI-based test automation platforms (22%).
Key benefits were assessed in terms of efficiency, accuracy, and team collaboration:
  • Efficiency: In total, 74% agreed that AI reduced the time required for routine coding tasks.
  • Code Quality: A total of 62% noted a decrease in post-release defects in projects that utilized AI assistance.
  • Team Collaboration: In total, 45% acknowledged improved resource allocation due to automated task suggestions.
Challenges highlighted by the respondents include the following:
  • Trust in AI: Only 40% expressed complete trust in AI-generated solutions, citing concerns over errors and lack of transparency.
  • Skill Gaps: In total, 58% believed that insufficient AI training among team members hindered full adoption.
  • Implementation Costs: A total of 52% noted high initial costs as a significant barrier.
The analysis identified the following as critical to the successful integration of AI:
  • Training Programs: Teams with formal AI training achieved 30% higher efficiency gains.
  • Project Size: Medium-to-large projects benefited the most, with a 20% increase in developer productivity.
  • Tool Suitability: The customization and alignment of tools with project needs were pivotal, as cited by 68% of respondents.
The survey responses were statistically analyzed, with key metrics summarized in Table 2.
The findings affirm the potential of AI to revolutionize software engineering by enhancing productivity and quality. However, the success of AI integration depends on addressing challenges such as skill enhancement and building trust in AI recommendations. Organizations should invest in comprehensive training programs and select AI tools that align closely with their project requirements to maximize the benefits of AI in the software development lifecycle. To address the challenges and limitations of AI tools in software engineering, the following strategies are recommended:
  • Fairness-Aware Models: Implementing fairness-aware algorithms can help mitigate biases in AI-generated recommendations.
  • Human-in-the-Loop Systems: Combining AI with human oversight ensures that critical thinking and contextual expertise are preserved.
  • Continuous Model Updates: Regular updates and the fine-tuning of AI models based on user feedback and new datasets can enhance performance and reliability.
These measures can help maximize the value of AI tools while minimizing their risks, fostering a balanced and effective integration into software engineering practices.
Figure 2 provides a boxplot representation of the key metrics evaluated in this study. The metrics include reduction in coding time, improvement in code quality, increase in team collaboration, and challenges in skill development. The boxplot illustrates the distribution of responses, highlighting central tendencies (medians), variability (interquartile ranges), and potential outliers for each metric.
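For readers who wish to reproduce the visualization style, the sketch below generates a comparable boxplot with matplotlib; the response distributions are synthetic placeholders, not the study's raw survey data.
    # Sketch: boxplot of Likert-scale survey responses per metric.
    # The response arrays are synthetic placeholders, not the study's raw data.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(seed=1)
    metrics = {
        "Reduction in coding time": rng.integers(3, 6, size=250),
        "Improvement in code quality": rng.integers(2, 6, size=250),
        "Increase in team collaboration": rng.integers(2, 5, size=250),
        "Challenges in skill development": rng.integers(1, 5, size=250),
    }

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.boxplot(list(metrics.values()), labels=list(metrics.keys()))
    ax.set_ylabel("Likert score (1 = Strongly Disagree, 5 = Strongly Agree)")
    ax.set_title("Distribution of survey responses by metric")
    plt.xticks(rotation=20)
    plt.tight_layout()
    plt.show()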

6. Challenges and Considerations

While the integration of AI into software engineering heralds numerous benefits, it also presents a range of challenges that must be carefully addressed to ensure successful adoption. These challenges encompass technical hurdles, ethical and legal dilemmas, and significant workforce implications (Figure 3). This section examines these challenges, providing a comprehensive analysis to inform future research and practice.

6.1. Technical Challenges

6.1.1. Data Quality and Availability

The performance of AI systems is intrinsically linked to the quality and quantity of data used for training and validation. In software engineering, acquiring high-quality, relevant data can be particularly challenging due to the diverse and complex nature of software artifacts and processes [61].
Data quality issues such as noise, inconsistency, and incompleteness can significantly impair AI models, leading to unreliable predictions and recommendations. Moreover, proprietary constraints and privacy concerns often limit access to essential datasets, hindering the development of robust AI solutions [62].
To mitigate these challenges, organizations need to establish rigorous data governance frameworks that ensure data integrity and accessibility. Implementing standardized data collection and curation practices can enhance data reliability. Additionally, leveraging techniques such as data augmentation and synthetic data generation can alleviate data scarcity while preserving sensitive information [63].

6.1.2. Integration with Existing Systems

Integrating AI technologies into existing software engineering workflows poses significant technical complexities. Legacy systems may not be designed to accommodate AI components, resulting in compatibility issues and the need for substantial re-engineering efforts [64].
Challenges include ensuring interoperability between AI tools and current development environments, managing data flow between disparate systems, and maintaining system performance and scalability [65]. Furthermore, the integration process may disrupt established workflows, leading to temporary decreases in productivity.
To address these issues, organizations should adopt modular architectures and standardized interfaces that facilitate seamless integration. Employing middleware solutions and adopting microservice architectures can enhance flexibility and scalability. A phased integration approach, accompanied by thorough testing and validation, can minimize disruptions and ensure a smooth transition [66].

6.2. Ethical and Legal Issues

6.2.1. Bias in AI Algorithms

AI systems are susceptible to biases inherent in their training data or introduced during model development. In software engineering, such biases can lead to skewed analyses, unfair prioritizations, and suboptimal decision-making [67].
For instance, an AI tool trained on historical project data may inadvertently perpetuate past biases, such as underrepresenting certain programming paradigms or technologies. This can result in recommendations that do not align with current best practices or organizational goals [68].
Mitigating bias requires a multifaceted approach, including diversifying training datasets, implementing fairness-aware algorithms, and conducting regular bias audits. Transparency in AI processes and outputs is crucial, enabling stakeholders to understand and scrutinize AI-driven decisions [69].

6.2.2. Intellectual Property Concerns

The use of AI in generating software artifacts raises complex intellectual property (IP) issues. Questions arise regarding the ownership of code or designs produced by AI systems: whether the rights belong to the creator of the AI, the user, or the AI itself [70].
Moreover, AI models trained on proprietary or open-source code repositories may inadvertently replicate copyrighted material, leading to potential infringement disputes [71]. The legal frameworks governing AI-generated content are still evolving, creating uncertainty for organizations leveraging these technologies.
Organizations must proactively address IP concerns by establishing clear policies and agreements that define ownership and usage rights. Consulting legal experts specialized in technology law can provide guidance. Additionally, implementing safeguards within AI systems to detect and prevent the unauthorized use of protected content can mitigate legal risks [72].

6.3. Workforce Implications

6.3.1. Skill Gaps

The infusion of AI into software engineering necessitates a workforce proficient in both domains. There is a pressing demand for professionals who understand AI methodologies, data science principles, and their application within software development contexts [39].
However, many software engineers may lack the requisite AI expertise, leading to a skill gap that can impede the effective utilization of AI tools. This gap extends to understanding AI-driven insights, interpreting model outputs, and integrating AI recommendations into development processes [64].
Addressing this challenge requires strategic investment in education and training programs. Organizations can offer professional development opportunities, collaborate with academic institutions, and promote interdisciplinary learning. Cultivating an environment that encourages continuous skill enhancement will equip the workforce to harness AI’s full potential [73].

6.3.2. Resistance to Change

Adopting AI technologies can encounter resistance from practitioners accustomed to traditional software engineering methodologies. Concerns may include fear of job displacement, skepticism about AI effectiveness, or discomfort with altering established workflows [74].
Overcoming resistance involves transparent communication about the benefits and limitations of AI, emphasizing its role in augmenting rather than replacing human expertise. Involving employees in the adoption process, providing adequate training, and demonstrating tangible improvements can facilitate acceptance [75].
Leadership plays a critical role in driving cultural change. By championing AI initiatives and fostering an organizational mindset open to innovation, leaders can mitigate resistance and inspire confidence in new technologies [76].

7. Future Directions

While the case studies presented in this paper provide valuable insights into the transformative role of AI in software engineering, they focus primarily on a subset of tools and applications and may not fully capture the diversity of AI implementations across all domains of software engineering. Future research should investigate AI applications in less-studied areas, such as real-time systems and distributed computing.
As artificial intelligence continues to evolve, its integration with software engineering is expected to deepen, influencing emerging technologies and necessitating shifts in research focus and educational practices. This section explores potential future developments, identifies areas ripe for research, and underscores the importance of educational reforms to prepare professionals for the changing landscape.
The convergence of AI and IoT is poised to create a new paradigm in software engineering. IoT devices generate massive amounts of data, providing rich opportunities for AI algorithms to extract insights and enable intelligent decision-making. Software engineers will need to develop systems capable of handling real-time data processing and analysis, ensuring seamless integration between AI algorithms and IoT infrastructures. This synergy can lead to the creation of adaptive systems that respond proactively to environmental changes and user behaviors.
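As a small illustration of the kind of real-time processing such AI-IoT systems require (the sensor feed and thresholds below are invented for the example), a rolling-window detector can score each incoming reading against recent history and react before the full dataset is ever stored:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 10          # readings kept for the rolling baseline (illustrative)
Z_THRESHOLD = 3.0    # readings this many std. devs. from the mean are anomalous


def stream_anomalies(readings):
    """Yield (index, value) for readings that deviate from the rolling baseline."""
    window = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        if len(window) == WINDOW:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
                yield i, value
        window.append(value)


if __name__ == "__main__":
    # Simulated temperature feed with one faulty spike at index 15.
    feed = [21.0 + 0.1 * (i % 5) for i in range(30)]
    feed[15] = 35.0
    for index, value in stream_anomalies(feed):
        print(f"Anomalous reading {value} at position {index}")
```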
The findings of this study, particularly those related to tools like GitHub Copilot, are based on specific datasets and scenarios, so their applicability to other tools or broader software engineering contexts may be limited. To enhance generalizability, future studies should incorporate diverse datasets and evaluate a wider range of AI-driven tools.
The ethical implications of adopting AI tools in software engineering also remain a critical concern. Issues such as algorithmic bias, data privacy, and the long-term impact on workforce skills require careful attention. Organizations are encouraged to adopt fairness-aware algorithms, perform regular audits, and maintain a balance between automation and human expertise to mitigate these risks and ensure ethical practice.
Quantum computing presents a frontier with the potential to revolutionize computational capacities. Its implications for software engineering are profound, as traditional algorithms may become obsolete. AI can assist in developing quantum algorithms and in simulating quantum processes, bridging the gap between current computational methods and future quantum technologies. Software engineers will be challenged to rethink software design principles to accommodate quantum computing’s unique properties, such as superposition and entanglement.
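To make the notion of simulating quantum processes concrete, the short sketch below (an illustrative toy, not a production quantum toolkit) builds a two-qubit Bell state with plain linear algebra, showing both superposition and entanglement in a handful of lines:

```python
import numpy as np

# Single-qubit basis state and gates.
zero = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # entangles the two qubits

# Start in |00>, put the first qubit in superposition, then apply CNOT.
state = np.kron(H @ zero, zero)
state = CNOT @ state

# Measurement probabilities for |00>, |01>, |10>, |11>: 0.5, 0, 0, 0.5.
probabilities = np.abs(state) ** 2
for label, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{label}>) = {p:.2f}")
```

Because the classical state vector doubles with every additional qubit, such simulations quickly become intractable, which is part of the motivation for AI assistance in quantum algorithm design.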
Despite significant advancements, several areas within AI-driven software engineering remain underexplored. Research is needed to enhance the explainability of AI models, ensuring transparency in decision-making processes. Additionally, the development of AI techniques for automated software maintenance and evolution poses a significant opportunity. Exploring AI’s role in cybersecurity within software systems is another critical area, given the increasing sophistication of cyber threats.
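As one concrete direction for explainability, global feature attributions can already be obtained for conventional defect-prediction models; the sketch below (using scikit-learn on synthetic data, purely as an illustration) ranks code metrics by how much shuffling each one degrades a trained classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["cyclomatic_complexity", "churn", "loc", "num_authors"]

# Synthetic module metrics; defects are driven mainly by complexity and churn.
X = rng.normal(size=(500, 4))
y = ((1.5 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print the metrics in descending order of importance.
for name, importance in sorted(zip(features, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Extending comparable transparency to the large generative models now used for code generation remains an open problem.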
Bridging the gap between theoretical research and practical application is essential. Collaborative efforts between academia and industry can accelerate innovation by aligning research objectives with real-world challenges. Such partnerships can facilitate the sharing of resources, data, and expertise, leading to the development of robust AI-driven software solutions. Joint initiatives can also foster the creation of standards and best practices, promoting consistency and reliability across the industry.
To prepare the next generation of software engineers, educational institutions must integrate AI concepts into their curricula. This includes foundational courses in machine learning, data science, and AI ethics. Practical experience with AI tools and platforms should be emphasized to equip students with hands-on skills. Interdisciplinary programs combining software engineering with AI specializations can produce professionals adept at navigating the complexities of AI integration.
The rapid pace of AI innovation necessitates a commitment to lifelong learning among software engineering professionals. Continuous education programs, workshops, and certifications can help professionals stay abreast of the latest developments. Employers play a crucial role by supporting ongoing training initiatives, fostering a culture that values adaptability and continuous improvement. Embracing lifelong learning ensures that the workforce remains competent and competitive in an evolving technological landscape.
The findings of this study raise important questions about the future of AI in software engineering and its applicability across organizations of different scales.
  • Applicability to Small Startups: Startups with limited resources can use AI tools to automate routine tasks, such as code generation and defect detection, reducing overhead costs. For instance, a small development team could employ GitHub Copilot to streamline coding tasks while focusing their efforts on innovation. Additionally, AI tools offer startups an opportunity to adopt agile practices more effectively, despite their constrained budgets.
  • Applicability to Big Companies: In contrast, larger organizations can benefit from AI’s ability to optimize workflows across distributed teams. For example, large-scale enterprises could integrate AI tools into their DevOps pipelines, ensuring continuous integration and delivery. However, big companies must also invest in employee training programs to maximize AI’s potential and address ethical challenges, such as bias in AI algorithms.
  • Future Directions for AI in Software Engineering: As the field advances, AI has the potential to foster decentralized software teams that operate autonomously, driven by adaptive AI tools. Additionally, the development of universal benchmarks and ethical standards for AI-driven tools could facilitate cross-industry collaboration, ensuring sustainable and fair AI adoption.

8. Conclusions

This study has explored the transformative integration of artificial intelligence (AI) into software engineering processes, revealing its potential to fundamentally reshape the discipline. By bringing together automation, predictive analytics, and intelligent decision-making, AI introduces not merely incremental advancements but a paradigm shift in addressing persistent challenges in software development. The key scientific contributions of this study are as follows:
  • Comprehensive AI Integration Across the Development Lifecycle: Unlike previous research that isolates specific AI applications (e.g., code generation or defect prediction), this study advocates for a cohesive, end-to-end approach. By uniting AI-driven components across planning, development, testing, and maintenance phases, this work highlights AI’s role in creating more adaptive, efficient, and robust software systems.
  • Novel Insights into Adaptive and Data-Driven Development: This study demonstrates AI’s capability to learn continuously from evolving codebases and defect datasets, reducing model drift and enhancing long-term reliability. Such adaptivity positions AI not as a static tool but as a collaborative partner capable of evolving with development workflows.
  • Balancing Human–AI Collaboration: The findings underscore the indispensable role of human expertise for contextual understanding, strategic oversight, and ethical considerations. This balance between human judgment and AI automation emerges as a critical factor in maximizing the effectiveness and fairness of software engineering practices.
  • Identification of Emerging Research Opportunities: By showcasing AI’s current applications in areas such as security and advanced code generation, this study identifies underexplored domains like requirements engineering, safety-critical systems, and human–AI co-evolution. These areas present significant opportunities for impactful future research.
  • Scientific Novelty and Future Directions: The study provides an original classification framework for AI methodologies in software engineering, offering a structured lens to analyze and advance the field. Future work should prioritize quantitative studies, ethical frameworks, and interdisciplinary collaborations to deepen this paradigm shift.
These scientific contributions reinforce the notion that AI is neither a mere tool for incremental improvements nor a transient trend, but an integral component reshaping the fabric of software engineering. The dynamic synergies emerging between AI-driven analytics and software development teams suggest that professionals and organizations must cultivate agility and continuous learning to harness AI's transformative potential fully.
Looking ahead, the future of AI in software engineering lies in deepening this collaborative paradigm, with AI and human engineers co-evolving to tackle escalating complexities in system design, maintenance, and innovation. By fostering multidisciplinary collaboration among academia, industry, and practitioners, the field can navigate ethical, technical, and organizational challenges collectively, ensuring that AI integration sustainably drives efficiency, quality, and creativity and paves the way for a new era of innovation.

Author Contributions

Methodology, M.A. (Mamdouh Alenezi); Software, M.A. (Mohammed Akour); Validation, M.A. (Mamdouh Alenezi); Investigation, M.A. (Mohammed Akour); Resources, M.A. (Mohammed Akour); Writing—original draft, M.A. (Mamdouh Alenezi); Writing—review and editing, M.A. (Mohammed Akour). All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge the support of Prince Sultan University for paying the Article Processing Charge (APC) of this publication.

Institutional Review Board Statement

Survey responses were collected anonymously and with explicit participant consent, ensuring that participants' rights and privacy were fully protected in adherence to ethical research standards. Given the anonymized nature of the data and the consent obtained, no Institutional Review Board (IRB) approval was required under the relevant guidelines.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Mamdouh Alenezi was employed by The Saudi Technology and Security Comprehensive Control Company (Tahakom). The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Survey Questions

Appendix A.1. Adoption of AI in Software Engineering

1. Which phases of the software development lifecycle (SDLC) do you currently use AI tools for?
  • Requirements Analysis
  • Design
  • Coding
  • Testing
  • Deployment
  • Maintenance
  • None
2. How frequently do you use AI tools in your daily work?
  • Always
  • Frequently
  • Occasionally
  • Rarely
  • Never
3. What are the primary AI tools you use?
  (Open-ended)

Appendix A.2. Perceived Benefits of AI Tools

4. To what extent do you agree with the following statement: "AI tools have significantly improved my productivity in software engineering".
  (Likert Scale: Strongly Disagree to Strongly Agree)
5. How much time has AI saved you on routine coding tasks compared to manual efforts?
  • No time saved
  • 10–25%
  • 26–50%
  • 51–75%
  • Over 75%
6. Have AI tools reduced errors in your software development process?
  (Yes/No/Not Sure)
7. To what extent do you agree that AI tools improve code quality by following best practices?
  (Likert Scale: Strongly Disagree to Strongly Agree)
8. Have AI tools enhanced collaboration within your team?
  (Yes/No)

Appendix A.3. Challenges in Using AI Tools

9. What challenges have you faced in adopting AI tools? (Select all that apply)
  • Lack of training
  • High cost
  • Integration issues
  • Trust in AI
  • Resistance to change
  • Other
10. Do you feel sufficiently trained to use AI tools effectively in your work?
  (Likert Scale: Strongly Disagree to Strongly Agree)
11. How much do you trust the accuracy and reliability of AI-generated suggestions?
  (Likert Scale: Strongly Disagree to Strongly Agree)
12. Have you experienced any ethical or legal concerns when using AI tools?
  (Yes/No)

Appendix A.4. Outcomes and Future Integration

13. How have AI tools impacted project completion timelines in your organization?
  • Significantly shortened
  • Somewhat shortened
  • No impact
  • Lengthened
14. Do you believe that AI tools will become a core component of future software development workflows?
  (Yes/No/Maybe)
15. What improvements or features would you like to see in AI tools to enhance their effectiveness?
  (Open-ended)

Appendix A.5. Demographics and Context

16. What is your primary role in software engineering?
  • Developer
  • Tester
  • Manager
  • Architect
  • Other
17. How many years of experience do you have in software engineering?
  • Less than 2 years
  • 2–5 years
  • 6–10 years
  • Over 10 years
18. What is the size of your organization?
  • Small (<50 employees)
  • Medium (50–500 employees)
  • Large (>500 employees)
19. What industry sector do you work in?
  • Technology
  • Finance
  • Healthcare
  • Education
  • Other
20. Have you received formal training on AI tools for software engineering?
  (Yes/No)

References

  1. Gurcan, F.; Dalveren, G.G.M.; Cagiltay, N.E.; Soylu, A. Detecting latent topics and trends in software engineering research since 1980 using probabilistic topic modeling. IEEE Access 2022, 10, 74638–74654. [Google Scholar] [CrossRef]
  2. Inkollu, K.; Gorle, S.K.; Kondabattula, S.R.; Shankar, P.B.; Reddy, M.B. A Review on Software Engineering: Perspective of Emerging Technologies & Challenges. In Proceedings of the Eighth International Conference on Research in Intelligent Computing in Engineering, Hyderabad, India, 1–2 December 2023; pp. 23–27. [Google Scholar]
  3. Kuhrmann, M.; Tell, P.; Hebig, R.; Klünder, J.; Münch, J.; Linssen, O.; Pfahl, D.; Felderer, M.; Prause, C.R.; MacDonell, S.G.; et al. What makes agile software development agile? IEEE Trans. Softw. Eng. 2021, 48, 3523–3539. [Google Scholar] [CrossRef]
  4. Forsgren, N. DevOps delivers. Commun. ACM 2018, 61, 32–33. [Google Scholar] [CrossRef]
  5. Gall, M.; Pigni, F. Taking DevOps mainstream: A critical review and conceptual framework. Eur. J. Inf. Syst. 2022, 31, 548–567. [Google Scholar] [CrossRef]
  6. Sauvola, J.; Tarkoma, S.; Klemettinen, M.; Riekki, J.; Doermann, D. Future of software development with generative AI. Autom. Softw. Eng. 2024, 31, 26. [Google Scholar] [CrossRef]
  7. Barenkamp, M.; Rebstadt, J.; Thomas, O. Applications of AI in classical software engineering. AI Perspect. 2020, 2, 1. [Google Scholar] [CrossRef]
  8. Bader, J.; Kim, S.S.; Luan, F.S.; Chandra, S.; Meijer, E. AI in software engineering at Facebook. IEEE Softw. 2021, 38, 52–61. [Google Scholar] [CrossRef]
  9. Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H.P.D.O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. Evaluating large language models trained on code. arXiv 2021, arXiv:2107.03374. [Google Scholar]
  10. Li, Y.; Wang, S.; Nguyen, T.N. Dear: A novel deep learning-based approach for automated program repair. In Proceedings of the 44th International Conference on Software Engineering, Pittsburgh, PA, USA, 21–29 May 2022; pp. 511–523. [Google Scholar]
  11. de Moor, A.; van Deursen, A.; Izadi, M. A transformer-based approach for smart invocation of automatic code completion. In Proceedings of the 1st ACM International Conference on AI-Powered Software, Porto de Galinhas, Brazil, 15–16 July 2024; pp. 28–37. [Google Scholar]
  12. Li, Y.; Choi, D.; Chung, J.; Kushman, N.; Schrittwieser, J.; Leblond, R.; Eccles, T.; Keeling, J.; Gimeno, F.; Dal Lago, A.; et al. Competition-level code generation with alphacode. Science 2022, 378, 1092–1097. [Google Scholar] [CrossRef]
  13. Giray, G.; Bennin, K.E.; Köksal, Ö.; Babur, Ö.; Tekinerdogan, B. On the use of deep learning in software defect prediction. J. Syst. Softw. 2023, 195, 111537. [Google Scholar] [CrossRef]
  14. Li, L.; Ding, S.X.; Peng, X. Distributed data-driven optimal fault detection for large-scale systems. J. Process Control 2020, 96, 94–103. [Google Scholar] [CrossRef]
  15. Sawant, P.D. Test Case Prioritization for Regression Testing Using Machine Learning. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence Testing (AITest), Shanghai, China, 15–18 July 2024; pp. 152–153. [Google Scholar]
  16. Haji Mohammadkhani, A. Explainable AI for Software Engineering: A Systematic Review and an Empirical Study. Master’s Thesis, University of Calgary, Calgary, AB, Canada, 2023. Available online: https://prism.ucalgary.ca/handle/1880/115792 (accessed on 25 November 2024).
  17. Tamanampudi, V.M. Deep Learning Models for Continuous Feedback Loops in DevOps: Enhancing Release Cycles with AI-Powered Insights and Analytics. J. Artif. Intell. Res. Appl. 2022, 2, 425–463. [Google Scholar]
  18. Kokol, P. The Use of AI in Software Engineering: A Synthetic Knowledge Synthesis of the Recent Research Literature. Information 2024, 15, 354. [Google Scholar] [CrossRef]
  19. Amugongo, L.M.; Kriebitz, A.; Boch, A.; Lütge, C. Operationalising AI ethics through the agile software development lifecycle: A case study of AI-enabled mobile health applications. AI Ethics 2023, 1–18. [Google Scholar] [CrossRef]
  20. Kalliamvakou, E. Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness. 2022. Available online: https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/ (accessed on 11 November 2024).
  21. Odeh, A.; Odeh, N.; Mohammed, A.S. A Comparative Review of AI Techniques for Automated Code Generation in Software Development: Advancements, Challenges, and Future Directions. TEM J. 2024, 13, 726–739. [Google Scholar] [CrossRef]
  22. France, S.L. Navigating software development in the ChatGPT and GitHub Copilot era. Bus. Horizons 2024, 67, 649–661. [Google Scholar] [CrossRef]
  23. Bull, C.; Kharrufa, A. Generative AI Assistants in Software Development Education: A vision for integrating Generative AI into educational practice, not instinctively defending against it. IEEE Softw. 2024, 41, 52–59. [Google Scholar] [CrossRef]
  24. Royce, W.W. Managing the development of large software systems. Proc. IEEE WESCON 1970, 26, 328–388. [Google Scholar]
  25. Beck, K.; Beedle, M.; Van Bennekum, A.; Cockburn, A.; Cunningham, W.; Fowler, M.; Grenning, J.; Highsmith, J.; Hunt, A.; Jeffries, R.; et al. Manifesto for Agile Software Development. 2001. Available online: http://agilemanifesto.org/ (accessed on 25 November 2024).
  26. Humble, J.; Farley, D. Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation; Pearson Education: London, UK, 2010. [Google Scholar]
  27. Smith, J.; Brown, T.; Wilson, R. Unlocking Developer Productivity: A Deep Dive into GitHub Copilot’s AI-Powered Code Completion. Int. J. Eng. Res. Technol. 2023, 13, 82–87. [Google Scholar]
  28. Qian, C.; Liu, W.; Liu, H.; Chen, N.; Dang, Y.; Li, J.; Yang, C.; Chen, W.; Su, Y.; Cong, X.; et al. ChatDev: Communicative Agents for Software Development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, 11–16 August 2024; pp. 15174–15186. [Google Scholar]
  29. Ernst, N.A.; Bavota, G. Ai-driven development is here: Should you worry? IEEE Softw. 2022, 39, 106–110. [Google Scholar] [CrossRef]
  30. Cámara, J.; Troya, J.; Burgueño, L.; Vallecillo, A. On the assessment of generative AI in modeling tasks: An experience report with ChatGPT and UML. Softw. Syst. Model. 2023, 22, 781–793. [Google Scholar] [CrossRef]
  31. Borges, O.; Lima, M.; Couto, J.; Gadelha, B.; Conte, T.; Prikladnicki, R. ML@ SE: What do we know about how Machine Learning impact Software Engineering practice? In Proceedings of the 2022 17th Iberian Conference on Information Systems and Technologies (CISTI), Madrid, Spain, 22–25 June 2022; pp. 1–7. [Google Scholar]
  32. Mezouar, H.; Afia, A.E. A systematic literature review of machine learning applications in software engineering. In Proceedings of the International Conference on Big Data and Internet of Things, Chengdu, China, 2–4 December 2022; pp. 317–331. [Google Scholar]
  33. Wang, S.; Huang, L.; Gao, A.; Ge, J.; Zhang, T.; Feng, H.; Satyarth, I.; Li, M.; Zhang, H.; Ng, V. Machine/deep learning for software engineering: A systematic literature review. IEEE Trans. Softw. Eng. 2022, 49, 1188–1231. [Google Scholar] [CrossRef]
  34. Aniche, M.; Maziero, E.; Durelli, R.; Durelli, V.H. The effectiveness of supervised machine learning algorithms in predicting software refactoring. IEEE Trans. Softw. Eng. 2020, 48, 1432–1450. [Google Scholar] [CrossRef]
  35. Durelli, V.H.; Durelli, R.S.; Borges, S.S.; Endo, A.T.; Eler, M.M.; Dias, D.R.; Guimarães, M.P. Machine learning applied to software testing: A systematic mapping study. IEEE Trans. Reliab. 2019, 68, 1189–1212. [Google Scholar] [CrossRef]
  36. Bird, C.; Ford, D.; Zimmermann, T.; Forsgren, N.; Kalliamvakou, E.; Lowdermilk, T.; Gazit, I. Taking flight with copilot. Commun. ACM 2023, 66, 56–62. [Google Scholar] [CrossRef]
  37. Lu, Q.; Zhu, L.; Whittle, J.; Michael, J.B. Software engineering for responsible AI. Computer 2023, 56, 13–16. [Google Scholar] [CrossRef]
  38. Gonzalez, L.A.; Neyem, A.; Contreras-McKay, I.; Molina, D. Improving learning experiences in software engineering capstone courses using artificial intelligence virtual assistants. Comput. Appl. Eng. Educ. 2022, 30, 1370–1389. [Google Scholar] [CrossRef]
  39. Sofian, H.; Yunus, N.A.M.; Ahmad, R. Systematic mapping: Artificial intelligence techniques in software engineering. IEEE Access 2022, 10, 51021–51040. [Google Scholar] [CrossRef]
  40. Meriçli, Ç.; Turhan, B. Special section on realizing artificial intelligence synergies in software engineering. Softw. Qual. J. 2017, 25, 231–233. [Google Scholar] [CrossRef]
  41. Shehab, M.; Abualigah, L.; Jarrah, M.I.; Alomari, O.A.; Daoud, M.S. Artificial intelligence in software engineering and inverse. Int. J. Comput. Integr. Manuf. 2020, 33, 1129–1144. [Google Scholar] [CrossRef]
  42. Mashkoor, A.; Menzies, T.; Egyed, A.; Ramler, R. Artificial intelligence and software engineering: Are we ready? Computer 2022, 55, 24–28. [Google Scholar] [CrossRef]
  43. Marar, H.W. Advancements in software engineering using AI. Comput. Softw. Media Appl. 2024, 6, 3906. [Google Scholar] [CrossRef]
  44. Necula, S.C.; Dumitriu, F.; Greavu-Șerban, V. A Systematic Literature Review on Using Natural Language Processing in Software Requirements Engineering. Electronics 2024, 13, 2055. [Google Scholar] [CrossRef]
  45. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186. [Google Scholar]
  46. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. In Proceedings of the Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 1877–1901. [Google Scholar]
  47. Raharjana, I.K.; Siahaan, D.; Fatichah, C. User stories and natural language processing: A systematic literature review. IEEE Access 2021, 9, 53811–53826. [Google Scholar] [CrossRef]
  48. Dwivedi, A.K.; Tirkey, A.; Ray, R.B.; Rath, S.K. Software design pattern recognition using machine learning techniques. In Proceedings of the 2016 IEEE Region 10 Conference (Tencon), Singapore, 22–25 November 2016; pp. 222–227. [Google Scholar]
  49. André, P.; Tebib, M.E.A. Assistance in Model Driven Development: Toward an Automated Transformation Design Process. Complex Syst. Informatics Model. Q. 2024, 38, 54–99. [Google Scholar] [CrossRef]
  50. Barke, S.; James, M.B.; Polikarpova, N. Grounded copilot: How programmers interact with code-generating models. Proc. ACM Program. Lang. 2023, 7, 85–111. [Google Scholar] [CrossRef]
  51. Alenezi, M.; Akour, M. Empowering student entrepreneurship skills: A software engineering course for innovation and real-world impact. J. Infrastruct. Policy Dev. 2024, 8, 9088. [Google Scholar] [CrossRef]
  52. Magabaleh, A.A.; Ghraibeh, L.L.; Audeh, A.Y.; Albahri, A.; Deveci, M.; Antucheviciene, J. Systematic Review of Software Engineering Uses of Multi-Criteria Decision-Making Methods: Trends, Bibliographic Analysis, Challenges, Recommendations, and Future Directions. Appl. Soft Comput. 2024, 163, 111859. [Google Scholar] [CrossRef]
  53. Zarour, M.; Akour, M.; Alenezi, M. Enhancing DevOps Engineering Education Through System-Based Learning Approach. Open Educ. Stud. 2024, 6, 20240012. [Google Scholar] [CrossRef]
  54. Bu, L.; Liang, Y.; Xie, Z.; Qian, H.; Hu, Y.Q.; Yu, Y.; Chen, X.; Li, X. Machine learning steered symbolic execution framework for complex software code. Form. Asp. Comput. 2021, 33, 301–323. [Google Scholar] [CrossRef]
  55. Wang, S.; Liu, T.; Nam, J.; Tan, L. Deep semantic feature learning for software defect prediction. IEEE Trans. Softw. Eng. 2018, 46, 1267–1293. [Google Scholar] [CrossRef]
  56. Bowes, D.; Hall, T.; Petrić, J. Software defect prediction: Do different classifiers find the same defects? Softw. Qual. J. 2018, 26, 525–552. [Google Scholar] [CrossRef]
  57. Almodovar, C.; Sabrina, F.; Karimi, S.; Azad, S. LogFiT: Log anomaly detection using fine-tuned language models. IEEE Trans. Netw. Serv. Manag. 2024, 21, 1715–1723. [Google Scholar] [CrossRef]
  58. Deniz, B.K.; Gnanasambandam, C.; Harrysson, M.; Hussin, A.; Srivastava, S. Unleashing Developer Productivity with Generative AI. McKinsey Digital. 2023. Available online: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/unleashing-developer-productivity-with-generative-ai (accessed on 10 November 2024).
  59. Svyatkovskiy, A.; Zhao, Y.; Fu, S.; Sundaresan, N. Pythia: Ai-assisted code completion system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2727–2735. [Google Scholar]
  60. DeepCode AI|AI Code Review|AI Security for SAST|Snyk AI|Snyk. Available online: https://snyk.io/platform/deepcode-ai/ (accessed on 11 November 2024).
  61. Amershi, S.; Begel, A.; Bird, C.; DeLine, R.; Gall, H.; Kamar, E.; Nagappan, N.; Nushi, B.; Zimmermann, T. Software engineering for machine learning: A case study. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), Montreal, QC, Canada, 25–31 May 2019; pp. 291–300. [Google Scholar]
  62. Davoudian, A.; Liu, M. Big data systems: A software engineering perspective. ACM Comput. Surv. (CSUR) 2020, 53, 1–39. [Google Scholar] [CrossRef]
  63. Novichkov, P.S.; Chandonia, J.M.; Arkin, A.P. CORAL: A framework for rigorous self-validated data modeling and integrative, reproducible data analysis. GigaScience 2022, 11, giac089. [Google Scholar] [CrossRef]
  64. Russo, D. Navigating the complexity of generative ai adoption in software engineering. ACM Trans. Softw. Eng. Methodol. 2024, 33, 1–50. [Google Scholar] [CrossRef]
  65. Belgaum, M.R.; Alansari, Z.; Musa, S.; Alam, M.M.; Mazliham, M. Role of artificial intelligence in cloud computing, IoT and SDN: Reliability and scalability issues. Int. J. Electr. Comput. Eng. 2021, 11, 4458. [Google Scholar] [CrossRef]
  66. Said, M.A.; Ezzati, A.; Mihi, S.; Belouaddane, L. Microservices adoption: An industrial inquiry into factors influencing decisions and implementation strategies. Int. J. Comput. Digit. Syst. 2024, 15, 1417–1432. [Google Scholar] [CrossRef]
  67. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 2021, 54, 1–35. [Google Scholar] [CrossRef]
  68. Holstein, K.; Wortman Vaughan, J.; Daumé III, H.; Dudik, M.; Wallach, H. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–16. [Google Scholar]
  69. Orphanou, K.; Otterbacher, J.; Kleanthous, S.; Batsuren, K.; Giunchiglia, F.; Bogina, V.; Tal, A.S.; Hartman, A.; Kuflik, T. Mitigating bias in algorithmic systems—A fish-eye view. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
  70. Degli Esposti, M.; Lagioia, F.; Sartor, G. The use of copyrighted works by AI systems: Art works in the data Mill. Eur. J. Risk Regul. 2020, 11, 51–69. [Google Scholar] [CrossRef]
  71. Chatterjee, S.; NS, S. Artificial intelligence and human rights: A comprehensive study from Indian legal and policy perspective. Int. J. Law Manag. 2022, 64, 110–134. [Google Scholar] [CrossRef]
  72. Li, P.; Huang, J.; Zhang, S.; Qi, C. SecureEI: Proactive intellectual property protection of AI models for edge intelligence. Comput. Netw. 2024, 255, 110825. [Google Scholar] [CrossRef]
  73. Karthikeyan, C.; Singh, S. Skill Development Challenges in the Era of Artificial Intelligence (AI). In Integrating Technology in Problem-Solving Educational Practices; IGI Global: Hershey, PA, USA, 2025; pp. 189–218. [Google Scholar]
  74. Wang, D.; Weisz, J.D.; Muller, M.; Ram, P.; Geyer, W.; Dugan, C.; Tausczik, Y.; Samulowitz, H.; Gray, A. Human-AI collaboration in data science: Exploring data scientists’ perceptions of automated AI. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
  75. Kelley, S. Employee perceptions of the effective adoption of AI principles. J. Bus. Ethics 2022, 178, 871–893. [Google Scholar] [CrossRef]
  76. La Torre, D.; Colapinto, C.; Durosini, I.; Triberti, S. Team formation for human-artificial intelligence collaboration in the workplace: A goal programming model to foster organizational change. IEEE Trans. Eng. Manag. 2021, 70, 1966–1976. [Google Scholar] [CrossRef]
Figure 1. Analysis.
Figure 2. Key metrics evaluated in this study.
Figure 3. Challenges and considerations.
Table 1. Evolution of Software Engineering Practices.

Era | Methodologies | Key Features | Introduction/Adoption Period | References
Traditional | Waterfall model | Linear, sequential approach to software development | 1970s | Royce, 1970 [24]
Iterative | Agile, Scrum | Flexible, iterative approach emphasizing collaboration | 1990s | Beck et al., 2001 [25]
Modern | DevOps, continuous integration | Automated testing, continuous deployment practices | 2010s | Humble and Farley, 2010 [26]
AI-Driven | AI-powered tools, machine learning | Automated coding, adaptive software maintenance | 2020s | Smith et al., 2023 (GitHub Copilot) [27]; Qian et al., 2024 [28]
Table 2. Quantitative summary of survey responses.

Metric | Mean Score (Out of 5) | Std. Deviation
Reduction in Coding Time | 4.2 | 0.7
Improvement in Code Quality | 4.0 | 0.8
Increase in Team Collaboration | 3.7 | 0.9
Challenges in Skill Development | 3.9 | 1.1