Journal Description
Software
Software is an international, peer-reviewed, open access journal on all aspects of software engineering published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.7 days after submission; the time from acceptance to publication is 2.6 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Software is a companion journal of Electronics.
Latest Articles
Enhancing DevOps Practices in the IoT–Edge–Cloud Continuum: Architecture, Integration, and Software Orchestration Demonstrated in the COGNIFOG Framework
Software 2025, 4(2), 10; https://doi.org/10.3390/software4020010 - 15 Apr 2025
Abstract
This paper presents COGNIFOG, an innovative framework under development that is designed to leverage decentralized decision-making, machine learning, and distributed computing to enable autonomous operation, adaptability, and scalability across the IoT–edge–cloud continuum. The work emphasizes Continuous Integration/Continuous Deployment (CI/CD) practices, development, and versatile integration infrastructures. The described methodology ensures efficient, reliable, and seamless integration of the framework, offering valuable insights into integration design, data flow, and the incorporation of cutting-edge technologies. Through three real-world trials in smart cities, e-health, and smart manufacturing and the development of a comprehensive QuickStart Guide for deployment, this work highlights the efficiency and adaptability of the COGNIFOG platform, presenting a robust solution for addressing the complexities of next-generation computing environments.
Full article
Open Access Article
Regression Testing in Agile—A Systematic Mapping Study
by
Suddhasvatta Das and Kevin Gary
Software 2025, 4(2), 9; https://doi.org/10.3390/software4020009 - 14 Apr 2025
Abstract
Background: Regression testing is critical in agile software development, as it ensures that frequent changes do not introduce defects into previously working functionalities. While agile methodologies emphasize rapid iterations and value delivery, regression testing research has predominantly focused on optimizing technical efficiency rather than aligning with agile principles. Aim: This study aims to systematically map research trends and gaps in regression testing within agile environments, identifying areas that require further exploration to enhance alignment with agile practices and value-driven outcomes. Method: A systematic mapping study analyzed 35 primary studies. The research categorized studies based on their focus areas, evaluation metrics, agile frameworks, and methodologies, providing a comprehensive overview of the field. Results: The findings strongly emphasize test prioritization and selection, reflecting the need for optimized fault detection and execution efficiency in agile workflows. However, areas such as test generation, test minimization, and cost analysis are under-explored. Current evaluation metrics primarily address technical outcomes, neglecting agile-specific aspects like defect severity’s business impact and iterative workflows. Additionally, the research highlights the dominance of continuous integration frameworks, with limited attention to other agile practices like Scrum and a lack of datasets capturing agile-specific attributes such as testing costs and user story importance. Conclusions: This study underscores the need for research to expand beyond existing focus areas, exploring diverse testing techniques and developing agile-centric metrics and datasets. By addressing these gaps, future work can enhance the applicability of regression testing strategies and align them more closely with agile development principles.
Full article
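The mapping study above centers on test prioritization and selection. As an illustration only (not a technique taken from the paper), the following sketch orders regression tests by a simple heuristic that combines historical failure rate with execution cost; the class, fields, and sample data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failures: int      # how often this test failed in recent runs
    runs: int          # how often it was executed
    duration_s: float  # average execution time in seconds

def prioritize(tests: list[TestCase]) -> list[TestCase]:
    """Order tests so that historically fragile, cheap tests run first."""
    def score(t: TestCase) -> float:
        failure_rate = t.failures / t.runs if t.runs else 1.0  # unseen tests treated as risky
        return failure_rate / max(t.duration_s, 0.1)           # reward fast, failure-prone tests
    return sorted(tests, key=score, reverse=True)

if __name__ == "__main__":
    suite = [
        TestCase("checkout_flow", failures=4, runs=50, duration_s=12.0),
        TestCase("login_happy_path", failures=0, runs=50, duration_s=1.5),
        TestCase("cart_discounts", failures=6, runs=40, duration_s=3.0),
    ]
    for t in prioritize(suite):
        print(t.name)
```

Agile-specific signals such as user story importance or defect severity, which the study identifies as under-represented, could be folded into the same scoring function.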

Open Access Article
Uplifting Moods: Augmented Reality-Based Gamified Mood Intervention App with Attention Bias Modification
by
Yun Jung Yeh, Sarah S. Jo and Youngjun Cho
Software 2025, 4(2), 8; https://doi.org/10.3390/software4020008 - 1 Apr 2025
Abstract
Attention Bias Modification (ABM) is a cost-effective mood intervention that has the potential to be used in daily settings beyond clinical environments. However, its interactivity and user engagement are known to be limited and underexplored. Here, we propose Uplifting Moods, a novel mood intervention app that combines gamified ABM and augmented reality (AR) to address the limitation associated with the repetitive nature of ABM. By harnessing the benefits of mobile AR’s low-cost, portable, and accessible characteristics, this approach helps users easily take part in ABM, positively shifting their emotions. We conducted a mixed methods study with 24 participants, which involved a controlled experiment with the Self-Assessment Manikin as its primary measure and a semi-structured interview. Our analysis reports that the approach uniquely adds fun, exploratory, and challenging features, helping improve engagement and making users feel more cheerful and less under control. It also highlights the importance of personalization and consideration of gaming style, music preference, and socialization in designing a daily AR ABM game as an effective mental wellbeing intervention.
Full article

Open Access Article
Empirical Analysis of Data Sampling-Based Decision Forest Classifiers for Software Defect Prediction
by
Fatima Enehezei Usman-Hamza, Abdullateef Oluwagbemiga Balogun, Hussaini Mamman, Luiz Fernando Capretz, Shuib Basri, Rafiat Ajibade Oyekunle, Hammed Adeleye Mojeed and Abimbola Ganiyat Akintola
Software 2025, 4(2), 7; https://doi.org/10.3390/software4020007 - 21 Mar 2025
Abstract
The strategic significance of software testing in ensuring the success of software development projects is paramount. Comprehensive testing, conducted early and consistently across the development lifecycle, is vital for mitigating defects, especially given the constraints on time, budget, and other resources often faced by development teams. Software defect prediction (SDP) serves as a proactive approach to identifying software components that are most likely to be defective. By predicting these high-risk modules, teams can prioritize thorough testing and inspection, thereby preventing defects from escalating to later stages where resolution becomes more resource intensive. SDP models must be continuously refined to improve predictive accuracy and performance. This involves integrating clean and preprocessed datasets, leveraging advanced machine learning (ML) methods, and optimizing key metrics. Statistical-based and traditional ML approaches have been widely explored for SDP. However, statistical-based models often struggle with scalability and robustness, while conventional ML models face challenges with imbalanced datasets, limiting their prediction efficacy. In this study, innovative decision forest (DF) models were developed to address these limitations. Specifically, this study evaluates the cost-sensitive forest (CS-Forest), forest penalizing attributes (FPA), and functional trees (FT) as DF models. These models were further enhanced using homogeneous ensemble techniques, such as bagging and boosting techniques. The experimental analysis on benchmark SDP datasets demonstrates that the proposed DF models effectively handle class imbalance, accurately distinguishing between defective and non-defective modules. Compared to baseline and state-of-the-art ML and deep learning (DL) methods, the suggested DF models exhibit superior prediction performance and offer scalable solutions for SDP. Consequently, the application of DF-based models is recommended for advancing defect prediction in software engineering and similar ML domains.
Full article
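The decision forest variants evaluated in the paper (CS-Forest, FPA, FT) are not available in scikit-learn, but the general recipe — a bagged ensemble of cost-sensitive trees applied to imbalanced defect data — can be sketched with standard components. The synthetic dataset and all parameters below are illustrative, not those used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef

# Synthetic, imbalanced "defect" data: roughly 10% defective modules (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive base tree (class_weight) wrapped in a bagging ensemble,
# loosely mirroring the bagged decision-forest idea discussed above.
model = BaggingClassifier(
    DecisionTreeClassifier(class_weight="balanced", max_depth=8),
    n_estimators=50,
    random_state=0,
)
model.fit(X_tr, y_tr)
print("MCC:", round(matthews_corrcoef(y_te, model.predict(X_te)), 3))
```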

Open Access Review
Designing Microservices Using AI: A Systematic Literature Review
by
Daniel Narváez, Nicolas Battaglia, Alejandro Fernández and Gustavo Rossi
Software 2025, 4(1), 6; https://doi.org/10.3390/software4010006 - 19 Mar 2025
Abstract
Microservices architecture has emerged as a dominant approach for developing scalable and modular software systems, driven by the need for agility and independent deployability. However, designing these architectures poses significant challenges, particularly in service decomposition, inter-service communication, and maintaining data consistency. To address these issues, artificial intelligence (AI) techniques, such as machine learning (ML) and natural language processing (NLP), have been applied with increasing frequency to automate and enhance the design process. This systematic literature review examines the application of AI in microservices design, focusing on AI-driven tools and methods for improving service decomposition, decision-making, and architectural validation. This review analyzes research studies published between 2018 and 2024 that specifically focus on the application of AI techniques in microservices design, identifying key AI methods used, challenges encountered in integrating AI into microservices, and the emerging trends in this research area. The findings reveal that AI has effectively been used to optimize performance, automate design tasks, and mitigate some of the complexities inherent in microservices architectures. However, gaps remain in areas such as distributed transactions and security. The study concludes that while AI offers promising solutions, further empirical research is needed to refine AI’s role in microservices design and address the remaining challenges.
Full article
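One AI technique that recurs in this literature is clustering textual or structural descriptions of a monolith's classes into candidate services. A minimal, hypothetical sketch using TF-IDF and k-means with scikit-learn follows; the class descriptions and the choice of k are invented for illustration and do not come from the review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical one-line descriptions of classes in a monolith.
classes = {
    "OrderService": "create order, cancel order, order history",
    "InvoiceGenerator": "generate invoice pdf for completed order",
    "CustomerProfile": "register customer, update address, login",
    "PasswordReset": "reset customer password via email token",
    "ShipmentTracker": "track shipment status and delivery date",
}

vectors = TfidfVectorizer().fit_transform(classes.values())
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Each cluster is a candidate microservice boundary.
for name, label in zip(classes, labels):
    print(label, name)
```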

Open Access Article
A Systematic Approach for Assessing Large Language Models’ Test Case Generation Capability
by
Hung-Fu Chang and Mohammad Shokrolah Shirazi
Software 2025, 4(1), 5; https://doi.org/10.3390/software4010005 - 10 Mar 2025
Abstract
Software testing ensures the quality and reliability of software products, but manual test case creation is labor-intensive. With the rise of Large Language Models (LLMs), there is growing interest in unit test creation with LLMs. However, effective assessment of LLM-generated test cases is limited by the lack of standardized benchmarks that comprehensively cover diverse programming scenarios. To address the assessment of an LLM’s test case generation ability and the lack of a dataset for evaluation, we propose the Generated Benchmark from Control-Flow Structure and Variable Usage Composition (GBCV) approach, which systematically generates programs used for evaluating LLMs’ test generation capabilities. By leveraging basic control-flow structures and variable usage, GBCV provides a flexible framework to create a spectrum of programs ranging from simple to complex. Because GPT-4o and GPT-3.5-Turbo are publicly accessible models that reflect how typical users interact with LLMs, we use GBCV to assess their performance. Our findings indicate that GPT-4o performs better on composite program structures, while all models effectively detect boundary values in simple conditions but face challenges with arithmetic computations. This study highlights the strengths and limitations of LLMs in test generation, provides a benchmark framework, and suggests directions for future improvement.
Full article
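The core GBCV idea — compose small programs from basic control-flow structures and ask an LLM to write tests for them — can be pictured with a toy generator and prompt builder. The templates below are hypothetical and no API call is made; they do not reproduce the paper's actual benchmark programs.

```python
def make_program(threshold: int) -> str:
    """Compose a tiny program from one if/else control-flow structure (illustrative)."""
    return (
        "def classify(x: int) -> str:\n"
        f"    if x > {threshold}:\n"
        "        return 'high'\n"
        "    return 'low'\n"
    )

def make_prompt(program: str) -> str:
    """Build a test-generation prompt for an LLM (sending it is left to the caller)."""
    return (
        "Write pytest unit tests for the following function, "
        "covering both branches and the boundary value:\n\n" + program
    )

if __name__ == "__main__":
    print(make_prompt(make_program(threshold=10)))
```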

Open Access Article
On the Execution and Runtime Verification of UML Activity Diagrams
by
François Siewe and Guy Merlin Ngounou
Software 2025, 4(1), 4; https://doi.org/10.3390/software4010004 - 27 Feb 2025
Abstract
The unified modelling language (UML) is an industrial de facto standard for system modelling. It consists of a set of graphical notations (also known as diagrams) and has been used widely in many industrial applications. Although the graphical nature of UML is appealing to system developers, the official documentation of UML does not provide formal semantics for UML diagrams. This makes UML unsuitable for formal verification and, therefore, limited when it comes to the development of safety/security-critical systems where faults can cause damage to people, properties, or the environment. The UML activity diagram is an important UML graphical notation, which is effective in modelling the dynamic aspects of a system. This paper proposes a formal semantics for UML activity diagrams based on the calculus of context-aware ambients (CCA). An algorithm (semantic function) is proposed that maps any activity diagram onto a process in CCA, which describes the behaviours of the UML activity diagram. This process can then be executed and formally verified using the CCA simulation tool ccaPL and the CCA runtime verification tool ccaRV. Hence, design flaws can be detected and fixed early during the system development lifecycle. The pragmatics of the proposed approach are demonstrated using a case study in e-commerce.
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
The Scalable Detection and Resolution of Data Clumps Using a Modular Pipeline with ChatGPT
by
Nils Baumgartner, Padma Iyenghar, Timo Schoemaker and Elke Pulvermüller
Software 2025, 4(1), 3; https://doi.org/10.3390/software4010003 - 2 Feb 2025
Abstract
This paper explores a modular pipeline architecture that integrates ChatGPT, a Large Language Model (LLM), to automate the detection and refactoring of data clumps—a prevalent type of code smell that complicates software maintainability. Data clumps refer to clusters of code that are often repeated and should ideally be refactored to improve code quality. The pipeline leverages ChatGPT’s capabilities to understand context and generate structured outputs, making it suitable for addressing complex software refactoring tasks. Through systematic experimentation, our study not only addresses the research questions outlined but also demonstrates that the pipeline can accurately identify data clumps, particularly excelling in cases that require semantic understanding—where localized clumps are embedded within larger codebases. While the solution significantly enhances the refactoring workflow, facilitating the management of distributed clumps across multiple files, it also presents challenges such as occasional compiler errors and high computational costs. Feedback from developers underscores the usefulness of LLMs in software development but also highlights the essential role of human oversight to correct inaccuracies. These findings demonstrate the pipeline’s potential to enhance software maintainability, offering a scalable and efficient solution for addressing code smells in real-world projects, and contributing to the broader goal of enhancing software maintainability in large-scale projects.
Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
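Independently of the LLM pipeline the paper builds, the smell itself can be approximated as the same group of parameters recurring across several function signatures. A small, self-contained detector sketch follows; the thresholds and the use of Python's ast module are illustrative choices, not the paper's tooling.

```python
import ast
from itertools import combinations
from collections import Counter

SOURCE = """
def create_user(name, street, city, zip_code): ...
def ship_order(order_id, street, city, zip_code): ...
def print_label(street, city, zip_code): ...
"""

def find_data_clumps(source: str, min_size: int = 3, min_occurrences: int = 3):
    """Report parameter groups that recur in at least `min_occurrences` functions."""
    counts: Counter = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            params = [a.arg for a in node.args.args]
            for group in combinations(sorted(params), min_size):
                counts[group] += 1
    return [group for group, n in counts.items() if n >= min_occurrences]

print(find_data_clumps(SOURCE))  # [('city', 'street', 'zip_code')]
```

A refactoring step would then replace such a group with a single parameter object, which is the kind of change the pipeline delegates to the LLM.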
Open Access Article
German Translation and Psychometric Analysis of the SOLID-SD: A German Inventory for Assessing Security Culture in Software Companies
by
Christina Glasauer, Hollie N. Pearl and Rainer W. Alexandrowicz
Software 2025, 4(1), 2; https://doi.org/10.3390/software4010002 - 24 Jan 2025
Abstract
The SOLID-S is an inventory assessing six dimensions of organizational (software) security culture, which is currently available in English. Here, we present the German version, SOLID-SD, along with its translation process and psychometric analysis. With a partial credit model based on a sample of N = 280 persons, we found, overall, highly satisfactory measurement properties for the instrument. There were no threshold permutations, no serious differential item functioning, and good item fits. The subscales’ internal consistencies and the inter-scale correlations show very high similarities between the SOLID-SD and the original English version, indicating a successful translation of the instrument.
Full article
(This article belongs to the Special Issue Software Reliability, Security and Quality Assurance)
Open Access Article
A Common Language of Software Evolution in Repositories (CLOSER)
by
Jordan Garrity and David Cutting
Software 2025, 4(1), 1; https://doi.org/10.3390/software4010001 - 6 Jan 2025
Abstract
Version Control Systems (VCSs) are used by development teams to manage the collaborative evolution of source code, and there are several widely used industry standard VCSs. In addition to the code files themselves, metadata about the changes made are also recorded by the VCS, and this is often used with analytical tools to provide insight into the software development, a process known as Mining Software Repositories (MSRs). MSR tools are numerous but most often limited to one VCS format and, therefore, restricted in their scope of application in addition to the initial effort required to implement parsers for verbose textual VCS output. To address this limitation, a domain-specific language (DSL), the Common Language of Software Evolution in Repositories (CLOSER), was defined that abstracted away from specific implementations while isomorphically mapping to the data model of all major VCS formats. Using CLOSER directly as a data model or as an intermediate stage in a conversion analysis approach could make use of all major repositories rather than be limited to a single format. The initial barrier to adoption for MSR approaches was also lowered as CLOSER output is a concise, easily machine-readable format. CLOSER was implemented in tooling and tested against a number of common expected use cases, including a direct use in MSR analysis, proving the fidelity of the model and implementation. CLOSER was also successfully used to convert raw output logs from one VCS format to another, offering the possibility that legacy analysis tools could be used on other technologies without any changes being required. In addition to the advantages of a generic model opening all major VCS formats for analysis parsing, the CLOSER format was found to require less code and complete parsing faster than traditional VCS logging outputs.
Full article
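The CLOSER schema itself is not reproduced here, but the underlying idea — turning a VCS's verbose log into a concise, machine-readable record — can be illustrated by parsing `git log` output into neutral dictionaries. The field names below are hypothetical and are not the CLOSER data model.

```python
import subprocess
import json

# A delimiter-separated format string keeps parsing trivial; %x1f is the ASCII unit separator.
GIT_FORMAT = "%H%x1f%an%x1f%aI%x1f%s"

def read_commits(repo_path: str = ".") -> list[dict]:
    """Convert `git log` output into neutral commit records (hypothetical schema)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--pretty=format:{GIT_FORMAT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    records = []
    for line in out.splitlines():
        commit_id, author, date, message = line.split("\x1f")
        records.append({"id": commit_id, "author": author, "date": date, "message": message})
    return records

if __name__ == "__main__":
    print(json.dumps(read_commits()[:3], indent=2))
```

An equivalent reader for another VCS would emit the same record shape, which is what lets analysis tools stay format-agnostic.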

Open Access Communication
Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry
by
Md Sahadul Hasan Arian, Faisal Ahmed Sifat, Saif Ahmed, Nabeel Mohammed, Taseef Hasan Farook and James Dudley
Software 2024, 3(4), 587-594; https://doi.org/10.3390/software3040029 - 17 Dec 2024
Abstract
The Dental Loop Chatbot was developed as a real-time, evidence-based guidance system for dental practitioners using a fine-tuned large language model (LLM) and Retrieval-Augmented Generation (RAG). This paper outlines the development and preliminary evaluation of the chatbot as a scalable clinical decision-support tool designed for resource-limited settings. The system’s architecture incorporates Quantized Low-Rank Adaptation (QLoRA) for efficient fine-tuning, while dynamic retrieval mechanisms ensure contextually accurate and relevant responses. This prototype lays the groundwork for future triaging and diagnostic support systems tailored specifically to the field of dentistry.
Full article
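The retrieval half of a Retrieval-Augmented Generation setup like the one described above can be sketched generically: embed the query and the candidate passages, rank by cosine similarity, and prepend the top passages to the prompt. The toy hashing "embedding" below is a stand-in for a real embedding model, and nothing here reflects the paper's actual models or clinical data.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy bag-of-words hashing embedding; a real system would use a trained embedder."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by cosine similarity to the query and keep the top k."""
    q = embed(query)
    scored = sorted(passages, key=lambda p: float(np.dot(q, embed(p))), reverse=True)
    return scored[:k]

passages = [
    "Amoxicillin dosing guidance for odontogenic infections.",
    "Periodontal charting conventions and indices.",
    "Post-extraction bleeding management steps.",
]
context = retrieve("antibiotic choice for a dental abscess", passages)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\n\nQuestion: ..."
print(prompt)
```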

Open Access Article
A Fuzzing Tool Based on Automated Grammar Detection
by
Jia Song and Jim Alves-Foss
Software 2024, 3(4), 569-586; https://doi.org/10.3390/software3040028 - 14 Dec 2024
Abstract
Software testing is an important step in the software development life cycle to ensure the quality and security of software. Fuzzing is a security testing technique that finds vulnerabilities automatically without accessing the source code. We built a fuzzer, called JIMA-Fuzzing, which is an effective fuzzing tool that utilizes grammar detected from sample input. Based on the detected grammar, JIMA-Fuzzing selects a portion of the valid user input and fuzzes that portion. For example, the tool may greatly increase the size of the input, truncate the input, replace numeric values with new values, replace words with numbers, etc. This paper discusses how JIMA-Fuzzing works and shows the evaluation results after testing against the DARPA Cyber Grand Challenge (CGC) dataset. JIMA-Fuzzing is capable of extracting grammar from sample input files, meaning that it does not require access to the source code to generate effective fuzzing files. This feature allows it to work with proprietary or non-open-source programs and significantly reduces the effort needed from human testers. In addition, compared to fuzzing tools guided with symbolic execution or taint analysis, JIMA-Fuzzing takes much less computing power and time to analyze sample input and generate fuzzing files. However, the limitation is that JIMA-Fuzzing relies on good sample inputs and works primarily on programs that require user interaction/input.
Full article
(This article belongs to the Special Issue Software Reliability, Security and Quality Assurance)
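The kinds of mutation the abstract names (greatly enlarging the input, truncating it, replacing numeric values) are easy to picture with a small stand-alone sketch. This is not JIMA-Fuzzing's code, and the grammar-detection step that guides where to mutate is omitted.

```python
import random
import re

def mutate(sample: bytes, rng: random.Random) -> bytes:
    """Apply one simple mutation of the kinds described above (illustrative only)."""
    choice = rng.choice(["enlarge", "truncate", "replace_numbers"])
    if choice == "enlarge":
        return sample * rng.randint(2, 10)               # greatly increase the size
    if choice == "truncate":
        return sample[: rng.randint(0, max(len(sample) - 1, 0))]
    # Replace every numeric token with a boundary-ish value.
    return re.sub(rb"\d+", lambda _: str(rng.choice([0, -1, 2**31 - 1])).encode(), sample)

rng = random.Random(0)
seed_input = b"GET /items?id=42&page=3"
for _ in range(5):
    print(mutate(seed_input, rng))
```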
Open Access Article
RbfCon: Construct Radial Basis Function Neural Networks with Grammatical Evolution
by
Ioannis G. Tsoulos, Ioannis Varvaras and Vasileios Charilogis
Software 2024, 3(4), 549-568; https://doi.org/10.3390/software3040027 - 11 Dec 2024
Abstract
Radial basis function networks are considered a machine learning tool that can be applied on a wide series of classification and regression problems proposed in various research topics of the modern world. However, in many cases, the initial training method used to fit the parameters of these models can produce poor results either due to unstable numerical operations or its inability to effectively locate the lowest value of the error function. The current work proposed a novel method that constructs the architecture of this model and estimates the values for each parameter of the model with the incorporation of Grammatical Evolution. The proposed method was coded in ANSI C++, and the produced software was tested for its effectiveness on a wide series of datasets. The experimental results certified the adequacy of the new method to solve difficult problems, and in the vast majority of cases, the error in the classification or approximation of functions was significantly lower than the case where the original training method was applied.
Full article
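Separately from the Grammatical Evolution construction the paper proposes, the underlying model is a plain radial basis function network: centers, Gaussian activations, and a linear read-out. A NumPy sketch of that conventional baseline follows; all hyperparameters are illustrative and the paper's own construction method is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X: np.ndarray, y: np.ndarray, n_centers: int = 10, gamma: float = 1.0):
    """Fit a basic RBF regression network: k-means centers + least-squares output weights."""
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    phi = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, weights

def predict(X, centers, weights, gamma: float = 1.0):
    phi = np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
    return phi @ weights

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
centers, weights = fit_rbf(X, y)
print("train MSE:", float(np.mean((predict(X, centers, weights) - y) ** 2)))
```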

Open Access Article
Implementing Mathematics of Arrays in Modern Fortran: Efficiency and Efficacy
by
Arjen Markus and Lenore Mullin
Software 2024, 3(4), 534-548; https://doi.org/10.3390/software3040026 - 30 Nov 2024
Abstract
Mathematics of Arrays (MoA) concerns the formal description of algorithms working on arrays of data and their efficient and effective implementation in software and hardware. Since (multidimensional) arrays are one of the most important data structures in Fortran, as witnessed by their native support in its language and the numerous operations and functions that take arrays as inputs and outputs, it is natural to examine how Fortran can be used as an implementation language for MoA. This article presents the first results, both in terms of code and of performance, regarding this union. It may serve as a basis for further research, both with respect to the formal theory of MoA and to improving the practical implementation of array-based algorithms.
Full article
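One concrete ingredient of MoA is the mapping from a multidimensional index and a shape to a flat memory offset, on which the formalism's indexing builds. A small Python/NumPy check of the row-major version is shown below as an illustration of the idea only; Fortran itself stores arrays in column-major order, and this is not the article's Fortran code.

```python
import numpy as np

def flat_offset(index: tuple[int, ...], shape: tuple[int, ...]) -> int:
    """Row-major offset of a multidimensional index (the idea behind MoA-style indexing)."""
    offset = 0
    for i, n in zip(index, shape):
        offset = offset * n + i
    return offset

shape = (2, 3, 4)
a = np.arange(2 * 3 * 4).reshape(shape)        # row-major (C order) by default
idx = (1, 2, 3)
assert a[idx] == a.ravel()[flat_offset(idx, shape)]
print(flat_offset(idx, shape))                 # 23
```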

Open Access Article
Analysing Quality Metrics and Automated Scoring of Code Reviews
by
Owen Sortwell, David Cutting and Christine McConnellogue
Software 2024, 3(4), 514-533; https://doi.org/10.3390/software3040025 - 29 Nov 2024
Abstract
Code reviews are an important part of the software development process, and there is a wide variety of approaches used to perform them. While it is generally agreed that code reviews are beneficial and result in higher-quality software, there has been little work investigating best practices and approaches, exploring which factors impact code review quality. Our approach firstly analyses current best practices and procedures for undertaking code reviews, along with an examination of metrics often used to analyse a review’s quality and current offerings for automated code review assessment. A maximum of one thousand code review comments per project were mined from GitHub pull requests across seven open-source projects which have previously been analysed in similar studies. Several identified metrics are tested across these projects using Python’s Natural Language Toolkit, including stop word ratio, overall sentiment, and detection of code snippets through the GitHub markdown language. Comparisons are drawn with regards to each project’s culture and the language used in the code review process, with pros and cons for each. The results show that the stop word ratio remained consistent across all projects, with only one project exceeding an average of 30%, and that the percentage of positive comments across the projects was broadly similar also. The suitability of these metrics is also discussed with regards to the creation of a scoring framework and development of an automated code review analysis tool. We conclude that the software written is an effective method of comparing practices and cultures across projects and can provide benefits by promoting a positive review culture within an organisation. However, rudimentary sentiment analysis and detection of GitHub code snippets may not be sufficient to assess a code review’s overall usefulness, as many terms that are important to include in a programmer’s lexicon such as ‘error’ and ‘fail’ deem a code review to be negative. Code snippets that are included outside of the markdown language are also ignored from analysis. Recommendations for future work are suggested, including the development of a more robust sentiment analysis system that can include detection of emotion such as frustration, and the creation of a programming dictionary to exclude programming terms from sentiment analysis.
Full article
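The metrics named in the abstract (stop word ratio, overall sentiment, detection of Markdown code snippets) can be sketched with NLTK and a regular expression. The example comment and thresholds are invented, and the paper's scoring framework is not reproduced.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("stopwords", quiet=True)
nltk.download("vader_lexicon", quiet=True)

STOP = set(stopwords.words("english"))
sia = SentimentIntensityAnalyzer()

def review_metrics(comment: str) -> dict:
    """Per-comment metrics similar in spirit to those discussed above (illustrative)."""
    words = re.findall(r"[a-zA-Z']+", comment.lower())
    return {
        "stop_word_ratio": sum(w in STOP for w in words) / max(len(words), 1),
        "sentiment": sia.polarity_scores(comment)["compound"],
        "has_code_snippet": "```" in comment or bool(re.search(r"`[^`]+`", comment)),
    }

print(review_metrics("Nice refactor! Maybe guard against `None` here to avoid a crash?"))
```

The paper's caveat applies here too: generic sentiment lexicons penalize ordinary programming words such as "error" or "fail", so a programming-aware dictionary would be needed for fair scoring.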

Open Access Article
Implementation and Performance Evaluation of Quantum Machine Learning Algorithms for Binary Classification
by
Surajudeen Shina Ajibosin and Deniz Cetinkaya
Software 2024, 3(4), 498-513; https://doi.org/10.3390/software3040024 - 28 Nov 2024
Cited by 1
Abstract
In this work, we studied the use of Quantum Machine Learning (QML) algorithms for binary classification and compared their performance with classical Machine Learning (ML) methods. QML merges principles of Quantum Computing (QC) and ML, offering improved efficiency and potential quantum advantage in data-driven tasks and when solving complex problems. In binary classification, where the goal is to assign data to one of two categories, QML uses quantum algorithms to process large datasets efficiently. Quantum algorithms like Quantum Support Vector Machines (QSVM) and Quantum Neural Networks (QNN) exploit quantum parallelism and entanglement to enhance performance over classical methods. This study focuses on two common QML algorithms, Quantum Support Vector Classifier (QSVC) and QNN. We used the Qiskit software and conducted the experiments with three different datasets. Data preprocessing included dimensionality reduction using Principal Component Analysis (PCA) and standardization using scalers. The results showed that quantum algorithms demonstrated competitive performance against their classical counterparts in terms of accuracy, while QSVC performed better than QNN. These findings suggest that QML holds potential for improving computational efficiency in binary classification tasks. This opens the way for more efficient and scalable solutions in complex classification challenges and shows the complementary role of quantum computing.
Full article
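The preprocessing the abstract describes (standardization plus PCA for dimensionality reduction) and the classical comparison model can be shown with scikit-learn. Only the classical side is sketched here, on an illustrative dataset; the quantum QSVC/QNN side depends on Qiskit components whose exact configuration is not reproduced.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Binary classification data: standardize, reduce to a few components, then fit an SVC.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC())
clf.fit(X_tr, y_tr)
print("classical SVC accuracy:", round(clf.score(X_te, y_te), 3))
# A quantum counterpart (e.g., a QSVC) would replace the final SVC with a
# quantum-kernel classifier while keeping the same preprocessing steps.
```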

Open Access Article
A Brief Overview of the Pawns Programming Language
by
Lee Naish
Software 2024, 3(4), 473-497; https://doi.org/10.3390/software3040023 - 19 Nov 2024
Abstract
This paper describes the Pawns programming language, currently under development, which uses several novel features to combine the functional and imperative programming paradigms. It supports pure functional programming (including algebraic data types, higher-order programming and parametric polymorphism), where the representation of values need not be considered. It also supports lower-level C-like imperative programming with pointers and the destructive update of all fields of the structs used to represent the algebraic data types. All destructive update of variables is made obvious in Pawns code, via annotations on statements and in type signatures. Type signatures must also declare sharing between any arguments and result that may be updated. For example, if two arguments of a function are trees that share a subtree and the subtree is updated within the function, both variables must be annotated at that point in the code, and the sharing and update of both arguments must be declared in the type signature of the function. The compiler performs extensive sharing analysis to check that the declarations and annotations are correct. This analysis allows destructive update to be encapsulated: a function with no update annotations in its type signature is guaranteed to behave as a pure function, even though the value returned may have been constructed using destructive update within the function. Additionally, the sharing analysis helps support a constrained form of global variables that also allows destructive update to be encapsulated and safe update of variables with polymorphic types to be performed.
Full article

Open Access Article
Software Development and Maintenance Effort Estimation Using Function Points and Simpler Functional Measures
by
Luigi Lavazza, Angela Locoro and Roberto Meli
Software 2024, 3(4), 442-472; https://doi.org/10.3390/software3040022 - 29 Oct 2024
Abstract
Functional size measures are widely used for estimating software development effort. After the introduction of Function Points, a few “simplified” measures have been proposed, aiming to make measurement simpler and applicable when fully detailed software specifications are not yet available. However, some practitioners believe that, when considering “complex” projects, traditional Function Point measures support more accurate estimates than simpler functional size measures, which do not account for greater-than-average complexity. In this paper, we aim to produce evidence that confirms or disproves such a belief via an empirical study that separately analyzes projects that involved developments from scratch and extensions and modifications of existing software. Our analysis shows that there is no evidence that traditional Function Points are generally better at estimating more complex projects than simpler measures, although some differences appear in specific conditions. Another result of this study is that functional size metrics—both traditional and simplified—do not seem to effectively account for software complexity, as estimation accuracy decreases with increasing complexity, regardless of the functional size metric used. To improve effort estimation, researchers should look for a way of measuring software complexity that can be used in effort models together with (traditional or simplified) functional size measures.
Full article
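A common way to turn a functional size measure into an effort estimate is a log-log regression fitted to historical projects. The sketch below uses made-up project data and is only meant to show the shape of such a model, not the paper's analysis or datasets.

```python
import numpy as np

# Hypothetical historical projects: functional size (e.g., Function Points) and effort (person-hours).
size = np.array([120, 300, 450, 800, 1200, 2000], dtype=float)
effort = np.array([900, 2100, 3500, 7000, 11000, 21000], dtype=float)

# Fit log(effort) = a + b * log(size), i.e. effort ≈ exp(a) * size**b.
b, a = np.polyfit(np.log(size), np.log(effort), deg=1)

def estimate_effort(functional_size: float) -> float:
    return float(np.exp(a) * functional_size ** b)

print(round(estimate_effort(600)))  # estimated person-hours for a 600-FP project
```

The paper's point is that models of this kind, whichever size measure feeds them, lose accuracy as project complexity grows unless complexity is measured separately.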

Open Access Article
Opening Software Research Data 5Ws+1H
by
Anastasia Terzi and Stamatia Bibi
Software 2024, 3(4), 411-441; https://doi.org/10.3390/software3040021 - 26 Sep 2024
Abstract
Open Science describes the movement of making any research artifact available to the public, fostering sharing and collaboration. While sharing the source code is a popular Open Science practice in software research and development, there is still a lot of work to be done to achieve the openness of the whole research and development cycle from the conception to the preservation phase. In this direction, the software engineering community faces significant challenges in adopting open science practices due to the complexity of the data, the heterogeneity of the development environments and the diversity of the application domains. In this paper, through the discussion of the 5Ws+1H (Why, Who, What, When, Where, and How) questions that are referred to as the Kipling’s framework, we aim to provide a structured guideline to motivate and assist the software engineering community on the journey to data openness. Also, we demonstrate the practical application of these guidelines through a use case on opening research data.
Full article

Open Access Article
A Software Tool for ICESat and ICESat-2 Laser Altimetry Data Processing, Analysis, and Visualization: Description, Features, and Usage
by
Bruno Silva and Luiz Guerreiro Lopes
Software 2024, 3(3), 380-410; https://doi.org/10.3390/software3030020 - 18 Sep 2024
Abstract
This paper presents a web-based software tool designed to process, analyze, and visualize satellite laser altimetry data, specifically from the Ice, Cloud, and land Elevation Satellite (ICESat) mission, which collected data from 2003 to 2009, and ICESat-2, which was launched in 2018 and is currently operational. These data are crucial for studying and understanding changes in Earth’s surface and cryosphere, offering unprecedented accuracy in quantifying such changes. The software tool ICEComb provides the capability to access the available data from both missions, interactively visualize it on a geographic map, locally store the data records, and process, analyze, and explore the data in a detailed, meaningful, and efficient manner. This creates a user-friendly online platform for the analysis, exploration, and interpretation of satellite laser altimetry data. ICEComb was developed using well-known and well-documented technologies, simplifying the addition of new functionalities and extending its applicability to support data from different satellite laser altimetry missions. The tool’s use is illustrated throughout the text by its application to ICESat and ICESat-2 laser altimetry measurements over the Mirim Lagoon region in southern Brazil and Uruguay, which is part of the world’s largest complex of shallow-water coastal lagoons.
Full article

Topics
Topic in Applied Sciences, ASI, Blockchains, Computers, MAKE, Software
Recent Advances in AI-Enhanced Software Engineering and Web Services
Topic Editors: Hai Wang, Zhe Hou
Deadline: 31 May 2025
Topic in Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 31 August 2025
Topic in Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025

Special Issues
Special Issue in Software
Women’s Special Issue Series: Software
Guest Editors: Tingting Bi, Xing Hu, Letizia Jaccheri
Deadline: 20 May 2025
Special Issue in Software
Software Reliability, Security and Quality Assurance
Guest Editors: Tadashi Dohi, Junjun Zheng, Xiao-Yi Zhang
Deadline: 20 June 2025