Journal Description
Software is an international, peer-reviewed, open access journal on all aspects of software engineering, published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.3 days after submission; acceptance to publication takes 5.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
- Software is a companion journal of Electronics.
Latest Articles
CORE-ReID: Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-Identification
Software 2024, 3(2), 227-249; https://doi.org/10.3390/software3020012 - 3 Jun 2024
Abstract
This study introduces a novel framework, “Comprehensive Optimization and Refinement through Ensemble Fusion in Domain Adaptation for Person Re-identification (CORE-ReID)”, to address Unsupervised Domain Adaptation (UDA) for Person Re-identification (ReID). The framework utilizes CycleGAN to generate diverse data that harmonize differences in image characteristics from different camera sources in the pre-training stage. In the fine-tuning stage, based on a pair of teacher–student networks, the framework integrates multi-view features for multi-level clustering to derive diverse pseudo-labels. A learnable Ensemble Fusion component that focuses on fine-grained local information within global features is introduced to enhance learning comprehensiveness and avoid the ambiguity associated with multiple pseudo-labels. Experimental results on three common UDA benchmarks in Person ReID demonstrated significant performance gains over state-of-the-art approaches. Additional enhancements, such as the Efficient Channel Attention Block and Bidirectional Mean Feature Normalization to mitigate deviation effects, and the adaptive fusion of global and local features using the ResNet-based model, further strengthen the framework. The proposed framework ensures clarity in the fused features, avoids ambiguity, and achieves high accuracy in terms of Mean Average Precision, Top-1, Top-5, and Top-10, positioning it as an advanced and effective solution for UDA in Person ReID.
Full article
Open Access Expression of Concern
Expression of Concern: Stephenson, M.J. A Differential Datalog Interpreter. Software 2023, 2, 427–446
by Software Editorial Office
Software 2024, 3(2), 226; https://doi.org/10.3390/software3020011 - 6 May 2024
Abstract
With this notice, the Software Editorial Office states its awareness of the concerns regarding the appropriateness of the authorship and origins of the study of the published manuscript [...]
Full article
Open Access Article
A MongoDB Document Reconstruction Support System Using Natural Language Processing
by Kohei Hamaji and Yukikazu Nakamoto
Software 2024, 3(2), 206-225; https://doi.org/10.3390/software3020010 - 2 May 2024
Abstract
Document-oriented databases, a type of Not Only SQL (NoSQL) database, are gaining popularity owing to their flexibility in data handling and performance on large-scale data. MongoDB, a typical document-oriented database, stores data in the JSON format, where an upper field contains lower fields and fields under the same parent are related. One feature of this document-oriented database is that data are dynamically stored in an arbitrary location without a schema being explicitly defined in advance. This flexibility can violate the above property and cause difficulties for application program readability and database maintenance. To address these issues, we propose a reconstruction support method for document structures in MongoDB. The method uses the strength of the Has-A relationship between parent and child fields, as well as the similarity of field names in the MongoDB documents as measured with natural language processing, to reconstruct the data structure in MongoDB. As a result, the method transforms the parent and child fields into more coherent data structures. We evaluated our method using real-world data and demonstrated its effectiveness.
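The field-name similarity at the heart of such a method can be illustrated with a small sketch. This is not the authors' implementation; it is a toy Python analogue using standard-library string similarity, and the helper names (`field_similarity`, `suggest_merges`) and the 0.8 threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Lexical similarity between two field names, in [0.0, 1.0].
    Case and separator characters are normalized before comparison."""
    norm = lambda s: s.lower().replace("_", "").replace("-", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def suggest_merges(fields, threshold=0.8):
    """List field-name pairs similar enough to be reconstruction candidates."""
    pairs = []
    for i, a in enumerate(fields):
        for b in fields[i + 1:]:
            score = field_similarity(a, b)
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return pairs

print(suggest_merges(["userName", "user_name", "orderTotal"]))
# [('userName', 'user_name', 1.0)]
```

A real system would combine such a lexical score with semantic similarity from an NLP model, but the candidate-pairing logic has this general shape.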
Full article
Open Access Article
Defining and Researching “Dynamic Systems of Systems”
by Rasmus Adler, Frank Elberzhager, Rodrigo Falcão and Julien Siebert
Software 2024, 3(2), 183-205; https://doi.org/10.3390/software3020009 - 1 May 2024
Abstract
Digital transformation is advancing across industries, enabling products, processes, and business models that change the way we communicate, interact, and live. It radically influences the evolution of existing systems of systems (SoSs), such as mobility systems, production systems, energy systems, or cities, that have grown over a long time. In this article, we discuss what this means for the future of software engineering based on the results of a research project called DynaSoS. We present the data collection methods we applied, including interviews, a literature review, and workshops. As one contribution, we propose a classification scheme for deriving and structuring research challenges and directions. The scheme comprises two dimensions: scope and characteristics. The scope motivates and structures the trend toward an increasingly connected world. The characteristics enhance and adapt established SoS characteristics in order to include novel aspects and to better align them with the structuring of research into different research areas or communities. As a second contribution, we present research challenges using the classification scheme. We have observed that the scheme puts research challenges into context, which is needed for interpreting them. Accordingly, we conclude that our proposals contribute to a common understanding and vision for engineering dynamic SoS.
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
NICE: A Web-Based Tool for the Characterization of Transient Noise in Gravitational Wave Detectors
by Nunziato Sorrentino, Massimiliano Razzano, Francesco Di Renzo, Francesco Fidecaro and Gary Hemming
Software 2024, 3(2), 169-182; https://doi.org/10.3390/software3020008 - 18 Apr 2024
Abstract
NICE—Noise Interactive Catalogue Explorer—is a web service developed for rapid, qualitative glitch analysis in gravitational wave data. Glitches are transient noise events that can smother the gravitational wave signal in data recorded by gravitational wave interferometer detectors. NICE provides interactive graphical tools to support detector noise characterization activities, in particular, the analysis of glitches from past and current observing runs, passing from glitch population visualization to individual glitch characterization. The NICE back-end API consists of a multi-database structure that brings order to glitch metadata generated by external detector characterization tools so that such information can be easily requested by gravitational wave scientists. Another novelty introduced by NICE is the interactive front-end infrastructure focused on glitch instrumental and environmental origin investigation, which uses labels determined by their time–frequency morphology. The NICE domain is intended for integration with the Advanced Virgo, Advanced LIGO, and KAGRA characterization pipelines and it will interface with systematic classification activities related to the transient noise sources present in the Virgo detector.
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
Revolutionizing Coffee Farming: A Mobile App with GPS-Enabled Reporting for Rapid and Accurate On-Site Detection of Coffee Leaf Diseases Using Integrated Deep Learning
by Eric Hitimana, Martin Kuradusenge, Omar Janvier Sinayobye, Chrysostome Ufitinema, Jane Mukamugema, Theoneste Murangira, Emmanuel Masabo, Peter Rwibasira, Diane Aimee Ingabire, Simplice Niyonzima, Gaurav Bajpai, Simon Martin Mvuyekure and Jackson Ngabonziza
Software 2024, 3(2), 146-168; https://doi.org/10.3390/software3020007 - 16 Apr 2024
Abstract
Coffee leaf diseases are a significant challenge for coffee cultivation. They can reduce yields, impact bean quality, and necessitate costly disease management efforts. Manual monitoring is labor-intensive and time-consuming. This research introduces a pioneering mobile application equipped with global positioning system (GPS)-enabled reporting capabilities for on-site coffee leaf disease detection. The application integrates advanced deep learning (DL) techniques to empower farmers and agronomists with a rapid and accurate tool for identifying and managing coffee plant health. Leveraging the ubiquity of mobile devices, the app enables users to capture high-resolution images of coffee leaves directly in the field. These images are then processed in real time using a pre-trained DL model optimized for efficient disease classification. Five models (Xception, ResNet50, Inception-v3, VGG16, and DenseNet) were evaluated on the dataset. All models showed promising performance; however, DenseNet achieved high scores on all four leaf classes, with a training accuracy of 99.57%. The inclusion of GPS functionality allows precise geotagging of each captured image, providing valuable location-specific information. Through extensive experimentation and validation, the app demonstrates impressive accuracy rates in disease classification. The results indicate the potential of this technology to revolutionize coffee farming practices, leading to improved crop yield and overall plant health.
Full article
(This article belongs to the Special Issue Automated Testing of Modern Software Systems and Applications)
Open Access Article
A Process for Monitoring the Impact of Architecture Principles on Sustainability: An Industrial Case Study
by Markus Funke, Patricia Lago, Roberto Verdecchia and Roel Donker
Software 2024, 3(1), 107-145; https://doi.org/10.3390/software3010006 - 13 Mar 2024
Abstract
Architecture principles affect a software system holistically. Given their alignment with a business strategy, they should be incorporated within the validation process covering aspects of sustainability. However, current research discusses the influence of architecture principles on sustainability in a limited context. Our objective was to introduce a reusable process for monitoring and evaluating the impact of architecture principles on sustainability from a software architecture perspective. We sought to demonstrate the application of such a process in professional practice. A qualitative case study was conducted in the context of a Dutch airport management company. Data collection involved a case analysis and the execution of two rounds of expert interviews. We (i) identified a set of case-related key performance indicators, (ii) utilized commonly accepted measurement tools, and (iii) employed graphical representations in the form of spider charts to monitor the sustainability impacts. The real-world observations were evaluated through a concluding focus group. Our findings indicated that architecture principles were a feasible mechanism with which to address sustainability across all different architecture layers within the enterprise. The experts considered the sustainability analysis valuable in guiding the software architecture process towards sustainability. With the emphasis on principles, we facilitate industry adoption by embedding sustainability in existing mechanisms.
Full article
Open Access Review
Emergent Information Processing: Observations, Experiments, and Future Directions
by Jiří Kroc
Software 2024, 3(1), 81-106; https://doi.org/10.3390/software3010005 - 5 Mar 2024
Abstract
Science is currently becoming aware of the challenges in understanding the very root mechanisms of the massively parallel computations that are observed in literally all scientific disciplines, ranging from cosmology to physics, chemistry, biochemistry, and biology. This leads us to the main motivation and simultaneously to the central thesis of this review: “Can we design artificial, massively parallel, self-organized, emergent, error-resilient computational environments?” The thesis is studied solely on cellular automata. Initially, an overview of the basic building blocks enabling us to reach this end goal is provided. Important information dealing with this topic is reviewed along with highly expressive animations generated by the open-source Python cellular automata software GoL-N24. A large number of simulations, along with examples and counter-examples and a list of future directions, give hints and partial answers to the main thesis. Together, these pose the crucial question of whether there is something deeper beyond the Turing machine theoretical description of massively parallel computing. The perspective and future directions of this research, including applications in robotics and biology, are discussed in the light of known information.
Full article
Open Access Article
Precision-Driven Product Recommendation Software: Unsupervised Models, Evaluated by GPT-4 LLM for Enhanced Recommender Systems
by Konstantinos I. Roumeliotis, Nikolaos D. Tselikas and Dimitrios K. Nasiopoulos
Software 2024, 3(1), 62-80; https://doi.org/10.3390/software3010004 - 29 Feb 2024
Cited by 1
Abstract
This paper presents a pioneering methodology for refining product recommender systems, introducing a synergistic integration of unsupervised models—K-means clustering, content-based filtering (CBF), and hierarchical clustering—with the cutting-edge GPT-4 large language model (LLM). Its innovation lies in utilizing GPT-4 for model evaluation, harnessing its advanced natural language understanding capabilities to enhance the precision and relevance of product recommendations. A Flask-based API simplifies its implementation for e-commerce owners, allowing for the seamless training and evaluation of the models using CSV-formatted product data. The unique aspect of this approach lies in its ability to empower e-commerce platforms with sophisticated unsupervised recommender system algorithms, while the GPT model significantly contributes to refining the semantic context of product features, resulting in a more personalized and effective product recommendation system. The experimental results underscore the superiority of this integrated framework, marking a significant advancement in the field of recommender systems and providing businesses with an efficient and scalable solution to optimize their product recommendations.
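The clustering stage of such a system can be sketched in a few lines. The code below is a generic, dependency-free K-means on toy "product feature" vectors; it is not the paper's pipeline, and the GPT-4 evaluation step is deliberately out of scope:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: returns (centroids, labels) after a fixed
    number of assign/update iterations."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, labels

# Hypothetical (price, rating) feature vectors with two obvious groups.
products = [(1.0, 1.0), (1.2, 0.9), (9.0, 9.1), (9.2, 8.8)]
centroids, labels = kmeans(products, k=2)
print(labels)
```

In practice one would run this on richer product embeddings and a larger k, but the assign/update loop is the whole algorithm.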
Full article
Open Access Article
Deep-SDM: A Unified Computational Framework for Sequential Data Modeling Using Deep Learning Models
by Nawa Raj Pokhrel, Keshab Raj Dahal, Ramchandra Rimal, Hum Nath Bhandari and Binod Rimal
Software 2024, 3(1), 47-61; https://doi.org/10.3390/software3010003 - 28 Feb 2024
Cited by 1
Abstract
Deep-SDM is a unified layer framework built on TensorFlow/Keras and written in Python 3.12. The framework aligns with modular engineering principles in its design and development strategy. Transparency, reproducibility, and recombinability are the framework’s primary design criteria. The platform can extract valuable insights from numerical and text data and utilize them to predict future values by implementing long short-term memory (LSTM), gated recurrent unit (GRU), and convolutional neural network (CNN) models. Its end-to-end machine learning pipeline involves a sequence of tasks, including data exploration, input preparation, model construction, hyperparameter tuning, performance evaluation, visualization of results, and statistical analysis. The complete process is systematic and carefully organized, from data import to model selection, encapsulating it into a unified whole. The multiple subroutines work together to provide a user-friendly, easy-to-use pipeline. We utilized the Deep-SDM framework to predict the Nepal Stock Exchange (NEPSE) index to validate its reproducibility and robustness and observed impressive results.
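The input-preparation step of such a pipeline, turning a time series into supervised (window, target) pairs for an LSTM or GRU, can be sketched as follows. This is an illustrative stand-in, not Deep-SDM's actual API, and the function name `make_windows` is assumed:

```python
def make_windows(series, lookback):
    """Slice a series into (input window, next value) pairs --
    the standard supervised shape for LSTM/GRU sequence prediction."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # the lookback window
        y.append(series[i + lookback])     # the value to predict
    return X, y

X, y = make_windows([10, 11, 12, 13, 14], lookback=3)
print(X)  # [[10, 11, 12], [11, 12, 13]]
print(y)  # [13, 14]
```

The resulting arrays would then be reshaped to (samples, timesteps, features) before being fed to a recurrent model.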
Full article
Open Access Article
Automating SQL Injection and Cross-Site Scripting Vulnerability Remediation in Code
by Kedar Sambhus and Yi Liu
Software 2024, 3(1), 28-46; https://doi.org/10.3390/software3010002 - 12 Jan 2024
Abstract
Internet-based distributed systems dominate contemporary software applications. To enable these applications to operate securely, software developers must mitigate the threats posed by malicious actors. For instance, the developers must identify vulnerabilities in the software and eliminate them. However, to do so manually is a costly and time-consuming process. To reduce these costs, we designed and implemented Code Auto-Remediation for Enhanced Security (CARES), a web application that automatically identifies and remediates the two most common types of vulnerabilities in Java-based web applications: SQL injection (SQLi) and Cross-Site Scripting (XSS). As is shown by a case study presented in this paper, CARES mitigates these vulnerabilities by refactoring the Java code using the Intercepting Filter design pattern. The flexible, microservice-based CARES design can be readily extended to support other injection vulnerabilities, remediation design patterns, and programming languages.
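The two vulnerability classes CARES targets can be illustrated in miniature. The sketch below is not CARES itself (which refactors Java code via the Intercepting Filter pattern); it shows, in Python with sqlite3, the before/after of an SQLi remediation via parameter binding, plus a filter-style output-escaping step of the kind used against XSS:

```python
import html
import sqlite3

def vulnerable_lookup(conn, username):
    # BAD: string concatenation lets attacker-controlled input alter the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def remediated_lookup(conn, username):
    # GOOD: a bound parameter is treated as data, never as SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

def xss_filter(value: str) -> str:
    # Filter-style XSS remediation: escape HTML metacharacters
    # before the value reaches a rendered page.
    return html.escape(value)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
payload = "x' OR '1'='1"
print(vulnerable_lookup(conn, payload))   # [(1,)] -- injection matches every row
print(remediated_lookup(conn, payload))   # []     -- payload is just a string
print(xss_filter("<script>alert(1)</script>"))
```

An automated remediator like CARES essentially rewrites code of the first shape into the second, routing input and output through interception points.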
Full article
Open Access Article
A Survey on Factors Preventing the Adoption of Automated Software Testing: A Principal Component Analysis Approach
by George Murazvu, Simon Parkinson, Saad Khan, Na Liu and Gary Allen
Software 2024, 3(1), 1-27; https://doi.org/10.3390/software3010001 - 2 Jan 2024
Abstract
Automated software testing is a crucial yet resource-intensive aspect of software development. This burden on resources hinders widespread adoption, with expertise and cost being the primary barriers. This paper focuses on automated testing driven by manually created test cases, acknowledging its advantages while critically analysing its implications across various development stages that are affecting its adoption. Additionally, it analyses the differences in perception between those in nontechnical and technical roles, where nontechnical roles (e.g., management) predominantly strive to reduce costs and delivery time, whereas technical roles are often driven by quality and completeness. This study investigates the difference in attitudes toward automated testing (AtAT), specifically focusing on why it is not adopted. This article presents a survey conducted among software industry professionals that spans various roles to determine common trends and draw conclusions. A two-stage approach is presented, comprising a comprehensive descriptive analysis and the use of Principal Component Analysis. In total, 81 participants received a series of 22 questions, and their responses were compared against job role types and experience levels. In summary, six key findings are presented that cover expertise, time, cost, tools and techniques, utilisation, organisation, and capacity.
Full article
Open Access Article
A Comparative Study on the Ethical Responsibilities of Key Role Players in Software Development
by Senyeki Milton Marebane and Robert Toyo Hans
Software 2023, 2(4), 504-516; https://doi.org/10.3390/software2040023 - 5 Dec 2023
Abstract
Background: Issues of lack of consideration for professional responsibility by software engineers (SEs) present major challenges and concerns to software users. Previous studies on the subject of ethical responsibility in software development assessed whether software development key stakeholders should take ethical responsibility for their actions in software development. However, such studies focused on assessing responses from a particular grouping in software development. Objective: Based on this observation, this study seeks to evaluate the perceived ethical responsibilities in software development by juxtaposing the perceptions of students, educators, and industry-based software practitioners on the ethical responsibility of software development key stakeholders in South Africa. Methods: To meet this objective, the study collected data using a survey, which was shared on an online platform. A total of 561 responses were received (44 from computing academics, 103 from industry-based software practitioners, and 414 from software development students). The collected data were analysed using descriptive and variance statistical analysis approaches. Results: The study found that there is no significant statistical difference in how students, educators, and software practitioners perceive the ethical responsibility of software development key stakeholders. Conclusions: This finding shows that the prevailing view is that various software development key stakeholders should be held ethically responsible for their contribution to software development. Furthermore, the organisation of ethical responsibilities used in this study provides a useful framework to guide future studies on this subject.
Full article
Open Access Article
Beam Transmission (BTR) Software for Efficient Neutral Beam Injector Design and Tokamak Operation
by Eugenia Dlougach and Margarita Kichik
Software 2023, 2(4), 476-503; https://doi.org/10.3390/software2040022 - 24 Oct 2023
Cited by 1
Abstract
BTR code (originally “Beam Transmission and Re-ionization”, 1995) is used for Neutral Beam Injection (NBI) design; it is also applied to the injector system of ITER. In 2008, the BTR model was extended to include the beam interaction with plasmas and direct beam losses in the tokamak. For many years, BTR has been widely used for various NBI designs for efficient heating and current drive in nuclear fusion devices for plasma scenario control and diagnostics. BTR analysis is especially important for ‘beam-driven’ fusion devices, such as fusion neutron source (FNS) tokamaks, since their operation depends on a high NBI input in non-inductive current drive and fusion yield. BTR calculates detailed power deposition maps and particle losses, accounting for ionized beam fractions and background electromagnetic fields; these results are used for the overall NBI performance analysis. BTR code is open for public usage; it is fully interactive and supplied with an intuitive graphical user interface (GUI). The input configuration is flexibly adapted to any specific NBI geometry. High running speed and full control over the running options allow the user to perform multiple parametric runs on the fly. The paper describes the detailed physics of BTR, numerical methods, the graphical user interface, and examples of BTR application. The code is still evolving; basic support is available to all BTR users.
Full article
(This article belongs to the Special Issue Software Analysis, Evolution, Maintenance and Visualization)
Open Access Systematic Review
A Systematic Mapping of the Proposition of Benchmarks in the Software Testing and Debugging Domain
by Deuslirio da Silva-Junior, Valdemar V. Graciano-Neto, Diogo M. de-Freitas, Plinio de Sá Leitão-Junior and Mohamad Kassab
Software 2023, 2(4), 447-475; https://doi.org/10.3390/software2040021 - 12 Oct 2023
Abstract
Software testing and debugging are standard practices of software quality assurance since they enable the identification and correction of failures. Benchmarks have been used in that context as a group of programs to support the comparison of different techniques according to pre-established parameters. However, the reasons that inspire researchers to propose novel benchmarks are not fully understood. This article reports the investigation, identification, classification, and externalization of the state of the art about the proposition of benchmarks on software testing and debugging domains. The study was carried out using systematic mapping procedures according to the guidelines widely followed by software engineering literature. The search identified 1674 studies, from which 25 were selected for analysis. A list of benchmarks is provided and descriptively mapped according to their characteristics, motivations, and scope of use for their creation. The lack of data to support the comparison between available and novel software testing and debugging techniques is the main motivation for the proposition of benchmarks. Advancements in the standardization and prescription of benchmark structure and composition are still required. Establishing such a standard could foster benchmark reuse, thereby saving time and effort in the engineering of benchmarks for software testing and debugging.
Full article
Open Access Article
A Differential Datalog Interpreter
by Matthew James Stephenson
Software 2023, 2(3), 427-446; https://doi.org/10.3390/software2030020 - 21 Sep 2023
Cited by 1
Abstract
The core reasoning task for datalog engines is materialization, the evaluation of a datalog program over a database alongside its physical incorporation into the database itself. The de facto method of computing it is through the recursive application of inference rules. Because this is a costly operation, it is a must for datalog engines to provide incremental materialization; that is, to adjust the computation to new data instead of restarting from scratch. One of the major caveats is that deleting data is notoriously more involved than adding it, since one has to take into account all possible data that have been entailed from what is being deleted. Differential dataflow is a computational model that provides efficient incremental maintenance, notably with equal performance between additions and deletions, and work distribution of iterative dataflows. In this paper, we investigate the performance of materialization with three reference datalog implementations, out of which one is built on top of a lightweight relational engine, and the two others are differential-dataflow and non-differential versions of the same rewrite algorithm with the same optimizations. Experimental results suggest that monotonic aggregation is more powerful than merely ascending the powerset lattice.
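Materialization by recursive rule application can be sketched with a fixpoint over a classic transitive-closure program. This is the naive evaluation strategy, shown only to fix ideas; it is not the semi-naive or differential variants the paper benchmarks:

```python
def materialize(edges):
    """Naive datalog materialization of transitive closure:
        path(x, y) :- edge(x, y).
        path(x, z) :- path(x, y), edge(y, z).
    Rules are applied repeatedly until no new facts are entailed
    (a fixpoint), which is exactly what materialization computes."""
    path = set(edges)
    while True:
        new = {(x, z)
               for (x, y) in path
               for (y2, z) in edges
               if y == y2} - path
        if not new:          # fixpoint reached: database is materialized
            return path
        path |= new

print(sorted(materialize({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Note how deletion is the hard case: retracting `edge(2, 3)` invalidates `path(1, 3)`, `path(2, 4)`, and `path(1, 4)`, which naive evaluation can only recover by recomputing from scratch; this is the gap that incremental and differential approaches address.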
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
User Authorization in Microservice-Based Applications
by Niklas Sänger and Sebastian Abeck
Software 2023, 2(3), 400-426; https://doi.org/10.3390/software2030019 - 19 Sep 2023
Cited by 1
Abstract
Microservices have emerged as a prevalent architectural style in modern software development, replacing traditional monolithic architectures. The decomposition of business functionality into distributed microservices offers numerous benefits, but introduces increased complexity to the overall application. Consequently, the complexity of authorization in microservice-based applications necessitates a comprehensive approach that integrates authorization as an inherent component from the beginning. This paper presents a systematic approach for achieving fine-grained user authorization using Attribute-Based Access Control (ABAC). The proposed approach emphasizes structure preservation, facilitating traceability throughout the various phases of application development. As a result, authorization artifacts can be traced seamlessly from the initial analysis phase to the subsequent implementation phase. One significant contribution is the development of a language to formulate natural language authorization requirements and policies. These natural language authorization policies can subsequently be implemented using the policy language Rego. By leveraging the analysis of software artifacts, the proposed approach enables the creation of comprehensive and tailored authorization policies.
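Since the resulting policies are ultimately expressed in Rego, a language-agnostic sketch of the underlying ABAC decision model may help; the attribute names and rules below are hypothetical examples, not the paper's policies:

```python
# Hedged sketch of an Attribute-Based Access Control (ABAC) decision:
# a request is allowed if any rule's attribute predicate holds.
# The attributes ("role", "owner", "id") and rules are illustrative only.
def abac_allow(subject, resource, action):
    rules = [
        # a customer may read only resources they own
        lambda s, r, a: a == "read" and s["role"] == "customer"
                        and r["owner"] == s["id"],
        # an administrator may perform any action
        lambda s, r, a: s["role"] == "admin",
    ]
    return any(rule(subject, resource, action) for rule in rules)

print(abac_allow({"id": "u1", "role": "customer"},
                 {"owner": "u1"}, "read"))
```

In the proposed approach, each such rule would originate from a natural-language authorization requirement traced through analysis artifacts before being translated into Rego.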
Full article
Open Access Systematic Review
A Quantitative Review of the Research on Business Process Management in Digital Transformation: A Bibliometric Approach
by
Bui Quang Truong, Anh Nguyen-Duc and Nguyen Thi Cam Van
Software 2023, 2(3), 377-399; https://doi.org/10.3390/software2030018 - 1 Sep 2023
Cited by 1
Abstract
In recent years, research on digital transformation (DT) and business process management (BPM) has gained significant attention in the field of business and management. This paper conducts a comprehensive bibliometric analysis of global research on DT and BPM from 2007 to 2022. A total of 326 papers were selected from Web of Science and Scopus for analysis. Using bibliometric methods, we evaluated the current state and future research trends of DT and BPM. Our analysis reveals that the number of publications on DT and BPM has grown significantly over time, with the Business Process Management Journal being the most active outlet for this research. The countries that have contributed the most to this field are Germany (with four universities in the top 10) and the USA. The analysis also showed that “artificial intelligence” is the technology studied most extensively and is increasingly asserted to influence companies’ business processes. Additionally, the study provides valuable insights from the co-citation network analysis. Based on our findings, we provide recommendations for future research directions on DT and BPM. This study contributes to a better understanding of the current state of research on DT and BPM and provides insights for future research.
Full article
Open Access Article
Challenges and Solutions for Engineering Applications on Smartphones
by
Anthony Khoury, Mohamad Abbas Kaddaha, Maya Saade, Rafic Younes, Rachid Outbib and Pascal Lafon
Software 2023, 2(3), 350-376; https://doi.org/10.3390/software2030017 - 18 Aug 2023
Cited by 1
Abstract
This paper starts by presenting the concept of a mobile application. A literature review is conducted, which shows that there is still a certain lack of smartphone applications in the engineering domain that serve as independent simulation applications rather than only as extensions of smartphone tools. The challenges behind this lack are then discussed. Subsequently, three case studies of engineering applications for both smartphones and the internet are presented, alongside their solutions to the challenges identified. The first case study concerns an engineering application for systems control. The second focuses on an engineering application for composite materials. The third focuses on the finite element method and structure generation. The solutions to the identified challenges are then described through their implementation in the applications. Together, the three case studies demonstrate a new way of thinking about the development of engineering smartphone applications.
Full article
(This article belongs to the Topic Software Engineering and Applications)
Open Access Article
A Synthesis-Based Stateful Approach for Guiding Design Thinking in Embedded System Development
by
Hung-Fu Chang and Supannika Koolmanojwong Mobasser
Software 2023, 2(3), 332-349; https://doi.org/10.3390/software2030016 - 12 Aug 2023
Cited by 1
Abstract
Embedded systems have attracted more attention and have become more critical due to recent advancements in computer technology and applications in various areas, such as healthcare, transportation, and manufacturing. Traditional software design approaches and the finite state machine cannot provide sufficient support for two major reasons: the increasing need for more functions when designing an embedded system, and the sequential controls in the implementation. This deficiency particularly discourages inexperienced engineers who use conventional methods to design embedded software. Hence, we proposed a design method, the Synthesis-Based Stateful Software Design Approach (SSSDA), which synthesizes two existing methods, the Synthesis-Based Software Design Framework (SSDF) and Process and Artifact State Transition Abstraction (PASTA), to remedy the drawbacks of conventional methods. To show how to conduct our proposed design approach and investigate how it supports embedded system design, we studied an industrial project developed by a sophomore student team. Our results showed that the proposed approach can significantly help students lay out modules, improve testability, and reduce defects.
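For context, a minimal sketch of the conventional table-driven finite state machine that the authors contrast with their approach; the states and events below are hypothetical examples, not taken from the paper:

```python
# Minimal table-driven finite state machine of the kind the paper argues
# scales poorly as embedded systems demand more functions.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    # fall back to the current state on an unhandled event
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # back to "idle"
```

Every new function multiplies the (state, event) pairs this table must enumerate, which is the kind of growth the synthesis-based approach is meant to tame.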
Full article
Topics
Topic in
Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2024
Special Issues
Special Issue in
Software
Empower Connectivity: Software-Driven Solutions for Interoperable Blockchains
Guest Editors: Diego Pennino, Jianbo Gao
Deadline: 25 September 2024
Special Issue in
Software
New Advances in Formal Modeling of Software Systems
Guest Editors: Olga Siedlecka-Lamch, Sabina Szymoniak
Deadline: 31 March 2025