Article

An Automated Verification Framework for DEVS-Coupled Models to Enhance Efficient Modeling and Simulation

by
Gyuhong Lee
1 and
Su Man Nam
2,*
1
TIM Solution, Ulsan 44538, Republic of Korea
2
Digital Security, Cheongju University, Cheongju 28503, Republic of Korea
*
Author to whom correspondence should be addressed.
Processes 2025, 13(5), 1327; https://doi.org/10.3390/pr13051327
Submission received: 18 March 2025 / Revised: 10 April 2025 / Accepted: 24 April 2025 / Published: 26 April 2025
(This article belongs to the Section AI-Enabled Process Engineering)

Abstract
Discrete Event System Specification (DEVS) is a formalism widely used for modeling and simulating complex systems. The main features of DEVS are defining models in a strict mathematical form and representing systems through hierarchical structures. However, when DEVS models have incorrect connection structures and inappropriate behaviors contrary to design intentions, simulation results can be distorted. This can cause serious problems that may lead to inaccurate decision-making. In this paper, we propose an automated verification framework to improve the accuracy and efficiency of coupled models in the DEVS-Python environment. This framework defines test scripts for coupled models, performs automatic verification before simulation execution, and provides the results to users. Experimental results showed that the proposed framework improved execution time by approximately 30–100 times compared to traditional unit testing methods, although memory and CPU usage increased slightly. Despite this increase in resource usage, the proposed framework provides high efficiency and consistent performance in verifying complex DEVS-coupled models.

1. Introduction

Discrete Event System Specification (DEVS) plays a crucial role in modeling and simulating complex systems [1,2,3,4,5,6,7]. Developed by Zeigler, DEVS can effectively represent complex systems through discrete event abstraction with its systematic and modular approach [4,7,8,9,10]. The strength of DEVS lies in its model definition and hierarchical model structure based on a rigorous formalism [11,12,13,14]. This formalism consists of coupled models that represent the system’s structure and atomic models that express its behavior [5,15].
DEVS-based simulation system research has been steadily advancing through systems such as DEVS-Scheme [16,17,18], DEVSJava [19,20], DEVS-Python [21,22], ADEVS [23], and DEVSim++ [8]. Some of these systems, like DEVS-Scheme and DEVS-Python, provide test environments for verifying atomic models. However, although coupled models are composed of complex structural elements, such as sets of child models, input and output port information, internal and external coupling relationships, and execution priorities, there is currently no systematic test environment to verify these elements [24,25]. As a result, the structural accuracy of a coupled model is typically verified only through abnormal simulation execution, requiring modelers to identify errors manually. This process demands considerable time and cost in the modeling process, making the development of an effective testing method for coupled models an urgent task in DEVS research.
In this paper, we propose an automated verification framework for DEVS-coupled models to improve the computational resource efficiency of DEVS-Python. Our proposed framework defines test scripts to systematically verify all structural elements of coupled models, including in/out ports, sub-model composition, coupling relationships, and execution priorities, and performs automatic verification before simulation execution in DEVS-Python, providing the results to the user. Unlike traditional test-driven development [25,26,27,28,29], this framework can reduce coupled model verification time and maintain similar CPU usage, thereby enhancing execution time efficiency.
The contributions of this paper are as follows:
  • Proposing an automated verification framework that systematically verifies structural elements of DEVS coupled models through comprehensive test scripts;
  • Developing an innovative verification approach to enhance computational efficiency and reduce verification time in DEVS-Python simulation modeling.
This paper introduces DEVS formalism and discrete event systems in Section 2 and explains the existing DEVS testing methods in Section 3. In Section 4, we introduce the proposed framework, and in Section 5, we analyze the experimental results. Section 6 discusses our findings within the broader context of DEVS modeling and verification approaches. Finally, Section 7 provides conclusions and summarizes the contributions of this work.

2. Related Work

This section explains DEVS formalism and the existing testing methods. Figure 1 shows the main components of the modeling and simulation process and their relationships. In [14,30], the (1) Real System is the subject we aim to study, and the (2) Model Specification is its mathematical representation. The (3) Simulator is an implementation of the model specification in an executable form, consisting of source code and an execution engine.
These components interact through four relationships. Modeling is the process of observing and analyzing the real system to abstract it into a model specification, and Implementation is the process of implementing the model specification as a simulator through programming. Verification is the process of verifying whether the implemented simulator matches the model specification, confirming that the model has been accurately implemented as intended. Validation is the process of verifying whether the simulation results align with the behavior of the real system. This research focuses on the Verification process, proposing a framework to automatically verify the structural accuracy of DEVS coupled models.
Section 2.1 explains the basic concepts of DEVS formalism and the structures of atomic and coupled models, while Section 2.2 delves into existing testing techniques and the architecture of the DEVS engine system.

2.1. DEVS Formalism

The DEVS formalism [1,2,3,5,11,15] is a mathematical framework for modeling and simulating complex systems. It provides a theoretical and well-defined means of representing hierarchical and modular discrete event models to analyze these complex systems. A key characteristic of DEVS is that it decomposes systems into modularized components, allowing each component to operate independently while system states change according to event occurrences. Therefore, DEVS is distinguished by its atomic models that describe system behavior and coupled models that represent system structure. In DEVS, an atomic model has inputs, outputs, states, time, and functions for a system. The system's functions determine the next state and output based on the current state and input. An atomic model consists of three sets (X, S, Y) and four functions (δint, δext, λ, ta).
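As a concrete illustration, the atomic model tuple can be written as a minimal, framework-independent Python class. The Processor example and its method names below are illustrative assumptions, not the DEVS-Python API.

```python
# A minimal sketch of a DEVS atomic model (X, S, Y, delta_int, delta_ext,
# lambda, ta) in plain Python; names are illustrative, not the DEVS-Python API.

class Processor:
    """Accepts one job at a time and stays busy for a fixed processing time."""

    def __init__(self, processing_time=5.0):
        self.processing_time = processing_time
        self.state = {"phase": "idle", "job": None}          # S

    def ext_transition(self, elapsed, job):                  # delta_ext: Q x X -> S
        if self.state["phase"] == "idle":
            self.state = {"phase": "busy", "job": job}       # start processing

    def int_transition(self):                                # delta_int: S -> S
        self.state = {"phase": "idle", "job": None}          # finish and go idle

    def output(self):                                        # lambda: S -> Y
        return self.state["job"]                             # emit the finished job

    def time_advance(self):                                  # ta: S -> R0+ or inf
        return self.processing_time if self.state["phase"] == "busy" else float("inf")
```

A simulator drives such a model by waiting time_advance() units in each state, calling output() and int_transition() on internal events, and ext_transition() when an input arrives.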
A coupled model is a complex model created by connecting multiple atomic models or other coupled models. The formal representation of this coupled model is as follows:
CM = ⟨X, Y, {Mi}, EIC, EOC, IC, select⟩
In the coupled model equation, the information from {Mi}, EIC, EOC, and IC enables the expression of system structure and connectivity, allowing the creation of complex models by combining various models [31]. Here, X and Y represent the sets of input and output events for the coupled model.

2.2. Testing Methodology of Traditional DEVS Engines

Since its initial proposal by Bernard P. Zeigler in 1976, the DEVS formalism has evolved through various implementation systems [24]. Beginning with DEVS-Scheme (1987), the first implementation of DEVS tools, subsequent systems such as DEVSim++ (1991), DEVS-Suite (1997, formerly DEVSJava), ADEVS (2001), and DEVS-Python (2022) were sequentially developed, with each system providing modeling, simulation, and testing capabilities based on its unique characteristics.
DEVS-Scheme, as the first implementation of DEVS formalism, supports modeling and simulation of discrete event systems by leveraging the functional programming characteristics of the Scheme language. In particular, its object-oriented design enables hierarchical composition of atomic and coupled models, allowing mathematically based formal verification. However, while DEVS-Scheme’s modularized structure enables component-based testing, it did not achieve complete test automation due to the complexity of coupled models.
DEVS-Suite, developed at Arizona State University, has evolved from the initial DEVSJava and now reaches version 6.0. The latest version introduces a black-box testing framework to support model test automation. In particular, it provides systematic test case definition and automated regression testing through JUnit integration, TestFrame class, and @TestScript annotations [26]. However, this testing framework is limited to atomic model verification and lacks comprehensive capabilities for validating the complex interactions and temporal dependencies of coupled models.
DEVSim++ and ADEVS, both implemented in C++, offer unique advantages. Developed in 1991, DEVSim++ focuses on efficient implementation and simulation of DEVS models, while ADEVS, developed in 2001, provides a lightweight library for high-performance simulation. However, both systems lack a dedicated testing framework, resulting in model verification that relies on manual confirmation through simulation execution.
Recently, DEVS-Python was developed by [21]. This system leverages Python’s flexibility and intuitive syntax to facilitate learning and implementation of DEVS concepts, while strictly adhering to the original DEVS framework hierarchy. When compared to other implementations, DEVS-Python offers several advantages: it overcomes DEVS-Scheme’s limited test automation capabilities by integrating Python’s native testing framework [22,25]; it provides a more accessible and flexible testing ecosystem than DEVS-Suite’s JUnit-based approach; and unlike ADEVS’s primary focus on simulation performance, DEVS-Python balances educational accessibility with formal adherence to DEVS principles. By maintaining the fundamental class structure of ENTITIES (with subclasses MODELS and PROCESSORS), DEVS-Python ensures that developers familiar with DEVS formalism can easily extend the framework by adding new classes. This architectural consistency makes it particularly accessible for both newcomers learning DEVS concepts and experienced modelers implementing complex simulations. However, like other tools, it still lacks automated testing capabilities for coupled models.
Analysis of these DEVS tools reveals a common limitation in coupled model testing. In particular, manual verification methods have high potential for human error and difficulty in ensuring test coverage. Therefore, the development of a systematic test automation methodology that considers the structural characteristics and dynamic behaviors of coupled models is urgently needed.

3. Problem Statement

In this section, we examine the verification stage (the relationship between the Simulator and the Model Specification shown in Figure 1) and identify two major limitations in the existing DEVS testing methods:
  • The existing methods primarily focus on testing for atomic models without coupled model verification [22,25]. As a result, verification of coupled model structural elements (child models, coupling, priorities) is mainly performed manually, which increases the risk of human errors;
  • While the existing DEVS engines can apply unit testing provided by JAVA, C++, etc., this requires changing the entire development process and takes considerable time to understand and implement. Particularly, the lack of automated tools for integrated testing of coupled models makes it difficult to verify model accuracy.
As explained in Section 2.1, DEVS coupled models consist of seven key elements. These coupled model elements must be defined during model design. For example, Table 1 shows the formalism of the Experimental Frame (EF) coupled model presented in [9,13].
In this coupled model formalism, the model's input and output values (X and Y) are not defined when the coupled model is initially created and are only determined during simulation execution. The coupled model includes two child models named genr and transd, along with one EIC, two EOCs, and two ICs. Additionally, the model execution order, determined by the select function, proceeds from genr to transd.
These coupled model elements can be implemented as actual source code using DEVS-Python, with an example provided in Algorithm 1. Through this implementation process, the complexity of coupled models and the importance of testing become more prominent, emphasizing the need for an effective coupled model verification method.
Algorithm 1. An Example Source Code for EF Coupled Model using DEVS-Python
1:  class EF(COUPLED_MODELS):
2:    def __init__(self):
3:      COUPLED_MODELS.__init__(self)
4:      self.setName(self.__class__.__name__)
5:
6:      self.addInPorts("in")
7:      self.addOutPorts("out", "result")
8:
9:      genr = GENR()
10:     transd = TRANSD()
11:
12:     self.addModel(genr)
13:     self.addModel(transd)
14:
15:     self.addCoupling(self, "in", transd, "solved")
16:     self.addCoupling(genr, "out", self, "out")
17:     self.addCoupling(transd, "out", self, "result")
18:     self.addCoupling(transd, "out", genr, "stop")
19:     self.addCoupling(genr, "out", transd, "arrived")
20:
21:     self.priority_list([genr, transd])
As shown in the coupled model source code, the modeler-written code from lines 6 to 21 needs to be verified after implementation. Specifically, these elements include the set of models (Mi), external input coupling (EIC), external output coupling (EOC), and internal coupling (IC). Since these elements are entered directly by the modeler in the source code, they carry a relatively high potential for human error. For example, a valid coupling (genr.out, transd.arrived) transfers events from a source model's port (source.port) to a destination model's port (destination.port). Common typing errors include transposed or missing characters in these port strings, and when such string errors occur, it takes considerable time to locate them. Ref. [32] indicates that when coding in Python, parameter errors and incompatible return types account for 64.8% of all errors in the dataset. Therefore, the main issues that occur in DEVS coupled models are as follows.
  • Limitations of manual verification for structural elements (child models, coupling, priorities) of coupled models
  • Lack of integrated test automation tools for coupled models
To address these issues, we propose an automated verification framework that can verify all structural elements of DEVS coupled models.

4. Proposed Verification Framework

This section proposes an automated verification framework for DEVS-coupled models to address challenges in testing and verifying model structures. Section 4.1 provides an overview of the proposed system, and Section 4.2 presents the detailed procedures of the verification framework.

4.1. Overview

In this paper, we propose a system to improve accuracy and efficiency through automated testing of coupled models in the DEVS-Python environment. The proposed system generates test code that includes child models, coupling information, and execution priorities. It then automatically parses both the test code and the coupled model's source code to compare and analyze the two sources of information. When a mismatch is discovered, the system visually indicates the error locations to the modeler, enabling immediate corrections. This automated system significantly reduces verification time compared to manual inspection while simultaneously enhancing test accuracy. In particular, by ensuring the structural accuracy of coupled models, it contributes to improving the reliability of simulation models.
The overall process of the proposed system is illustrated in Figure 2. The upper stage represents the typical modeling and simulation development workflow, while the yellow-colored process below depicts the coupled model verification system proposed in this paper.
The existing modeling and simulation development process begins with defining a Conceptual Specification based on the Real System, followed by Model Design. The designed model is then implemented through Model Code Generation and ultimately verified through Performance Analysis.
The proposed system consists of three key stages, running parallel to the existing process:
  • Coupled Model Test Design: Design test cases for the coupled model defined in the model design stage.
  • Coupled Model Test Code Generation: Automatically generate test code based on the designed test cases.
  • Test Result Verification: Execute the generated test code and verify its results.
Through this automated verification stage, the structural accuracy of coupled models can be ensured, and potential errors in the development process can be detected and corrected early.
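The three stages above can be sketched in a few lines of Python: load the generated JSON test script, extract its coupling entries, and compare them against the couplings recovered from the coupled model's source code. The function names and the flat JSON layout are assumptions modeled on Algorithm 2, not the framework's actual implementation.

```python
# A hedged sketch of the verification flow; the JSON layout follows
# Algorithm 2 but the function names are illustrative assumptions.
import json

def load_test_script(text):
    """Stage 2 output: the generated JSON test code."""
    return json.loads(text)

def extract_couplings(script, model_name):
    """Flatten the coupling entries, skipping the model/select metadata."""
    return {key: value
            for entry in script[model_name]
            for key, value in entry.items()
            if key not in ("model", "select")}

def verify(script_couplings, source_couplings):
    """Stage 3: report couplings missing from, or extra in, the source."""
    missing = sorted(script_couplings.items() - source_couplings.items())
    extra = sorted(source_couplings.items() - script_couplings.items())
    return missing, extra
```

For example, if the source code contains a typo'd port name, verify() reports the intended coupling as missing and the typo'd one as extra, pinpointing the error for the modeler.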

4.2. Execution Procedure

In this section, we introduce the execution procedures of the proposed verification framework. Section 4.2.1 presents the coupled model test design, Section 4.2.2 describes the coupled model test code generation, and Section 4.2.3 explains the test result verification.

4.2.1. Coupled Model Test Design

The coupled model test design stage is the initial stage for verifying the structural correctness of DEVS-based coupled models [25]. This stage defines test items such as the set of child models, input/output port information, internal and external coupling relationships, and execution priorities, which are the main components of coupled models. During test design, the modeler clearly specifies the intended structure of the coupled model and documents it in a form that can be used in the subsequent automated verification stage. For example, when a coupled model CM consists of two atomic models AM1 and AM2, EIC verifies that the input port p1 of the coupled model is connected to the input port in1 of AM1, IC confirms that the output port out1 of AM1 is correctly connected to the input port in2 of AM2, and EOC tests that the output port out2 of AM2 is accurately connected to the output port p3 of the coupled model. This establishes a foundation for effectively identifying and verifying structural errors in coupled models.
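For the CM example above, such a test design could be documented as machine-readable data. The dict layout below mirrors the JSON form used by the framework's test scripts but is an assumed sketch, not the framework's canonical schema.

```python
# An assumed machine-readable test case for the CM example (AM1, AM2);
# each entry records one intended connection of the coupled model.
cm_test_case = {
    "CM": [
        {"model": "AM1, AM2"},     # set of child models
        {"CM:p1": "AM1:in1"},      # EIC: coupled input  -> AM1 input
        {"AM1:out1": "AM2:in2"},   # IC:  AM1 output     -> AM2 input
        {"AM2:out2": "CM:p3"},     # EOC: AM2 output     -> coupled output
    ]
}
```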

4.2.2. Coupled Model Test Code Generation

The coupled model test code generation is a stage that converts the test items for verifying structural correctness of the designed coupled model into actual verifiable code. The test code is written in JSON format and systematically represents the metadata and structural information of the coupled model. This is automatically processed through a parser in the verification stage to identify structural errors. The generated test code ensures the accuracy and consistency of verification and enables repetitive test execution.
The JSON test code shown in Algorithm 2 is an example of verifying the structure of the EF model. This code specifies that the EF model consists of two child models, GENR and TRANSD, and indicates that TRANSD has execution priority over GENR when simultaneous events occur. Additionally, the code includes five coupling relationships: an EIC {"EF:in": "TRANSD:solved"}, ICs {"TRANSD:out": "GENR:stop"} and {"GENR:out": "TRANSD:arrived"}, and EOCs {"GENR:out": "EF:out"} and {"TRANSD:out": "EF:result"}.
Algorithm 2. An Example JSON Test Script for the EF Coupled Model in DEVS-Python
  1: {
  2:   "log": {
  3:     "author": "Su Man Nam",
  4:     "data": "Date",
  5:     "ver": "Coupling Information of EF Model"
  6:   },
  7:   "EF": [
  8:     {"model": "GENR, TRANSD"},
  9:     {"select": "TRANSD, GENR"},
10:     {"EF:in": "TRANSD:solved"},
11:     {"GENR:out": "EF:out"},
12:     {"TRANSD:out": "EF:result"},
13:     {"TRANSD:out": "GENR:stop"},
14:     {"GENR:out": "TRANSD:arrived"}
15:   ]
16: }

4.2.3. Test Result Verification

Verification of the coupled model test results involves parsing the generated JSON-formatted test code to automatically verify the structural accuracy of the coupled model. In this stage, the method verifies that the set of child models, input and output ports, coupling relationships, and execution priorities are correctly defined.
Figure 3 illustrates the entire process of verifying test results for coupled models. First, the coupled model to be verified is selected from the model base [4,12]. Next, a test script for the model is selected from the test base to verify the test results. If the verification result is unsatisfactory (No), the process returns to the test script selection step to perform verification with a different test case. When verification is completed (Yes), the entire process is terminated.
The proposed framework verifies the accuracy of the coupled model code directly written by the modeler, aiming to prevent human errors that may occur during manual processes. Particularly, mistakes that can happen while manually typing coupling information may lead to simulation malfunctions or unexpected results. Through this iterative verification process, the structural accuracy of the coupled model can be ensured.
The core of the proposed system is to compare the test code written in Section 4.2.2 with the coupled model’s source code and provide verification results to the modeler. This system minimizes potential mistakes that can occur when the modeler separately writes test code and source code based on design specifications. In particular, unlike traditional unit testing, our system uniquely features the verification of coupled models always performed before simulation begins.
In the verification stage, the SequenceMatcher from Python’s difflib [33] module is utilized. SequenceMatcher is a tool for comparing and analyzing the similarity between two sequences, returning a list of synchronization operations through the get_opcodes() method. The returned operation list consists of replace, delete, insert, and equal, with each operation indicated by a different color. For example, when str1 is ‘qabxcd’ and str2 is ‘abycdf’, the replace operation indicates that ‘x’ in str1 should be replaced with ‘y’ in str2. In practical application, when comparing coupling information such as ‘TRANSD.arrived’ and ‘TRANSD.arived’, the ‘r’ would be highlighted in green, enabling the modeler to visually identify and correct coupling errors.
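The behavior described above can be reproduced directly with the standard library; the snippet below uses only documented difflib calls, with the example strings taken from the text.

```python
# get_opcodes() yields (tag, i1, i2, j1, j2) tuples describing how to
# turn sequence a into sequence b.
from difflib import SequenceMatcher

a, b = "qabxcd", "abycdf"
ops = [tag for tag, *_ in SequenceMatcher(None, a, b).get_opcodes()]
# -> ['delete', 'equal', 'replace', 'equal', 'insert']

# Applied to coupling strings, the missing character shows up as a delete:
a, b = "TRANSD.arrived", "TRANSD.arived"
for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
    if tag == "delete":
        print(tag, repr(a[i1:i2]))   # the character the modeler dropped
```

The framework maps each opcode tag to a highlight color, so a delete over the dropped 'r' is what the modeler sees marked in the verification report.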
Figure 4 provides a visual example of the verification results for a coupled model. Specifically, it reveals a typographical error in the coupling between GENR.out and TRANSD.arrived, where ‘arrived’ is mistakenly typed as ‘arived’. When such a typo is detected through SequenceMatcher, the corresponding part is highlighted in green, enabling the modeler to easily identify and correct the error.
The coupled model testing framework proposed in this study consists of three stages: test design, test code generation, and test result verification. In particular, the result verification stage using SequenceMatcher visually indicates errors in the coupling information written by the modeler, enabling effective debugging. Through this approach, the reliability of DEVS-based simulation can be enhanced, and the structural accuracy of complex coupled models can be ensured.

5. Experimental Results

To evaluate the efficiency of the proposed verification framework, we selected traditional unit testing methods (unittest [34] and Pytest [35]) as comparison groups. The experimental environment consists of Windows 11 with an Intel(R) Core i5-14500 2.6 GHz processor (14 cores) and 32 GB of RAM. Visual Studio Code (VSCode; v1.99.2) was used for conducting these experiments. The experimental models include the basic coupled models EF and EF-P, as well as the complex coupled models SENSORS and ACLUSTERS presented in [9,13]. The selected coupled models were executed using both the proposed automated verification framework and the traditional methods, comparing their respective execution times and hardware resource utilization.
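As a hedged sketch of how such per-run measurements can be collected with the standard library (the actual experiments may have used different instrumentation; CPU utilization in particular requires an external monitor such as psutil, not shown here):

```python
# Timing and peak Python-heap memory for a single verification run,
# using only the standard library.
import time
import tracemalloc

def measure(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds, peak traced bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Example: measure an arbitrary workload in place of a verification run.
result, seconds, peak_bytes = measure(sum, range(1000))
```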
Figure 5 compares the execution time of the proposed framework with the existing testing methods. In the execution results of the four coupled models in Figure 5a, the proposed framework showed consistent and very short execution times, averaging 0.001 s (0.9 ms). In contrast, unit tests showed execution times of 0.003–0.007 s (average 3.7 ms), while Pytest showed 0.04–0.13 s. This demonstrates that the proposed system performs approximately four times faster than unit tests and 30–100 times faster than Pytest when validating multiple models.
Similar performance patterns were observed in the single execution results of the ACLUSTERS model in Figure 5b. Our proposed framework showed execution times of 0–0.001 s (average 0.45 ms), unit tests showed 0.001–0.006 s (average 1.5 ms), and Pytest showed 0.033–0.083 s. The proposed framework is approximately 3.3 times faster than unit tests even with the complex ACLUSTERS model. While the absolute time differences may appear small in milliseconds, this performance improvement becomes significant when dealing with large-scale DEVS models or when conducting numerous validation tests during iterative development. Additionally, the novelty of our proposed framework lies not only in execution speed but also in its specialized design for DEVS coupled model validation, which existing general-purpose testing frameworks lack. Consequently, our framework significantly outperforms the existing methods in terms of computational resource efficiency and speed in the automated validation process of DEVS coupled models.
Figure 6 shows the comparison results of CPU usage for the four coupled models and the ACLUSTERS model. In the analysis of Figure 6a, the proposed framework exhibited an average processor load of 4.55%, with values ranging from a minimum of 0.7% to a maximum of 13.3%. Unit tests showed an average of 3.95% (0.4–9.1%), while Pytest demonstrated an average of 2.67% (0.5–5.7%). Although the computational demands of the proposed framework were relatively high, it maintained a consistent pattern. This increased CPU usage is a strategic design decision to achieve the significantly faster execution speeds demonstrated in Figure 5, where our framework outperformed existing methods by 3–4 times. The framework pre-loads and processes DEVS structural information more aggressively, utilizing available CPU resources to minimize verification time.
In Figure 6b, our framework registered an average processor consumption of 6.26%, ranging from a minimum of 0.5% to a maximum of 12%. Unit tests showed an average of 2.55% (0.6–8.1%), while Pytest showed an average of 3.64% (1–11.5%). The proposed system also showed higher processing requirements in the ACLUSTERS model but demonstrated stability in verifying complex models. Therefore, the proposed framework exhibited somewhat higher resource consumption in terms of CPU performance, which represents a deliberate trade-off to achieve superior speed performance.
Figure 7 compares the memory usage of the proposed validation framework with the existing testing methods. In the test results of four coupled models in Figure 7a, the proposed framework used approximately 57.6–59.8 MB of memory, while the unit test used approximately 22.4–23.4 MB, and Pytest used approximately 33.0–35.5 MB. It is noteworthy that, although the proposed system showed the highest memory usage, it maintained a consistent memory usage pattern after initial stabilization during the 20 test sequences. This consistent pattern represents a key advantage of our approach—predictable resource utilization regardless of model complexity or the number of verification tests performed, which is critical for large-scale DEVS implementations.
Similar patterns were observed in the single execution results of the ACLUSTERS model shown in Figure 7b. The proposed framework used approximately 57.7–59.9 MB, the unit test used approximately 22.5–23.2 MB, and Pytest used approximately 32.5–34.2 MB of memory. All testing methods showed a tendency for memory usage to stabilize after the initial 1–3 tests. The higher initial memory allocation in our framework ensures that additional allocations are minimized during ongoing verification processes, contributing to long-term system stability during iterative development cycles. Consequently, while our framework shows higher memory usage than the existing methods, this can be viewed as a trade-off for the significantly faster execution time shown in Figure 5. Additionally, the characteristic of maintaining consistent memory usage regardless of the number of DEVS coupled models provides the advantage of enabling predictable resource allocation in large-scale model verification scenarios.
Therefore, our proposed verification framework executes 30–100 times faster than the existing methods, despite its higher memory and CPU requirements. These increased resource demands are counterbalanced by the framework’s significantly improved speed and consistent performance when verifying multiple models simultaneously. In large-scale DEVS coupled model verification environments, our solution offers efficiency through both predictable resource allocation and rapid verification processes.

6. Discussion

This section provides an in-depth analysis of the experimental results presented in Section 5 and contextualizes our findings within the broader field of DEVS modeling and verification. The performance results demonstrate that our proposed framework achieves significant execution time improvements, running approximately 3–4 times faster than unit tests and 30–100 times faster than Pytest. This acceleration is particularly notable in the verification of complex models such as ACLUSTERS, where our framework maintained its performance advantage even with increased model complexity. The speed improvement, while appearing modest in absolute millisecond values, becomes critically important in large-scale DEVS applications where numerous verification cycles are required during iterative development.
The observed increase in CPU and memory utilization represents a deliberate design trade-off to achieve superior verification speeds. Our framework pre-loads DEVS structural information and aggressively utilizes available system resources to minimize verification time, resulting in higher but predictable resource consumption patterns. This approach differs from traditional testing frameworks that optimize for minimal resource usage at the expense of execution speed. Similar resource-performance trade-offs have been observed in related work, though our framework achieves a more favorable balance through DEVS-specific optimizations.
The consistent memory usage pattern demonstrated by our framework represents a key advantage for large-scale verification environments. After initial stabilization, memory consumption remained consistent regardless of model complexity or the number of verification tests performed. This predictability enables more efficient resource allocation in development environments where multiple DEVS models require verification. The framework’s ability to maintain stable performance when verifying multiple models simultaneously further enhances its utility in complex system modeling applications.
These findings have important implications for DEVS modeling practice, particularly in domains involving complex system simulations where model verification constitutes a significant portion of the development cycle. By reducing verification time while providing predictable resource utilization, our framework enhances development efficiency and facilitates the practical application of DEVS methodology in large-scale modeling projects. The specialized nature of our framework addresses a critical need in the DEVS community for tools that support not only model development but also rigorous and efficient verification.

7. Conclusions

In this study, we proposed an automated verification framework for DEVS coupled models in DEVS-Python that significantly enhances computational resource efficiency. Our experimental evaluation demonstrated that the proposed framework achieves remarkable performance improvements, executing verification processes approximately 30–100 times faster than conventional approaches such as unit tests and Pytest, with average execution times as low as 0.9 ms for multiple models.
While our framework showed higher resource utilization with memory usage of approximately 57.6–59.8 MB and CPU utilization ranging from 4.55% to 6.26%, these increases represent a deliberate design decision to prioritize verification speed and consistency. A key strength of our framework is its ability to maintain consistent resource usage patterns after initial stabilization, enabling predictable resource allocation in development environments involving multiple complex DEVS models.
By addressing the specific verification challenges of DEVS-coupled models that traditional testing frameworks are not optimized to handle, our work fills an important gap in the DEVS modeling toolkit. The substantial reduction in verification time directly addresses a major bottleneck in the DEVS development cycle, particularly for complex systems. This work enhances the overall utility and accessibility of the DEVS methodology in both research and practical applications, providing an efficient solution for large-scale DEVS modeling and simulation.

Author Contributions

Conceptualization, G.L. and S.M.N.; supervision, S.M.N.; software, G.L.; investigation and formal analysis, G.L. and S.M.N.; writing—original draft preparation, G.L. and S.M.N.; writing—review and editing, G.L. and S.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Development of Proactive Crowd Density Management Platform Based on Spatiotemporal Multimodal Data Analysis and High-Precision Digital Twin Simulation Program through the Korea Institute of Police Technology (KIPoT) funded by the Korean National Police Agency & Ministry of the Interior and Safety (RS-2024-00405100).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

Author Gyuhong Lee was employed by the company TIM Solution. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. Modeling and simulation process: components and interactions.
Figure 2. Overview of the proposed framework.
Figure 3. Flowchart of the coupled model test.
Figure 4. Verification result of the coupled model.
Figure 5. Execution time comparison of the proposed verification framework. (a) Execution time results for four coupled models, showing the performance of the proposed framework, unit test, and Pytest; (b) execution time results for the ACLUSTERS model, demonstrating the framework’s efficiency in complex model verification.
Figure 6. CPU usage comparison of the proposed verification framework. (a) Processor load analysis for four coupled models, showing variations in system performance across different testing methods; (b) CPU consumption results for the ACLUSTERS model, illustrating the framework’s resource utilization in complex model verification.
Figure 7. Memory usage comparison of the proposed verification framework. (a) Memory consumption analysis for four coupled models, demonstrating consistent memory allocation patterns across different testing methods; (b) memory usage results for the ACLUSTERS model, highlighting the framework’s resource characteristics in complex model verification.
Table 1. Formalism of the EF coupled model.
X = null
Y = null
{M1, M2} = {genr, transd}
EIC = {(ef.in, transd.solved)}
EOC = {(genr.out, ef.out), (transd.out, ef.result)}
IC = {(transd.out, genr.stop), (genr.out, transd.arrived)}
select = (genr, transd)
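A test script for a coupled model essentially compares the model's actual coupling sets against the expected sets from its formalism, such as the internal couplings (IC) of the EF model in Table 1. The sketch below shows a hedged, minimal version of such a check; the "actual" set is a hypothetical model containing a deliberately misspelled port, and `diff_couplings` is an assumed helper name, not the framework's API.

```python
# Expected IC transcribed from Table 1 of the EF coupled model.
EXPECTED_IC = {("transd.out", "genr.stop"), ("genr.out", "transd.arrived")}

# Hypothetical couplings read back from a model, with one typo'd port name.
ACTUAL_IC = {("transd.out", "genr.stop"), ("genr.out", "transd.arrive")}

def diff_couplings(expected: set, actual: set) -> dict:
    """Report couplings missing from the model and unexpected extras,
    the kind of result a verification framework would surface to the user."""
    return {"missing": expected - actual, "extra": actual - expected}

report = diff_couplings(EXPECTED_IC, ACTUAL_IC)
print(report)
# 'missing' holds the correct coupling absent from the model;
# 'extra' holds the misspelled one that should not be there.
```

Reporting the symmetric difference, rather than a bare pass/fail, lets the modeler see exactly which connection in the coupled structure deviates from the design intent.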
