**4. Simulation Architecture and Agent Platform**

The proposed framework couples virtual and real worlds by integrating simulations with human interactions through two classes of agents: (1) computational agents (chat bots) and (2) physical agents, both executed inside the simulation world by a unified agent processing platform. The agent-based simulation assumes a spatial context and agent interaction within a spatial domain (or range), similar to cellular automata.

The entire crowd sensing and simulation architecture consists of the following components:

	- **Physical behavioural agents** representing physical entities, e.g., individual artificial humans;
	- **Computational agents** representing mobile software, i.e., used for distributed data processing and digital communication, and implementing chat bots;
	- **Simulation agents** controlling the simulation and performing simulation analysis (e.g., creation of physical and computational agents, reading and writing sensor data, accessing databases).

All agents are programmed in *JavaScript* and executed by the JAM platform in a sandbox environment. Computational agents can migrate between platform nodes (as mobile process snapshots). This widely used programming language offers a gentle learning curve, and *JavaScript* can be executed on a wide range of host platforms and software. This includes the execution of mobile agents within a WEB browser by embedding the agent platform in WEB pages. *JavaScript* programs and agents offer high portability, which is essential for deployment in strongly heterogeneous environments and for reaching a broad range of crowdsensing data providers.

Simulation is performed by the Simulation Environment for *JAM* (*SEJAM*; details are discussed in [31,33]). In this work, the *SEJAM* simulator is used to create a simulation world (consisting of entities represented by agents) that is attached to the Internet, enabling remote crowd sensing with mobile computational agents. The *SEJAM* simulator is basically a graphical user interface (GUI) on top of *JAM*, shown in Figure 2.

*SEJAM* extends *JAM* with a simulation layer providing virtualisation, visualisation of 2D- and 3D-worlds, chat dialogues and avatars, and an advanced simulation control with a wide range of integrated analysis and inspection tools to investigate the run-time behaviour of distributed systems. A *SEJAM* world consists of one physical *JAM* node and an arbitrary number of virtual/logical *JAM* nodes, each associated with a visual object (shape), which can be connected via arbitrary connection graphs to establish communication and migration of computational agents. In the graphical 2D world, virtual *JAM* nodes can either be placed at any coordinate or aligned on a grid world.

**Figure 2.** The principle concept of closed-loop simulation for augmented virtuality: (**Left**) Simulation framework based on the JavaScript agent machine (JAM) platform; (**Right**) Mobile and non-mobile devices executing the JAM platform connected with the virtual simulation world (via the Internet) [24].

Each *JAM* node is capable of processing thousands of agents concurrently. *JAM* and *SEJAM* nodes can be connected in clusters and over the Internet (of Things), enabling large-scale "real-world"-in-the-loop simulations with millions of agents. Agents executed in the simulation have access to an extended simulation API. Physical agents can additionally use a *NetLogo*-compatible simulation API extension, enabling simulation world analysis and control based on object iterators (*ask* and *create* statements; see [14], with a detailed discussion in Appendix A).
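The *NetLogo*-style iterator pattern can be illustrated with a minimal, self-contained sketch. The names and signatures below are assumptions for illustration only and do not reproduce the exact JAM/SEJAM simulation API:

```javascript
// Sketch of NetLogo-style object iterators (ask/create). The world,
// function names, and agent fields are hypothetical illustrations.
const world = { agents: [] };

// create: instantiate n agents and run an init function on each
function create(n, init) {
  for (let i = 0; i < n; i++) {
    const agent = { id: world.agents.length, x: 0, y: 0 };
    init(agent);
    world.agents.push(agent);
  }
}

// ask: iterate over all agents matching a predicate and apply an action
function ask(filter, action) {
  world.agents.filter(filter).forEach(action);
}

create(3, a => { a.x = a.id; });           // three agents at x = 0, 1, 2
ask(a => a.x > 0, a => { a.x += 10; });    // shift all agents with x > 0
console.log(world.agents.map(a => a.x));   // [ 0, 11, 12 ]
```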

The mixed-model simulation world consists of physical and computational agents bound to logical (virtual) platforms (hosts of the agents) that are arranged or located on a lattice (patch grid world) to provide world discretisation for the sake of simplicity. The patch grid world was derived from the *NetLogo* simulator, although the *SEJAM* simulator is not limited to this world model and can handle arbitrary two-dimensional non-discretised world coordinate systems (e.g., GPS). The agents are mobile: computational agents, as mobile software processes, can migrate between platforms (both in virtual and real digital worlds), whereas physical agents are fixed to their platform, and only the platform is mobile (in the virtual world only).

The *JAM* agent behaviour model is based on activity-transition graphs (ATG; details in [34]). An ATG decomposes the agent behaviour into activities performing actions (computation, mobility, and interaction with other agents and the world). An activity is related to a sub-goal of a set of goals of the agent. There are transitions between activities based on the internal state of the agent, basically related to reasoning behaviour under both pro-active (goal-directed) and reactive (event-driven) stimuli. Modification of agents (visual, shape, position, body variables, and state) can be performed globally by the world agent via a simulation interface providing a *NetLogo* API compatibility layer, or by the individual agents.
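The activity-transition scheme can be sketched as follows. This is a minimal, self-contained illustration of the ATG concept; the actual AgentJS/JAM agent syntax differs, and all names here are assumptions:

```javascript
// Sketch of an activity-transition graph (ATG): activities are
// functions acting on the agent state; transitions select the next
// activity based on the current state. Names are illustrative only.
function makeAgent() {
  return {
    state: { energy: 3, done: false },
    next: 'sense',
    act: {
      sense:   s => { /* perceive the world (no-op here) */ },
      compute: s => { s.energy -= 1; },           // consume energy per step
      finish:  s => { s.done = true; }
    },
    trans: {
      sense:   s => 'compute',
      compute: s => (s.energy > 0 ? 'sense' : 'finish'),
      finish:  s => null                          // terminal activity
    }
  };
}

// A trivial scheduler: execute activities until a terminal one is reached
function run(agent) {
  while (agent.next) {
    agent.act[agent.next](agent.state);
    agent.next = agent.trans[agent.next](agent.state);
  }
  return agent.state;
}

console.log(run(makeAgent())); // { energy: 0, done: true }
```

The transition functions correspond to the reactive/pro-active reasoning step: they inspect the internal state and pick the next sub-goal activity.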

Both *JAM* and *SEJAM* are programmed entirely in *JavaScript*, enabling deployment on a wide range of host platforms (mobile devices, servers, IoT devices, and WEB browsers; details described in [35]). *JAM* and *SEJAM* can be connected via IP-based communication links. *JAM* provides virtualisation and security (encapsulation) by the agent input–output system (AIOS), tuple spaces for generative or signal-driven inter-agent communication, and virtual (logical) nodes bound to a world contained in one physical *JAM* node. Each physical or logical *JAM* node can be connected with an unlimited number of remote *JAM* nodes by physical links (UDP/TCP/HTTP using the AMP protocol), as shown in Figure 2. Logical nodes can be connected by virtual communication links. Links provide agent process migration, signal (message) and tuple propagation.
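Generative tuple-space communication can be illustrated with a minimal in-memory sketch. The operation names `out`/`rd`/`inp` follow the common Linda convention; this is a conceptual illustration, not the JAM AIOS tuple-space implementation:

```javascript
// Minimal tuple-space sketch for generative inter-agent communication
// (Linda-style operations). Illustrative only; not the JAM AIOS API.
const space = [];

// out: deposit a tuple into the space
function out(tuple) { space.push(tuple); }

// A pattern matches a tuple if all non-null fields are equal
function matches(pattern, tuple) {
  return pattern.length === tuple.length &&
         pattern.every((p, i) => p === null || p === tuple[i]);
}

// rd: read (copy) the first matching tuple without removing it
function rd(pattern) { return space.find(t => matches(pattern, t)) || null; }

// inp: read and remove the first matching tuple
function inp(pattern) {
  const i = space.findIndex(t => matches(pattern, t));
  return i < 0 ? null : space.splice(i, 1)[0];
}

out(['temp', 'node1', 21.5]);
out(['temp', 'node2', 19.0]);
console.log(rd(['temp', 'node2', null]));  // [ 'temp', 'node2', 19 ]
console.log(inp(['temp', null, null]));    // removes the node1 tuple
console.log(space.length);                 // 1
```

An agent depositing sensor readings with `out` never needs to know which agent later consumes them with `inp`, which is what makes the communication generative (decoupled in space and time).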

A simulation world consists of multiple virtual *JAM* nodes (*vJAM*), which can be connected by virtual links. In contrast to pure ABM simulators like *NetLogo*, the *SEJAM* simulator simulates computational and physical agents as well as the agent processing platform itself; it is primarily an ABC simulator, but can be used for ABM-like simulations, too. There is a global simulation model defining agent classes (behaviour and visuals), nodes (visuals), resources, simulation parameters, and the construction of the simulation world.
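Such a global simulation model could be sketched as a plain JavaScript object. The field names below are illustrative assumptions, not the exact SEJAM model schema:

```javascript
// Sketch of a global simulation model: agent classes (behaviour and
// visuals), node visuals, resources, parameters, and world construction.
// All field names are hypothetical.
const model = {
  parameters: { worldSize: [20, 20], steps: 1000 },
  agents: {
    human: { behaviour: 'human.js', visual: { shape: 'circle', color: 'blue' } },
    bot:   { behaviour: 'bot.js',   visual: { shape: 'square', color: 'red' } }
  },
  nodes: {
    world: { visual: { shape: 'rect' } }
  },
  resources: ['sensordb'],
  // world construction hook: create nodes and initial agent populations
  world: function (create) {
    create('world', { x: 0, y: 0 });
  }
};

console.log(Object.keys(model.agents)); // [ 'human', 'bot' ]
```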

Computational agents are always bound to a virtual *JAM* node. A node can be created dynamically at simulation run-time or during the initialisation of the simulation world. A node is related to a virtual position in the two-dimensional world (similar to a patch in the *NetLogo* world model). Virtual nodes can be grouped in tree structures, and the movement of a parent node (in the virtual world) moves all of its children, too.
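The tree grouping of virtual nodes, where moving a parent translates its whole subtree, can be sketched as follows (illustrative names, not the SEJAM node API):

```javascript
// Sketch: virtual nodes grouped in a tree; moving a parent node
// translates all children by the same offset. Names are illustrative.
function makeNode(x, y) { return { x, y, children: [] }; }

function move(node, dx, dy) {
  node.x += dx;
  node.y += dy;
  node.children.forEach(child => move(child, dx, dy)); // recurse into subtree
}

const parent = makeNode(5, 5);
const child = makeNode(6, 5);
parent.children.push(child);

move(parent, 2, 0);               // shift the whole group right by 2
console.log(parent.x, child.x);   // 7 8
```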

In contrast to a *NetLogo* simulation model, which uses one main script describing the entire world, agents, and resources, a dedicated world agent (operating on a dedicated world node) controls the simulation in *SEJAM*. A node position can be changed by agents (related to the movement of turtles in the *NetLogo* model). To summarise, *JAM* agents play two different roles in the simulation: (1) ABC: mobile crowd sensing, digital interaction between physical agents and the environment, and simulation control; and (2) ABM: representation of humans, bots, and digital twins of humans.

There is a significant difference between traditional closed-world simulations and simulations coupled with the real world and real-time environments (human-in-the-loop simulation). Closed simulations are performed on a short time-scale with pre-selected use-cases and input data. The simulation can be processed step-wise without a relation to a physical clock. In contrast, open-world simulation coupled with real environments requires continuous simulation on a large time-scale, creating big data volumes. A relation to a physical clock is required, too.

The mapping of the physical onto virtual worlds and vice versa is another issue to be handled. There are basically three possible scenarios and world mapping models (illustrated in Figure 3):

	- **(A)** Real world only, deployed with humans;
	- **(B)** Non-overlapping real and virtual worlds;
	- **(C)** Overlapping real and virtual worlds.


**Figure 3.** (**A**) Real world only, deployed with humans; (**B**) non-overlapping real and virtual worlds; and (**C**) overlapping real and virtual worlds.
