**3. Implementation Details**

#### *3.1. Dynamic Flowsheet Simulation in Dyssol*

In this work, the Dyssol modelling framework [7] was used to perform simulations with the newly proposed methods. The system is being developed within the German Research Foundation (DFG) priority program SPP 1679 "Dynamic simulation of interconnected solids processes DYNSIM-FP" [47].

Dyssol is written entirely in the C++ programming language [48] using a modular architecture and an object-oriented approach [49]. This makes it possible to split the entire program into separate, relatively independent modules and thereby facilitate their development. The modules are integrated into a single system through specially designed software interfaces. In particular, each model (e.g., agglomerator, mill, or screen) is an independent software library that can be created separately and then connected to the simulation program. This greatly increases the flexibility and extensibility of the entire system and simplifies its development and maintenance.
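As a rough illustration of this plug-in architecture, a minimal C++ sketch of such a model interface is given below. All names (`CStream`, `CUnitModel`, `Simulate`, `CSplitter`) are hypothetical and chosen for illustration only; they do not reproduce the actual Dyssol API.

```cpp
// Illustrative sketch of a plug-in unit-model interface (not the actual Dyssol API).
#include <string>
#include <vector>

// Simplified material stream; time-dependent parameters would live here.
struct CStream
{
    std::string name;
    double massFlow = 0.0; // [kg/s]
};

// Common base class that every unit model (agglomerator, mill, screen, ...)
// implements; each model is compiled into its own library and connected
// to the simulation program through this interface.
class CUnitModel
{
public:
    virtual ~CUnitModel() = default;
    // Calculate the unit on the time interval [tStart, tEnd], reading from
    // the inlet streams and writing to the outlet streams.
    virtual void Simulate(double tStart, double tEnd,
                          const std::vector<CStream*>& inlets,
                          std::vector<CStream*>& outlets) = 0;
};

// Example model: an ideal splitter dividing the feed equally into two outlets.
class CSplitter : public CUnitModel
{
public:
    void Simulate(double, double,
                  const std::vector<CStream*>& inlets,
                  std::vector<CStream*>& outlets) override
    {
        outlets[0]->massFlow = 0.5 * inlets[0]->massFlow;
        outlets[1]->massFlow = 0.5 * inlets[0]->massFlow;
    }
};
```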

Due to the mathematical complexity and diversity of models in the area of solids processing technology, the calculations in Dyssol are carried out according to the sequential-modular approach [50]. Each model is calculated independently of the others, using its own most appropriate equation solvers and computational methods. The task of the simulation system itself in this case is to manage the information and material flows between the models in accordance with the flowsheet structure and to ensure convergence. The sequential-modular approach simplifies the development of new models, since they are less constrained by the implementation and functionality of the modelling framework. Moreover, this method can be effectively applied even at the scale of individual process units [51].
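The calculation sequence itself can be sketched as follows, reusing the hypothetical interface from the previous listing: the framework walks through the units in the order given by the flowsheet structure, and each model solves its own equations internally.

```cpp
// Sequential-modular calculation (illustrative sketch, not the Dyssol code):
// units are evaluated one after another in flowsheet order; each model uses
// its own internal solvers and computational methods.
struct SFlowsheetUnit
{
    CUnitModel* model = nullptr;
    std::vector<CStream*> inlets;
    std::vector<CStream*> outlets;
};

void CalculateSequence(std::vector<SFlowsheetUnit>& units, double tStart, double tEnd)
{
    for (auto& unit : units)
        unit.model->Simulate(tStart, tEnd, unit.inlets, unit.outlets);
}
```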

Application of the modular approach imposes some restrictions on the flowsheet structures. In particular, schemes containing recycle flows cannot be calculated directly and require additional treatment. Before starting the simulation, all such flows must be identified and initialized with some predefined values in order to break the cycles [52]. After that, all units inside the recycle loop are calculated iteratively until convergence. To speed up this process, the initial parameters of recycles are calculated anew at each iteration using convergence methods [7,53].
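A minimal sketch of this tear-stream treatment, building on the listing above, could look as follows. The direct-substitution update is only the simplest possible convergence method, and the tolerance and iteration limit are assumptions.

```cpp
#include <cmath>

// Iterative calculation of a partition containing a recycle (sketch).
// The tear stream is assumed to be both an outlet of the last unit and an
// inlet of the first unit inside the recycle loop.
bool CalculateRecycleLoop(std::vector<SFlowsheetUnit>& units, CStream& tearStream,
                          double tStart, double tEnd,
                          double tolerance = 1e-6, int maxIterations = 100)
{
    tearStream.massFlow = 0.0;                  // predefined initial value breaks the cycle
    for (int i = 0; i < maxIterations; ++i)
    {
        const double prev = tearStream.massFlow;
        CalculateSequence(units, tStart, tEnd); // recalculate all units in the loop
        if (std::fabs(tearStream.massFlow - prev) < tolerance)
            return true;                        // guessed and calculated values agree
        // Direct substitution: the newly calculated value becomes the next guess.
        // Faster convergence methods (e.g., Wegstein) would adjust it here.
    }
    return false;                               // not converged within maxIterations
}
```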

Reaching convergence over a relatively large time interval can be very difficult and resource-intensive. To overcome this problem, dynamic modelling of flowsheets with recycle flows is performed using an approach based on the waveform relaxation method [54]. Here, iterative calculations are performed only for a certain short time interval [6,15], where convergence can be reached much faster. When the calculation of the current time interval is completed, the system uses data interpolation methods [7] to initialize the next time window and proceeds to its calculation.
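Schematically, this window-by-window procedure can be sketched as below, again with hypothetical names; the interpolation step that initializes the next window is only indicated by a comment.

```cpp
#include <algorithm>

// Dynamic simulation with time windows in the spirit of waveform relaxation
// (illustrative sketch). Convergence is sought only within short windows.
void SimulateDynamic(std::vector<SFlowsheetUnit>& units, CStream& tearStream,
                     double tEndTotal, double windowSize)
{
    double tStart = 0.0;
    while (tStart < tEndTotal)
    {
        const double tEnd = std::min(tStart + windowSize, tEndTotal);
        CalculateRecycleLoop(units, tearStream, tStart, tEnd);
        // Here the framework would interpolate/extrapolate the converged
        // results to provide initial values for the next time window.
        tStart = tEnd;
    }
}
```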

#### *3.2. Application of Transformation Matrices*

From a mathematical point of view, the application of a full transformation matrix (Equation (16)) is a relatively simple operation and can be described as a regular multiplication of matrices. However, with a large number of dimensions and classes, such a naive calculation becomes very demanding in terms of computation time and memory consumption. Moreover, the need to work with time-dependent parameters imposes additional restrictions on the data structures. Therefore, all distributed parameters in Dyssol are stored in a sparse format, represented by specially developed tree data structures [7], as schematically shown in Figure 2 for a single time point.

**Figure 2.** Tree data structure for storing interdependent multidimensional distributed parameters of solids in Dyssol.

Here, each hierarchy level describes the distribution of material over a specific parameter, and each entry is the mass fraction of material that has a specific combination of parameters. Since distributions often contain many zero values, such a structure can drastically reduce the amount of data that needs to be stored and processed. In addition, this data structure allows the use of relative mass fractions at each level of the hierarchy, instead of operating with absolute mass fractions. Thus, from Figure 2 it follows that 30% of the particles have a size of (2 mm³ + 3 mm³)/2 = 2.5 mm³; of these, 50% have a sphericity of (0.5 + 0.75)/2 = 0.625; and of those, 20% have a porosity of (0.33 + 0.66)/2 ≈ 0.5. The total fraction of particles with this combination of parameters is 0.3 × 0.5 × 0.2 = 0.03, i.e., 3%.
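A simplified C++ sketch of such a tree is shown below; the structure and the names (`STreeNode`, `AbsoluteFraction`) are illustrative assumptions, not the actual Dyssol implementation.

```cpp
#include <memory>
#include <vector>

// Sketch of a tree for interdependent distributed parameters. Each node stores
// the relative mass fraction of its class within the parent class; children
// describe the next distributed parameter (e.g., size -> sphericity -> porosity).
// Zero-valued entries are simply not stored, which keeps the structure sparse.
struct STreeNode
{
    double relativeFraction = 0.0;                    // fraction within the parent class
    std::vector<std::unique_ptr<STreeNode>> children; // next hierarchy level
};

// The absolute mass fraction of a parameter combination is the product of the
// relative fractions along the path from the root, e.g., 0.3 * 0.5 * 0.2 = 0.03.
double AbsoluteFraction(const std::vector<const STreeNode*>& path)
{
    double fraction = 1.0;
    for (const auto* node : path)
        fraction *= node->relativeFraction;
    return fraction;
}
```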

The use of relative mass fractions and tree structures allows working with individual distributions without changing information in the others. Thus, transformation matrices can be applied efficiently to each level separately, without affecting the distributions located at the higher hierarchy levels.

Consider an *N*–dimensional distribution represented by *N* hierarchy levels, where *L<sub>d</sub>* is the number of classes in the *d*–th dimension, and *X<sub>q</sub><sup>p</sup>* and *Y<sub>q</sub><sup>p</sup>* are the relative mass fractions in the *q*–th class of the *p*–th level of the initial and the transformed distribution, respectively (Figure 3).

**Figure 3.** Scheme of the application of a transformation matrix to an *N*–dimensional distribution.

If the transformation matrix *T* is calculated for *M*–dimensional parameter space, its application will consist of three stages:

1. Apply *T* to the lowest *M*–th hierarchy level of the input distribution, using Equation (17) in the form of

$$Y\_{j\_1 j\_2 \dots j\_{M-1} j\_M}^{M} = \sum\_{i\_1=1}^{L\_1} \sum\_{i\_2=1}^{L\_2} \dots \sum\_{i\_{M-1}=1}^{L\_{M-1}} \sum\_{i\_M=1}^{L\_M} X\_{i\_1 i\_2 \dots i\_{M-1} i\_M}^{M} \cdot X\_{i\_1 i\_2 \dots i\_{M-1}}^{M-1} \cdot \dots \cdot X\_{i\_1 i\_2}^2 \cdot X\_{i\_1}^1 \cdot T\_{i\_1 i\_2 \dots i\_M, j\_1 j\_2 \dots j\_M}. \tag{52}$$

For example, for *M* = 2, the second level of the hierarchy should be calculated as

$$Y\_{j\_1 j\_2}^2 = \sum\_{i\_1=1}^{L\_1} \sum\_{i\_2=1}^{L\_2} X\_{i\_1 i\_2}^2 \cdot X\_{i\_1}^1 \cdot T\_{i\_1 i\_2, j\_1 j\_2}. \tag{53}$$

2. Extract the transformation laws for the previous level (*M* − 1) from *T* and apply them using Equation (52). Repeat this up to hierarchy level 1. For example, for *M* = 2, the first level will be calculated as

$$Y\_{j\_1}^1 = \sum\_{i\_1=1}^{L\_1} X\_{i\_1}^1 \cdot T\_{i\_1, j\_1}, \tag{54}$$

which corresponds to Equation (14).

3. Apply *T* to calculate the remaining levels *K* below *M* as

$$Y\_{j\_1 j\_2 \dots j\_{K-1} j\_K}^{K} = \frac{\sum\_{i\_1=1}^{L\_1} X\_{i\_1 j\_2 \dots j\_K}^{K} \cdot X\_{i\_1 j\_2 \dots j\_{K-1}}^{K-1} \cdot \dots \cdot X\_{i\_1 j\_2}^2 \cdot X\_{i\_1}^1 \cdot T\_{i\_1, j\_1}}{Y\_{j\_1 j\_2 \dots j\_{K-1}}^{K-1} \cdot Y\_{j\_1 j\_2 \dots j\_{K-2}}^{K-2} \cdot \dots \cdot Y\_{j\_1 j\_2}^2 \cdot Y\_{j\_1}^1}, \quad M < K \le N. \tag{55}$$

For example, if *M* = 1 and *K* = 2:

$$Y\_{j\_1 j\_2}^2 = \frac{\sum\_{i\_1=1}^{L\_1} X\_{i\_1 j\_2}^2 \cdot X\_{i\_1}^1 \cdot T\_{i\_1, j\_1}}{Y\_{j\_1}^1}. \tag{56}$$

Thus, using Equations (52) and (55), as well as an appropriate ordering of the levels in the hierarchical data structure, a transformation matrix of any complexity can be applied to any input distribution.
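For the simplest case *M* = 1, *N* = 2, the level-wise application of Equations (54) and (56) can be sketched in C++ as follows. Dense vectors and matrices are used here for clarity, whereas the framework itself operates on the sparse tree structure; all names are illustrative.

```cpp
#include <vector>

using Vector = std::vector<double>;
using Matrix = std::vector<Vector>; // T[i][j]: fraction moved from class i to class j

// Equation (54): Y1[j1] = sum_i1 X1[i1] * T[i1][j1]
Vector ApplyToLevel1(const Vector& X1, const Matrix& T)
{
    Vector Y1(T[0].size(), 0.0);
    for (size_t i = 0; i < X1.size(); ++i)
        for (size_t j = 0; j < Y1.size(); ++j)
            Y1[j] += X1[i] * T[i][j];
    return Y1;
}

// Equation (56): Y2[j1][j2] = sum_i1 X2[i1][j2] * X1[i1] * T[i1][j1] / Y1[j1]
Matrix ApplyToLevel2(const Matrix& X2, const Vector& X1,
                     const Matrix& T, const Vector& Y1)
{
    Matrix Y2(Y1.size(), Vector(X2[0].size(), 0.0));
    for (size_t j1 = 0; j1 < Y1.size(); ++j1)
    {
        if (Y1[j1] == 0.0) continue; // empty class: relative fractions undefined
        for (size_t j2 = 0; j2 < X2[0].size(); ++j2)
        {
            for (size_t i = 0; i < X1.size(); ++i)
                Y2[j1][j2] += X2[i][j2] * X1[i] * T[i][j1];
            Y2[j1][j2] /= Y1[j1];
        }
    }
    return Y2;
}
```

Note that classes with a zero transformed fraction on the first level are skipped, since relative fractions within an empty class are undefined.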

The conventional algorithm for applying transformation matrices depends on the unit type:

• For steady state units: copy the distribution from the input to the output, then apply the transformation matrix to the output.

• For dynamic units: use the input distribution to calculate the holdup, apply the transformation matrix to the holdup, and then calculate the output.

In the latter case, because of time discretization, the mixing of the input material with the holdup and the application of the transformation matrix are performed sequentially.
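Both variants can be summarized in the following sketch, reusing the helper functions from the previous listing; the mass-weighted mixing rule and all function names are assumptions made for illustration.

```cpp
// Steady-state unit: copy the distribution from the input to the output,
// then apply the transformation matrix to the output.
void SimulateSteadyState(const Vector& in, const Matrix& T, Vector& out)
{
    out = in;                    // copy input distribution to the output
    out = ApplyToLevel1(out, T); // transform the output
}

// Dynamic unit: mix the input into the holdup, apply the transformation
// matrix to the holdup, then calculate the output from it. Mixing and
// transformation are sequential steps due to time discretization.
void SimulateDynamicStep(const Vector& in, double massIn, const Matrix& T,
                         Vector& holdup, double& massHoldup, Vector& out)
{
    // mass-weighted mixing of the incoming material with the holdup
    // (assumes massHoldup + massIn > 0)
    for (size_t i = 0; i < holdup.size(); ++i)
        holdup[i] = (holdup[i] * massHoldup + in[i] * massIn) / (massHoldup + massIn);
    massHoldup += massIn;
    holdup = ApplyToLevel1(holdup, T); // apply the transformation matrix to the holdup
    out = holdup;                      // output distribution follows the holdup
}
```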
