Mesh sensitivity analysis was first performed to determine whether the numerical solution was independent of the domain discretization (mesh resolution). Three systematically refined meshes of -, -, and -million elements were used with the same operating parameters. Richardson extrapolation was initially planned, but the solutions on the three meshes differed so minutely that it would have held no quantitative value for assessing mesh convergence. Instead, the medium mesh of -million elements was selected. Simulations in our previous work demonstrated the minimum cell-side-to-particle-diameter ratio that can be used while still conserving accurate momentum exchange between the two phases.
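Had the solutions differed appreciably, the Richardson extrapolation check would proceed as sketched below. This is a generic sketch with illustrative solution values, not results from this work; it uses the common fine/medium/coarse convention and a grid convergence index (GCI) safety factor of 1.25.

```python
import math

def richardson_metrics(phi_fine, phi_medium, phi_coarse, r):
    """Observed order of accuracy, Richardson-extrapolated value, and a
    fine-grid convergence index (GCI) from three systematically refined
    meshes with a constant refinement ratio r."""
    e_mc = phi_coarse - phi_medium      # coarse-to-medium change
    e_fm = phi_medium - phi_fine        # medium-to-fine change
    p = math.log(abs(e_mc / e_fm)) / math.log(r)    # observed order
    phi_ext = phi_fine - e_fm / (r ** p - 1.0)      # extrapolated solution
    gci_fine = 1.25 * abs(e_fm / phi_fine) / (r ** p - 1.0)
    return p, phi_ext, gci_fine

# Illustrative centerline-velocity values on coarse/medium/fine meshes:
p, phi_ext, gci = richardson_metrics(1.01, 1.04, 1.16, r=2.0)
print(p, phi_ext, gci)
```

With these illustrative values the observed order comes out to 2 and the extrapolated solution to 1.0; when the three solutions are nearly identical, as here, the extrapolation adds no useful information, which motivated selecting the medium mesh directly.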
4.1. Particle Full Development
A fully developed flow is critical in these simulations. If the flow were not fully developed before the outlet of the simulation domain, then the particle flux and velocities would depend entirely on both the insertion location and the initial velocities. It is difficult to conclude that the flow in the experimental work performed by Hardalupas et al. [
5] was fully developed. The pipe used in the study was
in length with a Reynolds number of 13,000. For single-phase flows, this Reynolds number is well into the turbulent regime; therefore, no laminar boundary layer forms, and a turbulent velocity profile develops. To demonstrate this further, an idealized entry length to fully developed flow was calculated using an assumed velocity profile at the inlet and the classical solution developed by Latzko [
55], given as
where
for the circular tube. Calculating this, an entry length of
was obtained for an idealized single-phase flow. The pipe used for Hardalupas et al. [
5] was well above this entry length, but this did not indicate with certainty that the particles were fully developed, because of particle slippage during insertion and mixing. Indeed, this is briefly discussed in the paper, where the results for the largest particle size (
m) showed signs of a lack of fully developed flow. In the work performed by Mena and Curtis [
56], it was postulated and concluded that 51 pipe diameters were sufficient for fully developed flow, which they verified using pressure transducers placed periodically along the pipe. In that study, water was used as the carrier fluid; the viscous effects were therefore significantly higher than in air, and the particle slip velocity would be less of a concern.
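Since Latzko's expression is not reproduced here, the style of this entry-length estimate can be illustrated with a different, commonly quoted turbulent correlation (White's Le/D ≈ 4.4 Re^(1/6)). Note that this is an assumed stand-in for illustration, not the Latzko solution used in the text, and the function name is hypothetical.

```python
def entry_length_diameters(re, coeff=4.4, exponent=1.0 / 6.0):
    """Turbulent hydrodynamic entry length in pipe diameters.
    Defaults follow White's correlation Le/D = 4.4 Re^(1/6); this is an
    assumed illustrative correlation, not Latzko's solution."""
    return coeff * re ** exponent

# Roughly 21 diameters for the Re = 13,000 case considered here.
print(entry_length_diameters(13_000))
```

Any correlation of this form predicts an entry length of only tens of diameters at Re = 13,000, consistent with the conclusion that the experimental pipe was well above the idealized single-phase entry length.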
To the current authors’ knowledge, Lau and Nathan [
51] and the future work of Lau and Nathan [
11] are the only studies to test for fully developed flow. They performed particle experiments with
and
with identical Stokes numbers to test for particle slippage. They realized that the particle concentration and velocities converged to similar values and, therefore, concluded that
was a sufficiently long nozzle for full flow development at Stokes numbers of 0.3–11.2 and Reynolds numbers of 10,000 and 20,000. In the study performed by Hardalupas et al. [
5], Stokes numbers of
and
with a Reynolds number of 13,000 were used. Given the larger Stokes number of 50 and the fact that Lau and Nathan [
51] used a pipe length of almost
-times that of Hardalupas et al. [
5], the flow may not have been fully developed; therefore, no firm conclusion can be drawn.
To test for fully developed flow for the numerical test case of Hardalupas et al. [
5], simulations were performed using only the nozzle (pipe) flow with the same operating parameters as previously outlined, but with the particles inserted further upstream and sampled at a series of axial locations both upstream and downstream of the nozzle exit, referenced from the nozzle exit with positive in the flow direction. In addition, two simulations were performed to test for the location of full development and to determine whether the inlet velocity made a significant difference in flow development.
It was realized that most of the flow development occurred in the first 40 nozzle diameters, with the remaining length experiencing only minute changes in all parameters (
Figure 2). The inside of the nozzle for the Lau and Nathan (Lau3,
Table 1 and
Table 4) test case was also examined. It was realized that full development occurred in about the first 30 nozzle diameters with only minute differences observed in the last 10 nozzle diameters; see
Figure 3.
4.2. Hardalupas et al. [5] Results
It was realized that the results at the nozzle exit for all the test cases of Hardalupas et al. [
5] provided a similar trend for the mean and root-mean-squared velocities near the central regions of the jet stream. However, towards the outer regions near the nozzle wall, it was observed that the particles exhibited a more laminar-shaped profile rather than the more turbulent shape demonstrated by the experiments (
Figure 4a and
Figure 5a). This was believed to be caused by the interpolation method used for velocity in particle–fluid interactions.
To achieve effective momentum coupling, we required a cell size with a minimum length side of at least
-times the particle diameter for these types of flow problems [
20]. To model the boundary layer at the wall of a CFD simulation, we either needed to integrate fully to the wall or use an algebraic wall function. To integrate fully to the wall, we required an extremely fine mesh with, at the very least, 8–10 cells located below
[
27]. Achieving this, while also achieving effective momentum coupling, would require extremely small particles, far smaller than those used in many cases, including the experimental studies considered in this current work. Therefore, a wall-function approach was used in this current work to accurately model the strong viscous effects on the fluid velocity while achieving effective momentum coupling. That said, this was not a perfect solution, because there was also a concern with the fluid velocity and void fraction interpolation when calculating the particle–fluid drag coupling force: although a wall function was used to model the fluid boundary layer within the larger near-wall cell, that same wall function was not used when interpolating the particle coupling parameters, for which a linear interpolation method was used instead.
In CFD-DEM, the interpolation method used is a linear cell-point method. This method breaks each cell face into triangles that, together with the cell center point, define tetrahedra; it then cycles through the tetrahedra to determine which one contains the particle center. An inverse-distance linear interpolation is then performed from the known velocities at the cell centers. This produces a different distribution of coupling velocities than is modeled by our wall function. Ideally, we would want an interpolation method that follows the wall function. This would involve developing a tracking algorithm that allows a custom interpolation scheme within the near-wall cell, but difficulties would arise that may increase the computational expense and would have to be addressed.
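A minimal sketch of the inverse-distance (Shepard) step described above, assuming the containing tetrahedron's vertex velocities are already known; the geometry and function name are hypothetical, and this is not the solver's actual implementation.

```python
import math

def inverse_distance_velocity(particle_pos, cell_centers, cell_velocities,
                              power=1, eps=1e-12):
    """Inverse-distance-weighted fluid velocity at a particle position from
    the surrounding known cell-center velocities (Shepard interpolation)."""
    weights = []
    for c in cell_centers:
        d = math.dist(particle_pos, c)
        if d < eps:  # particle sits exactly on a cell center
            return list(cell_velocities[cell_centers.index(c)])
        weights.append(1.0 / d ** power)
    total = sum(weights)
    n_components = len(cell_velocities[0])
    return [sum(w * v[i] for w, v in zip(weights, cell_velocities)) / total
            for i in range(n_components)]
```

At the midpoint between two cell centers this returns the arithmetic mean of their velocities, which is precisely why it cannot reproduce the strongly nonlinear wall-function profile inside a single large near-wall cell.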
Although there were discrepancies at the nozzle exit, it was observed that, further in the flowfield, there were very similar trends between the numerical and experimental results for the lower (0.23) and higher (0.86) mass fractions, respectively.
In general, for the 80 μm particle simulations (Hardalupas1 and Hardalupas2), discrepancies associated with the interpolation method were observed at the nozzle exit for both the mean (rms) velocities and the particle number fluxes (
Figure 4a and
Table 5). As previously outlined, this can be attributed to the issues with the interpolation method near the wall. That being said, relative accuracy was achieved for the velocities, where the general change in the results between the two mass fractions was similar to that of the experiments. At location
, similar slopes were achieved between all numerical and experimental results. The numerical results showed a slight decrease in slope relative to the experimental results near the outer regions of the jet. At
, higher mean velocities were observed for the numerical results, with the root-mean-squared velocities closely following and a smaller spread in the particle number fluxes. It was concluded that, with 80 μm particles, results up to
should be viable for the optimization of an industrial unit, but anything further downstream would need more analysis. Work remains to be performed for pipe flows with wall functions in CFD-DEM to resolve the interpolation issues, but it was observed that the particles “correct” themselves after they exit the nozzle. This could be attributed to the relatively low Stokes numbers: the particles closely followed the highly accurate, validated fluid flow, where wall effects were not present.
For the 40 μm particles, the same interpolation-related errors at the nozzle exit were observed for all fields, but with the same relative accuracy, or change in trends, between the two mass fluxes (
Figure 5a,b). An interesting note for the nozzle exit is that, at the smaller mass loading (Hardalupas3), the particle number flux had a flatter profile than at the larger mass loading (Hardalupas4). It was postulated that, if the mass loading continued to increase, a more-triangular particle number flux profile would be achieved. Further downstream, similar slopes were observed in the results for the mean velocity and particle number flux, but the numerical results showed a significantly lower root-mean-squared velocity. It is curious to note that these smaller-particle trends lined up quite well up to
, whereas with the larger particles, the results can only be trusted up to
. This was further reinforced by the root-mean-squared error values shown in
Table 5. It can be concluded that, as the Stokes number increased and the particles tended to “go their own path”, the numerical results started deviating from the experimental results.
4.3. Lau and Nathan Single-Phase Results
The single-phase results for the test cases of Hardalupas et al. [
5] were validated in our previous work [
27], but the results of Lau and Nathan [
11] were not. Therefore, three different simulation setups were used to verify that the single-phase results were accurate before adding the particulate phase. Without this step, there was a risk of obtaining a model whose combination of parameters happens to reproduce the two-phase flow without each phase separately being accurate. This would make it difficult to conclude with any confidence that the model accurately represents the physics and can be generally applied to other similar flows.
All cases started with the Lau3 test case (48 m/s), but without the inclusion of particles. The default Launder and Spalding [
26] k-ε turbulence model was used for one case. For the second case, the empirical coefficients of the turbulence model were changed to
and
. This modification was researched and validated extensively in our previous work [
27]. The third simulation used the modified k-ε
turbulence model, but without the use of co-flow. This was to test the effect that co-flow has on these types of numerical simulations.
There was very little difference between all simulations at the nozzle exit for both the mean and root-mean-squared velocities, as shown in
Figure 6a,d. At location
, the unmodified k-ε turbulence model produced spreading that more closely followed the experiment for the mean velocity, but the modified k-ε model produced a drop in axial velocity that more closely matched the experiment. Therefore, it is up to the researcher to weigh the pros and cons of using the modified k-ε model, depending on whether accurate spreading or an accurate axial drop in the mean velocity is more important. That being said, in the current authors’ previous work, both the spreading rates and the axial drop in the mean velocity were improved with the slight modification of the empirical coefficients [
27] for the two different sets of data provided by Bogusławski and Popiel [
28] and Hardalupas et al. [
5]. For the root-mean-squared velocities, the modified k-ε model produced spreading rates that more closely matched the experiments at all sampling locations. Adding a co-flow made no appreciable difference at any sampling location. Considering all this, it was decided to use the modified k-ε model
with a co-flow for all subsequent simulations for the test cases of Lau and Nathan [
11].
4.5. Coarse-Graining Results
Some DEM simulations use a large number of particles, upwards of six million. This requires a significant number of compute cores to track the particles and calculate all collisions between them. Furthermore, developing codes that scale to a very large number of compute cores can be difficult because of the slowed communication between compute nodes. To circumvent this, many researchers have turned to coarse-graining methods. The essence of coarse-graining is that multiple particles are represented as a single grain, or parcel, which is then tracked through time. This significantly decreases the number of trajectories tracked throughout the system and, depending on the type of coarse-graining model, can reduce the number of tracked points by the cube of the coarse-graining factor. With a coarse-graining factor of 2, for example, the six million particles above reduce to 750 thousand parcels. This reduces the overall computational cost significantly. Furthermore, additional computational savings can be realized by increasing the integration time step, as the duration of the soft-sphere collisions will generally be smaller by several orders of magnitude [
39].
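The bookkeeping behind this reduction can be sketched as follows. The helper names are illustrative, under the volume-conserving assumption that one parcel represents the cube of the coarse-graining factor in primary particles.

```python
def parcel_count(n_particles, alpha):
    """Number of parcels when each parcel represents alpha**3 primary
    particles (coarse-graining factor alpha)."""
    return n_particles // alpha ** 3

def parcel_diameter(d_particle, alpha):
    """Parcel diameter that conserves the solid volume of the alpha**3
    particles it represents."""
    return alpha * d_particle

# The example from the text: 6 million particles, coarse-graining factor 2.
print(parcel_count(6_000_000, 2))  # 750000 parcels
```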
As previously mentioned in
Section 2.4, the type of coarse-graining used for this analysis was first proposed by Radl et al. [
40] and further expanded to include the Hertz nonlinear contact model by Nasato et al. [
41]. Radl et al. [
40] concluded that the major hurdle in using a coarse-graining approach is to correctly compute the collision rate and inter-parcel or inter-grain stress, with there being no easy way to scale the interaction parameters in an inertial flow regime [
40]. This conclusion is significant because it will be difficult to relate the collision frequencies, forces, and stresses of this current work from an unscaled system (no coarse-graining) to a scaled (coarse-grained) one. That being said, if the overall trends are preserved, coarse-graining may still prove useful in an industrial application.
To optimize an industrial-scale unit, a statistical analysis needs to be performed on the parameters, including the collision frequencies and forces acting on particles. Therefore, at first glance, it would seem that coarse-graining should not be used in this type of setting because of the disparity between inter-grain stresses from the unscaled to the scaled system. That being said, the goal of numerical simulations, and any experimental study for that matter, is to find the optimum setup through an analysis of the changes in the geometric and operating parameters. Since this is the ultimate practical goal, it may not be necessary to obtain the exact answers of the physical system; we can, for example, compare two different geometries in an unscaled system and compare the same two geometries in a scaled system.
To investigate the accuracy of coarse-graining, two sets of simulations with the same operating parameters as the Hardalupas1 case were performed. The first set of simulations was unscaled, while the second set was scaled. In each set, a plate was inserted at one of two axial locations, 5D and 15D relative to the nozzle exit, resulting in four simulations in total. The collision frequencies and force statistics applied to the particles when they hit the plate were then output for both the 5D and 15D cases. The ratio of the results obtained from the 5D and 15D simulations was calculated and compared between the scaled and unscaled systems to determine the accuracy of coarse-graining. We then analyzed the collision frequencies and the means/variances of the normal/tangential forces for all collisions (
Figure 8a) and also analyzed the results grouped by location with intervals of
on the plate. These results were then used to determine whether the coarse-graining method accurately reproduced the trends of the corresponding unscaled system.
The ratios we compared for the collision frequencies are given by

R_DEM = f_DEM,5D / f_DEM,15D and R_CG = f_CG,5D / f_CG,15D,

where f_DEM,5D and f_DEM,15D are the collision frequencies for the unscaled DEM method for the simulations with the 5D and 15D plates, respectively, and f_CG,5D and f_CG,15D are the collision frequencies for the scaled system with a coarse-graining factor of two for the simulations with the 5D and 15D plates, respectively. Similarly, for the force statistics, we have

R_DEM = F_DEM,5D / F_DEM,15D and R_CG = F_CG,5D / F_CG,15D,

where F denotes the force statistic (mean or variance) in question.
The collision frequencies and force statistics were then output for all collisions (
Figure 8a) and concentric regions (
Figure 8b), and the ratios and associated errors were calculated, as shown in
Table 7.
It should be noted, for completeness, that the percent error was calculated from the traditional formula, given by

percent error = |R_CG − R_DEM| / |R_DEM| × 100%,

where R_DEM and R_CG are the ratios for the unscaled and scaled (coarse-grained) systems, respectively.
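As a worked illustration of these ratios and the associated error, the sketch below uses hypothetical collision-frequency values, not data from this work; all variable names are illustrative.

```python
def ratio(value_5d, value_15d):
    """Ratio of a statistic at the 5D plate to the same statistic at 15D."""
    return value_5d / value_15d

def percent_error(r_cg, r_dem):
    """Traditional percent error between the scaled (coarse-grained) and
    unscaled (DEM) ratios."""
    return abs(r_cg - r_dem) / abs(r_dem) * 100.0

# Hypothetical collision frequencies (collisions per second):
f_dem_5d, f_dem_15d = 1200.0, 400.0   # unscaled system
f_cg_5d, f_cg_15d = 160.0, 52.0       # coarse-grained system

r_dem = ratio(f_dem_5d, f_dem_15d)    # 3.0
r_cg = ratio(f_cg_5d, f_cg_15d)       # about 3.08
print(percent_error(r_cg, r_dem))     # a few percent
```

Note that the absolute collision counts of the two systems differ by design; only the 5D-to-15D ratios are compared, which is what makes the scaled-to-unscaled comparison meaningful.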
When analyzing all the collisions acting on the plate, it was realized that there was an error of less than
for all parameters, as indicated by
Table 7. These results demonstrated the accuracy in comparing the ratios of scaled to unscaled systems when analyzing all collisions on the plate.
To further analyze the trends of particle collisions with the plate, the collisions were independently considered within specific concentric regions on the plate, as shown in
Figure 8b, and the same analysis of the collision frequency and force statistics was performed.
In
Figure 9, the relative errors between the scaled- and unscaled-system ratios are plotted for each region. A slightly different conclusion can be drawn than when analyzing all collisions. The majority of the mean values had an error of less than
with the only region above being at
with an error of just above
. The highest errors in the force statistics were in the variances, which were in line with the collision frequency errors; the largest errors were observed in regions where the collision frequency was low. To investigate this further,
Figure 9b shows the individual collision frequencies for both the unscaled and scaled systems at the 5D and 15D plate positions, with the corresponding collision frequency ratio relative errors. It was realized that the collision frequency was high near the center region with a much lower percentage error. Conversely, the collision frequency was lowest in the outer regions with a relatively high percentage error. Considering all this, along with the very low error of less than
for all collisions, it can be concluded that we achieved a relative accuracy that provided significant results for the practical application of an industrial flow problem.
A Kernel Density Estimate (KDE) plot was then used to visualize the distribution of the collisions on the plate (
Figure 10). This plot uses a Gaussian smoothing algorithm to estimate the probability density function, allowing the distribution of the particle impacts across the plate to be visualized. It was realized that the probability of particle collisions between
and
was very comparable, with the largest difference towards the outer regions, as demonstrated when comparing at location
(
Figure 10b,d). This reassuringly confirmed the use of coarse-graining for an industrial flow optimization problem.
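A one-dimensional Gaussian KDE of the kind described can be sketched directly; the impact positions below are hypothetical, and production plots would typically use a library estimator instead.

```python
import math

def gaussian_kde(samples, bandwidth):
    """1D Gaussian kernel density estimate; returns a callable density."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Hypothetical radial impact positions on the plate (in nozzle diameters):
radial_hits = [0.05, 0.1, 0.12, 0.2, 0.25, 0.3, 0.45, 0.6]
pdf = gaussian_kde(radial_hits, bandwidth=0.1)
print(pdf(0.15))  # estimated impact density near the jet centerline
```

The bandwidth controls the degree of Gaussian smoothing: too small and the estimate reproduces individual impacts, too large and the radial structure of the collision distribution is washed out.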
Of special interest is comparing the flow profiles both inside the nozzle and in the jet downstream. This could be beneficial in future simulations because, if flow development occurs sooner with coarse-graining, the computational cost could be reduced further by using a shorter nozzle.
Very little difference was observed in the particle full-development length for all parameters (
Figure 11). Furthermore, there was also very little difference between the scaled and unscaled systems in the jet stream for all fields, with the highest error observed for the root-mean-squared velocity (
Figure 12).
If the desire is to optimize an industrial unit, then the goal is to pinpoint a design that performs better than another. Therefore, from a practical standpoint, if relatively accurate answers are achieved that follow the same trends as the changes in the design of the physical system, then the goal can be reached from numerical simulations alone. With that in mind, coarse-graining is deemed a very useful tool in the optimization of an industrial unit for high-speed jet flows because of the significant reduction in simulation cost. That being said, these results and conclusions should not be extended to other types of systems, in particular systems in which a large number of particle–particle collisions in a quasi-steady inertial regime significantly dictate the bulk flow behavior. The authors are confident in employing coarse-graining techniques for Reynolds and Stokes numbers within the range of those tested; however, additional testing is imperative for values that lie outside of this range to ensure the validity of the results.