**3. Discussion**

We now explore the scaling properties and the statistically self-similar behavior of the constructed infinite tiling. The theoretical scaling behavior *l*(*t*) ∼ *t*<sup>1/3</sup> of a characteristic length scale is tested by computing the interface area of each tile over time, with the interface represented as the level surface *φ* = 0. In addition, we compute the length of the interface level line at *z* = 0.25, which is also used for visualization. Figure 5 shows the results over time, averaged over all individual realizations of tiles.

**Figure 5.** Development of the interface area of the 3D-structure and the interface-length of a slice through the center of the domain over time, averaged for all tiles.

The results lead to a scaling exponent below the upper bound of 1/3. The slope is not constant, but it is on average equal for the two measures and approximately 0.26. There are different possible reasons for this lower value: either the structure has not yet reached the late-time regime for which the theoretical scaling behavior is expected, or the zero-flux boundary conditions on top and bottom of the considered domain favor parallel structures and thus hinder coarsening. A limitation of our approach, combining several tiles and computing them separately, should also be mentioned: it only allows coarsening up to a length scale of the size of a tile, so for larger times the approach is no longer valid. However, even if the theoretical scaling law could not be demonstrated computationally, statistical self-similarity is still possible, and can already be expected from a randomly chosen sample. Figure 6 shows the middle slice at an early time instant, its coarsened state at a later time, and the coarsened state subsequently zoomed out; instead of the interface line, the two phases are rendered in black and white. When the coarse structure is zoomed out to the degree where its interface length matches that of the earlier time step, both structures are visually similar (see Figure 6a,c). To quantify this, we compute for each row and column of pixels the distance between two interfaces. This is done for all samples and plotted for different times in Figure 7.
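The scaling exponent is obtained by a least-squares fit in log–log space. The following is a minimal sketch of that measurement on synthetic data standing in for the interface-area series of Figure 5; the time range, `area0` and the noise level are arbitrary assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic interface-area series: the interface area scales as the
# inverse of the characteristic length, A(t) ~ l(t)^(-1) ~ t^(-1/3)
t = np.linspace(100.0, 1900.0, 50)        # hypothetical time instants
area0 = 4.0                               # hypothetical initial interface area
area = area0 * (t / t[0])**(-1/3) * np.exp(0.005 * rng.standard_normal(t.size))

# least-squares fit of the exponent in log-log space
slope, _ = np.polyfit(np.log(t), np.log(area), 1)
print("fitted exponent:", -slope)  # close to 1/3 for this synthetic data
```

On real data the fitted exponent depends on the fitting window, which is one reason a measured slope (here 0.26) can fall below the asymptotic value.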

**Figure 6.** Middle slice of the domain in an early time instance (**a**), a later point in time (**b**) and the later time-point zoomed out until the interface-length equals that of the early one (**c**). In the last image the unzoomed region is framed to indicate the level of zoom applied.

**Figure 7. Left**: density-histograms of the distances between interfaces for three selected timesteps (all tiles combined). **Right**: a square subregion of timestep 130 is considered which, after zooming to the size of a full tile, has the same interface area as timestep 1900. The histograms match almost exactly, which computationally indicates the statistically self-similar structure.

Even if the theoretically predicted scaling law *l*(*t*) ∼ *t*<sup>1/3</sup> could not be shown computationally, the large-scale simulations, which run for each tile on a high-performance computer, reproduce the predicted statistical self-similarity. The huge structure that results as an arrangement of individual tiles would not have been possible to simulate on the available hardware. The approach thus fulfills two goals: it provides enough statistics to analyze scaling and statistical self-similarity, and it allows the generation of aesthetically appealing tilings with almost invisible recurrences, which can be extended indefinitely.

### **4. Materials and Methods**

The Cahn–Hilliard equation [4] is a fourth-order partial differential equation resulting as an *H*<sup>−1</sup> gradient flow of a Ginzburg–Landau energy

$$\mathcal{E}\left[\phi\right] = \int\_{\Omega} \gamma \left(\frac{\epsilon^2}{2} |\nabla \phi|^2 + B(\phi)\right) \, d\Omega, \tag{2}$$

where Ω is a bounded domain, *ϵ* a positive parameter determining the width of the diffuse interface, *γ* the surface energy, here considered as a positive constant, and *B*(*φ*) = (1 − *φ*<sup>2</sup>)<sup>2</sup>/4 a double-well potential. The resulting equation reads

$$
\partial\_t \phi = \gamma \Delta \mu, \qquad \mu = -\epsilon^2 \Delta \phi + B'(\phi) \tag{3}
$$

and converges for *ϵ* → 0 to the Mullins–Sekerka problem [5]; see Ref. [6].

Various numerical approaches have been proposed to solve the equation efficiently. We consider a convexity splitting approach, see, e.g., [7–11]. The idea is to split the double-well potential as *B*(*φ*) = *B<sub>c</sub>*(*φ*) − *B<sub>e</sub>*(*φ*), such that both parts are convex, to treat the convex part implicitly and the expansive part explicitly, and to consider the time discretization as

$$d\_{\tau} \phi^{n+1} = \gamma \Delta \mu^{n+1}, \qquad \mu^{n+1} = -\epsilon^2 \Delta \phi^{n+1} + B\_c'(\phi^{n+1}) - B\_e'(\phi^n), \tag{4}$$

with discrete time derivative *d<sub>τ</sub>φ*<sup>*n*+1</sup> = (*φ*<sup>*n*+1</sup> − *φ*<sup>*n*</sup>)/*τ<sub>n</sub>*. The resulting scheme is unconditionally energy stable, unconditionally solvable and converges optimally in the energy norm [9]. To solve the above system, we consider a linearization *B<sub>c</sub>*′(*φ*<sup>*n*+1</sup>) ≈ *B<sub>c</sub>*′(*φ*<sup>*n*</sup>) + *B<sub>c</sub>*″(*φ*<sup>*n*</sup>)(*φ*<sup>*n*+1</sup> − *φ*<sup>*n*</sup>). We further consider adaptive mesh refinement according to criteria related to the position of the diffuse interface, here ∇*φ*, to ensure a resolution of approximately five grid points across the interface and a coarser mesh elsewhere; see Figure 8.
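The scheme in Equation (4) with this linearization can be illustrated in one space dimension. The sketch below is a minimal finite-difference demo, not the adaptive finite-element implementation used in the paper; the periodic boundary conditions, grid size, time step, and the standard splitting *B<sub>c</sub>*(*φ*) = (*φ*<sup>4</sup> + 1)/4, *B<sub>e</sub>*(*φ*) = *φ*<sup>2</sup>/2 are assumptions made for the example:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def periodic_laplacian(N, h):
    """Second-order finite-difference Laplacian with periodic boundaries."""
    L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N), format="lil")
    L[0, -1] = L[-1, 0] = 1.0  # wrap-around entries
    return (L / h**2).tocsr()

def ch_step(phi, tau, eps, gamma, Lap):
    """One step of the linearized convex-splitting scheme, Equation (4),
    with B_c'(p) = p^3 (implicit, linearized) and B_e'(p) = p (explicit)."""
    N = phi.size
    I = sp.identity(N, format="csr")
    # unknowns stacked as (phi^{n+1}, mu^{n+1})
    A = sp.bmat([[I / tau, -gamma * Lap],
                 [eps**2 * Lap - sp.diags(3.0 * phi**2), I]], format="csr")
    # right-hand side: B_c'(phi) - B_c''(phi)*phi - B_e'(phi) = -2 phi^3 - phi
    rhs = np.concatenate([phi / tau, -2.0 * phi**3 - phi])
    return spla.spsolve(A, rhs)[:N]

# spinodal decomposition from white noise around the mean value phi = 0
rng = np.random.default_rng(0)
N, Lx = 256, 2.0
h = Lx / N
Lap = periodic_laplacian(N, h)
phi = 0.01 * rng.standard_normal(N)
mass0 = phi.sum() * h
for _ in range(200):
    phi = ch_step(phi, tau=1e-4, eps=0.01, gamma=1.0, Lap=Lap)
print("mass drift:", abs(phi.sum() * h - mass0))  # conserved up to solver accuracy
```

Because Equation (4) is in mixed form, mass conservation holds discretely: summing the first block row over all grid points annihilates the Laplacian, so the mean of *φ* stays fixed up to solver accuracy.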

**Figure 8.** Typical structure, highlighting one of the two phases and the adaptively refined mesh along the diffuse interface.

The resulting linear system is solved in parallel using a block preconditioner, see [12,13], and the iterative solver FGMRES. All problems are implemented in the adaptive finite-element toolbox AMDiS [14,15]. The considered parameters are *ϵ* = 0.01 and *γ* = 1.0, and as computational domain we use the rectangular cuboid Ω = (0, *L*) × (0, *L*) × (0, *l*) with *l* = 0.2 and *L* = 2.0 for the larger and *L* = 1.0 for the smaller domain, with boundaries Γ<sub>*top*</sub>, Γ<sub>*bottom*</sub>, Γ<sub>0</sub> and Γ<sub>1</sub> = ∪<sub>*i*=1</sub><sup>4</sup> Γ<sub>1,*i*</sub>, see Figure 9. The number of grid points on the larger domain reduces from approximately 2.95 million at the beginning to approximately 1.95 million at the final configuration.

**Figure 9.** Geometric setting and boundaries.

As initial condition we consider white noise around the mean value *φ* = 0. At Γ<sub>*top*</sub> and Γ<sub>*bottom*</sub> we specify zero-flux boundary conditions for *φ* and *μ*. We first consider the larger domain with zero-flux boundary conditions for *φ* and *μ* also on Γ<sub>0</sub>. In this setting the finite-element formulation in each time step reads: Find *φ*<sup>*n*+1</sup>, *μ*<sup>*n*+1</sup> ∈ *V<sub>h</sub>* such that ∀*η*, *ξ* ∈ *V<sub>h</sub>*

$$\int\_{\Omega} d\_{\tau} \phi^{n+1} \eta \, d\mathbf{x} + \gamma \int\_{\Omega} \nabla \mu^{n+1} \cdot \nabla \eta \, d\mathbf{x} = 0 \tag{5}$$

$$\int\_{\Omega} \mu^{n+1} \xi \,d\mathbf{x} - \epsilon^2 \int\_{\Omega} \nabla \phi^{n+1} \cdot \nabla \xi \,d\mathbf{x} - \int\_{\Omega} B\_c' (\phi^{n+1}) \xi \,d\mathbf{x} = - \int\_{\Omega} B\_e' (\phi^n) \xi \,d\mathbf{x},\tag{6}$$

with *V<sub>h</sub>* = {*v* ∈ *C*<sup>0</sup>(Ω) : *v*|<sub>*T*</sub> ∈ *P*<sup>1</sup>(*T*) ∀*T* ∈ 𝒯} the space of piecewise linear Lagrange elements on the triangulation 𝒯. In each time step we extract *φ*<sub>1</sub><sup>*n*+1</sup> = *φ*<sup>*n*+1</sup> and **n** · ∇*φ*<sub>1</sub><sup>*n*+1</sup> = **n** · ∇*φ*<sup>*n*+1</sup> along the inner boundary Γ<sub>1</sub>. These data are used in the subsequent computations on the smaller domain as boundary conditions on Γ<sub>1</sub>. The finite-element formulation in each time step now reads: Find *φ*<sup>*n*+1</sup> ∈ *V*<sub>*h*,Γ<sub>1</sub>,*φ*<sub>1</sub></sub> and *μ*<sup>*n*+1</sup> ∈ *V<sub>h</sub>* such that ∀*η* ∈ *V*<sub>*h*,Γ<sub>1</sub>,0</sub> and ∀*ξ* ∈ *V<sub>h</sub>*

$$\int\_{\Omega} d\_{\tau} \phi^{n+1} \eta \, d\mathbf{x} + \gamma \int\_{\Omega} \nabla \mu^{n+1} \cdot \nabla \eta \, d\mathbf{x} = 0 \tag{7}$$

$$\int\_{\Omega} \mu^{n+1} \xi \,d\mathbf{x} - \epsilon^2 \int\_{\Omega} \nabla \phi^{n+1} \cdot \nabla \xi \,d\mathbf{x} - \int\_{\Omega} B\_c' (\phi^{n+1}) \xi \,d\mathbf{x} = - \int\_{\Omega} B\_e' (\phi^n) \xi \,d\mathbf{x} - \epsilon^2 \int\_{\Gamma\_1} \mathbf{n} \cdot \nabla \phi\_1 \xi \,d\mathbf{s} \tag{8}$$

with *V*<sub>*h*,Γ<sub>1</sub>,*α*</sub> = {*v* ∈ *C*<sup>0</sup>(Ω) : *v*|<sub>*T*</sub> ∈ *P*<sup>1</sup>(*T*) ∀*T* ∈ 𝒯, *v* = *α* on Γ<sub>1</sub>}.
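The extra surface integral in Equation (8) follows from the standard integration by parts of the second-order term: for any test function *ξ*,

$$-\epsilon^2 \int\_{\Omega} \Delta \phi^{n+1} \xi \, d\mathbf{x} = \epsilon^2 \int\_{\Omega} \nabla \phi^{n+1} \cdot \nabla \xi \, d\mathbf{x} - \epsilon^2 \int\_{\partial \Omega} \mathbf{n} \cdot \nabla \phi^{n+1} \xi \, d\mathbf{s}.$$

On the zero-flux parts of the boundary the surface integral vanishes, as in Equation (6), while on Γ<sub>1</sub> the prescribed flux **n** · ∇*φ*<sub>1</sub> is substituted, which yields the last term on the right-hand side of Equation (8).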

For different initial data this leads to different solutions with common boundary conditions. However, to construct tilings, they must also match, which is not yet guaranteed. To fulfill this requirement, we proceed in two different ways. The first approach considers only one computation on the larger domain, uses the extracted values and fluxes *φ*<sub>1</sub><sup>*n*+1</sup> and **n** · ∇*φ*<sub>1</sub><sup>*n*+1</sup> at Γ<sub>1</sub> from only two sides Γ<sub>1,1</sub> and Γ<sub>1,2</sub> (*N* = 2), and specifies them also on the opposite sides for the computations on the smaller domain. For *M* different initial conditions this generates *M* individual samples which match each other at the boundaries if translated by the domain size in the *x*- or *y*-direction. This allows a very flexible arrangement of the samples, but has the drawback of frequent recurrence at the boundaries.

The second approach also begins with a computation on the larger domain with boundary Γ<sub>0</sub>. Subsequent computations, however, are performed on intermediate domains: these extend to the bounds of Γ<sub>0</sub> in directions where a fresh structure is desired at the boundary, and are restricted to the bounds of the smaller domain Γ<sub>1</sub> in directions where the structure is to be continued from an already existing neighbor tile by using its stored values and fluxes *φ*<sub>1</sub> and **n** · ∇*φ*<sub>1</sub> at Γ<sub>1</sub>. This requires small modifications of the finite-element formulation in Equations (7) and (8), using only parts of Γ<sub>1</sub> instead of the whole inner boundary. When enough samples have been computed to define all *N* boundary sides, the remaining samples are computed on Γ<sub>1</sub> with all sides fixed by boundary conditions from earlier computations.

The simplest setup meeting our design goal of non-obvious recurrence of boundaries requires five different tiles *A*<sub>0</sub>, *B*<sub>0</sub>, *C*<sub>0</sub>, *D*<sub>0</sub> and *E*<sub>0</sub>, which can be assembled into a row that matches the same row displaced by a few tiles above and below. Then, five rows of five tiles each form a square which can be continued indefinitely in all directions (see Figure 10c). This setup allows for ten unique boundary sides *s*<sub>1</sub> through *s*<sub>10</sub> (see Figure 11). Inside our 5 × 5 square of tiles, recurrences occur neither within a row, nor within a column, nor diagonally, which would not be possible with a smaller number of unique tiles. With only two or three different tiles, repetitions would have to occur at least diagonally (see Figure 10a), and with four different tiles it is only possible to build a unique 4 × 2 rectangle without diagonal repetitions (see Figure 10b).
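The combinatorics of this layout can be checked with a short script. The sketch below models the row displacement as a cyclic shift (our reading of the "displaced by a few tiles" construction; the paper's actual displacement may differ) and tests whether equal tiles ever touch horizontally, vertically or diagonally:

```python
import itertools

def layout(n, shift):
    """n x n grid of tile labels 0..n-1, each row being the previous
    row cyclically shifted by `shift` positions."""
    return [[(j + shift * i) % n for j in range(n)] for i in range(n)]

def has_adjacent_repeat(grid):
    """True if any equal labels touch horizontally, vertically or diagonally."""
    n = len(grid)
    for i, j in itertools.product(range(n), repeat=2):
        for di, dj in [(0, 1), (1, 0), (1, 1), (1, -1)]:
            i2, j2 = i + di, j + dj
            if 0 <= i2 < n and 0 <= j2 < n and grid[i][j] == grid[i2][j2]:
                return True
    return False

# with five tiles, shifts of 2 or 3 avoid repeats in rows, columns
# and both diagonal directions; with three tiles no shift manages this
print([s for s in range(1, 5) if not has_adjacent_repeat(layout(5, s))])  # → [2, 3]
print([s for s in range(1, 3) if not has_adjacent_repeat(layout(3, s))])  # → []
```

This confirms the counting argument: five labels suffice for a shifted-row layout free of horizontal, vertical and diagonal adjacency repeats, while three do not.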

On the macroscopic scale our pattern still has obvious repetitions, since we need to continue the same square of 5 × 5 tiles in all directions to form the infinite tiling. To remedy this, we generate variants of our five initial tiles, *A*<sub>1</sub>, *B*<sub>1</sub>, *C*<sub>1</sub>, *D*<sub>1</sub> and *E*<sub>1</sub>, which exactly copy the boundaries of their respective prototype with index zero but differ in the interior. In our infinite tiling, each tile of a specific prototype is then replaced randomly with either the original 0-variant or the new 1-variant of that type. With only two variants per type we already allow for more than 33 million (2<sup>25</sup>) distinct squares of 5 × 5 tiles. For our interactive visualization software, we settled on these *M* = 10 tiles due to memory constraints. However, for printed realizations such as wallpapers, further variants could be computed, making it even harder to spot recurrences.

The initial five tiles must be computed consecutively, since their boundaries depend on each other. Starting with *A*<sub>0</sub>, we have no constraints yet, and the simulation thus yields four fresh boundary sides. The simulation domain Γ<sub>0</sub> exceeds the region of interest Γ<sub>1</sub> in all four connecting directions because we need to avoid level lines that are always perpendicular to the boundary of Γ<sub>1</sub> (Figure 11a).

**Figure 10.** (**a**) Layout of 3 × 3 tiles: diagonal recurrence cannot be avoided. (**b**) Layout of 4 × 4 tiles: the bottom two rows are duplicates of the top two; all other arrangements would yield diagonal recurrence. (**c**) Layout of 5 × 5 tiles: recurrences in non-obvious pattern.

**Figure 11.** Initial five tiles and their connecting boundary sides. (**a**) Tile *A*<sub>0</sub> is simulated on Γ<sub>0</sub>; (**b**,**d**) *B<sub>tmp</sub>* and *C<sub>tmp</sub>* are intermediate steps in preparation of *B*<sub>0</sub> and *C*<sub>0</sub>, respectively; (**c**,**e**–**g**) tiles *B*<sub>0</sub> through *E*<sub>0</sub> are simulated on domains that extend Γ<sub>1</sub> towards Γ<sub>0</sub> only in directions where fresh boundary data are required; the other boundaries are fixed.

As a next step we simulate a temporary tile *B<sub>tmp</sub>* to prepare the run for *B*<sub>0</sub>. Obviously, tile *B*<sub>0</sub> has to match *A*<sub>0</sub>'s right-hand boundary (*s*<sub>2</sub> in Figure 11b) at its own left-hand side. However, this is not the only constraint: our 5 × 5 layout requires that *B*<sub>0</sub> also meets *A*<sub>0</sub> in its top-right corner (see Figure 10c). Thus, in our temporary step, besides fixing the left boundary, we also fix the top boundary with a mirrored version *s*<sup>*m*</sup><sub>3</sub> of the data from *A*<sub>0</sub>'s bottom side. The mirroring is required because the top-right corner of *B*<sub>0</sub> must conform to the bottom-left corner of *A*<sub>0</sub>. The simulation domain for *B<sub>tmp</sub>* exceeds the region of interest in only two directions, bottom and left, where we obtain data for two fresh boundary sides (Figure 11b). Now we are ready to compute our second tile *B*<sub>0</sub>. We fix the right, bottom and left boundaries with data from our previous two simulations, and this time leave the top side unconstrained to obtain a fresh boundary that is only fixed at the two corners where it will match tile *A*<sub>0</sub> (Figure 11c).

For tile *C*<sub>0</sub> we require another preliminary run *C<sub>tmp</sub>* to fix the bottom-left corner where it meets *A*<sub>0</sub>. We use the mirrored top side *s*<sup>*m*</sup><sub>1</sub> of *A*<sub>0</sub> at *C<sub>tmp</sub>*'s bottom to account for that, and fix the left and top sides where *C*<sub>0</sub> shares sides with *B*<sub>0</sub> and *A*<sub>0</sub>. Only at the left-hand side do we obtain a fresh set of boundary data (Figure 11d). For *C*<sub>0</sub> we release the bottom constraint of *C<sub>tmp</sub>* and obtain another fresh boundary (Figure 11e). The last tile with a fresh boundary is *D*<sub>0</sub>; all its sides except the right-hand one are already constrained by previous computations (Figure 11f). For *E*<sub>0</sub> all four sides are fully predetermined (Figure 11g); we need to simulate it only for its interior.

All variant tiles *A*<sub>1</sub> through *E*<sub>1</sub> (and further ones if desired) can now be computed in parallel, since their boundaries are already known. They are all restricted to Γ<sub>1</sub> and do not require extraction of boundary values and fluxes, and are hence computationally less expensive than the initial runs.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/2073-8994/11/4/444/s1, Figure S1: tile1, Figure S2: tile2, Figure S3: tile3, Figure S4: tile4, Video S5: Coarsening of 3D structure, visualized by showing *φ* = 1 in Ω, Video S6: Coarsening of 3D structure, visualized by showing *φ* = 0.5 at *z* = 0.01, 0.13, 0.25, 0.37, 0.49, Video S7: Coarsening of 3D structure, visualized by showing *φ* = 0.5 at *z* = 0.01, 0.13, 0.25, 0.37, 0.49 projected to *z* = 0, Video S8: Journey through space and time in the visualization software in black and white, Video S9: Journey through space and time in the visualization software with colored tiles, Video S10: Rubik's cube.

**Author Contributions:** F.S. and A.V.; methodology, F.S.; software, F.S. and A.V.; writing—original draft preparation, F.S.; visualization, A.V.; supervision, A.V.; project administration, A.V.; funding acquisition.

**Funding:** This research was funded by DFG grant number SFB/TRR96 A7. We acknowledge computing resources provided by the Jülich Supercomputing Center under grant number HDR06.

**Acknowledgments:** We acknowledge preliminary contributions by Folke Post and Lars Ludwig and fruitful discussions with Rainer Backofen and Simon Praetorius.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
