Topology optimization (TO) aims at establishing the optimal material distribution over the design domain with reference to performance objectives (e.g., compliance) [17], subject to predefined loading and boundary conditions. Its application spectrum extends over several fields, including implants manufacturing [18], aerospace engineering [19], architectural engineering [20], design of materials [21], fluid mechanics [22], structural engineering [4] and others.
Several approaches have been proposed for solving TO problems, the main ones being [17]: (i) the Level-Set method, (ii) the Density method, (iii) the Phase field method, (iv) the Topological derivative method and (v) the Evolutionary method. SIMP is the most well-known variant of the density method, proposed in the 1990s [23,24,25]. The general formulation of a TO problem is summarized below:

$$\min_{\mathbf{x}} \; f(\mathbf{x}) \quad \text{subject to:} \quad \mathbf{K}\mathbf{U} = \mathbf{P}, \quad \mathbf{g}(\mathbf{x}) \le \mathbf{0}, \quad 0 \le x_i \le 1, \; i = 1, 2, \ldots, N \quad (5)$$

where $f(\mathbf{x})$ denotes the criterion/objective (e.g., compliance of the structural system) to be optimized, $\mathbf{x}$ refers to the vector of unknowns, i.e., the FE densities, $\mathbf{K}$ is the stiffness matrix of the structural system, vectors $\mathbf{P}$ and $\mathbf{U}$ contain the loads and displacements, respectively, and $\mathbf{g}(\mathbf{x})$ is the vector of constraint functions (volume fraction, etc.).
3.1. The SIMP Approach
SIMP is conceivably the most commonly employed approach for solving TO problems. In structural topology optimization (STO) problems, the system compliance $C$ is the most widely adopted performance indicator. If $N$ FEs are used to discretize the design domain $\Omega$, the distribution of material over $\Omega$ is expressed via the density vector $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$, where the components $x_i \in [0, 1]$ correspond to the density values of each FE: $x_i = 0$ indicates no material over the $i$th FE domain, while $x_i = 1$ indicates that the $i$th FE domain is filled with material. Therefore, Equation (5) can be written as follows:

$$\min_{\mathbf{x}} \; C(\mathbf{x}) \quad \text{subject to:} \quad \frac{V(\mathbf{x})}{V(\mathbf{x}_0)} = V_t, \quad \mathbf{K}\mathbf{U} = \mathbf{P}, \quad 0 \le x_i \le 1 \quad (6)$$

where $C(\mathbf{x})$ is the compliance for the current material distribution $\mathbf{x}$, and $V(\mathbf{x})$, $V(\mathbf{x}_0)$ and $V_t$, respectively, denote the volume values corresponding to the current and initial density vectors $\mathbf{x}$ and $\mathbf{x}_0$, as well as the target volume value of the optimized domain. In SIMP, the Young modulus $E$ is associated via a power law to the density value of each FE as follows:

$$E_i(x_i) = x_i^{\,p}\,E_0 \quad (7)$$

where $E_0$ is the Young modulus of the solid material and the penalization parameter $p$ is usually set equal to 3. Thus, compliance can be expressed as follows:

$$C(\mathbf{x}) = \mathbf{U}^T \mathbf{K} \mathbf{U} = \sum_{i=1}^{N} x_i^{\,p}\,\mathbf{u}_i^T\,\mathbf{k}_0\,\mathbf{u}_i \quad (8)$$

where $\mathbf{u}_i$ is the displacement vector of the $i$th FE and $\mathbf{k}_0$ its stiffness matrix for the solid material. Accordingly, the formulation of Equation (6) can be expressed as

$$\min_{\mathbf{x}} \; C(\mathbf{x}) = \sum_{i=1}^{N} x_i^{\,p}\,\mathbf{u}_i^T\,\mathbf{k}_0\,\mathbf{u}_i \quad \text{subject to:} \quad \frac{V(\mathbf{x})}{V(\mathbf{x}_0)} = V_t, \quad \mathbf{K}\mathbf{U} = \mathbf{P}, \quad 0 \le x_i \le 1 \quad (9)$$
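As a minimal numerical illustration of the SIMP power law above (the base modulus `E0` and the densities below are illustrative values, not tied to a specific test case):

```python
import numpy as np

E0, p = 1.0, 3.0                 # illustrative base Young's modulus; p = 3 as usual
x = np.array([0.0, 0.4, 1.0])    # example element densities

# Power-law interpolation: an intermediate density of 0.4 yields only
# 0.4**3 = 0.064 of the full stiffness, which is why penalization pushes
# the optimizer toward near-0/1 (void/solid) material distributions.
E = x ** p * E0
print(E)
```

The penalization thus makes intermediate densities structurally inefficient, so the optimizer tends to avoid them.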
In the literature, various search algorithms have been used in conjunction with SIMP for solving the optimization problem of Equation (9). The most commonly employed ones are the Optimality Criteria (OC) algorithm and the Method of Moving Asymptotes (MMA).
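A minimal sketch of one OC density update, in the spirit of the classic 99-line SIMP implementations; the move limit, damping exponent `eta` and the dummy sensitivities below are illustrative assumptions, and in an actual run the compliance sensitivities would come from an FE analysis:

```python
import numpy as np

def oc_update(x, dc, dv, vol_frac, move=0.2, eta=0.5):
    """One Optimality Criteria update step (sketch): a multiplicative
    density update with bisection on the Lagrange multiplier so that
    the volume constraint is satisfied."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # damped multiplicative update, limited by move limits and the box [0, 1]
        x_new = np.clip(x * (-dc / (lmid * dv)) ** eta,
                        np.maximum(0.0, x - move),
                        np.minimum(1.0, x + move))
        if x_new.mean() > vol_frac:   # too much material -> raise multiplier
            l1 = lmid
        else:
            l2 = lmid
    return x_new

# Toy usage with dummy (negative) compliance sensitivities
rng = np.random.default_rng(0)
x = np.full(100, 0.4)             # uniform start at the target volume fraction
dc = -rng.random(100) - 0.1       # dC/dx is negative for compliance
dv = np.ones(100)                 # uniform volume sensitivities
x_new = oc_update(x, dc, dv, vol_frac=0.4)
```

With uniform volume sensitivities, the bisection drives the mean density back to the target volume fraction after each update; MMA would instead build convex separable approximations of the objective and constraints.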
3.2. The Deep Learning-Assisted Topology Optimization (DLTOP) Methodology
SIMP is one of the most widely employed, highly accurate and robust methodologies, applicable to a wide spectrum of TO problems. Nonetheless, in the field of STO, the required model scale, complexity and discretization level are continuously increasing, and thus the application of SIMP to such problems encounters bottlenecks, even in modern computing facilities, due to the substantial associated demand in computing time and resources.
This insufficiency of SIMP motivated the development of the Deep Learning-assisted Topology Optimization (DLTOP) methodology by the authors of [1]. In pattern recognition problems, DBNs are able to discover the different levels of representation of the data nonlinearity, and it is this capability that DLTOP exploits to reduce the number of iterations required by SIMP. The novelty of DLTOP lies in the use of deep learning techniques (DBNs) to propose a close-to-final optimized configuration of a structural system, based upon the system configuration after only a few SIMP iterations. This eliminates the greatest portion of the SIMP iterations required to obtain the final optimized structural topology, thus resulting in a substantial reduction in computing time and resources, while overcoming potential bottlenecks that would otherwise be encountered. The novel DLMU methodology proposed in this paper partially relies on the idea of DLTOP; however, it is additionally assisted by information provided by reduced models of the design domain, thus allowing for further acceleration of SIMP. Before presenting the features and advantages of DLMU, it is useful to provide here a detailed description of DLTOP.
DLTOP is a combination of SIMP and DBNs, where a DBN is trained to transform the pattern of density fluctuation of the FEs generated during the starting iterations of SIMP into the final distribution of the density values over the design domain. The prediction capability of the DBN is built through a training procedure performed once, over benchmark TO problems. The main advantage of DLTOP is that the DBN need not be retrained before being applied to any STO problem, irrespective of the FE mesh configuration and type (structured/unstructured), domain dimensions, target density, loading and boundary conditions, SIMP implementation features (e.g., filter value), etc. This key feature of DLTOP is attributed to its implementation such that every FE is handled separately, without requiring information regarding its location over the domain, its specific boundary and loading conditions, etc. The validation of DLTOP on several benchmark topology optimization test examples indicates a reduction in the number of iterations originally required by SIMP by more than one order of magnitude. Expectedly, the gain in computing time offered by DLTOP is proportional to the TO problem size. DLTOP is described hereafter with the use of a qualitative example.
Let us consider a design domain discretized with $N$ FEs. In every iteration $t$ of SIMP, the density value of every FE is updated. The fluctuation of the density value $x_i$ of the $i$th FE can thus be expressed as a function of $t$ as follows:

$$x_i = f_i(t), \quad t = 1, 2, \ldots, T \quad (10)$$
The density value fluctuation with respect to the SIMP iterations varies drastically for different FEs [1], due to their relative position over the domain with respect to loads, supports, etc. In this sense, each FE represents a different optimization history of density values with respect to the SIMP iterations, analogous to a discrete time-series. An initially uniform density value of 0.40 is specified for all FEs over the mesh, equal to the target volume ratio of the system of 40%; this constitutes common practice in SIMP implementation [26,27]. The computational demand of SIMP depends on the number of FEs. Considerable demands are posed even by moderately discretized domains, and this becomes more pronounced for finer 3D meshes. As an example, the STO problem of a 3D bridge test case, discretized with a moderately dense mesh of 83,000 FEs, requires up to 7 h for performing 200 iterations of SIMP in serial CPU execution and up to 1 h in a parallel GPU environment [4]. The DLTOP methodology can be applied to both serial and parallel, CPU or GPU execution implementations.
DLTOP can be seen as a two-phase methodology. In the first phase, SIMP performs a limited number of iterations; the histories of the density values generated during these iterations are used as input data for the DBN. At the end of the first phase, the DBN proposes an optimized distribution of the density values over the design domain based on this input data. Upon evaluation of the input data by the trained DBN, the latter performs a discrete jump from the pool of density values of the initial iterations to a close-to-final density for every FE. Subsequently, as part of the second phase, SIMP carries out a limited number of additional iterations, which corresponds to a fine-tuning process over the DBN-proposed distribution of optimized densities. A schematic representation of the two-phase DLTOP procedure applied to a single, randomly selected FE is depicted in Figure 2, where the abscissa corresponds to the iterations performed by SIMP and the ordinate to the density of the specific FE.
Classification represents a challenging problem class for predictive models. In contrast to the well-known regression predictive models, classification models require information related to the complexity of the sequence dependence among the input parameters. For the STO problem, the density values of the early SIMP iterations represent the sequence dependence information that feeds the proposed (classification) methodology. The sequence of discrete-time data, i.e., the density value of every FE over the $T$ iterations required for convergence by SIMP, is generated by SIMP and stored in a density matrix $\mathbf{D}$, as shown below:

$$\mathbf{D} = \begin{bmatrix} x_1(1) & x_1(2) & \cdots & x_1(T) \\ x_2(1) & x_2(2) & \cdots & x_2(T) \\ \vdots & \vdots & \ddots & \vdots \\ x_N(1) & x_N(2) & \cdots & x_N(T) \end{bmatrix} \quad (11)$$
A limited number of iterations of the optimization procedure, equal to the initial $t$ iterations, is used as time-series input data for training the DBN, while the vector of the final SIMP iteration, i.e., the $T$th column of $\mathbf{D}$, is used as the target vector during the DBN training.
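The assembly of the density matrix and the input/target split described above can be sketched as follows; the sizes `N`, `T` and `t`, and the random stand-in for the SIMP density history, are illustrative assumptions:

```python
import numpy as np

N, T, t = 6, 50, 8                       # illustrative: 6 FEs, 50 iterations, 8 used as input
rng = np.random.default_rng(1)

# density_history stands in for the densities produced by SIMP:
# one vector of N element densities per iteration
density_history = [rng.random(N) for _ in range(T)]

D = np.column_stack(density_history)     # density matrix D: one row per FE, one column per iteration
X_input = D[:, :t]                       # first t iterations: time-series input for the DBN
y_target = D[:, T - 1]                   # T-th column: target vector for DBN training
print(D.shape, X_input.shape, y_target.shape)
```

Each row of `D` is the discrete time-series of one FE, so the DBN can handle every element separately, as noted above.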