#### *2.1. Structure of the Optimization Environment*

The optimization environment used in this work is shown in Figure 1. It consists of several optimization stages and additional functionalities that reduce the simulation effort.

The input of the optimization environment consists of the problem description, the electromagnetic IM models used in the respective optimization stages, and the optimization parameters that are varied during the optimization. The problem description comprises the physical problem, including its constraints, and the decision parameters used to define the objective function and the fitness function, respectively. The electromagnetic machine models used in the respective optimization stages are defined using the methodical model selection approach presented in [27]. Based on the problem description, this approach determines the most suitable problem-specific IM model for each stage of the optimization process. This reflects the fact that a lower level of model detail may suffice for a rough estimate of the fitness value during a global search, whereas high precision is required during a local search to converge to the actual minimum. The optimization parameters are also defined methodically: building on the model selection methodology, the parameter selection approach presented in [27] determines the variable optimization parameters based on their influence on the optimization problem. For each of these parameters, lower and upper bounds are defined.
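The role of the parameter bounds in the problem description can be illustrated with a minimal sketch; all parameter names and values are illustrative and not taken from [27]:

```python
from dataclasses import dataclass

@dataclass
class OptimizationParameter:
    """A decision variable of the IM design problem with its search bounds."""
    name: str
    lower: float        # lower bound of the admissible interval
    upper: float        # upper bound of the admissible interval
    significant: bool   # user classification for the successive steps

# Hypothetical parameter set for an induction machine geometry
parameters = [
    OptimizationParameter("stator_outer_diameter_mm", 150.0, 250.0, True),
    OptimizationParameter("air_gap_mm", 0.3, 1.0, True),
    OptimizationParameter("slot_opening_mm", 1.0, 4.0, False),
]

def clip_to_bounds(p: OptimizationParameter, value: float) -> float:
    """Keep a candidate value inside the admissible interval."""
    return max(p.lower, min(p.upper, value))
```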

The optimization itself consists of five optimization stages, with the result of each stage serving as the initial solution of the subsequent stage. The first stage is performed using the SA method, which features good global convergence but a low local convergence speed. The SA optimization is used to identify a local group of solutions. The following stages apply a successive hybrid optimization method with faster local convergence. A hybrid optimization method combines a stochastic and a deterministic search method; in this paper, the combination of the ES method with the PS method forms such a hybrid. In the ES method, both the population size and the number of generations required for stable convergence increase with the number of optimization parameters and thus with the dimension of the solution space. This relationship can be expressed as a function of the number of optimization variables *n* by *O*(*n*²) [24]. To reduce the solution effort, the hybrid optimization is performed successively in two consecutive steps. In the first step, the significant optimization parameters are varied while the less significant optimization parameters are held at constant values. In the second step, the parameters optimized in the first step are kept constant and the less significant parameters are varied. The classification of the optimization parameters into significant and less significant ones is done by the user.
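The two-step successive scheme can be sketched as follows; a simple seeded (1+1)-style random search stands in for the actual ES + PS hybrid, and a toy quadratic replaces the electromagnetic fitness evaluation:

```python
import random

def search_stage(f, x, free_keys, iters=2000, sigma=0.2, seed=0):
    """Stand-in for one hybrid ES + PS step: only free_keys are varied."""
    rng = random.Random(seed)
    best = dict(x)
    for _ in range(iters):
        cand = dict(best)
        for k in free_keys:
            cand[k] += rng.gauss(0.0, sigma)  # mutate free parameters only
        if f(cand) < f(best):
            best = cand                       # greedy acceptance of improvements
    return best

def successive_optimization(f, x0, significant, less_significant):
    # Step 1: vary the significant parameters, hold the rest constant
    x1 = search_stage(f, x0, significant)
    # Step 2: freeze the step-1 result, vary the less significant parameters
    return search_stage(f, x1, less_significant)

# Toy quadratic as a stand-in for the fitness function
fitness = lambda x: (x["a"] - 1.0) ** 2 + (x["b"] + 2.0) ** 2 + (x["c"] - 0.5) ** 2
x_opt = successive_optimization(fitness, {"a": 0.0, "b": 0.0, "c": 0.0},
                                significant=["a", "b"],
                                less_significant=["c"])
```

Each step searches a lower-dimensional subspace, which is exactly the source of the effort reduction described above.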

The hybrid successive optimization approach reduces the solution space and thus the computational effort within the individual stages. Since the stages act independently of each other rather than performing a holistic global search for the optimum, this constitutes a heuristic approach to solving the optimization problem.

To further reduce the simulation effort, an ANN is introduced, which determines the objective function without the need for electromagnetic simulation. The database of training, selection and testing instances for the ANN is built up during the first stage of the optimization environment. Once the database has reached a minimum size, the ANN is constructed and applied in the further course of the optimization.
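The interplay between the database and the surrogate can be sketched as follows; a 1-nearest-neighbour lookup stands in for the trained ANN so that the example stays dependency-free, and all names are illustrative:

```python
class SurrogateManager:
    """Fills a database during the first stage, then serves a surrogate."""

    def __init__(self, simulate, min_db_size=50):
        self.simulate = simulate      # expensive electromagnetic simulation
        self.min_db_size = min_db_size
        self.database = []            # (parameter vector, fitness) pairs

    def _surrogate(self, x):
        # Stand-in for the trained ANN: value of the closest stored sample
        dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(self.database, key=lambda s: dist(s[0], x))[1]

    def evaluate(self, x):
        if len(self.database) < self.min_db_size:
            y = self.simulate(x)      # build up training data during SA
            self.database.append((tuple(x), y))
            return y
        return self._surrogate(x)     # later calls avoid the simulation
```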

In the following, the individual parts of this multi-stage optimization environment are explained in more detail. First, the model and parameter selection approaches are introduced. Second, the optimization methods and the fitness function shared by all of them are explained. Finally, the ANN is described.

**Figure 1.** Flow chart of the multi-stage optimization environment.

#### *2.2. Model Selection Approach in the Optimization*

Whereas in the global search a lower level of model detail may be sufficient for a rough estimate of the fitness value, in the local search a high precision is required to converge to the true minimum. For this reason, the model selection approach presented in [27] is used to methodically assign different machine models of the IM to each step of the optimization environment. In the context of the optimization environment, the output variables and effects to be considered are those variables that influence the decision parameters of the optimization problem. The model selection approach allows different levels of detail to be considered and defined in the individual optimization steps, so that both precision and solution effort can be adjusted flexibly.

The model selection approach assigns a constant model to the SA optimization. Since SA performs a global search to identify a suitable local group, a lower level of detail may be sufficient in this step, which in turn reduces the solution effort of the resulting model.
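A minimal SA sketch under these assumptions (Gaussian neighbourhood, geometric cooling; both are illustrative choices, and the coarse constant model is represented by a toy fitness function):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, iters=3000, sigma=0.5, seed=1):
    """Basic SA: accept worse candidates with Boltzmann probability."""
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x0), f(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]  # random neighbour
        fc = f(cand)
        # Accepting worse solutions at high temperature enables global search
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = list(cand), fc
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

The low local convergence speed mentioned above is visible here: near the end of the schedule SA behaves like a slow random-walk descent, which motivates handing the result over to the hybrid stages.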

For the ES method, a model vector of arbitrary length with models of increasing level of detail is used. For each entry, the desired level of detail is specified, and the model selection approach determines the most suitable model that fulfils it and adds it to the model vector. In general, the model selection does not distinguish between the first and second step of the successive optimization. The ES optimization starts in both steps with the first model in the model vector. If the minimum fitness value *f*(*x*min,k) of generation *k* has not changed by more than a tolerance *ε* over a given number of generations *n*, i.e., if

$$\left| f(\vec{x}_{\text{min},k-n}) - f(\vec{x}_{\text{min},k}) \right| < \varepsilon \tag{1}$$

is satisfied, the next most accurate machine model in the model vector is selected. This detects convergence to a local minimum and supports it with a more precise model. Provided that the criterion from (1) is met again, the next model in the model vector is selected, and so on.
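Criterion (1) can be implemented directly; the model names and the default values for *n* and *ε* below are illustrative:

```python
def should_switch(fitness_history, n, eps):
    """fitness_history[k] is the minimum fitness f(x_min,k) of generation k."""
    if len(fitness_history) <= n:
        return False
    # Criterion (1): |f(x_min,k-n) - f(x_min,k)| < eps
    return abs(fitness_history[-1 - n] - fitness_history[-1]) < eps

class ModelVector:
    """Models ordered by increasing level of detail."""

    def __init__(self, models):
        self.models = models
        self.index = 0

    def update(self, fitness_history, n=10, eps=1e-4):
        """Advance to the next, more precise model when (1) is satisfied."""
        if should_switch(fitness_history, n, eps) and self.index < len(self.models) - 1:
            self.index += 1
        return self.models[self.index]
```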

Since the PS method is a deterministic and thus local optimization method, a high level of detail of the IM machine model is required. Otherwise, the geometry changes required for convergence to the actual minimum may not be represented adequately, which can even degrade the solution. The model selection approach assigns a constant model to the PS optimization, which is likewise independent of the step of the successive optimization. This model is already used in the last iteration of the ES method in order to support the local convergence there with a more precise model and thus to prepare the transition to the PS optimization.
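The deterministic, local character of the PS method can be illustrated with a minimal compass-style pattern search; this is a simplified stand-in, and the exact PS variant used in this work may differ:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Compass search: poll along coordinate axes, contract the mesh on failure."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):          # poll along each coordinate axis
            for s in (step, -step):
                cand = list(x)
                cand[i] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5                  # contract the mesh and poll again
    return x, fx
```

Because every poll evaluates the fitness at a concrete geometry variation, the quality of the final solution depends directly on how precisely the model represents these small geometry changes, which is why the most detailed model is assigned to this stage.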
