3.2. Design Modification Procedure
CAESES software has been selected for the parametric design and variation of the parent hull.
The initial geometry is represented by a set of basic curves providing topological information in the longitudinal direction (design waterline, centerline, deck-line) and a set of 32 section curves. All of them are either F-splines or B-splines. F-splines are used to describe areas or characteristic lines subjected to variation, which directly affect the geometrical hull form parameters to be optimized.
The geometry is split into three regions: the main hull, the stern region, and the bow bulb, with specific design variables assigned to each of them in order to ease the optimization process. The hull form is described by different kinds of surfaces, which reflect the changes in the parameters under investigation. Surfaces are generated either by interpolating the parametrically modeled section curves or by using the so-called engine curves. The approximation of the initial surfaces is very satisfactory and allows for the establishment of eight design variables in total. Five of them refer to the bow bulb and the main hull, while the remaining three refer to the stern region.
Figure 2 depicts the geometry delivered by CAESES software, while
Figure 3 presents the control lines and the surface of the bow bulb.
The design variables of
Table 1 are described in the following:
dX8% aft FP: The longitudinal shift of the frame located 8% of the ship length L aft of the FP.
FOS_dZ: Vertical variation of the lower point of the Flat-Of-Side (FOS) area.
Bulb_dL: Change of the bulbous bow total length.
Angle_WL: Angle of the waterline at the design draft T.
Angle_Prof: Angle of rise of the bulbous bow profile curve.
TransomLow_zPos: Vertical position of the lowest point of the transom (Figure 4).
Curve_xPos: Longitudinal position of the stern profile curve (Figure 4).
TubeEnd_xPos: Variation of the stern tube axis length (Figure 4).
The range of the design variables and their values for the parent hull form are presented in
Table 1. The constraints of the geometrical variables have been specified by a trial and error method to reduce the number of non-realistic or, more generally, invalid variant hull forms. However, this does not mean that all the variants generated during the optimization process are realistic hulls.
3.5. Viscous Flow Calculation
Regarding the CFD solver, an in-house (U)RANS solver, MaPFlow, was employed. MaPFlow is a cell-centered CFD solver that can use both structured and unstructured grids and is capable of solving compressible flows, as well as fully incompressible flows using the artificial compressibility method. For the reconstruction of the flow field, a 2nd order piecewise linear interpolation scheme is used. The limiter of Venkatakrishnan [
18] is utilized when needed. The viscous fluxes are discretized using a central 2nd order scheme.
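For reference, a generic second-order piecewise linear reconstruction of a cell-averaged quantity q at a face f of cell i can be written as (the exact form used in MaPFlow may differ):
\[ q_f = q_i + \Phi_i \, (\nabla q)_i \cdot \mathbf{r}_{if}, \qquad \Phi_i \in [0,1], \]
where r_if is the vector from the cell centroid to the face centroid and Φ_i is the value of the Venkatakrishnan limiter, which reduces the reconstruction towards first order near sharp gradients.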
Turbulence closures implemented in MaPFlow include the one-equation turbulence model of Spalart–Allmaras (SA) [
19] as well as the two-equation turbulence model of Menter (k-ω SST) [
20]. Regarding laminar-to-turbulent transition modeling, the correlation-based γ-Reθ model of Langtry and Menter [
21] has been implemented.
MaPFlow can handle both steady and unsteady flows. Time integration is performed in an implicit manner, permitting large CFL numbers. The unsteady calculations use a 2nd order time-accurate scheme combined with the dual time-stepping technique to facilitate convergence. MaPFlow is able to handle moving/deforming geometries through the Arbitrary Lagrangian–Eulerian (ALE) formulation.
Regarding the free surface treatment, the Volume of Fluid (VOF) method is employed, and two-phase flows are described by two immiscible fluids whose interface is defined implicitly as a discontinuity in the density field. The system of equations is solved in a non-segregated manner, utilizing the Kunz preconditioner, as discussed by Yue and Wu [
22], to remove density dependencies from the system’s eigenvalues.
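As a reference for the VOF treatment, in a generic formulation (not necessarily identical to the one implemented in MaPFlow) the volume fraction α of the water phase is transported with the flow and the mixture properties follow from it:
\[ \frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \, \mathbf{u}) = 0, \qquad \rho = \alpha \, \rho_{w} + (1-\alpha)\,\rho_{a}, \qquad \mu = \alpha \, \mu_{w} + (1-\alpha)\,\mu_{a}, \]
so that the air–water interface appears implicitly as the region where α, and hence the density, changes abruptly.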
For the CFD simulations, in order to reduce the computational domain, half of the hull is resolved, with symmetry conditions applied on the side. The high Reynolds number of full-scale simulations poses a significant challenge, since fully resolved simulations are computationally prohibitive. Following a grid-independence study, a grid consisting of approximately 5 million cells is employed. In the wall region, wall functions are employed, while a structured-like region composed of 25 layers is used around the solid boundary. Lastly, the hull surface was resolved using approximately 200,000 elements. A snapshot of the computational grid employed can be seen in
Figure 6.
The span of the domain was 5 LBP in the streamwise direction, 3 LBP in the side direction, and 6 LBP in the vertical direction. On the ship hull, a no-slip condition was applied, symmetry (zero gradient in the normal direction) conditions were applied on the symmetry plane, while a freestream condition was imposed on the rest of the domain. Additionally, a damping zone was adopted to avoid reflections from the generated wave system.
Regarding the y+ values, the average y+ was 150, with a maximum of 300; for this reason, wall functions were adopted. This was a necessary compromise in terms of computational cost in order to make full-scale simulations feasible.
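For reference, the non-dimensional wall distance governing the applicability of wall functions is
\[ y^{+} = \frac{u_\tau \, y}{\nu}, \qquad u_\tau = \sqrt{\tau_w/\rho}, \]
where y is the distance of the first cell center from the wall, u_τ the friction velocity, τ_w the wall shear stress, ρ the density, and ν the kinematic viscosity. Average values of y+ around 150 place the first computational point inside the logarithmic layer, where standard wall functions are applicable.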
For all the CFD simulations, a time step of 0.1 s is used with the second-order implicit scheme, which yields a convective CFL number of around 3. Although this is a relatively large time step, it was adopted to save computational time, since the flows considered here converge to a steady state. It is evident from Figure 7 that both the selected time step and grid spacing are tuned to properly capture the resulting wave system.
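The convective CFL number quoted above follows the usual definition
\[ \mathrm{CFL} = \frac{|u| \, \Delta t}{\Delta x}, \]
where |u| is the local convective velocity, Δt = 0.1 s the time step, and Δx the local cell size; for flows converging to a steady state and solved with an implicit scheme, values of this order are acceptable.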
3.6. Artificial Neural Network (ANN)
During the last decade, the application of Artificial Neural Networks (ANNs) has expanded into every scientific field. From plain vanilla networks to unsupervised deep convolutional networks, ANNs are able to model or detect complex nonlinear relationships within systems without using the physics of the system. Furthermore, they are a valuable tool, since they can bridge fragmented data to efficiently identify system characteristics or make up for a lack of analytical relations within complex systems.
Artificial neural network theory is based on the analysis of biological nervous systems, which consist of neurons and their connections. A mathematical model of a neural network is created based on this structure and signal transmission. ANNs contain internal parameters that are specified through the process of training. Such parameters are the weights by which the inputs of each neuron are multiplied so that the corresponding output emerges. Explicitly, the output of a neuron is calculated from the sum of all its inputs, weighted by the weights of the connections from the inputs to the neuron. Additionally, a bias term is added to this sum. This weighted sum is often called the activation, which is then passed through a (usually nonlinear) activation function to produce the output. Ultimately, the output of the last neuron, and hence the output of the model as a whole, is compared against actual values, and the difference between predicted and real values is estimated through a metric function. The choice of this metric is a critical part of the optimization algorithm. The training aims at the minimization of the mean difference, or loss as it is called in the ANN field, by updating the weights in each iteration.
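As an illustration of the above (a minimal sketch, not the implementation used in this study), the output of a single neuron with a sigmoid activation can be computed as follows:

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term (the "activation"),
    # passed through a nonlinear activation function (here a sigmoid).
    activation = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-activation))

# Hypothetical example with three inputs
x = np.array([0.2, -0.5, 0.8])   # neuron inputs
w = np.array([0.4, 0.1, -0.3])   # connection weights (learned during training)
b = 0.05                         # bias term (learned during training)
y = neuron_output(x, w, b)

# Training repeatedly compares such outputs against the known target values,
# measures the discrepancy with a loss function, and updates w and b to reduce it.
```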
In the field of naval architecture, artificial neural networks (ANN) have gained popularity. In recent years, applications of ANNs for modelling and predicting vessel hull form [
23], calm water resistance [
24,
25], added resistance in waves [
26], speed and fuel consumption [
27], maneuverability qualities [
28], and seakeeping characteristics [
29] are presented in several studies. In most cases, the use of artificial neural networks offers satisfactory results.
Compared to the above-mentioned publications, this paper's distinctive feature lies in the relatively low number of input data available for the training of an ANN. As discussed in the Introduction, the ANN is trained with only 27 examples as input, which correspond to the 27 (=3³) combinations of three values per variable, namely the two selected limiting values and the value of the parent hull, for the three chosen stern geometrical variables affecting the optimization scheme. Usually, the stern design variables are two to four, and three values per variable are sufficient to train a reliable ANN, taking into account that the variation of the variables is limited. The use of a limited number of CFD calculations is a major advantage of the proposed methodology. The derived ANN is handled by a GA, which, after 440 evaluations, reaches the optimum combination of the stern variables.
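As a simple illustration of how the 27 training cases arise (the variable names are those of Table 1, but the numerical levels below are placeholders, not the actual bounds):

```python
from itertools import product

# Three levels per stern variable: lower limit, parent hull value, upper limit.
# The values below are placeholders; the actual ranges are listed in Table 1.
stern_levels = {
    "TransomLow_zPos": (-1.0, 0.0, 1.0),
    "Curve_xPos":      (-1.0, 0.0, 1.0),
    "TubeEnd_xPos":    (-1.0, 0.0, 1.0),
}

# All combinations of the three levels give the 3**3 = 27 CFD training cases.
training_cases = list(product(*stern_levels.values()))
assert len(training_cases) == 27
```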
Selection of a suitable artificial neural network structure is probably the hardest part of the problem and is critical to obtaining accurate predictions. Since we are dealing with a regression problem, the multilayer perceptron (MLP) concept was applied, consisting of an input, a hidden, and an output layer, and utilizing a backpropagation learning algorithm. The development of the ANN was performed in Python, assisted by the TensorFlow/Keras neural network library.
The usual search process for the optimal neural network goes through the following steps: data normalization, division of the data set, selection of the ANN model architecture, and finally the assessment of the ANN model results. In this study, the small number of available training data significantly hindered this process. Input data were normalized using a custom Min/Max scaler function centered around the parent hull resistance value with a 20% reserve. This reserve was used in order to ease the first step of the optimization process, the search for the minimum of the ANN's function, by allowing values to be extrapolated beyond the observed ranges. Moreover, during the ANN model's training, no validation set was used. It was decided to use every available data point for the more efficient training of the network and to accept the risk of validating the model's predictions only at the final stage of the optimization procedure via CFD calculations.
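A possible form of such a scaling is sketched below; the exact expression used is not reported, so this function is only an assumption consistent with the description (Min/Max scaling centered on the parent hull value with a 20% reserve):

```python
import numpy as np

def scale_resistance(r, r_parent, reserve=0.20):
    # Assumed Min/Max-type scaling centered on the parent hull resistance.
    # The span of the observed data is widened by a 20% reserve so that the
    # trained ANN can be queried slightly beyond the range of the 27 cases.
    r = np.asarray(r, dtype=float)
    half_span = (1.0 + reserve) * np.max(np.abs(r - r_parent))
    return 0.5 + (r - r_parent) / (2.0 * half_span)  # maps data into (0, 1)
```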
In order to identify the ANN architecture that is best suited to the problem, many trials were conducted with different configurations. The number of input neurons was set to three, representing the three variables defined at the vessel's stern region, and a single output node was used, referring to the target value of the CFD calculation. The rest of the ANN configuration, as determined by the number of hidden layers, the number and type of neurons comprising each of them, the training algorithm, the learning rate, and the backpropagation optimizer method, was established through exhaustive numerical experiments, probably an inevitable stage when developing an ANN. An overview of the performance of the best ANN models is presented in
Table 3. The number of neurons at each layer and their activation functions can also be seen.
In this work, the Mean Squared Error (MSE) function was used as the loss, or cost, function to be minimized during the models' training. The Mean Absolute Error (MAE) was also monitored. The progression of the MSE and MAE values during training is presented in
Figure 8.
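For completeness, the two metrics have their standard definitions,
\[ \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2, \qquad \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|, \]
where y_i is the CFD-computed target of the i-th training case, ŷ_i the corresponding ANN prediction, and N the number of training samples.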
The network (N-1) that showed the lowest MSE and MAE was selected as the meta-model for the evaluation of the hull variants. It consists of the three stern parameters as inputs, two hidden layers comprising six and four nodes, respectively, and the output layer. In an effort to overcome the problem of vanishing gradients and saddle points, the sigmoid function was used as the activation function at the input, first hidden, and output layers of the network, while tanh was utilized at the second hidden layer. Stochastic gradient descent with momentum was selected over Adam as the backpropagation optimizer; the learning rate and momentum were set to 0.16 and 0.7, respectively. The selected ANN model's architecture and performance are presented in
Figure 9 and
Figure 10. The Pearson coefficient was calculated at 0.975.
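A minimal Keras sketch consistent with the selected N-1 configuration is given below; it is illustrative only and does not reproduce the authors' exact code or data pipeline:

```python
from tensorflow import keras

# N-1 architecture: 3 inputs (stern variables), hidden layers of 6 and 4 nodes,
# sigmoid activations on the first hidden and output layers, tanh on the second.
model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(6, activation="sigmoid"),
    keras.layers.Dense(4, activation="tanh"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# SGD with momentum (preferred over Adam in this work), lr = 0.16, momentum = 0.7,
# MSE as the loss and MAE monitored as an additional metric.
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.16, momentum=0.7),
    loss="mse",
    metrics=["mae"],
)

# model.fit(x_train, y_train, epochs=150_000, verbose=0)  # all 27 cases, no validation split
```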
The small size of the training data set led to an increased number of epochs required to obtain satisfactory results. For the best configuration found, the number of epochs was set on the order of 150,000. Despite the large number of epochs, the training procedure required less than 5 min to complete on an Intel i7-9700K.