**1. Introduction**

With the rapid development of computer technology, computational fluid dynamics (CFD) has been significantly advancing. Ship designers have introduced optimization technology into ship design and have combined it with CFD technology to develop a CFD-based method for automatic hull form optimization. The CFD method substantially reduces the time and economic costs compared with a ship model test. However, for complex optimization problems or scenarios with many design variables, it can have a very high computation cost and even lead to the curse of dimensionality. Therefore, effectively reducing the number of design variables and the dimensionality of the design space is a major problem in hull form design optimization.

**Citation:** Chang, H.; Wang, C.; Liu, Z.; Feng, B.; Zhan, C.; Cheng, X. Research on the Karhunen–Loève Transform Method and Its Application to Hull Form Optimization. *J. Mar. Sci. Eng.* **2023**, *11*, 230. https://doi.org/10.3390/jmse11010230

Academic Editors: Carlos Guedes Soares and Serge Sutulo

Received: 1 November 2022; Revised: 8 January 2023; Accepted: 8 January 2023; Published: 16 January 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Currently, the above problem is solved mainly using the sensitivity analysis (SA) method. The SA method can qualitatively or quantitatively assess the effects of the input variables on the output objective function. Cheng et al. [1] proposed an uncertainty and sensitivity analysis framework for ship motion data, in which the sensitivity and uncertainty of the samples or weights generated by an artificial-neural-network-based surrogate model are analyzed. Hu et al. [2] adjusted an airfoil curve using control points and a free-form deformation technique and reduced the dimensionality of control variables by SA. Guan et al. [3] used an SA procedure to estimate the effects of the values assigned to the key calculation parameters of a fully parametric hull form optimization algorithm on the calculation process and results. Hamed [4] numerically optimized a trimaran hull form using an optimization platform, with the total resistance and the propeller intake flow defined as the study objectives. Moreover, information about highly significant regions of a hull was obtained by surface SA based on an adjoint solver. Jung [5] introduced examples of SA of the effects on the added resistance and speed loss in waves caused by the variations in ship dimensions. SA has been proven to be a feasible method for dimensionality reduction, and variance-based SA has been widely used to determine variables that are unimportant for this purpose. Therefore, Liu et al. [6] implemented variance-based SA in a hull form optimization model and identified a variable that did not affect dimensionality reduction. Guan et al. [7] developed an automatic calculation framework for the resistance of a small-waterplane-area twin hull at different speeds, for which a fully parametric model was established to realize automatic geometric transformation. They also performed an SA of all parameters to the resistance and a correlation analysis between the design variables and the resistance, including the calculation and examination of the correlation coefficients. Jeon et al. [8] identified important hull form parameters and estimated their optimal values that satisfied the required intercept time of mission competence and the turning rate of a specific underwater vehicle based on SA. Coppedè et al.
[9] proposed a computational framework for the hydrodynamic shape optimization of complex hull forms and applied it to improve the calm-water performance of the KRISO Container Ship. They also performed a preliminary SA of the mesh size to reduce the computational burden of the CFD solver. Fu et al. [10] proposed a multiobjective optimization method for autonomous underwater gliders based on the fast elitist nondominated sorting genetic algorithm (NSGA-II) and performed an SA of the variables. In hull form optimization, models of the input variables and performance indicators are established to compare the effects of the variables on the ship performance. This enables identification of the important variables and elimination of the variables with the minimum effects on the performance, to realize dimensionality reduction.

SA requires calculation of the effects of all variables on the output and, thus, has a high computational cost. Different from SA, the Karhunen–Loève (K–L) transform reduces data dimensionality by projecting data in the direction of the eigenvectors corresponding to large eigenvalues based on an analysis of the data matrix. K–L transform–based dimensionality reduction needs to consider only the relationships between the variables in the sample data matrix and does not require calculating the effects of the variables on the objective function.

Considering that K–L and directional seismic wave methods can extract target signals and suppress random noise, Yue et al. [11] used these methods to solve the problem of difficulty in the separation of weak effective signals. Fan et al. [12] developed a spatiotemporal model based on the K–L decomposition, a multilayer perceptron, and a long short-term memory network for estimating temperature distributions by a three-step procedure. Ahuja et al. [13] suggested that a large-sized discrete Fourier transform of a colored random process approximates the behavior of the K–L transform in terms of achieving approximate decorrelation between frequency-domain terms. Trudu et al. [14] proposed a new method for computing the K–L transform for application to complex voltage data for the detection and noise reduction of astronomical signals. Siripatana et al. [15] applied a generalized K–L expansion that incorporated the construction of a reference basis of spatial modes and a coordinate transformation to the prior field. Reed and Roberts [16] applied the K–L transform to a 44-group version of the 2-D C5G7 benchmark problem. Allaix and Carbone [17] proposed a method for the discretization of random fields by combining a K–L series expansion and the finite element method. Azevedo et al. [18] studied the numerical approximation of a homogeneous Fredholm integral equation of the second kind associated with the K–L expansion of Gaussian random fields. Liu et al. [19] proposed to
encode the third spectral information with an adaptive K–L transform and mathematically proved that interspectral correlations are preserved in two-dimensional randomly encoded spatial information. Ai [20] obtained the K–L expansion and distribution identity for a demeaned stationary Ornstein–Uhlenbeck process and applied them to the small-deviation asymptotic behavior of the L2 norm and Laplace transform of the process. Chowdhary and Najm [21] introduced a Bayesian K–L expansion procedure that allows developing a probabilistic model based on the principal components and accounts for the inaccuracies due to a limited sample size. Feng et al. [22] suggested that a K–L representation of a target random medium can be efficiently estimated by projecting reconstructed samples onto the K–L basis. Diez et al. [23–25] elaborated the mathematical principle behind the K–L transform and applied it to the optimization of the David Taylor Model Basin (DTMB) 5415 hull form for the first time. Using control points as the design variables based on a free-form deformation technique, they reduced the initial number of design variables from 29 to 4 using the K–L transform. Concurrently, 95% variability of the original design space was retained. In addition, the convergence speed was improved while achieving optimization results approximate to those without dimensionality reduction.

In hull form optimization, the dimensionality of the hull form geometric information matrix can be reduced using the K–L transform to extract several principal components to represent the geometric characteristics of the full shape. This allows for rapid reconstruction of the geometric information matrix of a new hull form and the control of hull form variability using a few variables from the perspective of geometric transformation. Consequently, complex CFD computations are avoided, and the time cost is reduced.

The remainder of this paper is structured as follows: Section 2 presents the mathematical principle, derivation, and geometric implications of the K–L transform and the detailed steps and procedure of using it to reduce dimensionality. Section 3 describes the methods for solving the eigenvalues and eigenvectors of the covariance matrix. Section 4 proposes a K–L transform–based method for hull form reconstruction, which serves as the basis for applying the K–L transform to the geometric transformation of a hull form. Section 5 presents the multidisciplinary platform developed for a comprehensive optimization of ship hydrodynamic performance and its application to the optimization of the DTMB 5415 hull form. Section 6 summarizes this study and provides an outlook for future research.

#### **2. K–L Transform Method**

#### *2.1. Basic Principle of K–L Transform*

Harold [26–28] proposed the K–L transform, by which discrete signals are transformed into a series of uncorrelated coefficients. The K–L transform describes a sample using fewer features, thereby reducing the dimensionality of the feature space. Its most salient advantage is decorrelation, and it is the best transform in terms of the mean square error (MSE). Its mathematical definition is as follows.

For a discrete case, a uniform sampling is performed on *x*(*t*) in interval *a* ≤ *t* ≤ *b*; thus, *x* can be expressed using the following vector form:

$$\mathbf{x} = \left[ x(t\_1), x(t\_2), \dots, x(t\_n) \right]^T \tag{1}$$

*x* is expanded using a system of orthonormal vectors *uj*, *j* = 1, 2, ..., ∞, as follows:

$$x = \sum\_{j=1}^{\infty} c\_j u\_j \tag{2}$$

*x* is estimated with a finite series of *d* terms as follows:

$$\hat{x} = \sum\_{j=1}^{d} c\_j u\_j \tag{3}$$

Thus, the MSE is

$$\varepsilon = E[\left(x - \hat{x}\right)^T \left(x - \hat{x}\right)] \tag{4}$$

because $u\_i^T u\_j = \begin{cases} 1, & j = i \\ 0, & j \neq i \end{cases}$ and $c\_j = u\_j^T x$,

$$\varepsilon = E[\sum\_{j=d+1}^{\infty} u\_j^T \mathbf{x} \mathbf{x}^T u\_j] = \sum\_{j=d+1}^{\infty} u\_j^T E[\mathbf{x} \mathbf{x}^T] u\_j \tag{5}$$

Setting Ψ = *E*[*xxT*] yields

$$\varepsilon = \sum\_{j=d+1}^{\infty} u\_j^T \Psi u\_j \tag{6}$$

When the orthogonality condition is satisfied, the following equations can be obtained using the Lagrange multiplier method:

$$g(u\_j) = \sum\_{j=d+1}^{\infty} u\_j^T \Psi u\_j - \sum\_{j=d+1}^{\infty} \lambda\_j [u\_j^T u\_j - 1] \tag{7}$$

$$\frac{d}{du\_j} g(u\_j) = 0, \; j = d+1, \ldots, \infty \tag{8}$$

$$(\Psi - \lambda\_j I) u\_j = 0, \; j = d+1, \ldots, \infty \tag{9}$$

Thus, it can be concluded that when *x* is expanded in the eigenvector direction of matrix Ψ, the MSE of truncation reaches the minimum value. When *x* is approximated with *d* number of *uj*, the MSE of truncation is

$$\varepsilon = \sum\_{j=d+1}^{\infty} \lambda\_j \tag{10}$$

$$\beta = \frac{\sum\_{j=1}^{d} \lambda\_j}{\sum\_{j=1}^{\infty} \lambda\_j} \tag{11}$$

where *λ<sup>j</sup>* represents the eigenvalues of matrix Ψ.

Thus, the MSE of truncation when *x* is expanded using the eigenvectors corresponding to the largest *d* number of eigenvalues of the matrix is smaller than that from the expansion of *x* using *d* coordinates in any other orthogonal coordinate system. The percentage ratio of the sum of the *d* largest eigenvalues to the sum of all eigenvalues, *β*, is referred to as the feature contribution rate, to which a threshold is assigned during dimensionality reduction. The orthogonal coordinate system consisting of the *d* eigenvectors used in the expansion is referred to as the *d*-dimensional K–L transform coordinate system in the *D*-dimensional space where *x* is located. The coefficient for the expansion of *x* in the K–L coordinate system is referred to as the K–L transform of *x*.

### *2.2. K–L Transform–Based Data Dimensionality Reduction Process*

The K–L transform represents an orthogonal transform $A \in R^{n \times n}$, where vectors $X \in R^n$ are mapped onto vectors $Y \in R^n$ and the components of vector $Y$ are uncorrelated, that is,

$$\mathbf{C} = E\left\{ (X - \overline{X})(X - \overline{X})^T \right\} \tag{12}$$

where $\overline{X}$ is the mean of the original data and *C* is the covariance matrix of the demeaned data.

The covariance matrix is subjected to eigendecomposition, and the resulting eigenvalues are arranged in descending order as follows:

$$\mathbf{C}z\_i = \lambda\_i z\_i \tag{13}$$

where $\lambda\_i$ represents the eigenvalues and $z\_i$ represents the eigenvectors corresponding to $\lambda\_i$.

The dimensionality after dimensionality reduction depends on the threshold assigned to *β*, with a larger threshold leading to a larger retained variance and a smaller MSE. If the predefined threshold is satisfied and the dimensionality of the original matrix can be reduced to *K*, the threshold of *β* is defined as

$$\beta = 1 - \varepsilon = 1 - \frac{\sum\_{i=K+1}^{n} \lambda\_i}{\sum\_{i=1}^{n} \lambda\_i} = \frac{\sum\_{i=1}^{K} \lambda\_i}{\sum\_{i=1}^{n} \lambda\_i} \tag{14}$$

For a transformation matrix *A* consisting of the eigenvectors corresponding to the largest *K* eigenvalues, the mapping relationship from *X* to *Y* is

$$Y = A\left(X - \overline{X}\right) \tag{15}$$

Figure 1 shows a flowchart of the dimensionality reduction process using the K–L transform.

**Figure 1.** Flowchart of K–L transform–based dimensionality reduction.
As shown in Figure 1, the core of dimensionality reduction using the K–L transform is obtaining the eigenvalues of the covariance matrix. In the following, we discuss and present the investigation on the methods for solving the eigenvalues and eigenvectors of the covariance matrix and reducing the solution time.
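The procedure of Sections 2.1 and 2.2 can be condensed into a short NumPy sketch: demean the samples, eigendecompose the covariance matrix, keep enough eigenvectors to reach the contribution-rate threshold *β*, and project. The function name `kl_reduce` and the toy data are illustrative assumptions, not part of the authors' platform; the 1/(*n*−1) covariance normalization is a common convention the derivation leaves implicit.

```python
import numpy as np

def kl_reduce(X, beta_threshold=0.95):
    """K-L transform dimensionality reduction of X (n_samples x n_features),
    keeping enough eigenvectors to reach the contribution rate beta_threshold."""
    mean = X.mean(axis=0)
    Xc = X - mean                           # demean each feature column
    C = Xc.T @ Xc / (X.shape[0] - 1)        # covariance matrix, Eq. (12)
    lam, V = np.linalg.eigh(C)              # eigendecomposition, Eq. (13)
    order = np.argsort(lam)[::-1]           # sort eigenvalues in descending order
    lam, V = lam[order], V[:, order]
    beta = np.cumsum(lam) / lam.sum()       # cumulative contribution rate, Eq. (11)
    K = int(np.searchsorted(beta, beta_threshold) + 1)
    A = V[:, :K].T                          # transformation matrix of top-K eigenvectors
    Y = (A @ Xc.T).T                        # mapping Y = A (X - mean), Eq. (15)
    return Y, A, mean, lam

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 8))  # correlated toy data
Y, A, mean, lam = kl_reduce(X, 0.95)
print(Y.shape[1], "components retained out of", X.shape[1])
```

Because the retained eigenvectors are orthonormal, the components of `Y` are uncorrelated, which is the decorrelation property noted in Section 2.1.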

#### **3. Methods for Obtaining the Eigenvalues of a Matrix**

The core step of the K–L transform method is solving the eigenvalues and eigenvectors of the covariance matrix of the sample data matrix. The methods for obtaining the eigenvalues of matrices mainly include the Jacobi iteration and rapid principal component analysis (PCA) methods, as described below.

#### *3.1. Jacobi Iteration Method*

Low-order linear equations, such as *AX* = *B*, are typically solved using the pivot elimination method. However, engineering problems frequently involve computer-aided solutions of large sparse matrix equations, which need to be obtained using an iterative method. Its major principle is as follows:

Consider the following equations:

$$\begin{cases} a\_{11}x\_1 + a\_{12}x\_2 + \dots + a\_{1n}x\_n = b\_1 \\ a\_{21}x\_1 + a\_{22}x\_2 + \dots + a\_{2n}x\_n = b\_2 \\ \dots \\ a\_{n1}x\_1 + a\_{n2}x\_2 + \dots + a\_{nn}x\_n = b\_n \end{cases} \tag{16}$$

where coefficient matrix *A* is nonsingular and $a\_{ii} \neq 0$. These equations can be transformed to obtain the following iteration equations:

$$\begin{cases} \mathbf{x}^{(0)} = \left(x\_1^{(0)}, x\_2^{(0)}, \dots, x\_n^{(0)}\right)^T \\ x\_i^{(k+1)} = \frac{1}{a\_{ii}} \left(b\_i - \sum\_{j=1, j \neq i}^{n} a\_{ij} x\_j^{(k)}\right) \end{cases} \tag{17}$$

where $\mathbf{x}^{(0)}$ represents the initial vector and $\mathbf{x}^{(k)}$ represents the vector at the *k*-th iteration.

The matrix form is as follows: First, the coefficient matrix, *A*, of the equations is decomposed into three components, that is, *A* = *L* + *D* + *U*:

$$D = \begin{bmatrix} a\_{11} \\ & a\_{22} \\ & & \ddots \\ & & & a\_{nn} \end{bmatrix} \\ L = \begin{bmatrix} 0 \\ a\_{21} & 0 \\ \vdots & & \ddots \\ a\_{n1} & a\_{n2} & \cdots & 0 \end{bmatrix} \\ U = \begin{bmatrix} 0 & a\_{12} & \cdots & a\_{1n} \\ & 0 & \cdots & a\_{2n} \\ & & \ddots & \vdots \\ & & & 0 \end{bmatrix} \tag{18}$$

where *D*, *L*, and *U* are the diagonal, lower triangular, and upper triangular matrices, respectively.

Thus, the following matrix formulation can be derived:

$$D\mathbf{x} = -(L + U)\mathbf{x} + b \tag{19}$$

which can be rearranged as

$$\mathbf{x} = -D^{-1}(L + U)\mathbf{x} + D^{-1}b \tag{20}$$

which can be simplified, with $B = -D^{-1}(L + U)$ and $d = D^{-1}b$, as

$$\mathbf{x} = B\mathbf{x} + d\tag{21}$$

Thus, the Jacobi iteration equations can be expressed in the following matrix form:

$$\begin{cases} \mathbf{x}^{(0)} = \left(x\_1^{(0)}, x\_2^{(0)}, \dots, x\_n^{(0)}\right)^T \\ \mathbf{x}^{(k+1)} = B\mathbf{x}^{(k)} + d \end{cases} \tag{22}$$

As the above process shows, the Jacobi iteration method involves a simple calculation process, requiring only matrix–vector multiplications in each iteration. However, it has notable disadvantages, including a low convergence speed and a large memory requirement.
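The iteration of Equation (22) can be sketched as below. This is a minimal illustration, assuming a small, diagonally dominant test system (for which the Jacobi iteration is guaranteed to converge); the matrix and right-hand side are made up.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b with the Jacobi iteration x^(k+1) = B x^(k) + d,
    where B = -D^{-1}(L + U) and d = D^{-1} b (Eqs. (20)-(22))."""
    D = np.diag(np.diag(A))
    LU = A - D                               # strictly lower + upper parts, L + U
    Dinv = np.diag(1.0 / np.diag(A))
    B = -Dinv @ LU
    d = Dinv @ b
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(max_iter):
        x_new = B @ x + d
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# diagonally dominant toy system (illustrative values)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))
```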

#### *3.2. Rapid PCA Method*

PCA requires the calculation and eigendecomposition of the covariance matrix of the input sample data to obtain its eigenvalues and eigenvectors. For a sample data matrix *X* of size *n* × *d* (where *n* and *d* are the numbers of samples and feature dimensions, respectively), its covariance matrix *S* is a *d* × *d* square matrix. Extremely complex calculations are required when *d* is large. The rapid PCA algorithm solves the transposed matrix of the covariance matrix, instead of directly solving the covariance matrix. The dimensionality of the covariance matrix of the transposed matrix depends on the number of samples. When the number of feature dimensions is much larger than the number of samples, the transposed matrix can be very easily solved using the rapid PCA algorithm.

The derivation of the rapid PCA algorithm is as follows: For an *n* × *d* matrix of a sample dataset *Z* with the number of samples, *n*, much smaller than the number of feature dimensions, *d*, the covariance matrix is

$$\mathbf{S} = \mathbf{Z}^{\mathsf{T}} \mathbf{Z} \tag{23}$$

The transposed matrix of the covariance matrix is

$$R = ZZ^T\tag{24}$$

*S* and *R* have the same nonzero eigenvalues:

$$(ZZ^T)v = \lambda \ v \tag{25}$$

Multiplying both sides of the above equation by $Z^T$ yields

$$(Z^T Z) Z^T v = \lambda \ Z^T v \tag{26}$$

Thus, the eigenvectors of *S* are $Z^T v$.

From the above equations, the eigenvalues and eigenvectors of the original covariance matrix can be obtained by solving those of its transposed matrix.
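The relationship in Equations (25)–(26) can be checked numerically with a small sketch: for a sample matrix with few samples and many feature dimensions, eigendecompose the small *n* × *n* matrix and map its eigenvectors back with $Z^T$. The synthetic matrix `Z` is an illustrative stand-in.

```python
import numpy as np

# Rapid-PCA idea: for Z with n samples << d features, eigendecompose the
# small n x n matrix R = Z Z^T instead of the d x d matrix S = Z^T Z;
# the nonzero eigenvalues coincide, and eigenvectors of S are Z^T v (Eq. (26)).
rng = np.random.default_rng(1)
n, d = 20, 5000                     # few samples, many feature dimensions
Z = rng.normal(size=(n, d))
Z -= Z.mean(axis=0)                 # demean, as PCA requires

R = Z @ Z.T                         # n x n, cheap to decompose
lam, v = np.linalg.eigh(R)
lam, v = lam[::-1], v[:, ::-1]      # descending eigenvalue order

U = Z.T @ v                         # eigenvectors of S = Z^T Z, unnormalized
U /= np.linalg.norm(U, axis=0)      # normalize columns

# verify S u = lam * u for the leading eigenpair, without forming S explicitly
S_u = Z.T @ (Z @ U[:, 0])
print(np.allclose(S_u, lam[0] * U[:, 0]))
```

Note that the 5000 × 5000 matrix *S* is never formed; only products with `Z` are needed, which is the source of the efficiency gain described above.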

From the principles of the two methods described above, the Jacobi iteration method is typically suitable for obtaining computer-aided eigenvalues of matrices. It can deal with moderate computational complexity; however, it requires a long solution time in cases of massive computations. The rapid PCA algorithm is suitable when the number of feature dimensions is much larger than the number of samples. By solving the feature matrix of the transposed matrix, it transforms a high-dimensional matrix to be solved into a low-dimensional matrix, thereby improving the computational efficiency.

We previously developed a multidisciplinary platform for a comprehensive optimization of ship hydrodynamic performance (SHIPMDO-WUT) using a radial basis function (RBF)–based interpolation method [29–33] for hull form modification. In this method, the number of discrete points on a hull surface usually reaches thousands, resulting in a covariance matrix with tens of thousands of dimensions and making its direct eigendecomposition considerably difficult. Because the number of feature dimensions is much larger than the number of samples, the covariance matrix was solved using the rapid PCA method, thereby substantially reducing the dimensionality of the eigendecomposition and improving the solution efficiency.

#### **4. K–L Transform–Based Hull Form Reconstruction Method**

The core of the K–L transform–based hull form reconstruction method is solving the problem that the design variables in the projected dimensionality-reduced matrix are not points on the hull surface and do not have physical relevance. This problem makes obtaining a new hull form directly by RBF modification impossible. This study developed a transformation-matrix-based method for offset matrix reconstruction that can ensure surface smoothness of the new reconstructed hull form. In the following, the hull form reconstruction process is described, including the presentation of the derivation of its equations and the application of the process to actual cases.

#### *4.1. Hull Form Reconstruction Process*

In the K–L transform–based dimensionality reduction, a sample offset matrix of hull forms obtained by varying the original hull form is subjected to dimensionality reduction to obtain projection and eigenvector matrices. Each column of the projection matrix following the dimensionality reduction represents a new design variable, which is a combination of the initial variables and does not correspond to a discrete point on the hull surface. Because the new design variables do not have physical relevance, a new hull form cannot be obtained by surface modification, thus necessitating another method to generate it. The process of generating a new hull form using the new design variables is referred to as hull form reconstruction. Figure 2 shows a flowchart of the hull form reconstruction process.


**Figure 2.** Flowchart of hull form reconstruction.

In the hull form reconstruction method, the core problem is the reconstruction of the new offset matrix from the dimensionality-reduced projection matrix. In the following, this core problem is discussed.

*N* sample hull forms are generated. For each hull form, *M* offset points are selected, and their *x*-, *y*-, and *z*-coordinates are arranged as a geometric information matrix, which can be expressed as

$$x\_i^T = (x\_1, y\_1, z\_1, x\_2, y\_2, z\_2, \dots, x\_M, y\_M, z\_M) \tag{27}$$

The geometric information matrices of the *N* sample hull forms are used as the basis function. Subsequently, the offset matrix of each hull form is decomposed into a linear combination of several sample hull forms as follows:

$$\mathbf{x}\_{i} = \overline{\mathbf{x}} + \sum\_{j=1}^{N} a\_{ij} \mathbf{u}\_{j} \tag{28}$$

The mean of each column is subtracted from the elements in the corresponding column. The geometric characteristics of each demeaned sample hull form can be expressed as

$$\widetilde{x}\_i = x\_i - \overline{x} \tag{29}$$

The demeaned geometric information matrices of all sample hull forms can be expressed as

$$\widetilde{X} = (\widetilde{x}\_1, \widetilde{x}\_2, \dots, \widetilde{x}\_N) \tag{30}$$

The covariance matrix of the geometric information matrices can be obtained as

$$S = \widetilde{X}\widetilde{X}^T = \sum\_{i=1}^{N} \widetilde{x}\_i \widetilde{x}\_i^T \tag{31}$$

The covariance matrix is subjected to eigendecomposition to obtain the eigenvalues and the corresponding eigenvectors as follows:

$$\widetilde{X}\widetilde{X}^T u\_j = \lambda\_j u\_j \tag{32}$$

where *λ<sup>j</sup>* represents the eigenvalues and *uj* represents the eigenvectors.

The threshold of *β*, which determines the dimensionality after dimensionality reduction, is calculated as

$$\beta = \frac{\sum\_{j=1}^{M\_P} \lambda\_j}{\sum\_{j=1}^{3M} \lambda\_j} \tag{33}$$

The eigenvalues are arranged in descending order. The eigenvectors corresponding to the first *Mp* eigenvalues constitute the transformation matrix,

$$U = (u\_1, u\_2, \dots, u\_{M\_P}) \tag{34}$$

Each geometric information matrix is reconstructed to obtain the offsets of the corresponding reconstructed hull form as follows:

$$x\_i^{rec} = \overline{x} + \sum\_{j=1}^{M\_P} a\_{ij} u\_j \tag{35}$$

where $a\_{ij} = x\_i^T u\_j$ represents the new design variables, $u\_j$ represents the retained eigenvectors, and $\overline{x}$ is the mean geometric information matrix, that is, a matrix consisting of the means of the geometric information of all sample hull forms.

In summary, in the K–L transform/dimensionality reduction–based hull form reconstruction, the offset matrix of a hull form can be obtained using Equation (35). This is possible when the mean geometric information matrix, values of the new design variables, and retained eigenvector matrix (i.e., transformation matrix) are known, thereby generating a new hull form.
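The reconstruction of Equation (35) can be sketched as below. The "hull" offsets here are synthetic stand-ins, not real Series 60 or DTMB 5415 geometry, and the choice of five retained components simply mirrors the example in Section 4.2.

```python
import numpy as np

# Sketch of Eq. (35): reconstruct an offset vector from the mean geometry,
# the retained eigenvectors, and a set of new design variables a_ij.
rng = np.random.default_rng(2)
N, dim = 50, 3 * 40                       # 50 sample hulls, 40 points x 3 coords
X = rng.normal(size=(N, dim)) @ rng.normal(size=(dim, dim)) * 0.01

x_mean = X.mean(axis=0)
Xc = (X - x_mean).T                       # columns = demeaned sample hulls
lam, U = np.linalg.eigh(Xc @ Xc.T)        # eigendecomposition of S, Eq. (32)
lam, U = lam[::-1], U[:, ::-1]            # descending eigenvalue order

Mp = 5                                    # retained principal components
U = U[:, :Mp]                             # transformation matrix, Eq. (34)

a = U.T @ Xc[:, 0]                        # new design variables for sample hull 1
x_rec = x_mean + U @ a                    # reconstructed offsets, Eq. (35)
print(x_rec.shape)
```

Feeding other values of `a` within the sampled range produces new hull offsets without any surface modification step, which is exactly the role Equation (35) plays in the reconstruction process.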

#### *4.2. Examples of Hull Form Reconstruction*

As shown in Figure 3, the Series 60 hull form was optimized using SHIPMDO-WUT. First, the Series 60 hull surface was discretized. The model consisted of two surfaces, and each surface was discretized in the U and V directions into a 40 × 30 point cloud. As shown in Figure 4, 10 variable points were selected and defined as variables in the *Y*-direction. Table 1 lists their respective variation ranges. Using a uniform experimental design, 50 samples were collected, and the surface was varied using the RBF method; consequently, a sample offset matrix of 50 sample hull forms was obtained, with each hull surface represented as a 40 × 30 point cloud. The offset matrix was subjected to dimensionality reduction using the K–L transform procedure, with the threshold of *β* set as 95%. Accordingly, a total of 180,000 eigenvalues were obtained and arranged in descending order. Except for the first 50 eigenvalues, all eigenvalues were approximately zero. Based on Table 2, the sum of the first 5 eigenvalues is 10.17 and that of the first 50 eigenvalues is 10.61. Based on Table 3, the first 10 eigenvalues are significant in magnitude. Figure 5 shows the distribution of the 10 largest eigenvalues. The percentage ratio of the sum of the first 5 eigenvalues to the sum of all eigenvalues is 95.9%, which is higher than the preset threshold (95%), as shown in Figure 6. Thus, the eigenvectors corresponding to the first 5 eigenvalues were selected to constitute the transformation matrix.

**Figure 4.** Distributions of variable and fixed points on the Series 60 hull surface (blue: fixed points; red: variable points).

**Table 1.** S60 variation ranges of variables.


**Table 2.** Five largest eigenvalues.


**Table 3.** Ten largest eigenvalues of S60.


**Figure 5.** Distributions of the 10 largest eigenvalues.

**Figure 6.** Contribution rates of the 10 largest eigenvalues.

The hull surface was reconstructed by feeding a random set of new variables $a\_{ij}$ into Equation (35). Figures 7–9 show the line plans of the reconstructed hull forms.

**Figure 7.** Line plan of reconstructed hull form KLE-5.

**Figure 8.** Line plan of reconstructed hull form KLE-4.

**Figure 9.** Line plan of reconstructed hull form KLE-3.

First, the threshold of *β* for dimensionality reduction was set as 95% by default. The eigenvectors corresponding to the 5 largest eigenvalues were used to constitute the transformation matrix. The reconstructed hull form (KLE-5) had smooth molded lines, indicating that the 5 principal components were adequate to retain most of the geometric information in the original space and, thus, could replace the initial 10 design variables.

When the threshold of *β* was set as 90%, the eigenvectors corresponding to the 4 largest eigenvalues were used to constitute the transformation matrix. The percentage ratio of the sum of the 4 largest eigenvalues to the sum of all eigenvalues was 94.1%. Figure 8 shows the line plan of the reconstructed hull surface (KLE-4).

When the threshold of *β* was changed to 85%, the number of eigenvectors retained in the transformation matrix decreased to 3, because the percentage ratio of the sum of the 3 largest eigenvalues to the sum of all eigenvalues was 89.8%. Figure 9 shows the line plan of the hull form reconstructed after dimensionality reduction (KLE-3).

The body plans of KLE-4 and KLE-3 exhibited unsmooth molded lines compared with that of KLE-5. Based on the principle of the K–L transform, a smaller threshold of *β* leads to a smaller number of retained eigenvalues, a smaller number of corresponding eigenvectors, and a lower dimensionality of the transformation matrix. Consequently, a lower dimensionality of the design space is achieved after dimensionality reduction. In addition, exclusion of eigenvectors corresponding to large eigenvalues leads to an increased MSE and loss of information and, thus, an unsmooth surface of the reconstructed hull form. Thus, an excessively small threshold of *β* for dimensionality reduction using the K–L transform will lead to retention of an inadequate number of principal components, loss of information, and finally, an unsmooth surface of the reconstructed hull form. For hull form optimization, a 95% threshold of *β* for dimensionality reduction is safe to ensure that the major geometric information in the design space is retained after dimensionality reduction.

#### **5. Optimization of Wave-Making Resistance Coefficient for DTMB 5415**

#### *5.1. Optimized Object*

Figure 10 shows a three-dimensional (3D) model of DTMB 5415. Table 4 lists the major parameters of the model. *Lwl* is the waterline length, *Bwl* is the maximum waterplane width, *Lcb* is the longitudinal center of buoyancy (LCB), and *T* is the draft. *Cb* is the block coefficient, *Cm* is the midship section coefficient, ∇ is the volume of displacement, and *Swet* is the wetted surface area.

**Figure 10.** Schematic of a 3D model of DTMB 5415.

**Table 4.** Major parameters of DTMB 5415.


#### *5.2. Description of Optimization Problem*

#### (1) Objective of optimization

The objective of optimization was to minimize the wave-making resistance (WMR) coefficient *Cw* at Froude number *Fr* = 0.22.

$$\min f\_{obj} = C\_w \quad \text{s.t.} \quad Fr = 0.22 \tag{36}$$

#### (2) Setting of optimization variables

The DTMB 5415 hull surface was discretized into a 40 × 30 point cloud. Some points on the bow, centerline of the stern, and deck side line were fixed. Figure 11 shows the locations of the fixed points (blue dots) on the hull surface. Ten points on the bow are selected as variable points *Y*1–*Y*10. The red dots in Figure 11 represent the positions of the variable points on the hull surface. The variable points are defined as variable in the *Y*-direction; that is, their *Y*-coordinates are the design variables to be optimized. Table 5 summarizes the variation ranges of the variable points.

**Figure 11.** Distributions of variable and fixed points (blue: fixed points; red: variable points).

**Table 5.** DTMB variation ranges of variables.

| | *Y*1 | *Y*2 | *Y*3 | *Y*4 | *Y*5 | *Y*6 | *Y*7 | *Y*8 | *Y*9 | *Y*10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Lower limit | 0.1104 | 0.1581 | 0.0801 | 0.0331 | 0.0514 | 0.0946 | 0.0125 | 0.0211 | 0.1860 | 0.2502 |

#### (3) Constraint conditions.

Hydrostatic constraints: constraint on the volume of displacement, $\frac{|\nabla - \nabla\_{opti}|}{\nabla} \leq 1\%$; constraint on the LCB, $\frac{|L\_{cb} - L\_{cb,opti}|}{L\_{cb}} \leq 1\%$.

Geometric constraints: Some points on the stern, deck side line, and centerline were fixed to ensure the ship length, width, and molded depth did not vary.
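The hydrostatic constraint check above can be expressed as a small predicate. This is a hedged sketch: the function name and the sample values are hypothetical, with only the 1% tolerances taken from the text.

```python
def satisfies_constraints(disp, disp_opti, lcb, lcb_opti, tol=0.01):
    """Accept a candidate hull if the displacement and the LCB each stay
    within tol (1%) of the reference values, per the constraints above."""
    return (abs(disp - disp_opti) / disp <= tol and
            abs(lcb - lcb_opti) / lcb <= tol)

# hypothetical candidate values
print(satisfies_constraints(549.0, 551.0, 0.506, 0.508))   # both within 1%
print(satisfies_constraints(549.0, 600.0, 0.506, 0.508))   # displacement violates
```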

#### *5.3. CFD Simulation*

This study used the commercial software SHIPFLOW as the CFD solver. The wave-making resistance coefficient was solved based on potential flow theory. A surface mesh was created using the XMESH module in SHIPFLOW, and the wave-making resistance coefficient was then obtained on a fine-precision surface mesh. The mesh distribution is shown in Figure 12.


**Figure 12.** SHIPFLOW meshes of DTMB 5415.

The wave resistance coefficient *Cw* and the friction coefficient *Cf* are calculated by the formulas of SHIPFLOW and the International Towing Tank Conference (ITTC). The results are then compared with the experimental data. The specific calculation formula is as follows:

$$C\_f = \frac{0.075}{\left(\log\_{10}{\text{Re}} - 2\right)^2} \tag{37}$$

where Re is the Reynolds number.
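Equation (37), the ITTC-1957 friction line, translates directly into code; the sample Reynolds number below is illustrative.

```python
import math

def ittc57_cf(reynolds):
    """ITTC-1957 model-ship correlation line, Eq. (37):
    Cf = 0.075 / (log10(Re) - 2)^2."""
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

# e.g. a model-scale Reynolds number of 1e7
print(f"{ittc57_cf(1e7):.6f}")  # prints 0.003000
```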

The resistance coefficients calculated using the SHIPFLOW and ITTC formulas are in good agreement with the experimental data; therefore, SHIPFLOW meets the requirements of the simulation-based optimization. The data comparison is shown in Figure 13, and the relevant SHIPFLOW calculation results are verified in [34,35].

**Figure 13.** Data comparison chart of DTMB 5415.

#### *5.4. Optimization Process*

The whole optimization process is based on SHIPMDO-WUT, a comprehensive optimization platform for ship hydrodynamic performance developed by our team. The optimization process is shown in Figure 14.

#### (1) Direct optimization process

First, the original hull is discretized into a 40 × 30 spatial point cloud, and the fixed and variable points are selected according to ship optimization experience. Then, parametric deformation is realized based on the RBF interpolation hull surface deformation method, and each deformed hull is subjected to a hydrostatic calculation to select the hulls that meet the hydrostatic constraints. Finally, each hull that meets the hydrostatic constraints is imported into the SHIPFLOW software to calculate the wave-making resistance, and iterative optimization is carried out based on the multiobjective particle swarm optimization (MOPSO) algorithm of the optimization algorithm module. If the convergence condition is satisfied, the optimized hull form is obtained; if not, the optimization variables are modified and the process repeats.

#### (2) Dimensionality reduction optimization process

The overall optimization process is consistent with the above. The difference is that the optimization variables in the direct optimization process are the *Y*-coordinates of the variable points, whereas the optimization variables in the dimensionality reduction optimization process are obtained from those variables using the K–L transform method.

**Figure 14.** Optimization process based on the SHIPMDO-WUT platform.
