Article

Deriving Controllable Local Optimal Solutions through an Environment Parameter Fixed Algorithm

Advanced Visual Intelligence Laboratory, Department of Electronic Engineering, Yeungnam University, 280 Daehak-ro, Gyeongsan 38541, Republic of Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(12), 7110; https://doi.org/10.3390/app13127110
Submission received: 17 May 2023 / Revised: 5 June 2023 / Accepted: 12 June 2023 / Published: 14 June 2023

Abstract

This paper addresses the challenge of optimizing objective functions in engineering problems influenced by multiple environmental factors, such as temperature and humidity. Traditional modeling approaches often struggle to capture the complexities of such non-ideal situations. In this research, we propose a novel approach called the Environment Parameter Fixed Algorithm (EPFA) for optimizing an objective function defined by a deep neural network (DNN) trained in a specific environment. By fixing the environmental parameters in the DNN-defined objective function, we transform the original optimization problem into a control parameter optimization problem. We combine the EPFA with Gradient Descent and related algorithms such as Adagrad to obtain the Controllable Local Optimal Solution (CLS); the combined method is referred to as EPFA-CLS. To demonstrate the concept, we apply our approach to an optimal course model and validate it using the optimal course and Boston house price datasets. The results demonstrate the effectiveness of our approach in handling optimization problems in complex environments, offering promising outcomes for practical engineering applications.

1. Introduction

Optimization arrives at the most suitable solution among many candidates by finding the parameters that maximize or minimize the value of an objective function [1]. Constraint optimization is a branch of optimization that deals with problems where the objective function is subject to certain constraints [2]. Another type of optimization problem is based on the specific definition of the objective function, such as quadratic programming (QP), which optimizes a quadratic objective function [3]. These different types of optimization problems find applications in solving engineering problems, including optimizing space vehicle trajectories, designing cost-effective civil engineering structures such as frames, foundations, bridges, towers, chimneys, and dams, and addressing various forms of random loading [4].
However, accurately representing the objective function in non-ideal scenarios influenced by diverse environmental factors poses challenges in optimization problems. To address these challenges, we introduce a novel optimization problem called trained Deep Neural Network (DNN) objective optimization. This approach utilizes a trained DNN as the objective function.
Following [5], where environmental constraints are reflected as inputs to the model, we define environment parameters (E) and control parameters (C). The term E refers to the diverse set of measured values associated with the environment, such as temperature and humidity, whereas C refers to the factors that enable manipulation and control over the system. Parameters C and E are independent of each other. Figure 1 illustrates the relationship between the defined parameters and the input parameters. The trained DNN Controllable Local Optimal Solution (CLS) is defined as the solution of the trained DNN objective optimization and represents the optimal control parameters under a specific environment parameter.
One of the key advantages of defining the objective function as a trained DNN is its flexibility. Not only can functions be modeled from data through deep learning-based regression [6,7,8,9,10], but they can also incorporate non-ideal cases influenced by various environments [5]. Data-driven approaches can accurately capture the effects of diverse environments on both input and output data [11]. Moreover, objective function modeling is adaptable and, within the definition of regression, can be applied to control problems. The function value of the objective function can be defined by the user. In Figure 2, the current input is represented by a red dot. In such a situation, the user can identify a position (blue dot) where the function value is optimized and adjust the input parameter accordingly. This allows the user to acquire data aligned with the desired purpose and then model the objective function for control.
To obtain the solution for optimizing the Trained Deep Neural Network (DNN) objective, we propose the Environment Parameter Fixed Algorithm with Controllable Local Optimal Solution (EPFA-CLS).
The EPFA-CLS algorithm reduces the original optimization problem to an optimization problem for control parameters by fixing the environment parameters in the DNN defined objective function [12]. Subsequently, the CLS is obtained through the application of Gradient Descent (or Gradient Ascent) optimization techniques. By incorporating environmental constraints and leveraging deep learning-based regression analysis, our proposed EPFA-CLS algorithm offers a promising approach to obtaining controllable local optimal solutions for complex engineering problems.
EPFA-CLS is a useful approach for determining the optimal course of a ship when evading infrared guided missiles [13]. To evade these missiles, the first step is to change the missile’s locked-on target [14]. Flares are commonly used as a deceptive measure, emitting a higher temperature than the ship and diverting the missile’s attention [15]. As shown in Figure 3, the ship fires the flare to change the locked-on object from the ship to the flare.
After deploying a flare, the ship needs to navigate an optimal course, considering its direction, velocity, and specific conditions. The course must be reachable based on the ship’s top speed, such as the FFG-2 large class ship with a maximum speed of 15.4 m/s [16]. Additionally, the miss distance, which represents the distance between the ship and the missile when the missile hits the flare, should be increased [17]. This miss distance is influenced by wind conditions, and an optimization problem needs to be formulated to determine the ship’s course. The objective function takes inputs such as ship direction, velocity, wind direction, and wind velocity, with the output being the miss distance (MD). Figure 3 illustrates miss distance.
In summary, EPFA-CLS provides advantages for modeling the problem, including non-ideal factors such as wind information, and enables precise control over the ship’s direction and velocity to navigate the optimal path.
A similar approach is presented in several prior works [18,19,20,21]. The work in [18] addresses the problem of finding the global optimum by employing a convex approach based on the ICNN formulation. However, our focus is on solving the problem of CLS derivation, which is distinct from their work. In [19], structured prediction is achieved by leveraging the concept of an energy function commonly used for training neural networks, but the current point is not adequately reflected: only a uniform initial condition is considered and control concepts are not incorporated. Furthermore, both [20] and [21] introduce the notions of a control variable and an uncontrollable variable. However, they establish a dependent relationship between these variables, where the past control variable becomes the present uncontrollable variable due to the inclusion of time conditions. In contrast, our method employs a different optimization approach by defining a new loss function as the objective function.
Our contributions in this paper can be summarized from three aspects:
First, we pose a new constraint optimization problem, named trained DNN objective optimization, and we define terms that fit the context of the problem.
Second, we propose the EPFA-CLS method, which enables the representation of a trained DNN under a specific environment parameter, effectively capturing the desired constraints. By fixing the environment parameter, we transform the original optimization problem into a problem of optimizing control parameters in the DNN defined objective function. Subsequently, well-established optimization algorithms such as Gradient Descent are employed to solve this problem. Once the environment parameter is fixed, the CLS can be obtained by capturing the current state of the trained DNN under the specific environment.
Third, we verify the feasibility of the EPFA-CLS algorithm with the optimal course dataset and the Boston Housing Dataset.
This paper is structured as follows. Section 2 provides background on the optimization; the problem covered in this paper is explained in detail and the terms and concepts are organized. In Section 3, the proposed method is explained in detail with a flow chart of the algorithm. Section 4 shows whether the EPFA-CLS is valid by experimenting with the datasets. We verify that the CLS can be derived with the EPFA-CLS by using the Boston Housing Dataset and the optimal course dataset. Section 5 summarizes the results.

2. Background

This section consists of two parts. The subsection on deep learning-based regression presents a brief description of regression and discusses how its accuracy is measured. The subsection on optimization introduces the concept of optimization and the optimizers used in deep learning algorithms.

2.1. Deep Learning-Based Regression

Deep learning regression is widely used for tasks whose goal is to predict continuous values through training. The training data are samples drawn from the much larger body of data describing real-world phenomena. Therefore, following the principles of regression analysis, the sampled data can be trained and used as a continuous nonlinear mapping function [22]. To obtain a predicted value, the mapping function is simply evaluated at the given input. EPFA-CLS uses a DNN because it needs to construct a regression model when the input data are one-dimensional. The DNN is one of the networks widely used in regression models with one-dimensional input data [23] and has been applied in many studies, such as speech enhancement.
DNN-based regression analysis also comes in several types, including simple regression and multiple regression [24,25]. This paper uses multiple regression analysis in which the number of dependent variables is one, so there is a single output neuron [26]. The intermediate layers between the input and output layers are called hidden layers. The structure of such a network is determined by the number of hidden layers and neurons, which in turn depends on the complexity of the problem, the amount of data, and the performance of the hardware [27].
The accuracy of deep learning regression is measured with R². The sum of the squares of the deviations from the mean measures the overall variance of the dependent variable. The sum of squared deviations from the regression line measures the extent to which the regression fails to account for the dependent variable (a measure of noise). Therefore, R² measures the degree to which the overall variation of the dependent variable is explained by the regression. A high value of R² indicates that the regression model explains the variation in the dependent variable quite well, which is clearly important if the model is to be used for predictive purposes [28].
In Equation (1), the total sum of squares (SST) represents the total variation, the residual sum of squares (SSR) the unexplained variation, and the explained sum of squares (SSE) the explained variation. In Equation (1), when SSE increases, the coefficient of determination R² increases; because SST is fixed, an increase in SSE means a decrease in SSR, and the estimated regression model then predicts the dependent variable y quite well. The coefficient of determination R² is the proportion of the variation in the dependent variable y that can be explained by the independent variable x in the regression model. The range of R² is 0 ≤ R² ≤ 1. If it is 0, there is no correlation between the independent variable and the dependent variable; the closer to 1, the greater the correlation [28].
$R^2 = \dfrac{SSE}{SST} = 1 - \dfrac{SSR}{SST}$ (1)
$SST = \sum_{i=1}^{n} (y_i - \bar{y})^2$ (2)
$SSE = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$ (3)
$SSR = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$ (4)
However, even if the value of R² is large, the model may not be meaningful. R² increases as the proportion of SSE increases. When it is clear which x explains the dependent variable y (as in a designed experiment), SSE will be large. Regression through machine learning is judged largely by R² because the predicted value is what matters; in a DNN, prediction is the main goal. Therefore, the larger the R² value, which is the criterion for accuracy in regression analysis, the higher the explanatory power of the model. After training, R² is used to assess training and validation accuracy [28].
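To make the accuracy measure concrete, the following is a minimal NumPy sketch of Equation (1); the function and array names are illustrative rather than taken from the paper's released code.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: R^2 = SSE/SST = 1 - SSR/SST (Equation (1))."""
    y_bar = np.mean(y)
    sst = np.sum((y - y_bar) ** 2)      # total sum of squares, Equation (2)
    ssr = np.sum((y - y_hat) ** 2)      # residual (unexplained) sum of squares, Equation (4)
    return 1.0 - ssr / sst

# Example: predictions close to the ground truth give R^2 near 1.
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.9])
print(r_squared(y, y_hat))              # approximately 0.99
```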

2.2. Optimization

In this section, the concept of optimization and the optimizers used in the training algorithm are explained.
$\hat{x} = \underset{x}{\arg\max}\, f(x)$ (5)
Equation (5) [1] states that optimization seeks the set of variables x for which the objective function f reaches an optimal value. Optimization problems arise in various areas of mathematics, the applied sciences, engineering, economics, medicine, and statistics [1]. Depending on the scope of the search area, optimization can be classified into global optimization and local optimization [2,29]. Global optimization aims to find the best solution over the entire search area, while local optimization focuses on finding the best solution within a specific region. When constraints are imposed on the allowable values of the variables, such as the independent variables, the problem becomes a constraint optimization problem [2].
Table 1 illustrates various problems that fall under the category of optimization problems. These problems require formulating objective functions and employing algorithms to obtain the optimal solutions. For instance, in linear programming, production costs can be optimized by formulating the objective function with appropriate linear expressions [30]. In the financial field, quadratic programming offers solutions by modeling the objective function as a quadratic function [31]. Consequently, problems are defined based on the structure of the objective function and the shape of the constraint function. However, modeling complex engineering systems is computationally expensive [32,33].
Thus, this paper focuses on constraint optimization where a data-driven objective function, reflecting multiple environments, is applied within a specific environment. The data-based modeling process is a form of meta-modeling that involves acquiring data and has the advantage of simplifying complex models [33].
We pose a new problem, trained DNN objective optimization, derived from these existing problems. Trained DNN objective optimization is constraint optimization in which a data-driven objective function reflecting multiple environments is applied in a specific environment. As a type of metamodeling [36], deep learning-based regression offers the advantage of modeling situations even in non-ideal circumstances where traditional modeling approaches face difficulties. For example, in the domain of fish weighing in turbid water, deep learning regression, such as that proposed in [5], can accurately estimate weights under non-ideal conditions.
Trained deep neural network (DNN) objective optimization is similar in spirit to metamodel-based simulation optimization (MBSO) [33,36] and metamodeling in general, but it differs in its focus on a specific environment.
Modeling these non-ideal situations requires redefining the optimization variables. The outcome of deep learning regression, denoted as y ^ , is influenced by various inputs, including the environmental conditions during data acquisition.
We can solve the problem by defining an environment parameter ( E ) and a control parameter ( C ). The control parameters are defined as optimization variables, while the environment parameter represents the specific environmental conditions and is not subject to optimization. The consideration of C and E is necessary due to the data-driven modeling approach, and they are independent of each other within the input parameter X .
$X = [x_1, x_2, \ldots, x_\varphi] = [E, C]$ (6)
$C = [c_1, c_2, \ldots, c_m]$ (7)
$E = [e_1, e_2, \ldots, e_n]$ (8)
$\varphi = n + m$ (9)
The term “controllable” originates from the concepts of environment parameters and control parameters. It refers to a constraint optimization problem with specific environment parameters. It can be understood in terms of control, where environment parameters cannot be controlled, but control parameters can be. Consequently, the optimization problem we need to solve can be derived by introducing the concepts of environment parameters and control parameters into Equation (5), leading to Equation (10).
$\underset{C}{\arg\max}\, f([C, E_{\mathrm{initial}}])$ (10)
The proposed EPFA-CLS utilizes existing gradient-based local optimization algorithms, which are summarized as follows.
(1)
Gradient Descent [37]
This is one of the methods for finding a local minimum of a function. Parameters are updated step by step in the direction of the negative gradient so that the function value decreases toward a minimum. In Equation (11), the learning rate α controls the convergence speed of Gradient Descent. To reach a local maximum instead, the sign in Equation (11) is flipped; this is called Gradient Ascent.
$w_{\mathrm{new}} = w_{\mathrm{old}} - \alpha \dfrac{\partial y}{\partial w}$ (11)
(2)
Momentum [38]
Momentum is an optimization algorithm that adds inertia to the update process of Gradient Descent. Inertia is introduced by remembering past gradient values and moving by an additional amount in their direction. In Equation (12), γ is the momentum coefficient; because momentum accumulates over past values, convergence becomes faster and oscillation is reduced. Similarly, by changing the sign in Equation (12), a local maximum can be reached.
$v_{\mathrm{new}} = \gamma v_{\mathrm{old}} + \tau \nabla_w C(w), \qquad w_{\mathrm{new}} = w_{\mathrm{old}} - v_{\mathrm{new}}$ (12)
(3)
Adagrad [39]
When the learning rate is too small, training takes too long; when it is too large, the optimization diverges. Adagrad addresses this trade-off through learning rate decay. In Equation (13), the squared gradients are accumulated in the term v, and this term is used to scale the learning rate in the update. The downside of Adagrad is that the squared gradients keep accumulating in the denominator, which eventually becomes too large and makes the updates vanish. Similarly, by changing the sign in Equation (13), a local maximum can be reached.
$v_{\mathrm{new}} = v_{\mathrm{old}} + \left(\nabla_w C(w_t)\right)^2, \qquad w_{\mathrm{new}} = w_{\mathrm{old}} - \dfrac{\eta}{\sqrt{v_{\mathrm{new}}} + \epsilon}\, \nabla_w C(w_t)$ (13)
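For reference, a minimal NumPy sketch of the three update rules in Equations (11)–(13) is given below; the function names and default step sizes are illustrative and not taken from any particular library.

```python
import numpy as np

def gradient_descent_step(w, grad, alpha=0.01):
    """Equation (11): move against the gradient; flip the sign for Gradient Ascent."""
    return w - alpha * grad

def momentum_step(w, v, grad, gamma=0.9, tau=0.01):
    """Equation (12): accumulate a velocity from past gradients, then update."""
    v_new = gamma * v + tau * grad
    return w - v_new, v_new

def adagrad_step(w, v, grad, eta=0.01, eps=1e-8):
    """Equation (13): scale the step by the accumulated squared gradients."""
    v_new = v + grad ** 2
    return w - eta / (np.sqrt(v_new) + eps) * grad, v_new

# Example: one step of each rule on the quadratic objective C(w) = w^2, whose gradient is 2w.
w = np.array([1.0])
grad = 2.0 * w
print(gradient_descent_step(w, grad))               # [0.98]
print(momentum_step(w, np.zeros_like(w), grad)[0])  # [0.98]
print(adagrad_step(w, np.zeros_like(w), grad)[0])   # [0.99]
```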

3. The Proposed EPFA-CLS

In this section, we introduce EPFA-CLS, the proposed algorithm for deriving the CLS. Figure 4 presents a flowchart of the algorithm, which proceeds as follows: (1) build a dataset that models the desired engineering phenomenon, with the various environments included as independent variables; (2) model the objective function using DNN-based nonlinear regression analysis [36]; (3) split the independent variables into environment parameters and control parameters so that the optimal control parameters can be derived in a specific environment; and (4) obtain the CLS by keeping the environment parameters fixed and updating the control parameters iteratively, using the Gradient Descent method, until the gradient approaches zero. The code implementation of our proposed method can be found at https://github.com/otaejang/EPFA-CLS, accessed on 10 June 2023.

3.1. Preparing the Dataset

The initial step involves deep learning regression, which requires the preparation of a dataset for training purposes. Regression aims to capture the relationship between the input and output variables. When preparing the dataset, it is crucial to examine causality and identify the input parameters that significantly influence the output. Sufficient data should be collected to ensure an accurate representation of the underlying relationship. This dataset will serve as the foundation for training the regression model.
Table 2 is a generalization of the constructed dataset. Because the deep learning regression in EPFA-CLS uses multiple regression analysis, there is a single output variable (y), while the number of input variables (φ) is unrestricted. The dataset is collected according to the definitions of E and C within the input parameter X. In Table 2, x_1 to x_n are defined as E, and x_{n+1} to x_{n+m} are defined as C. As in Equation (9), the total number of input variables φ is the sum of the number of E parameters (n) and the number of C parameters (m). Additionally, since only the parameters are varied, the dataset values v share the same format; because the deep learning regression must learn from this dataset, the v values are also important.

3.2. DNN-Based Regression

Figure 5 shows the structure of the DNN used for regression [26]. After setting E and C , a dataset was collected for DNN training.
The data input parameter X feeds the input nodes, and y is the ground truth, which is used together with the last output node in the cost function for supervised learning. The number of input nodes matches the number of input variables (φ), and the number of output nodes is set to one. The DNN is composed of two hidden layers to prevent overfitting, because the amount of data is not large. In addition, the rectified linear unit (ReLU) is used as the activation function, and Stochastic Gradient Descent (SGD) as the optimizer. After training, R² is used to determine the accuracy of the DNN. As mentioned in the Background section, R² measures prediction accuracy after training: if R² is close to 1, the DNN predicts with high accuracy and can be used as a mapping function.
Equations (14)–(17) represent the relationship between input and output in a trained DNN, which can be thought of as a black box [26]. As training progresses, each weight (w) is learned. Here s denotes a bias, the values of the nodes of layer a are collected in A, and the values of the nodes of layer b are collected in B. Each w in Equations (14)–(16) is a different weight. The predicted value is obtained through the following operations: the output ŷ is the ReLU, used as the nonlinear operation, of the weighted sum with the learned weights. This whole mapping is denoted f; throughout the rest of this paper, f is the function producing the output y from the inputs E and C.
$\begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1,n+m} \\ w_{21} & w_{22} & \cdots & w_{2,n+m} \\ \vdots & & & \vdots \\ w_{\sigma 1} & w_{\sigma 2} & \cdots & w_{\sigma,n+m} \end{bmatrix} \begin{bmatrix} e_1 \\ \vdots \\ e_n \\ c_1 \\ \vdots \\ c_m \end{bmatrix} + \begin{bmatrix} s_1 \\ \vdots \\ s_\sigma \end{bmatrix} = \begin{bmatrix} a_1 \\ \vdots \\ a_\sigma \end{bmatrix}, \qquad W_{\sigma,n+m}\, X + s = A$ (14)
$\mathrm{ReLU}\!\left( \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1\sigma} \\ w_{21} & w_{22} & \cdots & w_{2\sigma} \\ \vdots & & & \vdots \\ w_{\rho 1} & w_{\rho 2} & \cdots & w_{\rho\sigma} \end{bmatrix} \begin{bmatrix} a_1 \\ \vdots \\ a_\sigma \end{bmatrix} + \begin{bmatrix} s_1 \\ \vdots \\ s_\rho \end{bmatrix} \right) = \begin{bmatrix} b_1 \\ \vdots \\ b_\rho \end{bmatrix}, \qquad \mathrm{ReLU}(W_{\rho\sigma}\, A + s) = B$ (15)
$\mathrm{ReLU}\!\left( \begin{bmatrix} w_1 & w_2 & \cdots & w_\rho \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_\rho \end{bmatrix} + s \right) = \hat{y}, \qquad \mathrm{ReLU}(W_{\rho}\, B + s) = \hat{y}$ (16)
$\hat{y} = f(X) = f([E, C])$ (17)
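As a concrete illustration of Figure 5, the following PyTorch sketch builds the two-hidden-layer regression network and a basic training loop with MSE loss and SGD. The class and function names and the default layer sizes are illustrative, this is not the released implementation, and the sketch places the ReLU after each hidden layer with a linear output, which is the usual arrangement even though Equations (14)–(16) group the activations slightly differently.

```python
import torch
import torch.nn as nn

class RegressionDNN(nn.Module):
    """Maps X = [E, C] (phi inputs) to y_hat through two hidden layers of sigma and rho nodes."""
    def __init__(self, n_inputs, sigma=512, rho=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, sigma), nn.ReLU(),  # first hidden layer A
            nn.Linear(sigma, rho), nn.ReLU(),       # second hidden layer B
            nn.Linear(rho, 1),                      # single output neuron y_hat
        )

    def forward(self, x):
        return self.net(x)

def train(model, x_train, y_train, epochs=1000, lr=1e-3):
    """Supervised training with MSE loss and the SGD optimizer, as described in the text."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        optimizer.step()
    return model
```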

3.3. Environment Parameter Fixed

This section shows the limitations of optimization without fixation, as in Equation (5), and explains how fixation works in the algorithm. We fix the environment parameter and show what it means with the following formula:
$\underset{E,\,C}{\arg\max}\, f([E, C])$ (18)
Equation (18) shows the optimization performed without fixing. However, the result of this equation is not the CLS. As mentioned in [17], multiple environments can be represented by incorporating environment parameters as inputs, but here we are dealing with optimization under specific environment parameter values. Although an optimal E would increase the output of the regression model, E cannot be changed in practice.
$E_{\mathrm{initial}} = [\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n]$ (19)
$y = f([E_{\mathrm{initial}}, C])$ (20)
This problem can be solved through parameter fixing, which is the core idea of EPFA-CLS. The EPFA-CLS algorithm reduces the original optimization problem to an optimization problem over the control parameters by fixing the environment parameters in the DNN-defined objective function [12]. Equation (20) shows how the CLS is derived through fixing: a single f is obtained by training on the dataset over all E and C, and fixing E_initial to a specific value in the algorithm means substituting each environment input with a constant (ε). Therefore, after training the regression model, the measured environment values are entered as ε.
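In code, fixing the environment parameter amounts to partially evaluating the trained model: E is held at its measured constants ε while only C remains a free variable. The following is a minimal sketch under the assumption that the trained PyTorch model f takes the concatenated vector [E, C]; the helper name is illustrative.

```python
import torch

def fix_environment(f, e_initial):
    """Return g(C) = f([E_initial, C]) as in Equation (20); E is frozen to constants."""
    e = torch.as_tensor(e_initial, dtype=torch.float32)  # epsilon_1, ..., epsilon_n
    def g(c):
        return f(torch.cat([e, c]))
    return g

# Usage: g depends only on the control parameters, so Equation (10) reduces to argmax_C g(C).
# g = fix_environment(trained_model, [315.0, 10.0])  # e.g., WD = 315 deg, WV = 10 m/s (scenario 1)
```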

3.4. Optimization

This section introduces the optimization process, explaining what local optimization means in EPFA-CLS and how it is reflected in the algorithm. The CLS is derived through this optimization process. First, one of the optimizers introduced in the Background section must be selected.
EPFA-CLS utilizes local optimization algorithms such as Gradient Descent and Adagrad, which rely on the first derivative. The current situation is then reflected by assigning the initial values γ to C, so the initial values of both E and C are reflected in EPFA-CLS. Another consideration when configuring the optimizer is to use Gradient Descent if the problem requires a minimum and Gradient Ascent if it requires a maximum. The expressions in Equation (21) assign the initial values of the control parameters. After this setup, the input parameters can be updated through Gradient Ascent or Gradient Descent.
$X_{\mathrm{initial}} = [E_{\mathrm{initial}}, C_{\mathrm{initial}}], \qquad C_{\mathrm{initial}} = [\gamma_1, \gamma_2, \ldots, \gamma_m], \qquad E_{\mathrm{initial}} = [\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n]$ (21)
The expressions in Equation (22) have the same effect as those in Equation (23); together they show how EPFA-CLS is combined with the optimizer. After the gradient is computed at the current values, it is multiplied by ϵ to update the control parameters. As the iterations progress, the output for the corresponding control parameters gradually approaches the local minimum/maximum. These optimizers use the first derivative, so a method of computing the first derivative of the trained DNN with respect to its inputs is introduced next.
$X_{\mathrm{new}} = X_{\mathrm{old}} + \epsilon \dfrac{\partial \hat{y}}{\partial X}$ (22)
$[C_{\mathrm{new}}, E_{\mathrm{new}}] = [C_{\mathrm{old}}, E_{\mathrm{old}}] + \epsilon \left[\dfrac{\partial \hat{y}}{\partial C}, \dfrac{\partial \hat{y}}{\partial E}\right], \qquad E_{\mathrm{new}} = E_{\mathrm{initial}}, \qquad C_{\mathrm{final}} = C_{\mathrm{new}}$ (23)
The chain rule is used to find the rate of change of the output with respect to a specific parameter [40]. We need the rate of change of the output with respect to each input x of the DNN. This uses partial differentiation: by propagating the gradient through each hidden layer of the DNN, ∂output/∂x can be obtained with the chain rule, as in Equations (24)–(26). As the epochs progress, the C and E values are updated, but since E is always held at its fixed initial value, C_final can be derived.
$\dfrac{\partial \hat{y}}{\partial X} = \left[\dfrac{\partial \hat{y}}{\partial c_1}, \ldots, \dfrac{\partial \hat{y}}{\partial c_m}, \dfrac{\partial \hat{y}}{\partial e_1}, \ldots, \dfrac{\partial \hat{y}}{\partial e_n}\right]$ (24)
$\dfrac{\partial \hat{y}}{\partial c_m} = \sum_{k=1}^{\sigma} \sum_{i=1}^{\rho} \dfrac{\partial A_k}{\partial c_m} \dfrac{\partial B_i}{\partial A_k} \dfrac{\partial \hat{y}}{\partial B_i}$ (25)
$\dfrac{\partial \hat{y}}{\partial e_n} = \sum_{k=1}^{\sigma} \sum_{i=1}^{\rho} \dfrac{\partial A_k}{\partial e_n} \dfrac{\partial B_i}{\partial A_k} \dfrac{\partial \hat{y}}{\partial B_i}$ (26)
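In practice, the partial derivatives in Equations (24)–(26) do not have to be assembled by hand; automatic differentiation of the trained network yields ∂ŷ/∂C directly. The sketch below implements the update of Equations (22) and (23) with PyTorch autograd and plain Gradient Ascent/Descent; the function name and hyperparameters are illustrative, and any optimizer from Section 2.2 could be substituted.

```python
import torch

def derive_cls(f, e_initial, c_initial, lr=0.01, steps=1000, maximize=True):
    """EPFA-CLS core loop: E stays at E_initial, C moves along the gradient of y_hat."""
    e = torch.as_tensor(e_initial, dtype=torch.float32)              # fixed, so E_new = E_initial
    c = torch.as_tensor(c_initial, dtype=torch.float32).clone().requires_grad_(True)
    sign = 1.0 if maximize else -1.0                                  # Gradient Ascent vs. Descent
    for _ in range(steps):
        y_hat = f(torch.cat([e, c])).squeeze()                        # forward pass of the trained DNN
        grad_c, = torch.autograd.grad(y_hat, c)                       # dy_hat/dC via the chain rule
        with torch.no_grad():
            c += sign * lr * grad_c                                   # C part of Equation (22)
    return c.detach()                                                 # C_final, i.e., the CLS
```

Because e never receives an update, E_new = E_initial holds at every step, and the returned control parameters correspond to C_final in Equation (27).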

3.5. Controllable Local Optimal Solution

After fixing E, the proposed algorithm derives the optimal C. In this section, the formula for the final result is explained. As C is updated through optimization, each γ corresponding to C also reaches a final value.
$\mathrm{CLS} = C_{\mathrm{final}} = [\gamma_1^{\mathrm{final}}, \gamma_2^{\mathrm{final}}, \ldots, \gamma_m^{\mathrm{final}}]$ (27)
Through optimization, the CLS is obtained from the updated γ values. Equation (27) is the representative formula for the case where the output is maximized; for a problem optimized through Gradient Descent, the minimum is obtained instead of the maximum.

4. Experiments

In this section, the datasets and working environment for the experiments are described, and the three experiments we conducted (C, D, and E) are discussed. First, Experiment C verifies the feasibility of EPFA-CLS. Second, Experiment D shows the reason for fixing the environment parameter by comparing against optimization without fixing. Finally, Experiment E uses the Boston Housing Dataset to show that EPFA-CLS can derive the CLS from other datasets.
The Boston Housing Dataset is one with 13 independent variables and one dependent variable, which satisfies the multiple regression condition. In addition, since parameters C and E can be clearly divided, they were adopted as experimental data.
While performing these three experiments, several optimizers were compared at the same time. During DNN training, the experimental environment used a 4-way GTX 1080 Ti 11 GB GPU setup. In addition, when optimizing the trained DNN, CPU operations ran on a 16-core AMD Ryzen Threadripper 1950X.
The optimal course dataset was used in Experiments C, D, and E; in Experiment E, the Boston Housing Dataset was used in addition to the optimal course dataset. As the experimental conditions, the initial values of E and C are assumed as follows:
  • Scenario 1: SD = 150 deg, SV = 10 m/s, WD = 315 deg, WV = 10 m/s
  • Scenario 2: SD = 150 deg, SV = 10 m/s, WD = 90 deg, WV = 5 m/s
In addition, the optimizer with the first derivative was used for local optimization. In this experiment, four optimizers were used: Gradient Ascent, Adagrad, Momentum, and RmsProp [37].

4.1. The Prepared Dataset

The optimal course model dataset was constructed for this study by using a simulator based on an infrared synthetic image generation study [41]. First, it is necessary to accurately model the influence of wind when deploying a deception device such as a flare. The wind parameters indicate the strength of the wind in meters per second and its direction in degrees, as shown in Figure 6. In particular, a wind direction of 0 degrees corresponds to wind blowing along the X-axis.
When modeling the wind vector in this way, the velocity of the flare is expressed as a relative velocity by subtracting the wind vector from the original velocity vector of the flare, as shown in Equation (28) [42]:
$V_r(t) = V(t) - W$ (28)
Based on this, the influence of the wind is reflected in the infrared composite image generation simulator. Then, since the infrared guided missile follows the locked-on object in the simulator, a process for linking the tracker to the simulator is required [43]. After creating every frame in the simulator, the binary centroid tracker was immediately driven so that the missile followed the locked-on target, and the tracked target came to the center of the image. For this purpose, three items have been added: (1) missile dynamics modeling and reflection, (2) aim-point update through tracking, and (3) measuring the miss distance in 3D space through the tracking result [41].
Figure 7 shows a flow chart for generating the integrated composite image and processing the tracker model produced through this process. Synthesis tracking is performed every frame based on user-set parameters; the tracking process is shown on a 2D plot, and miss distance is provided. So far, wind modeling and tracker linkage have been described in the simulator. Based on this, the miss distance can be derived according to the input variables for ship direction, ship speed, and wind direction. The direction of the ship is a polar plot, and the speed of the ship is displayed in different colors. The closer the graph coordinates are to the center, the smaller the miss distance and the lower the survival probability of the ship; conversely, the farther from the center, the greater the miss distance, and the higher the survival probability.
Table 3 shows the resulting data, saved as a CSV file for deep learning. In these results, the miss distance was measured according to the direction of the ship. The number of input parameters (φ) was 4, and there was one ground truth output y. Each parameter has a range: WD and SD range from 0 deg to 360 deg, while WV and SV range from 0 m/s to 15 m/s.
After building the dataset, we need to define E and C . In the optimal course dataset, E is defined as WD and WV, and C is defined as SD and SV; x 1 to x 2 are defined as E , and x 3 to x 4 are defined as C .
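Assuming the dataset is stored as a CSV file with columns named WD, WV, SD, SV, and MD (the file and column names here are illustrative, not necessarily those of the released data), the split into E, C, and y can be sketched as follows:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("optimal_course.csv")                 # hypothetical file name

E_cols = ["WD", "WV"]                                  # environment parameters (x_1, x_2)
C_cols = ["SD", "SV"]                                  # control parameters (x_3, x_4)

X = df[E_cols + C_cols].to_numpy(dtype=np.float32)     # input parameters X = [E, C]
y = df[["MD"]].to_numpy(dtype=np.float32)              # ground truth: miss distance
```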

4.2. Training

In the previous section, 2407 training data points were prepared with CSV files through the simulator, with 85% used for training and 15% for testing. We used the model architecture in Figure 5. Parameters during training with the dataset included n = 2 , m = 2 , σ = 512 , ρ = 512 , with the ReLU as an activation function, and SGD as an optimizer. The training was conducted across 1000 epochs.
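Reusing the sketches from Sections 2.1, 3.2, and 4.1 (r_squared, RegressionDNN, train, and the arrays X and y), the training and evaluation described here can be reproduced roughly as follows; the 85/15 split and hyperparameters follow the text, while the variable names are illustrative.

```python
import torch

# X, y: NumPy arrays from Section 4.1; in practice the rows would be shuffled before splitting.
n_train = int(0.85 * len(X))
x_train, y_train = torch.from_numpy(X[:n_train]), torch.from_numpy(y[:n_train])
x_test, y_test = torch.from_numpy(X[n_train:]), torch.from_numpy(y[n_train:])

model = RegressionDNN(n_inputs=4, sigma=512, rho=512)   # n = 2 environment + m = 2 control inputs
model = train(model, x_train, y_train, epochs=1000, lr=1e-3)

with torch.no_grad():
    r2_train = r_squared(y_train.numpy(), model(x_train).numpy())
    r2_test = r_squared(y_test.numpy(), model(x_test).numpy())
print(r2_train, r2_test)
```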
Figure 8 shows the training set graph after training the DNN with the optimal course dataset. R² on the training set was 0.978, and R² on the test set was 0.849. A value closer to 1 means the input has a stronger relationship with the output, so the DNN has high prediction accuracy and can be used as a mapping function.

4.3. The Robustness of the Trained DNN

In this experiment, the CLS is derived through local optimization with EPFA-CLS under the experimental conditions described above, and we compare the MD predicted by the trained DNN with the MD output by the real simulator at the derived CLS.
Table 4 and Table 5 show the results for scenarios 1 and 2, respectively. Local optimization was performed from the initial value (E_initial), and the result was derived. Four local optimizers were used, and the mode of the derived CLS values was used as the input to the simulator. Using the results of the simulator as ground truth, we verified the robustness of the trained DNN. Table 4 shows the CLS derived using the local optimizers. Among the four optimizers, the most frequently derived CLS was SD = 182.67 deg and SV = 15 m/s, so these values were the simulator's input; naturally, the environment parameter kept the same value as in scenario 1. The MD predicted by the trained DNN was 268 m, while the MD output by the simulator was 271.3 m, showing that the model with R² = 0.97 predicts values close to the ground truth. Table 5 shows that the most frequently obtained CLS was SD = 169.7 deg and SV = 15 m/s, so these values were the simulator's input. The MD predicted by the trained DNN was 379.9 m, and the MD from the simulator was 381.4 m. The trained DNN predicted values very close to the simulator's.
The results from the simulator are more accurate than the values predicted by the trained DNN. However, obtaining them takes quite a long time, as shown in Table 4 and Table 5. Additionally, it is not possible to find the SD and SV that give large MD values in a single simulation run, because the simulator outputs only the MD. In this experiment, we therefore measured the accuracy of the trained DNN and demonstrated why trained DNN objective optimization is needed.
The following compares the local optimizers in Table 4 and Table 5. In Table 4, the CLS is the same for all optimizers, and the computation times are also similar. Table 5 gives a similar CLS and time, but since Momentum reflects the velocity of the gradient, its result differs slightly.

4.4. Fixed Environment Parameter

In this experiment, the CLS is derived using EPFA-CLS, and the effect of parameter fixing on the CLS is examined under the conditions described above. The experiment compares the CLS obtained with and without the fixing step, which is the third step of EPFA-CLS; all other experimental conditions were identical. Without fixing, the optimization searches for optimal values of both the environment and the control parameters. In some cases this may appear to give a better result, but we are solving a constraint optimization problem in a specific environment.
Table 6 and Table 7 show the results for scenarios 1 and 2, respectively. Experiment D compared the results of EPFA-CLS with unfixed optimization as in Equation (5), using the optimal course dataset. In EPFA-CLS, E was fixed to ε1 and ε2, and the optimal C was derived using the local optimizers. With Equation (5), the optimal solution over all parameters of E and C is derived without fixing.
First, Table 6 shows the results for scenario 1. Using EPFA-CLS, the CLS is 182 deg for SD and 15 m/s for SV; since the environment parameter was fixed here, WD and WV keep the same values as in scenario 1. In the unfixed results, excluding Momentum, SD is 168.17 deg, SV is 15 m/s, WD is 282.8 deg, and WV is 15 m/s; with Momentum, SD is 194.17 deg, SV is 15 m/s, WD is 215.8 deg, and WV is 15 m/s. Comparing the miss distance in the results for scenario 1, EPFA-CLS gave 268.01 m, whereas the unfixed optimization gave 461 m and 589 m, so the unfixed miss distances are higher. However, what matters here is the value of the environment parameter rather than the miss distance.
In Table 6, the current environment parameter values are ε1 = 315 deg and ε2 = 10 m/s. The control parameters can be changed to the CLS to increase the miss distance, but the ship cannot change the environment parameters. Therefore, even if the unfixed miss distance is higher than that of EPFA-CLS, it is not a CLS, because the environment parameters cannot be modified. In addition, since the miss distance is affected simultaneously by the environment and control parameters, the optimal control parameters must be derived at the current environment parameter values; here, they were derived using the local optimizers. For these reasons, we derive the CLS by fixing the environment parameter.
Table 7 shows the results for the second scenario, which lead to the same conclusion as scenario 1. A notable feature is that the Momentum result obtained without fixing in scenario 1 and the Momentum result obtained without fixing in scenario 2 are identical: when optimizing without fixing, the same point is eventually found regardless of the starting scenario. Finding the same point without fixing corresponds to finding a local maximum of the five-dimensional surface (four inputs, one output) of the DNN f trained on the optimal course dataset. This experimentally confirms that differentiation in the optimization algorithm can derive the optimal solution without requiring high-dimensional plots.
In the end, it can be confirmed that the unfixed optimization performed in Experiment D coincides with the result derived from Equation (5). EPFA-CLS, by contrast, is an algorithm that partitions the solutions of Equation (5): it optimizes by fixing a specific environment parameter.
Experiment D also compared the results for several local optimizers. Compared with the other optimizers, Momentum shows that the effect of velocity is reflected in EPFA-CLS; therefore, if the goal is to maximize the output value, the Momentum optimizer is a good choice. In general, the optimizers produce similar results, but in Experiment D there is a large difference in values. For this reason, the choice of optimizer in EPFA-CLS is left to the user (similar to a hyperparameter).

4.5. Additional Validation of the CLS

In this experiment, we show that EPFA-CLS can derive the CLS on two datasets. The Boston Housing Dataset consists of 13 input parameters [44], such as the crime rate, the number of rooms, and the local tax rate, and the output parameter is the house price in the suburbs of Boston in the mid-1970s. The other dataset is the optimal course dataset prepared for this paper. The CLS is the local maximum reachable from the current location; therefore, this experiment validates the robustness of the DNN trained on each dataset, ensuring its ability to produce the CLS regardless of the starting position. With n = 2, m = 2, σ = 1024, ρ = 1024, the ReLU activation function, and the SGD optimizer, the training procedure before the experiment used 90% of the Boston Housing Dataset for training and 10% for testing. On the training set, R² reached 0.971, and on the test set, R² reached 0.849. In Experiment E, Gradient Ascent was used as the optimizer to show how robustly the CLS is derived from any location, keeping all EPFA-CLS conditions the same and changing only the dataset.
(1) Boston Housing Dataset
This Boston house price dataset includes the following independent variables: the per capita crime rate (CRIM), the proportion of residential land zoned for lots over 25,000 square feet (ZN), the proportion of non-retail business acres per town (INDUS), a dummy variable for the Charles River (CHAS), the nitric oxide concentration in parts per 10 million (NOX), the average number of rooms per dwelling (RM), the proportion of owner-occupied units built before 1940 (AGE), the weighted distances to five Boston employment centers (DIS), an index of accessibility to radial highways (RAD), the property tax rate per $10,000 of assessed value (TAX), the pupil/teacher ratio by town (PTRATIO), 1000(Bk - 0.63)² where Bk is the proportion of Black residents by town (B), and the percentage of lower-status population (LSTAT).
This experiment examines how robustly a DNN trained on the Boston dataset can produce the CLS regardless of its starting position. The experimental procedure is as follows. The Boston dataset consists of 506 data points with 13 independent variables; four randomly chosen independent variables are defined as control parameters, and all 506 instances are used as initial values. Using EPFA-CLS, optimization is performed from each initial value to obtain the CLS, resulting in a total of 4 × 506 optimized values. The points of interest are the distribution of the control parameters at the initial values and after obtaining the CLS, as well as the distribution of housing prices at the initial control parameters and after obtaining the CLS.
We chose as control parameters the nitric oxide concentration (NOX), the average number of rooms per dwelling (RM), the weighted distances to five Boston employment centers (DIS), and the full-value property tax rate per $10,000 (TAX). The results are intriguing. Examining the control-parameter histograms on the left and the distribution of housing prices on the right in Figure 9, the majority of values have moved toward optimal values compared to the black region representing the initial values. For instance, a higher average number of rooms per dwelling leads to a higher housing price under the same environmental conditions. Additionally, the CLS distribution shows that the values have reached higher levels than the initial values, resulting in an increase in housing prices. However, from a global optimization point of view, some outliers were also observed. This is a consequence of gradient-based optimization: because it seeks local optima, it treats nearer stationary points as the optimum. In conclusion, the DNN can produce Controllable Local Optimal Solutions (CLS) from any position, regardless of the starting point.
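The experiment can be summarized by the following sketch: each of the 506 rows serves as an initial point, the four chosen columns act as control parameters, and the remaining nine columns are held fixed as the environment. The column indices, the helper name, and the trained model f_boston assumed below are illustrative.

```python
import torch

C_idx = [4, 5, 7, 9]                                    # NOX, RM, DIS, TAX (illustrative positions)
E_idx = [i for i in range(13) if i not in C_idx]         # the remaining nine columns stay fixed

def derive_cls_subset(f, x_initial, c_idx, lr=0.01, steps=1000):
    """Gradient Ascent on the selected control columns only; all other inputs are held fixed."""
    x = torch.as_tensor(x_initial, dtype=torch.float32)
    c = x[c_idx].clone().requires_grad_(True)
    for _ in range(steps):
        x_full = x.clone()
        x_full[c_idx] = c                                # splice the current control values into X
        y_hat = f(x_full).squeeze()
        grad_c, = torch.autograd.grad(y_hat, c)
        with torch.no_grad():
            c += lr * grad_c                             # Gradient Ascent toward a higher price
    x_final = x.clone()
    x_final[c_idx] = c.detach()
    return x_final

# cls_rows = [derive_cls_subset(f_boston, row, C_idx) for row in X_boston]  # 506 initial values
```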
(2) Optimal course dataset
Table 8 and Figure 10 show the results of deriving the CLS from 12 random initial values using the DNN trained in Sections 3 and 4. Instead of the scenarios given at the beginning of Section 4, the initial values are defined at twelve randomly chosen points, and the resulting CLS is derived for each.
We describe subgraphs (a) to (d) in Figure 10. Starting from the initial red dot, EPFA-CLS can be seen to reach local optima. Graphs (a), (b), (c), and (d) show the surfaces and the CLS for scenarios 1, 2, 6, and 8 of Table 8, respectively. In graph (b), in particular, there are three local maximum points, but optimization succeeds by following the gradient direction. In graph (d), the initial values are updated through optimization, leading to the derivation of the CLS. This demonstrates that the trained neural network can be optimized using the gradient method, allowing it to reach local optima.
Checking the miss-distance output, the MD at the CLS increases. In particular, when the initial value of SD was 101 deg, the MD is much higher than the initial MD even though SD moves by only about 20 deg. In Figure 10, each heading displays its initial values, and the extent of each surface reflects the initial value of the environment parameter. By conducting random tests for various scenarios, EPFA-CLS demonstrates that the CLS can be derived from arbitrary initial values.

5. Summary and Conclusions

In this paper, alongside the classification of optimization problems by the form of the objective function, such as LP and NLP, we presented a constraint optimization problem in which the objective function is a trained DNN, and we proposed EPFA-CLS to solve it. Because multiple environments are reflected, several environment parameters enter the objective function as inputs. We therefore defined the input parameters as independent E and C, and derived the CLS through local optimization at a specific E. In total, optimization was used twice: first, to train the deep neural regression network by defining a loss function and finding the optimal weights; and second, to optimize the trained DNN, used as the objective function, so that the CLS is derived according to the user's purpose. Fixing the environment parameters transforms the constrained optimization into an unconstrained one. The algorithm was verified with the optimal course dataset and the Boston house price dataset. Through these experiments, an optimization approach based on modeling non-ideal situations was demonstrated for solving various engineering problems.

Author Contributions

The contributions were distributed between authors as follows: O.J. wrote the text of the manuscript, developed whole concepts, and programmed the EPFA-CLS. S.J. performed the in-depth discussion of the related literature, and performed the accuracy experiments that are exclusive to this paper. S.K. performed the in-depth discussion of the related literature, and confirmed the accuracy experiments that are exclusive to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable. Ethical review and approval were not required for this study.

Informed Consent Statement

Not applicable.

Data Availability Statement

The Boston Housing Price data presented in this study are openly available in https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html, reference number [44]. Publicly available Optimal Course datasets were analyzed in this study. This data can be found here: https://github.com/otaejang/EPFA-CLS.

Acknowledgments

This work was supported by the 2023 Yeungnam University Research Grants.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this paper:
X  Input parameters in the training dataset
y  Ground truth of the training dataset (output)
ŷ  Result of a forward pass by a DNN
f  Trained deep neural regression network
E  Environment parameter
C  Control parameter
E_initial  Initial value of the environment parameter
C_initial  Initial value of the control parameter
OC  Results from the EPFA
v  Values of a dataset
σ  Count of the first hidden layer nodes
ρ  Count of the second hidden layer nodes
w  DNN-trained weights
s  DNN-trained bias
A  The DNN first hidden layer
B  The DNN second hidden layer
A_k  First hidden layer node
B_k  Second hidden layer node
n  Number of environment parameters
m  Number of control parameters
φ  Number of input parameters

References

  1. Diwekar, U.M. Introduction to Applied Optimization; Springer: Cham, Switzerland, 2020. [Google Scholar]
  2. Horst, R.; Panos, M.; Pardalos, N.V.T. Introduction to Global Optimization; Springer: New York, NY, USA, 2000. [Google Scholar]
  3. Frank, M.; Wolfe, P. An Algorithm for Quadratic Programming. Nav. Res. Logist. Q. 1956, 3, 95–110. [Google Scholar] [CrossRef]
  4. Rao, S.S. Engineering Optimization: Theory and Practice; John Wiley and Sons, Inc.: Hoboken, NJ, USA, 2019. [Google Scholar]
  5. Tengtrairat, N.; Woo, W.L.; Parathai, P.; Rinchumphu, D.; Chaichana, C. Non-Intrusive Fish Weight Estimation in Turbid Water Using Deep Learning and Regression Models. Sensors 2022, 22, 5161. [Google Scholar] [CrossRef] [PubMed]
  6. Agarwal, A.; Triggs, B. 3D Human Pose from Silhouettes by Relevance Vector Regression. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar] [CrossRef]
  7. Sun, M.; Kohli, P.; Shotton, J. Conditional regression forests for human pose estimation. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3394–3401. [Google Scholar] [CrossRef]
  8. Yan, S.; Wang, H.; Tang, X.; Huang, T.S. Learning Auto-Structured Regressor from Uncertain Nonnegative Labels. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8. [Google Scholar] [CrossRef]
  9. Guo, G.; Fu, Y.; Dyer, C.R.; Huang, T.S. Image-Based Human Age Estimation by Manifold Learning and Locally Adjusted Robust Regression. IEEE Trans. Image Process. 2008, 17, 1178–1188. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Shen, W.; Guo, Y.; Wang, Y.; Zhao, K.; Wang, B.; Yuille, A.L. Deep Regression Forests for Age Estimation. In Proceedings of the IEEE Conference On Computer Vision And Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  11. Jiang, H.; Gu, Y.; Xie, Y.; Yang, R.; Zhang, Y. Solar Irradiance Capturing in Cloudy Sky Days—A Convolutional Neural Network Based Image Regression Approach. IEEE Access 2020, 8, 22235–22248. [Google Scholar] [CrossRef]
  12. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  13. Hobgood, J.; Madison, K.; Pawlowski, G.; Nedd, S.; Roberts, M.; Rumberg, P. System Architecture for Anti-Ship Ballistic Missile Defense (ASBMD); DTIC: Fort Belvoir, VA, USA, 2009.
  14. Walker, P.F. Precision-guided Weapons. Sci. Am. 1981, 245, 36–45. [Google Scholar] [CrossRef]
  15. Venkatesan, R.H.; Sinha, N.K. Key factors that affect the performance of flares against a heat-seeking air-to-air missile. J. Def. Model. Simul. 2014, 11, 387–401. [Google Scholar] [CrossRef]
  16. Kang, K.m.; Kim, D.J. Ship Velocity Estimation From Ship Wakes Detected Using Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 2019, 12, 4379–4388. [Google Scholar] [CrossRef]
  17. Alpert, J. Miss distance analysis for command guided missiles. J. Guid. Control. Dyn. 1988, 11, 481–487. [Google Scholar] [CrossRef]
  18. Amos, B.; Xu, L.; Kolter, J.Z. Input Convex Neural Networks. CoRR 2016, arXiv:1609.07152. [Google Scholar]
  19. Belanger, D.; McCallum, A. Structured Prediction Energy Networks. CoRR 2015, arXiv:1511.06350. [Google Scholar]
  20. Chen, Y.; Shi, Y.; Zhang, B. Modeling and optimization of complex building energy systems with deep neural networks. In Proceedings of the 2017 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2017; pp. 1368–1373. [Google Scholar] [CrossRef] [Green Version]
  21. Chen, Y.; Shi, Y.; Zhang, B. Optimal Control Via Neural Networks: A Convex Approach. arXiv 2018, arXiv:1805.11835. [Google Scholar] [CrossRef]
  22. Lathuilière, S.; Mesejo, P.; Alameda-Pineda, X.; Horaud, R. A Comprehensive Analysis of Deep Regression. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2065–2081. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Fernández-Delgado, M.; Sirsat, M.; Cernadas, E.; Alawadi, S.; Barro, S.; Febrero-Bande, M. An extensive experimental survey of regression methods. Neural Netw. 2019, 111, 11–34. [Google Scholar] [CrossRef] [PubMed]
  24. Xu, Y.; Du, J.; Dai, L.R.; Lee, C.H. A Regression Approach to Speech Enhancement Based on Deep Neural Networks. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 7–19. [Google Scholar] [CrossRef]
  25. Du, J.; Xu, Y. Hierarchical deep neural network for multivariate regression. Pattern Recognit. 2017, 63, 149–157. [Google Scholar] [CrossRef]
  26. Nielsen, M. Neural Networks and Deep Learning; Academia: San Francisco, CA, USA, 2006. [Google Scholar]
  27. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  28. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef]
  29. Llewellyn, D.C.; Tovey, C.; Trick, M. Local optimization on graphs. Discret. Appl. Math. 1989, 23, 157–178. [Google Scholar] [CrossRef] [Green Version]
  30. Krynke, M.; Mielczarek, K. Applications of linear programming to optimize the cost-benefit criterion in production processes. MATEC Web Conf. 2018, 183, 04004. [Google Scholar] [CrossRef] [Green Version]
  31. Jackson, M.; Staunton, M.D. Quadratic programming applications in finance using Excel. J. Oper. Res. Soc. 1999, 50, 1256–1266. [Google Scholar] [CrossRef]
  32. Zhang, R.; Liu, Y.; Sun, H. Physics-informed multi-LSTM networks for metamodeling of nonlinear structures. Comput. Methods Appl. Mech. Eng. 2020, 369, 113226. [Google Scholar] [CrossRef]
  33. Parnianifard, A.; Azfanizam, A.; Ariffin, M.K.A.M.; Ismail, M.I.S.; Ale Ebrahim, N. Recent developments in metamodel based robust black-box simulation optimization: An overview. Decis. Sci. Lett. 2019, 8, 17–44. [Google Scholar] [CrossRef]
  34. Gass, S.I. Linear Programming: Methods and Applications; Courier Corporation: North Chelmsford, MA, USA, 2003. [Google Scholar]
  35. Ruszczynski, A. Nonlinear Optimization; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  36. Soares do Amaral, J.V.; Montevechi, J.A.B.; de Carvalho Miranda, R.; de Sousa Junior, W.T. Metamodel-based simulation optimization: A systematic literature review. Simul. Model. Pract. Theory 2022, 114, 102403. [Google Scholar] [CrossRef]
  37. Ruder, S. An overview of gradient descent optimization algorithms. CoRR 2016, arXiv:1609.04747. [Google Scholar]
  38. Qian, N. On the momentum term in gradient descent learning algorithms. Neural Netw. 1999, 12, 145–151. [Google Scholar] [CrossRef]
  39. Duchi, J.; Hazan, E.; Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159. [Google Scholar]
  40. Ambrosio, L.; Maso, G.D. A General Chain Rule for Distributional Derivatives. Proc. Am. Math. Soc. 1990, 108, 691–702. [Google Scholar] [CrossRef]
  41. Kim, S.; Yang, Y.; Choi, B. Realistic infrared sequence generation by physics-based infrared target modeling for infrared search and track. Opt. Eng. 2010, 49, 1–9. [Google Scholar] [CrossRef]
  42. Viau, C.R.; D’Agostinoa, I.; Cathala, T. Physical modelling of naval infrared decoys in TESS and SE-WORKBENCH-EO for ship self-protection. In Proceedings of the 10th International IR Target and Background Modeling and Simulation (ITBM&S), Ettlingen, Germany, 23–26 June 2014. [Google Scholar]
  43. Tremblay, J.P.; Viau, C.R. A MATLAB/Simulink methodology for simulating dynamic imaging IR missile scenarios for use in countermeasure development and evaluation. In Technologies for Optical Countermeasures VI; Titterton, D.H., Richardson, M.A., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2009; Volume 7483, pp. 182–192. [Google Scholar] [CrossRef]
  44. Harrison, D.; Rubinfeld, D. UCI Machine Learning Repository, Boston Housing Dataset; UCI: Irvine, CA, USA, 2018. [Google Scholar]
Figure 1. The relationship between input parameters, environment parameters, and control parameters. Environment parameters and control parameters are defined as input parameters.
Figure 2. An example of how an objective function defined as a trained DNN can be used for control problems.
Figure 3. An illustration of miss distance and the locked-on object as defined when deriving an optimal course.
Figure 4. Flowchart of EPFA-CLS.
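To make the flow of Figure 4 concrete, the following is a minimal sketch of EPFA-CLS under several assumptions: a trained regression DNN `dnn` that takes the concatenated vector [E, C] as input, a measured environment vector `e_fixed`, an initial control vector `c0`, and Adagrad as the inner optimizer (any of the gradient-based variants compared in Tables 4–7 could be substituted). The names, hyperparameters, and use of PyTorch are illustrative and are not the authors' exact implementation.

```python
import torch

def epfa_cls(dnn, e_fixed, c0, steps=500, lr=0.1):
    """Sketch of EPFA-CLS: fix the environment parameters E and optimize
    only the control parameters C of a trained DNN objective."""
    dnn.eval()
    for p in dnn.parameters():                  # freeze the trained network
        p.requires_grad_(False)

    e = torch.as_tensor(e_fixed, dtype=torch.float32)               # fixed E
    c = torch.tensor(c0, dtype=torch.float32, requires_grad=True)   # variable C
    opt = torch.optim.Adagrad([c], lr=lr)       # any gradient-descent variant works

    for _ in range(steps):
        opt.zero_grad()
        x = torch.cat([e, c]).unsqueeze(0)      # input vector [E (fixed), C]
        y = dnn(x).squeeze()                    # predicted objective, e.g., MD
        (-y).backward()                         # maximize y via gradient ascent on C
        opt.step()

    return c.detach(), y.detach()               # CLS and its predicted objective value
```

In practice the control parameters would also be clipped to their feasible range after each step; for example, SV never exceeds 15 m/s in the optimal course results of Tables 4–8.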
Figure 5. The DNN structure used for deep learning regression.
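As a companion to Figure 5, a generic fully connected regression network of this kind might be written as follows; the layer count, widths, and activation below are placeholders and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RegressionDNN(nn.Module):
    """Generic fully connected regression DNN taking the concatenated [E, C] vector."""
    def __init__(self, n_env, n_ctrl, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_env + n_ctrl, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),               # single regression output, e.g., MD
        )

    def forward(self, x):                       # x = concatenated [E, C]
        return self.net(x)
```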
Figure 6. Wind modeling for simulators.
Figure 7. The simulator’s operational sequence; *.csv indicates an input file read in CSV format.
Figure 8. Training accuracy of the optimal course dataset.
Figure 9. Distribution of optimized results obtained by using every sample from the Boston Housing dataset as an initial value, with four control parameters (NOX, RM, DIS, and TAX) selected at random. The histograms compare each control parameter before and after optimization, with the initial values shown in black; the distribution of Boston housing prices (also in black) shows how prices change after optimization. Red denotes NOX, yellow RM, blue DIS, and green TAX.
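For the Boston Housing experiment summarized in Figure 9, the split of the thirteen standard features [44] into control and environment parameters might look like the sketch below, where NOX, RM, DIS, and TAX are taken from the caption as the control parameters and the remaining columns are assumed here to form the fixed environment; the helper names and the reuse of the `epfa_cls` sketch given earlier are illustrative assumptions.

```python
# Hypothetical feature split for the Boston Housing experiment.
CONTROL_COLS = ["NOX", "RM", "DIS", "TAX"]                 # control parameters C
ENV_COLS = ["CRIM", "ZN", "INDUS", "CHAS", "AGE",          # environment parameters E
            "RAD", "PTRATIO", "B", "LSTAT"]

def split_sample(row):
    """Split one housing record (a dict of column name -> value) into (E, C)."""
    return [row[k] for k in ENV_COLS], [row[k] for k in CONTROL_COLS]

# Usage with the earlier sketch: fix E per record and optimize only C, e.g.
#   e0, c0 = split_sample(record)
#   cls_c, predicted_price = epfa_cls(price_dnn, e0, c0)
```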
Figure 10. Graph reformation of the optimal course dataset in Experiment 3; this figure should be viewed in conjunction with Table 8. The surface is plotted for a specific environment parameter setting (SD, SV vs. MD) and shows the CLS obtained through EPFA-CLS optimization for four random experiments from Table 8. Red dots mark the initial points and blue dots the resulting CLS.
Table 1. Types of optimization problems and a description of each.
Problem | Explanation
Linear Programming [34] | A problem in which the objective function and all constraint functions are linear
Non-Linear Programming [35] | A problem in which either the objective function or any constraint function is nonlinear
Quadratic Programming [3] | A problem in which the objective function is quadratic
Trained DNN objective optimization (proposed) | Constraint optimization problem in a specific environment when the objective function is a trained DNN
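For comparison with the classical formulations above, the proposed trained-DNN objective optimization can be written schematically as the following constrained problem, with $f_{\mathrm{DNN}}$ the trained network, $E$ the measured environment parameters, and $C$ the control parameters; whether the objective is maximized or minimized, and the exact feasible set $\mathcal{C}$, depend on the application.

$$
C^{*} \;=\; \operatorname*{arg\,max}_{C \in \mathcal{C}} \; f_{\mathrm{DNN}}(E, C)
\quad \text{subject to} \quad E = E_{0}\ \text{(fixed measured environment)},
$$

where $C^{*}$ is the resulting controllable local optimal solution (CLS).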
Table 2. Dataset reflecting environment parameters.
$e_1$ | $e_2$ | $\cdots$ | $e_n$ | $c_1$ | $c_2$ | $\cdots$ | $c_m$ | $y$
$v^{e}_{11}$ | $v^{e}_{12}$ | $\cdots$ | $v^{e}_{1n}$ | $v^{c}_{1,n+1}$ | $v^{c}_{1,n+2}$ | $\cdots$ | $v^{c}_{1,n+m}$ | $y_1$
$v^{e}_{21}$ | $v^{e}_{22}$ | $\cdots$ | $v^{e}_{2n}$ | $v^{c}_{2,n+1}$ | $v^{c}_{2,n+2}$ | $\cdots$ | $v^{c}_{2,n+m}$ | $y_2$
$v^{e}_{k1}$ | $v^{e}_{k2}$ | $\cdots$ | $v^{e}_{kn}$ | $v^{c}_{k,n+1}$ | $v^{c}_{k,n+2}$ | $\cdots$ | $v^{c}_{k,n+m}$ | $y_k$
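The layout of Table 2 might be realized, for instance, as a plain array whose first n columns hold the environment measurements, whose next m columns hold the control values, and whose last column holds the target; the two toy rows below reuse values from the Scenario 1 and 2 tables purely to show the shape and are otherwise arbitrary.

```python
import numpy as np

n, m = 2, 2   # number of environment and control parameters (illustrative)

# Columns: [e_1, ..., e_n, c_1, ..., c_m, y], one row per measured sample.
data = np.array([
    [315.0, 10.0, 182.67, 15.0, 268.0],   # e.g., WD, WV, SD, SV, MD (Scenario 1)
    [ 90.0,  5.0, 169.70, 15.0, 379.9],   # e.g., WD, WV, SD, SV, MD (Scenario 2)
])

E = data[:, :n]        # environment parameters
C = data[:, n:n + m]   # control parameters
y = data[:, -1]        # target value that the DNN is trained to predict
```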
Table 3. An example of the data acquired through the simulator; the full training set consists of 2407 data points.
SD [deg] | SV [m/s] | WD [deg] | WV [m/s] | MD [m]
0 | 0 | 0 | 0 | 64.89487486
0 | 0 | 0 | 3 | 19.02400837
0 | 0 | 0 | 5 | 24.66201077
15 | 0 | 0 | 5 | 14.66237099
30 | 0 | 0 | 5 | 14.75703005
45 | 0 | 0 | 5 | 15.70170131
60 | 0 | 0 | 5 | 30.37929284
75 | 0 | 0 | 5 | 28.28568179
90 | 0 | 0 | 5 | 30.14618032
105 | 0 | 0 | 5 | 29.61211902
120 | 0 | 0 | 5 | 27.89175802
135 | 0 | 0 | 5 | 23.66285094
150 | 0 | 0 | 5 | 21.35341014
165 | 0 | 0 | 5 | 24.42776638
180 | 0 | 0 | 5 | 14.68399028
195 | 0 | 0 | 5 | 14.37340086
210 | 0 | 0 | 5 | 14.52094585
225 | 0 | 0 | 5 | 15.05996721
240 | 0 | 0 | 5 | 18.23890991
Table 4. Derived CLS results in Scenario 1: among the four results, the mode value is input to the simulator, and the predicted MD and computation time are compared with the simulator's output. GT = Ground Truth.
Type | Parameters | EPFA-CLS (Adagrad) | EPFA-CLS (GA) | EPFA-CLS (Momentum) | EPFA-CLS (RMSProp) | Simulator (GT)
CLS | SD [deg] | 182.67 | 182.57 | 182.67 | 182.68 | 182.67
CLS | SV [m/s] | 15 | 15 | 15 | 15 | 15
Output | MD [m] | 268 | 267.98 | 268 | 267.9 | 271.3
Output | Time [s] | 8.69 | 9.47 | 10.106 | 10.42 | 19 [min]
Table 5. Derived CLS results in Scenario 2: the description of the table is the same as in Table 4.
Type | Parameters | EPFA-CLS (Adagrad) | EPFA-CLS (GA) | EPFA-CLS (Momentum) | EPFA-CLS (RMSProp) | Simulator (GT)
CLS | SD [deg] | 169.7 | 169.9 | 174.9 | 169.7 | 169.7
CLS | SV [m/s] | 15 | 15 | 15 | 15 | 15
Output | MD [m] | 379.9 | 379.9 | 380.5 | 379.98 | 381.4
Output | Time [s] | 9.58 | 8.81 | 9.96 | 11.5 | 19 [min]
Table 6. Results of EPFA-CLS and of optimization without fixing the environment parameters, for Scenario 1.
Type | Parameters | Adagrad | GA | Momentum | RMSProp
EPFA-CLS / CLS | SD [deg] | 182.67 | 182.57 | 182.67 | 182.68
EPFA-CLS / CLS | SV [m/s] | 15 | 15 | 15 | 15
EPFA-CLS / CLS | WD [deg] | 315 | 315 | 315 | 315
EPFA-CLS / CLS | WV [m/s] | 10 | 10 | 10 | 10
EPFA-CLS / Output | MD [m] | 268.01 | 268.2 | 268.01 | 267.9
Unfixed / CLS | SD [deg] | 168.17 | 168.15 | 194.69 | 168.13
Unfixed / CLS | SV [m/s] | 15 | 15 | 15 | 15
Unfixed / CLS | WD [deg] | 282.8 | 282.78 | 215.02 | 282.77
Unfixed / CLS | WV [m/s] | 15 | 15 | 15 | 15
Unfixed / Output | MD [m] | 461 | 461.23 | 589.5 | 461.23
Table 7. Results of EPFA-CLS and of optimization without fixing the environment parameters, for Scenario 2.
Type | Parameters | Adagrad | GA | Momentum | RMSProp
EPFA-CLS / CLS | SD [deg] | 169.7 | 169.9 | 174.9 | 169.7
EPFA-CLS / CLS | SV [m/s] | 15 | 15 | 15 | 15
EPFA-CLS / CLS | WD [deg] | 90 | 90 | 90 | 90
EPFA-CLS / CLS | WV [m/s] | 5 | 5 | 5 | 5
EPFA-CLS / Output | MD [m] | 379.9 | 379.9 | 380.5 | 379.98
Unfixed / CLS | SD [deg] | 167.76 | 194.7 | 194.69 | 194.78
Unfixed / CLS | SV [m/s] | 15 | 15 | 15 | 15
Unfixed / CLS | WD [deg] | 90.56 | 215.02 | 215.02 | 215.01
Unfixed / CLS | WV [m/s] | 2.9 | 15 | 15 | 15
Unfixed / Output | MD [m] | 398.02 | 461.23 | 589.5 | 589.5
Table 8. Results of CLS derivation for 12 random initial values using EPFA-CLS.
Initial Value (Random) | Final Value (CLS)
No. | SD [deg] | SV [m/s] | WD [deg] | WV [m/s] | MD initial [m] | CLS SD [deg] | CLS SV [m/s] | MD maximum [m]
1101815279.3849180.952515387.5419
211711322657.7895188.033415279.0536
3142142425308.4076174.366515381.5254
417142371187.3034149.032612.1728416.7096
5209123561250.1607172.672915404.1164
622042817109.1765195.615915262.6963
723272317203.9565196.037915319.9787
823861613120.9584148.917415424.4204
925291483141.9169148.066915411.2176
10254722315332.7779196.876115586.8834
1126312327107.3001195.991815319.0679
123117256670.8144189.769815335.1269