
Multiple Feature Extraction Long Short-Term Memory Using Skip Connections for Ship Electricity Forecasting

1 Contents AI Research Center, Romantique, 27 Daeyeong-ro, Busan 49227, Republic of Korea
2 Division of Marine System Engineering, Korea Maritime and Ocean University, 727 Taejong-ro, Yeong-do-gu, Busan 49112, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2023, 11(9), 1690; https://doi.org/10.3390/jmse11091690
Submission received: 25 July 2023 / Revised: 22 August 2023 / Accepted: 23 August 2023 / Published: 27 August 2023
(This article belongs to the Section Ocean Engineering)

Abstract

The power load data of electric-powered ships vary with the ships’ operational status and external environmental factors such as sea conditions. Therefore, a model is required to accurately predict a ship’s power load, which depends on changes in the marine environment, weather environment, and the ship’s situation. This study used the power data of an actual ship to predict the power load of the ship. Research on forecasting a ship’s power load fluctuations has been quite limited, and the existing models have inherent limitations in predicting these fluctuations accurately. In this paper, a multiple feature extraction (MFE)-long short-term memory (LSTM) model with skip connections is introduced to address the limitations of existing deep learning models. This novel approach enables the analysis and forecasting of the intricate load variations in ships, thereby facilitating the prediction of complex load fluctuations. The performance of the model was compared with that of a previous convolutional neural network (CNN)-LSTM network with a squeeze and excitation (SE) model and a deep feed-forward (DFF) model. The metrics used for comparison were the mean absolute error, root mean squared error, mean absolute percentage error, and R-squared, wherein the best, average, and worst performances were evaluated for each model. The proposed model exhibited a superior predictive performance for the ship’s power load compared to that of the existing models, as evidenced by the performance metrics: mean absolute error (MAE) of 55.52, root mean squared error (RMSE) of 125.62, mean absolute percentage error (MAPE) of 3.56, and R-squared (R2) of 0.86. Therefore, the proposed model is expected to be used for power load prediction during electric-powered ship operations.

1. Introduction

Many diesel-powered propulsion vessels currently used for maritime transportation have the disadvantage that they emit high levels of sulfur oxides (SOx), nitrogen oxides (NOx), and carbon monoxide [1,2]. To protect the marine environment, the Marine Environment Protection Committee of the International Maritime Organization regulates air pollution through the International Convention for the Prevention of Pollution from Ships [3]. In eco-friendly electric-powered vessels, the entire power load of the vessel is supplied by fuel cells or batteries, and the vessel operations aim to meet zero emission goals [4,5]. Furthermore, research on green ship technologies has been conducted to reduce fuel consumption and mitigate CO2 emissions. These include aspects such as hull design [6,7], engine models [8,9], propulsion propeller selection [10,11], steering gear design [12], alternative fuels [13,14], post-treatment technologies to reduce CO2 emissions [15], heat recovery systems [16], power distribution systems [17], and ship operation systems [18,19]. However, there exists a research gap regarding the forecasting of the variations in a ship’s power loads. Thus, for a comprehensive study on the control of electric-powered ships, it is essential to study the power load prediction model of the ship, which changes according to sea conditions. Previous research on power prediction investigated various power prediction models, centering on urban power load prediction [20], power load prediction for a specific country [21], solar power generation prediction [22], and building power demand prediction [23]. However, unlike the power load data from the previous research, the power load data of current vessels are characterized by rapid variations in response to changes in the vessel’s operational status and external environmental factors. Furthermore, it is difficult to identify the trends, seasonality, and periodicity that can indicate changes in power load. 
Lastly, several proposed ship power load prediction models have yielded limited results in interpreting and expressing the characteristics of a ship’s power load [24,25,26]. Therefore, it is crucial to develop a model that can accurately interpret and predict a ship’s power load according to changes in the marine environment, weather conditions, and the ship’s situation.
Vessels use various types of equipment that utilize electrical loads to ensure the reliable transportation of cargo. The power loads of vessels are characterized by three factors. The first factor is the marine environment. Changes in the marine environment, such as wind speed, waves, and currents, have a direct impact on the hull resistance of ships sailing in the ocean [27]. The change in hull resistance affects the travel direction and speed of the ship; thus, continuous control is required to maintain the speed and direction of travel. Consequently, changes occur in the amount of power supplied to power-consuming equipment [28], such as steering equipment [29,30] and auxiliary blowers of the main engine [31] installed on the ship. The second factor is weather conditions. Wind direction and wind speed have a direct impact on the drag of a ship [32]. Moreover, the drag of the hull can increase considerably in foggy or rainy weather [33]. As the ship requires continuous control because of changes in its resistance, its power consumption also changes continuously. The third factor is the ship’s situation. The ship’s power load is characterized by considerable changes based on the ship’s situation [34]. For example, ships use a hydraulic crane installed on the ship when unloading or receiving cargo at a port. Here, the power consumption of the electric motor installed in the hydraulic crane changes according to the weight of the cargo [35]. Furthermore, when the ship enters or leaves the port, it uses a bow thruster to control its lateral movement [36]. Because the bow thruster consumes a lot of power, the power load of the ship also changes considerably. When the ship anchors, the heavy anchor is controlled by the winch motor [37], which also changes the power load of the ship. Hence, the ship’s power load is also dependent on changes in the ship’s situation.
This study used the power data measured on an actual ship to predict the power load of the ship. A multiple feature extraction (MFE)-long short-term memory (LSTM) model based on skip connections was developed. The performance of the model was compared with that of a previous convolutional neural network (CNN)-LSTM network with an SE model and DFF model. The comparison test results showed that the proposed model outperformed other models in predicting the ship’s power loads. Thus, the proposed model can be useful for power load prediction during ship operations.
Table 1 lists the nomenclature used in this study.

2. Theoretical Background

2.1. One-Dimensional (1D) CNN

A convolutional neural network (CNN) [38] is a deep learning model that mimics the structure of the human optic nerve and generates a feature map of the data. Examples of data to which CNN models have been applied include videos in 3D format [39], images in 2D format [40], and signal data in 1D format [41]. For many years, the application of CNN models has been focused on image classification, face recognition, and object recognition, which require computer vision using 2D CNN. The 1D CNN [42], which features a 1D format, has been widely used for characterizing time series data such as particulate matter [43], individual residential loads [44], loads of commercial buildings [45], and ATM cash demand [46], yielding good performance results. As the ship’s power load data are also time series data, this study used 1D CNN to extract and analyze the features of the ship’s power load data.
The 1D CNN model consists of a convolutional layer and a pooling layer. The filters in the convolutional layer are used to extract features from the input data. The following equation describes the behavior of the convolutional layer:
$$y = \sigma\left(\sum_{i=1}^{n} \mathrm{conv1D}(W, x_i) + b_i\right),$$
where $n$ is the number of feature maps in the layer; $\mathrm{conv1D}$ denotes the one-dimensional convolution operation with same padding; $W$ is a trainable 1D convolutional kernel; $x_i$ is the $i$-th feature map; $b_i$ is the bias of the $i$-th feature map; and $\sigma(\cdot)$ denotes the activation function. The 1D CNN model proposed in this study uses a rectified linear unit (ReLU) [47] as the activation function. The ReLU can be expressed by the following equation:
$$\mathrm{ReLU}(x) = \max(0, x).$$
The ReLU activation function outputs a straight line with a slope of 1 if the value of the input x is greater than 0, and it outputs 0 otherwise. The feature maps extracted from the convolution layer are input to the max pooling layer [48,49]. Max pooling has the advantage that it can suppress the overfitting problems and excessive computation that may occur during the training of deep learning models by down-sampling the input feature data.
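The convolution, ReLU, and max-pooling operations above can be sketched in NumPy; the input series and kernel values here are illustrative toys, not the paper's trained filters:

```python
import numpy as np

def conv1d_same(x, w, b):
    """1D convolution with 'same' zero padding, followed by ReLU."""
    k = len(w)
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.array([np.dot(xp[i:i + k], w) for i in range(len(x))]) + b
    return np.maximum(0.0, out)  # ReLU activation

def max_pool1d(x, size=2):
    """Down-sample by taking the maximum over non-overlapping windows."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([0.0, 1.0, 3.0, 2.0, 5.0, 4.0])  # toy input series
w = np.array([0.5, 1.0, 0.5])                 # toy symmetric kernel
feat = conv1d_same(x, w, b=0.0)               # feature map, same length as x
pooled = max_pool1d(feat)                     # down-sampled feature map
```

The output of the max-pooling layer is half the length of the feature map, which is how the model reduces computation and the risk of overfitting.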

2.2. Long Short-Term Memory

LSTM [50] adopts an extended structure of the memory cell in a recurrent neural network (RNN) to store and retrieve data. Furthermore, it possesses the advantage that it can learn temporal relationships over long timescales. LSTM models are mainly used in time series data processing and natural language processing. They overcome the key problem of the traditional recurrent neural network, namely the failure to remember information that is far from the output. Figure 1 depicts the structure of the LSTM cell.
LSTMs utilize the concept of gating, which involves component-wise multiplication of the input. The LSTM cell state is updated based on the activation of these gates. The input provided to an LSTM is processed through various gates, such as the input gate, output gate, or forget gate, each controlling specific operations on the cell memory. Here, $C_{t-1}$ is the previous cell state, $h_{t-1}$ is the previous hidden state, $x_t$ is the data input to the LSTM, $C_t$ is the new cell state, $h_t$ is the new hidden state, $\otimes$ and $\oplus$ are element-wise vector operations, and $\sigma$ represents the sigmoid function, which can be expressed as follows:
$$\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}},$$
where $x$ is the input data, and $e$ is Euler's number. Lastly, $\tanh$ stands for the hyperbolic tangent [36], which can be expressed by the following equation:
$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},$$
where $x$ is the input data, and $e$ is Euler's number.
The forget gate of LSTM uses the computation of the current information and the value of the past hidden layer to determine how much past information to forget. The operation of the forget gate at time t can be expressed by the following equation:
$$f_t = \sigma\left(W_f [h_{t-1}, x_t] + b_f\right),$$
where $x_t$ is the input vector, $h_{t-1}$ is the vector of the past hidden layer, $W_f$ is the weight, $b_f$ is the bias value, $f_t$ is the output of the forget gate, and $\sigma$ is the sigmoid operation. The output from the forget gate is input to the input gate, which decides the importance of the data at hand and writes them to a cell. The layers of the input gate can be represented by the following equation:
$$i_t = \sigma\left(W_i [h_{t-1}, x_t] + b_i\right).$$
In addition, the operations of the input gate layer and the tanh layer can be expressed as follows:
$$\hat{C}_t = \tanh\left(W_c [h_{t-1}, x_t] + b_c\right).$$
The new cell state $C_t$ is then obtained by combining the outputs of the forget and input gates:
$$C_t = f_t \times C_{t-1} + i_t \times \hat{C}_t.$$
Lastly, the output gate uses the following equations to determine the new hidden state $h_t$:
$$o_t = \sigma\left(W_o [h_{t-1}, x_t] + b_o\right),$$
$$h_t = o_t \times \tanh(C_t).$$
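A single LSTM step implementing the gate equations above can be sketched in NumPy as follows; the weight matrix and input are random placeholders for illustration, whereas a real model learns $W$ and $b$ during training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W maps [h_{t-1}, x_t] to the four gate pre-activations."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    H = len(h_prev)
    f = sigmoid(z[:H])               # forget gate f_t
    i = sigmoid(z[H:2 * H])          # input gate i_t
    c_hat = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:])           # output gate o_t
    c = f * c_prev + i * c_hat       # C_t = f_t * C_{t-1} + i_t * candidate
    h = o * np.tanh(c)               # h_t = o_t * tanh(C_t)
    return h, c

H, D = 4, 3                          # hidden size, input size (toy values)
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * H, H + D)) * 0.1
b = np.zeros(4 * H)
x_t = rng.normal(size=D)
h_t, c_t = lstm_step(x_t, np.zeros(H), np.zeros(H), W, b)
```

Because $h_t = o_t \times \tanh(C_t)$ and both factors are bounded, each component of the hidden state stays within $(-1, 1)$.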

2.3. Skip Connection

Deep learning models with deep architectures generally have better learning outcomes. However, as a model grows deeper, gradients can vanish as they propagate through many layers, and performance can degrade. Skip connections, which emerged to solve this problem, proved effective in preventing the performance degradation of deep learning models with deep layers.
The skip connection method skips input data in a deep learning model and connects them directly to the output. Examples include addition skip connection [51] and concatenation [52] skip connection; their structures are depicted in Figure 2.
The addition skip connection method skips the convolutional layer and adds the input data directly to the output, which allows information from the input data to flow to the output even in deep models, preventing the performance of the model from declining. In Figure 2a, $x$ is the input data, $F(x)$ is the result output from Layer 2, and $F(x) + x$ is the result of the addition skip connection method, which adds the input data to the result output from Layer 2. The concatenation skip connection in Figure 2b, unlike the addition skip connection, concatenates the vector of the input data with the vector output from Layer 1. Thus, the maximum amount of information is preserved in each layer of the deep learning model, thereby improving the model accuracy. This study adopted the concatenation skip connection method.
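The difference between the two skip connection styles can be illustrated with a toy fully connected layer; the weights here are placeholders, not part of the paper's model:

```python
import numpy as np

def layer(x, w):
    """A toy fully connected layer with ReLU."""
    return np.maximum(0.0, w @ x)

x = np.array([1.0, 2.0, 3.0])
w = np.eye(3) * 0.5              # placeholder weights

# Addition skip connection: F(x) + x, dimensionality unchanged.
add_out = layer(x, w) + x

# Concatenation skip connection: [F(x), x] stacked, so downstream
# layers see both the transformed and the original information.
concat_out = np.concatenate([layer(x, w), x])
```

Note that addition requires the layer output to have the same shape as the input, while concatenation doubles the vector length and passes all information forward, which is why this study adopts it.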

2.4. Dataset

The data used in this study were gathered from a 6800 twenty-foot equivalent unit (TEU) container ship named Hyundai Bangkok that was in actual operation between 15 November 2014 and 9 April 2015. The vessel was equipped with one MAN B&W diesel engine for propulsion and four 3800 kW generators to supply the ship’s power load. The detailed specifications of the vessel are presented in Table 2.
Vessels use rudders for direction control. The power consumption of the rudder fluctuates according to the hull resistance, where rudder angle, water speed, wind speed, and wind angle can be classified as hull resistance variables. The ship utilizes an electro-hydraulic system for rudder angle control. It is composed of a hydraulic pump driven by an electric motor, which adjusts the rudder angle by controlling the hydraulic cylinder. A multivariate autoregressive exogenous (ARX) model is implemented for rudder control, which can be expressed as follows:
$$X(n) = \sum_{m=1}^{M} A_m X(n-m) + \sum_{m=1}^{M} B_m Y(n-m) + U(n),$$
where $X(n)$ is a two-dimensional vector containing yaw and roll; $Y(n)$ is the one-dimensional controllable rudder vector; $U(n)$ refers to Gaussian white noise; $X(n-m)$ and $Y(n-m)$ represent the past measured output and input vector values; and $M$ is the model order. The established heading set by the multivariate autoregressive exogenous model remains constant. However, the ship is influenced by consistent waves and wind conditions, and under severe wave and wind conditions, the heading angle can change. Four external environmental data types were selected in this study, and all the data were measured and acquired every 10 min, for a total of 20,935 data points. The types of data used in this study are listed in Table 3.
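A minimal simulation of the ARX relation above can be sketched as follows; the coefficient matrices, model order, and rudder commands are made-up illustrative values, not parameters identified from the ship's data:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 2                                       # model order (illustrative)
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.2, 0.0], [0.1, 0.1]])]    # coefficients on past states X(n-m)
B = [np.array([[0.3], [0.05]]),
     np.array([[0.1], [0.02]])]             # coefficients on past rudder inputs Y(n-m)

X = [np.zeros(2) for _ in range(M)]         # yaw/roll history, initialized to zero
Y = rng.normal(size=(100, 1)) * 0.1         # toy rudder command sequence
for n in range(M, 100):
    x_n = sum(A[m] @ X[n - 1 - m] for m in range(M)) \
        + sum(B[m] @ Y[n - 1 - m] for m in range(M)) \
        + rng.normal(size=2) * 0.01         # Gaussian white noise U(n)
    X.append(x_n)
```

With stable (small-eigenvalue) coefficient matrices, the simulated yaw/roll trajectory stays bounded while responding to the rudder inputs.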
The operational states of a ship can be classified into underway, standby, and inport. First, the underway state indicates that the ship is in motion, utilizing all equipment for navigation. It is characterized by minimal fluctuations in power load. Second, the standby state represents that the ship is in the process of entering or leaving the port. It is marked by significant variations in the total power consumption of the auxiliary devices, depending on the ship’s speed changes. Third, the inport state denotes that the ship is either loading or unloading cargo. During this time, both the power consumption and load variations are minimal due to the stationary situation of the ship. Figure 3 depicts the total power load of the ship measured in 10 min increments. Rapid changes in load were observed as the ship operated. Particularly during the ship’s entry and departure, significant changes were observed in the power consumption of the electrically operated steering gear and bow thrusters, leading to substantial fluctuations in the overall power load of the vessel.

3. Proposed Model

The MFE-LSTM model based on the skip connection model was developed to improve the performance of the existing ship power consumption prediction model. The proposed model is largely composed of a data input layer, multiple feature extraction, concatenation layer, LSTM layer, skip connection layer, dense layer, and forecasts of the ship’s power. Figure 4 illustrates the structure of the proposed MFE-LSTM model based on the skip connection model.
The model process is described as follows.
Step 1. Input layer
The collected ship data are subjected to data scaling and are input to the input layer to train the model. The MinMaxScaler [53] was used, which can be expressed as follows:
$$\mathrm{MinMaxScaler}: \quad x' = \frac{x - \min(x)}{\max(x) - \min(x)},$$
where $x'$ is the new value obtained by the MinMaxScaler; $x$ is the original input data; $\min(x)$ is the minimum value of the original data column; and $\max(x)$ is the maximum value of the original data column.
Step 2. MFE
MFE plays a key role in the proposed model. The ship’s power load data are characterized by large variations. Hence, extracting various features of the data improves the performance of the model, and three CNNs with different kernel sizes and filters are connected in parallel to extract various features of the ship’s power load data. Each CNN has the following structure: 1D convolution layer—1D convolution layer—1D convolution layer—1D convolution layer—max pooling layer—batch normalization. Each layer uses the ReLU as an activation function. The structure of each 1D convolutional layer of the CNN is presented in Table 4.
Next, the feature values output from the 1D convolution layers are down-sampled by max pooling, and batch normalization [54] is performed to prevent internal covariate shifts.
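The parallel-branch idea behind MFE can be sketched as follows; the kernel sizes and random weights here are assumptions for illustration only and do not reproduce the configuration in Table 4:

```python
import numpy as np

def branch(x, kernel_size, rng):
    """One CNN branch: 'same'-padded 1D convolution with a random kernel, then ReLU."""
    w = rng.normal(size=kernel_size) * 0.1
    pad = kernel_size // 2
    xp = np.pad(x, pad)
    out = np.array([np.dot(xp[i:i + kernel_size], w) for i in range(len(x))])
    return np.maximum(0.0, out)

rng = np.random.default_rng(0)
load = rng.normal(size=32)               # toy power-load window
kernel_sizes = (3, 5, 7)                 # three parallel branches, different receptive fields
features = [branch(load, k, rng) for k in kernel_sizes]
merged = np.concatenate(features)        # concatenation layer joins all branch outputs
```

Each kernel size captures load variations at a different time scale, and the concatenation layer passes all of them to the LSTM layer at once.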
Step 3. Concatenation layer
For the concatenation layer, the concatenation skip connection method is adopted. The advantage of this approach is that the feature values output from the MFE can all be utilized in the subsequent step.
Step 4. LSTM layer
The LSTM layer predicts the power load of the ship according to the input values from the concatenation layer and is composed of two LSTM layers with 1024 neurons each. The activation function is ReLU. The values output from the LSTM layer are input to the skip connection layer in the next step.
Step 5. Skip connection layer
The skip connection layer combines the ship’s power load feature values collected from the concatenation layer and the ship’s power load predicted using the LSTM layer into one vector using the concatenation skip connection method. The combined vector value is input to the dense layer. This method enables the next layer to utilize both the analyzed feature values of the ship’s power load and the power load values predicted by the LSTM layer.
Step 6. Dense layer
The dense layer analyzes the values input through the skip connection layer and predicts the power load value of the vessel. It consists of six fully connected layers with 1024, 512, 256, 128, 32, and 1 neurons, respectively. The dense layer can be expressed as follows:
$$\theta_1 = \mathrm{ReLU}\left(W_1 \cdot \mathrm{SkipConnectionLayer} + b_1\right),$$
$$\theta_n = \mathrm{ReLU}\left(W_n \theta_{n-1} + b_n\right),$$
where $\theta_1$ is the output of the first layer; $\mathrm{ReLU}$ is the activation function; $\mathrm{SkipConnectionLayer}$ is the vector input through the skip connection layer; and $W$ and $b$ are the weight and bias, respectively. Lastly, the predicted value of the ship’s power load is passed to inverse scaling.
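A forward pass through the six dense layers described above can be sketched as follows; the He-initialized random weights stand in for trained parameters, and the 2048-dimensional input is an arbitrary placeholder for the skip-connection vector:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
sizes = [1024, 512, 256, 128, 32, 1]     # neurons per dense layer, per the text

v = rng.normal(size=2048)                # placeholder skip-connection vector
in_dim = len(v)
for out_dim in sizes:
    W = rng.normal(size=(out_dim, in_dim)) * np.sqrt(2.0 / in_dim)  # He init (assumption)
    b = np.zeros(out_dim)
    v = relu(W @ v + b)                  # theta_n = ReLU(W_n theta_{n-1} + b_n)
    in_dim = out_dim
# v now holds the single scaled power-load prediction
```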
Step 7. Ship power forecasts and inverse scaling
Because the values input through the dense layer are output according to the data-scaled values, their predicted values are small. Consequently, inverse scaling must be performed to obtain the ship’s power value predicted by the model. Here, inverse scaling can be expressed as follows:
$$\mathrm{InverseScale}: \quad x = y\left(\max(x) - \min(x)\right) + \min(x),$$
where y is the ship’s power forecast value with MinMaxScaler applied, and x is the actual value obtained after inverse scaling.
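The MinMax scaling of Step 1 and the inverse scaling above form an exact round trip, as a small NumPy check shows (the load values below are made up for illustration):

```python
import numpy as np

power = np.array([3200.0, 4100.0, 5200.0, 4700.0])  # toy load values (kW)
lo, hi = power.min(), power.max()

scaled = (power - lo) / (hi - lo)        # MinMaxScaler (Step 1): values in [0, 1]
restored = scaled * (hi - lo) + lo       # inverse scaling (Step 7): original units
```

In practice, the minimum and maximum are computed on the training data and reused at prediction time, so the model's scaled outputs can always be mapped back to kilowatts.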

4. Methods

4.1. Model Training Process

The proposed MFE-LSTM model based on skip connections was trained using the data collected from the vessel. The data in the dataset were measured every 10 min, and the model utilized the previous 50 min of data to predict the next 10 min of the ship’s power load; hence, the time step was 5. To prevent the model from overfitting, the dataset was divided into training, validation, and test sets. The training and test sets accounted for 80% and 20% of the total dataset, respectively, and the training portion was further divided into 70% for training and 30% for validation.
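The windowing and splitting procedure can be sketched as follows; a simple integer ramp stands in for the measured 10 min load samples:

```python
import numpy as np

def make_windows(series, time_steps=5):
    """Turn a series into (past `time_steps` samples -> next sample) pairs."""
    X = np.array([series[i:i + time_steps] for i in range(len(series) - time_steps)])
    y = series[time_steps:]
    return X, y

series = np.arange(100, dtype=float)     # stand-in for the measured load series
X, y = make_windows(series)              # 5 steps x 10 min = 50 min of history

n_train = int(len(X) * 0.8)              # 80/20 train/test split
X_train, X_test = X[:n_train], X[n_train:]
n_fit = int(n_train * 0.7)               # 70/30 train/validation split
X_fit, X_val = X_train[:n_fit], X_train[n_fit:]
```

Each row of `X` holds five consecutive past samples, and the matching entry of `y` is the sample to predict.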

4.2. Evaluation Metrics

Four evaluation metrics were selected to evaluate the performance of the model. The selected metrics have characteristics that are commonly used in regression and forecasting models. They consist of mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), and R-squared (R2). The evaluation metrics can be expressed by the following equations:
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - \hat{y}_i\right|,$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2},$$
$$\mathrm{MAPE} = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \times 100,$$
$$R^2 = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}\right)^2},$$
where $N$ is the number of data points used to evaluate the model performance, $y_i$ is the $i$-th actual value, $\hat{y}_i$ is the $i$-th predicted value, and $\bar{y}$ is the average of the actual values. MAE is the average of the absolute differences between the predicted and actual values. RMSE is the square root of the average of the squared differences between the predicted and actual values. MAPE indicates the relative difference between the predicted and actual values by dividing the difference between each predicted value and actual value by the actual value, then taking the absolute values and averaging them. R2 indicates how well the model describes the data and can be used to determine the correlation between the predicted and actual values.
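The four metrics can be computed directly in NumPy (the actual and predicted values below are arbitrary examples):

```python
import numpy as np

def metrics(y, y_hat):
    """Compute MAE, RMSE, MAPE (%), and R-squared."""
    err = y - y_hat
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y)) * 100
    r2 = 1 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)
    return mae, rmse, mape, r2

y = np.array([100.0, 200.0, 300.0])       # example actual loads
y_hat = np.array([110.0, 190.0, 330.0])   # example predictions
mae, rmse, mape, r2 = metrics(y, y_hat)
```

Note that lower is better for MAE, RMSE, and MAPE, whereas a value of R-squared closer to 1 is better.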

4.3. Model Performance Evaluation Process

The objective performance of the model was evaluated using the MAE, RMSE, MAPE, and $R^2$ metrics, under the following conditions:
  • The CNN-LSTM neural network with the SE model and DFF model, which are reputed to be the best predictors of the ship’s power load among the existing models, was selected for the performance evaluation comparison of the proposed MFE-LSTM model based on skip connections.
  • The same data were used for training and evaluation of the model, and the training was repeated five times.
  • The experimental results of the proposed model and the comparison model were compared in terms of the selected evaluation metrics, and a detailed comparative evaluation was performed using the maximum, average, and minimum performance results.

5. Results and Discussion

In this section, the performance of the proposed model is assessed by comparing it with the conventional models. The comparison experiments were conducted using Python and the TensorFlow library. First, the experimental results of each model were examined. Table 5 presents the results of five experiments conducted with the proposed MFE-LSTM model based on skip connections. The highest performance was an MAE of 47.19, an RMSE of 122.26, a MAPE of 3.01, and an $R^2$ of 0.87. The lowest performance was an MAE of 69.40, an RMSE of 130.98, a MAPE of 4.41, and an $R^2$ of 0.85. Lastly, the average MAE was 55.52, the average RMSE was 125.62, the average MAPE was 3.56, and the average $R^2$ was 0.86. These results indicate that the overall performance of the ship’s power load prediction was excellent.
Table 6 summarizes the results of five experiments for predicting the ship’s power load with the CNN-LSTM neural network with the SE model selected for performance evaluation comparison. The highest performance was an MAE of 78.28, an RMSE of 146.64, a MAPE of 5.18, and an $R^2$ of 0.82. The lowest performance was an MAE of 90.6, an RMSE of 148.96, a MAPE of 6.21, and an $R^2$ of 0.81. The average MAE was 85.64, the average RMSE was 149.02, the average MAPE was 5.75, and the average $R^2$ was 0.81.
Table 7 shows the results of five experiments for forecasting the ship’s power load with the DFF model for a comparison test. The highest performance was an MAE of 80.71, an RMSE of 152.65, a MAPE of 5.63, and an $R^2$ of 0.81. The lowest performance was an MAE of 91.75, an RMSE of 160.81, a MAPE of 6.47, and an $R^2$ of 0.78. The average MAE was 85.97, the average RMSE was 156, the average MAPE was 6.04, and the average $R^2$ was 0.79.
Figure 5, Figure 6 and Figure 7 illustrate the results of the ship’s power consumption prediction experiments using the MFE-LSTM model based on skip connections, the CNN-LSTM neural network with the SE model, and the DFF model. Overall, the proposed model tracked the ship’s power load most closely, whereas the comparison models achieved an inferior performance in following the rapid load fluctuations.
Table 8 summarizes a comparison of the best performances of the models. When the MFE-LSTM model based on skip connections was used to predict the ship’s power load, the model improved the ship’s power load prediction performance by an MAE of 31.09, an RMSE of 24.38, a MAPE of 2.14, and an $R^2$ of 0.05 compared to the CNN-LSTM neural network with SE, and the proposed model reinforced the prediction performance by an MAE of 33.52, an RMSE of 30.39, a MAPE of 2.62, and an $R^2$ of 0.06 compared to the DFF model.
Table 9 summarizes a comparison of the lowest performances of the models. Compared to the performance of the CNN-LSTM neural network with the SE model, the performance of the MFE-LSTM model based on skip connections was improved by an MAE of 21.2, an RMSE of 17.98, a MAPE of 1.8, and an $R^2$ of 0.04 in predicting the ship’s power load, and the proposed model improved the performance by an MAE of 22.35, an RMSE of 29.83, a MAPE of 2.06, and an $R^2$ of 0.07 compared to the DFF model.
Table 10 summarizes a comparison of the average performances of the models. The MFE-LSTM model based on skip connections improved the performance of the prediction of the ship’s power load by an MAE of 30.13, an RMSE of 23.4, a MAPE of 2.19, and an $R^2$ of 0.05 compared to the CNN-LSTM neural network with SE, and the proposed model improved the performance by an MAE of 30.45, an RMSE of 30.38, a MAPE of 2.48, and an $R^2$ of 0.07 compared to the DFF model.
The results of the experimental analysis reveal that the proposed model, MFE-LSTM using skip connections, obtained the lowest RMSE, MAE, and MAPE values and the highest $R^2$ value. In particular, the proposed model demonstrated its capability to extract and analyze significantly more intricate load data from ships compared to the existing models. Therefore, it can be concluded that the proposed model is the most effective in forecasting the complex variations in the power loads of ships.

6. Conclusions

This paper describes an MFE-LSTM model based on skip connections that is capable of comprehensively extracting various features from a ship’s power load. The proposed model leverages the advantages of LSTM and CNN structures to address the large variations in the power loads of ships. LSTM excels at managing time series data, and CNN models are better suited for extracting features from data. Furthermore, the skip connection layer preserves the information in the input data to prevent the performance degradation of the model. The performance of the model was compared with previous models, i.e., the CNN-LSTM neural network with the SE model and the DFF model. The results of the comparative test indicate that the proposed model outperformed the others, with average values of MAE = 55.52, RMSE = 125.62, MAPE = 3.56, and $R^2$ = 0.86. The main conclusions drawn from this study are summarized as follows:
  • The ship’s power load prediction performance was improved by extracting various features of the ship’s power load using MFE.
  • The skip connection layer combines the feature values from the MFE concatenation layer with the ship’s power load predicted by the LSTM into a single vector. Consequently, the MFE-LSTM model based on skip connections excels in predicting the intricate dynamics of the ship’s power load compared to the conventional models.
  • A dedicated feature extraction model for predicting heavy loads intermittently is required to improve the performance of intermittent heavy load prediction.
This study demonstrated that multiple feature extraction models can extract and analyze various features from the data to provide an improved forecasting of the power load performance of vessels. However, the power load prediction performance of the heavy load generated by ships was insufficient. Therefore, in forthcoming research, alternative deep learning models will be explored to improve the power load prediction performance for heavy loads in ships.

Author Contributions

Conceptualization, methodology, and software, J.-Y.K.; project administration, funding acquisition, J.-S.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Technology Innovation Program (RS-2023-00252794, Development of Efficient Power Energy Management System and Verification for Hybrid Propulsion Ship) funded by the Ministry of Trade, Industry, and Energy of Korea.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eide, M.S.; Dalsøren, S.B.; Endresen, Ø.; Samset, B.; Myhre, G.; Fuglestvedt, J.; Berntsen, T. Reducing CO2 from shipping–do non-CO2 effects matter? ACP 2013, 13, 4183–4201. [Google Scholar] [CrossRef]
  2. Eyring, V.; Isaksen, I.S.; Berntsen, T.; Collins, W.J.; Corbett, J.J.; Endresen, O.; Grainger, R.G.; Moldanova, J.; Schlager, H.; Stevenson, D.S. Transport impacts on atmosphere and climate: Shipping. Atmos. Environ. 2010, 44, 4735–4771. [Google Scholar] [CrossRef]
  3. Buhaug, Ø.; Corbett, J.; Endresen, Ø.; Eyring, V.; Faber, J.; Hanayama, S.; Lee, D.S.; Lee, D.; Lindstad, H.; Markowska, A.Z.; et al. Second IMO GHG Study 2009. Available online: https://wwwcdn.imo.org/localresources/en/OurWork/Environment/Documents/SecondIMOGHGStudy2009.pdf (accessed on 23 April 2023).
  4. Jeong, B.; Jeon, H.; Kim, S.; Kim, J.; Zhou, P. Evaluation of the Lifecycle Environmental Benefits of Full Battery Powered Ships: Comparative Analysis of Marine Diesel and Electricity. J. Mar. Sci. Eng. 2020, 8, 580. [Google Scholar] [CrossRef]
  5. Nguyen, H.P.; Hoang, A.T.; Nizetic, S.; Nguyen, X.P.; Le, A.T.; Luong, C.N.; Chu, V.D.; Pham, V.V. The electric propulsion system as a green solution for management strategy of CO2 emission in ocean shipping: A comprehensive review. Int. Trans. Electr. Energy Syst. 2021, 31, e12580. [Google Scholar] [CrossRef]
  6. Yang, C.; Huang, F. An overview of simulation-based hydrodynamic design of ship hull forms. J. Hydrodyn. 2016, 12, 28. [Google Scholar] [CrossRef]
  7. Liu, Z.; Zhao, W.; Wan, D. Resistance and wake distortion optimization of JBC considering ship-propeller interaction. Ocean Eng. 2022, 244, 110376. [Google Scholar] [CrossRef]
  8. Wu, X.; Feng, Y.H.; Xu, G.; Zhu, Y.; Ming, P.; Dai, L. Numerical investigations on charge motion and combustion of natural gas-enhanced ammonia in marine pre-chamber lean-burn engine with dual-fuel combustion system. Int. J. Hydrogen Energy 2023, 48, 11476–11492. [Google Scholar] [CrossRef]
  9. Zhang, B.; Jiang, Y.; Chen, Y. Research on Calibration, Economy and PM Emissions of a Marine LNG–Diesel Dual-Fuel Engine. J. Mar. Sci. Eng. 2022, 10, 239. Available online: https://www.mdpi.com/2077-1312/10/2/239 (accessed on 24 April 2023). [CrossRef]
  10. Sezen, S.; Uzun, D.; Ozyurt, R.; Turan, O.; Atlar, M. Effect of biofouling roughness on a marine propeller’s performance including cavitation and underwater radiated noise (URN). Appl. Ocean Res. 2021, 107, 102491. [Google Scholar] [CrossRef]
  11. Han, S.; Wang, P.; Jin, Z.; An, X.; Xia, H. Structural design of the composite blades for a marine ducted propeller based on a two-way fluid-structure interaction method. Ocean Eng. 2022, 259, 111872. [Google Scholar] [CrossRef]
  12. Tang, X.; Wu, C.; Xu, X. Learning-Based Nonlinear Model Predictive Controller for Hydraulic Cylinder Control of Ship Steering System. J. Mar. Sci. Eng. 2022, 10, 2033. [Google Scholar] [CrossRef]
  13. Tadros, M.; Ventura, M.; Guedes, S.C. Review of current regulations, available technologies, and future trends in the green shipping industry. Ocean Eng. 2023, 280, 114670. [Google Scholar] [CrossRef]
  14. Yang, N.; Deng, X.; Liu, B.; Li, L.; Li, Y.; Li, P.; Tang, M.; Wu, L. Combustion Performance and Emission Characteristics of Marine Engine Burning with Different Biodiesel. Energies 2022, 15, 5177. [Google Scholar] [CrossRef]
  15. Irena, K.; Ernst, W.; Alexandros, C.G. The cost-effectiveness of CO2 mitigation measures for the decarbonisation of shipping. The case study of a globally operating ship-management company. J. Clean. Prod. 2021, 316, 128094. [Google Scholar] [CrossRef]
  16. Konur, O.; Yuksel, O.; Korkmaz, S.A.; Colpan, C.O.; Saatcioglu, O.Y.; Muslu, I. Thermal design and analysis of an organic rankine cycle system utilizing the main engine and cargo oil pump turbine based waste heats in a large tanker ship. J. Clean. Prod. 2022, 368, 133230. [Google Scholar] [CrossRef]
  17. Ulissi, G.; Kim, S.; Dujic, D. Solid-State Technology for Shipboard DC Power Distribution Networks. IEEE Trans. Ind. Electron. 2021, 12, 68. [Google Scholar] [CrossRef]
  18. Nzualo, T.N.M.; de Oliveira, C.E.F.; Pérez, T.O.A.; Eduardo, G.G.; Rosman, P.C.C.; Qassim, R.Y. Ship speed optimisation in green approach to tidal ports. Appl. Ocean Res. 2021, 8, 115. [Google Scholar] [CrossRef]
  19. Tadros, M.; Vettor, R.; Ventura, M.; Soares, C.G. Coupled Engine-Propeller Selection Procedure to Minimize Fuel Consumption at a Specified Speed. J. Mar. Sci. Eng. 2021, 9, 59. [Google Scholar] [CrossRef]
  20. Ran, P.; Dong, K.; Liu, X.; Wang, J. Short-term load forecasting based on CEEMDAN and Transformer. Electr. Power Syst. Res. 2023, 214, 108885. [Google Scholar] [CrossRef]
  21. Lee, Y.G.; Oh, J.Y.; Kim, D.; Kim, G. Shap value-based feature importance analysis for short-term load forecasting. J. Electr. Eng. Technol. 2023, 18, 579–588. [Google Scholar] [CrossRef]
  22. Lan, H.; Gao, J.; Hong, Y.Y.; Yin, H. Interval forecasting of photovoltaic power generation on green ship under Multi-factors coupling. Sustain. Energy Technol. Assess. 2023, 56, 103088. [Google Scholar] [CrossRef]
  23. Sekhar, C.; Dahiya, R. Robust framework based on hybrid deep learning approach for short term load forecasting of building electricity demand. Energy 2023, 268, 126660. [Google Scholar] [CrossRef]
  24. Kim, J.-Y.; Oh, J.-S. Electric consumption forecast for ships using multivariate Bayesian optimization-SE-CNN-LSTM. J. Mar. Sci. Eng. 2023, 11, 292. [Google Scholar] [CrossRef]
  25. Kim, J.-Y.; Lee, J.-H.; Oh, J.-H.; Oh, J.-S. A comparative study on energy consumption forecast methods for electric propulsion ship. J. Mar. Sci. Eng. 2022, 10, 32. [Google Scholar] [CrossRef]
  26. Lee, J.-B.; Roh, M.-I.; Kim, K.-S. Prediction of ship power based on variation in deep feed-forward neural network. Int. J. Nav. Archit. Ocean Eng. 2021, 8, 641–649. [Google Scholar] [CrossRef]
  27. Elkafas, A.G.; Elgohary, M.M.; Zeid, A.E. Numerical study on the hydrodynamic drag force of a container ship model. Alex. Eng. J. 2019, 58, 849–859. [Google Scholar] [CrossRef]
  28. Bakar, N.N.A.; Bazmohammadi, N.; Çimen, H.; Uyanik, T.; Vasquez, J.C.; Guerrero, J.M. Data-driven ship berthing forecasting for cold ironing in maritime transportation. Appl. Energy 2022, 326, 119947. [Google Scholar] [CrossRef]
  29. Kim, J.H.; Choi, J.E.; Choi, B.J.; Chung, S.H. Twisted rudder for reducing fuel-oil consumption. Int. J. Nav. Archit. Ocean Eng. 2014, 6, 715–722. [Google Scholar] [CrossRef]
  30. Tran, V.T.; Nguyen, H.T.; Hoang, T.X.; Nguyen, T.M.H.; Cu, X.T.; Nguyen, V.P. An optimal autopilot for ships using a regressive exogenous model. In Proceedings of the IEEE International Symposium on Communications and Information Technology, ISCIT 2004, Sapporo, Japan, 26–29 October 2004; Volume 2, pp. 913–918. [Google Scholar]
  31. Shiraishi, K.E.; Ono, Y.O.; Yamashita, Y.U.; Sakamoto, M.U. Energy savings through electric-assist turbocharger for marine diesel engines. Mitsubishi Heavy Ind. Tech. Rev. 2015, 52, 36–41. [Google Scholar]
  32. Kim, M.; Hizir, O.; Turan, O.; Day, S.; Incecik, A. Estimation of added resistance and ship speed loss in a seaway. Ocean Eng. 2017, 141, 465–476. [Google Scholar] [CrossRef]
  33. Kim, J.; Park, I.R.; Kim, K.S.; Van, S.H.; Kim, Y.C. Development of a numerical method for the evaluation of ship resistance and self-propulsion performances. J. Soc. Nav. Archit. Korea 2011, 48, 147–157. [Google Scholar] [CrossRef]
  34. Lee, J.-H.; Oh, J.-H.; Oh, J.-S. Application of generator capacity design technique considering the operational characteristics of container ships. Electronics 2022, 11, 1703. [Google Scholar] [CrossRef]
  35. Dickinson, E.D. Electric auxiliaries on merchant ships. J. Am. Inst. Electr. Eng. 1921, 40, 777–785. [Google Scholar] [CrossRef]
  36. Tarasiuk, T.; Pilat, A.; Szweda, M.; Gorniak, M.; Troka, Z. Experimental study on impact of ship electric power plant configuration and load variation on power quality in the ship power systems. In Proceedings of the World Congress on Engineering 2014, WCE 2014, London, UK, 2–4 July 2014; Volume I. [Google Scholar]
  37. Lee, K.J.; Shin, D.; Yoo, D.W.; Choi, H.K.; Kim, H.J. Hybrid photovoltaic/diesel green ship operating in standalone and grid-connected mode–Experimental investigation. Energy 2013, 49, 475–483. [Google Scholar] [CrossRef]
  38. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  39. Tran, D.; Wang, H.; Torresani, L.; Ray, J.; LeCun, Y.; Paluri, M. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6450–6459. [Google Scholar]
  40. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  41. Wu, Y.; Yang, F.; Liu, Y.; Zha, X.; Yuan, S. A comparison of 1-D and 2-D deep convolutional neural networks in ECG classification. arXiv 2018, arXiv:1810.07088. [Google Scholar]
  42. Saini, M.; Satija, U.; Upadhayay, M.D. Light-weight 1-D convolutional neural network architecture for mental task identification and classification based on single-channel EEG. arXiv 2020, arXiv:2012.06782. [Google Scholar]
  43. Li, T.; Hua, M.; Wu, X.U. A hybrid CNN-LSTM model for forecasting particulate matter (PM2.5). IEEE Access 2020, 8, 26933–26940. [Google Scholar] [CrossRef]
  44. Rajabi, R.; Estebsari, A. Deep learning based forecasting of individual residential loads using recurrence plots. In Proceedings of the 2019 IEEE Milan PowerTech, Milan, Italy, 23–27 June 2019; pp. 1–5. [Google Scholar]
  45. Kaligambe, A.; Fujita, G. Short-term load forecasting for commercial buildings using 1D convolutional neural networks. In Proceedings of the 2020 IEEE PES/IAS PowerAfrica, Nairobi, Kenya, 25–28 August 2020; pp. 1–5. [Google Scholar]
  46. Sarveswararao, V.; Ravi, V.; Vivek, Y. ATM cash demand forecasting in an Indian bank with chaos and hybrid deep learning networks. Expert Syst. Appl. 2023, 211, 118645. [Google Scholar] [CrossRef]
  47. Agarap, A.F. Deep learning using rectified linear units (relu). arXiv 2018, arXiv:1803.08375. [Google Scholar]
  48. Wu, H.; Gu, X. Max-Pooling Dropout for Regularization of Convolutional Neural Networks. In Neural Information Processing, Proceedings of the 22nd International Conference, ICONIP 2015, Istanbul, Turkey, 9–12 November 2015; Springer: Berlin/Heidelberg, Germany, 2015; Part I, Volume 22, pp. 46–54. [Google Scholar]
  49. Zhao, J.; Mao, X.; Chen, L. Learning deep features to recognise speech emotion using merged deep CNN. IET Signal Process. 2018, 12, 713–721. [Google Scholar] [CrossRef]
  50. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  51. Roy, S.K.; Manna, S.; Dubey, S.R.; Chaudhuri, B.B. LiSHT: Non-parametric linearly scaled hyperbolic tangent activation function for neural networks. In Proceedings of the International Conference on Computer Vision and Image Processing 2022, Nagpur, India, 4–6 November 2022; pp. 462–476. [Google Scholar]
  52. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  53. Patro, S.G.; Sahu, K.K. Normalization: A preprocessing stage. arXiv 2015, arXiv:1503.06462. [Google Scholar] [CrossRef]
  54. Zhu, Q.; He, Z.; Zhang, T.; Cui, W. Improving classification performance of softmax loss function based on scalable batch-normalization. Appl. Sci. 2020, 10, 2950. [Google Scholar] [CrossRef]
Figure 1. Structure of an LSTM cell.
Figure 2. Structures of addition and concatenation skip connections.
Figure 3. Total power load of vessel measured in 10 min intervals.
Figure 4. Structure of the MFE-LSTM model based on skip connections.
Figure 5. MFE-LSTM using skip connections.
Figure 6. CNN-LSTM neural network with SE.
Figure 7. DFF.
Table 1. Nomenclature used in this study.
Nomenclature  Definition
MFE           Multiple Feature Extraction
LSTM          Long Short-Term Memory
SE            Squeeze and Excitation
DFF           Deep Feed-Forward
CNN           Convolutional Neural Network
RELU          Rectified Linear Unit
TEU           Twenty-foot Equivalent Unit
MAE           Mean Absolute Error
RMSE          Root Mean Squared Error
MAPE          Mean Absolute Percentage Error
R2            R-squared
Table 2. Detailed specifications of the vessel.
Vessel Type      Container Ship
Length           303 m
Breadth          40 m
Draught          12.1 m
Engine power     91,886 BHP
Generator power  3800 kW × 4
TEU              6800 TEU
Table 3. Data used in experiments.
Type of Data                       Unit
Electric Load                      kW
No. 1 Diesel Generator Power       kW
No. 2 Diesel Generator Power       kW
No. 3 Diesel Generator Power       kW
No. 4 Diesel Generator Power       kW
No. 1 Reefer Container Group Load  kW
No. 2 Reefer Container Group Load  kW
No. 3 Reefer Container Group Load  kW
No. 4 Reefer Container Group Load  kW
No. 5 Reefer Container Group Load  kW
No. 6 Reefer Container Group Load  kW
Rudder Angle                       Degree
Water Speed                        m·s−1
Wind Speed                         m·s−1
Wind Angle                         Degree
Table 4. Structures of 1D convolution layers of CNN models.
Number of CNN  Structure of Each 1D Convolution Layer (Filters, Kernel Size)
CNN1           (256, 128)—(128, 64)—(64, 32)
CNN2           (512, 256)—(256, 128)—(128, 64)
CNN3           (1024, 512)—(512, 256)—(256, 128)
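The (filters, kernel size) stacks in Table 4 determine how the sequence length shrinks through each 1D convolution layer. The sketch below illustrates this with the standard output-length formula; it is not code from the paper, and the input length of 512 is a hypothetical value chosen for illustration (assuming "valid" padding and stride 1).

```python
def conv1d_output_length(input_length, kernel_size, stride=1, padding=0):
    """Output length of a 1D convolution (standard floor formula)."""
    return (input_length + 2 * padding - kernel_size) // stride + 1

# CNN1 stack from Table 4: one (filters, kernel size) pair per layer
cnn1 = [(256, 128), (128, 64), (64, 32)]

length = 512  # hypothetical input sequence length (not stated in the paper)
for filters, kernel in cnn1:
    length = conv1d_output_length(length, kernel)
    print(filters, length)  # prints: 256 385, then 128 322, then 64 291
```

With large kernels such as 128, each layer trims a substantial slice off the sequence, which is one reason the three CNN branches extract features at different temporal scales.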
Table 5. Experimental results of MFE-LSTM model based on skip connections.
Evaluation Metric  1st     2nd     3rd     4th     5th
MAE                47.19   51.17   69.40   55.92   53.93
RMSE               122.26  125.07  130.98  123.64  126.19
MAPE               3.01    3.48    4.41    3.49    3.43
R2                 0.87    0.87    0.85    0.87    0.86
Table 6. Experimental results of CNN-LSTM neural network with SE model.
Evaluation Metric  1st     2nd     3rd     4th     5th
MAE                86.64   90.60   90.60   78.28   81.16
RMSE               151.53  148.96  148.96  146.64  149.01
MAPE               5.68    6.21    6.21    5.18    5.47
R2                 0.81    0.81    0.81    0.82    0.81
Table 7. Experimental results of DFF model.
Evaluation Metric  1st     2nd     3rd     4th     5th
MAE                86.13   86.56   80.71   84.72   91.75
RMSE               155.33  156.30  152.65  154.92  160.81
MAPE               6.12    6.14    5.63    5.83    6.47
R2                 0.80    0.79    0.81    0.80    0.78
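For reference, the four evaluation metrics reported in Tables 5–10 can be computed from true and predicted load series as follows. This is a plain-Python sketch of the standard definitions, not code from the paper; the toy `y`/`p` values are purely illustrative.

```python
import math

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mape(y, p):
    """Mean absolute percentage error (percent); assumes no true value is zero."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, p)) / len(y)

def r2(y, p):
    """Coefficient of determination (R-squared)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

y = [100.0, 200.0, 300.0, 400.0]  # illustrative true loads (kW)
p = [110.0, 190.0, 310.0, 390.0]  # illustrative predictions
print(mae(y, p), rmse(y, p))      # prints: 10.0 10.0
```

Note that RMSE penalizes large deviations more heavily than MAE, which is why the gap between the two is informative: the ship's load series contains occasional sharp transients, and a model with a much larger RMSE than MAE is missing those spikes.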
Table 8. Best performance comparison of the models.
Model                            MAE    RMSE    MAPE  R2
MFE-LSTM using skip connection   47.19  122.26  3.01  0.87
CNN-LSTM neural network with SE  78.28  146.64  5.15  0.82
DFF                              80.71  152.65  5.63  0.81
Table 9. Worst performance comparison of the models.
Model                            MAE    RMSE    MAPE  R2
MFE-LSTM using skip connection   69.40  130.98  4.41  0.85
CNN-LSTM neural network with SE  90.60  148.96  6.21  0.81
DFF                              91.75  160.81  6.47  0.78
Table 10. Average performance comparison of the models.
Model                            MAE    RMSE    MAPE  R2
MFE-LSTM using skip connection   55.52  125.62  3.56  0.86
CNN-LSTM neural network with SE  85.65  149.02  5.75  0.81
DFF                              85.97  156.00  6.04  0.79
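The Table 10 averages follow directly from the per-run scores in Tables 5–7. As a quick sanity check (not code from the paper), averaging the MFE-LSTM values transcribed from Table 5 reproduces the corresponding Table 10 row to within rounding:

```python
# Per-run MFE-LSTM scores transcribed from Table 5 (five training runs)
mfe_runs = {
    "MAE":  [47.19, 51.17, 69.40, 55.92, 53.93],
    "RMSE": [122.26, 125.07, 130.98, 123.64, 126.19],
    "MAPE": [3.01, 3.48, 4.41, 3.49, 3.43],
    "R2":   [0.87, 0.87, 0.85, 0.87, 0.86],
}

# Average each metric over the five runs
averages = {name: sum(scores) / len(scores) for name, scores in mfe_runs.items()}
```

Here `averages["MAE"]` comes out to about 55.52 and `averages["MAPE"]` to about 3.56, matching Table 10 to within 0.01.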
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kim, J.-Y.; Oh, J.-S. Multiple Feature Extraction Long Short-Term Memory Using Skip Connections for Ship Electricity Forecasting. J. Mar. Sci. Eng. 2023, 11, 1690. https://doi.org/10.3390/jmse11091690

AMA Style

Kim J-Y, Oh J-S. Multiple Feature Extraction Long Short-Term Memory Using Skip Connections for Ship Electricity Forecasting. Journal of Marine Science and Engineering. 2023; 11(9):1690. https://doi.org/10.3390/jmse11091690

Chicago/Turabian Style

Kim, Ji-Yoon, and Jin-Seok Oh. 2023. "Multiple Feature Extraction Long Short-Term Memory Using Skip Connections for Ship Electricity Forecasting" Journal of Marine Science and Engineering 11, no. 9: 1690. https://doi.org/10.3390/jmse11091690

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
