Article

A Slow Failure Particle Swarm Optimization Long Short-Term Memory for Significant Wave Height Prediction

1 Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan 430205, China
2 School of Information Engineering, Hubei University of Economics, Wuhan 430205, China
3 Hubei Internet Finance Information Engineering Technology Research Center, Hubei University of Economics, Wuhan 430205, China
4 Faculty of Computer and Information Sciences, Hosei University, Tokyo 184-8584, Japan
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(8), 1359; https://doi.org/10.3390/jmse12081359
Submission received: 15 July 2024 / Revised: 5 August 2024 / Accepted: 8 August 2024 / Published: 9 August 2024
(This article belongs to the Section Physical Oceanography)

Abstract

Significant wave height (SWH) prediction is crucial for marine safety and navigation. A slow failure particle swarm optimization for long short-term memory (SFPSO-LSTM) is proposed to enhance SWH prediction accuracy. This study utilizes data from four locations within the ERA5 dataset, covering 1 January to 31 May 2023, including variables such as wind components, dewpoint temperature, sea level pressure, and sea surface temperature. These variables are used to predict SWH at 1-h, 3-h, 6-h, and 12-h intervals. SFPSO optimizes the LSTM training process. Evaluated with R2, MAE, RMSE, and MAPE, SFPSO-LSTM outperformed the control group in 13 out of 16 experiments. Specifically, the model achieved an optimal RMSE of 0.059, a reduction of 0.009, an R2 increase to 0.991, an MAE of 0.045, and an MAPE of 0.032. Our results demonstrate that SFPSO-LSTM provides reliable and accurate SWH predictions, underscoring its potential for practical applications in marine and atmospheric sciences.

1. Introduction

The prediction of ocean wave heights has long been an important topic in the fields of ocean climate and engineering technology. The significance of ocean wave height prediction lies in protecting the safety of humans and marine ecosystems and in providing a basis for decision-making in related industries and activities [1]. High waves may pose a threat to marine navigation and to the lives of coastal residents and travelers [2,3]. Accurate prediction of high waves can help people avoid water activities in dangerous sea conditions and reduce drowning accidents and maritime disasters. On the other hand, high waves have a significant impact on the safety of commercial, fishing, and other vessels [4]. Shipping companies and vessel operators can plan routes, adjust vessel speeds, or take other precautions based on high wave forecasts to reduce the risk of collisions, capsizes, and other accidents [5,6]. When constructing projects such as seawalls, harbors, offshore wind farms, and oil and gas platforms, the frequency and intensity of high waves need to be predicted to ensure the safety and reliability of the structures.
The impact of high waves on marine ecosystems is also important. Powerful waves can damage coral reefs, coastal wetlands, and other ecosystems, negatively impacting the survival and reproduction of marine life [7]. Accurate high wave prediction can help protect these fragile ecosystems by enabling appropriate measures to mitigate the impact of high waves on biodiversity and ecological balance. In summary, the significance of marine high wave prediction is to protect human safety, ensure the safety of shipping, guide coastal engineering projects, assist in flood control and disaster management, and protect the health of marine ecosystems [8]. This predictive information can provide relevant stakeholders with a basis for decision-making to reduce risks.
Many researchers have made outstanding contributions to the study of ocean wave height prediction. These contributions can be classified into three major groups: numerical models, machine learning (ML) models, and hybrid models (a combination of the two). The first category comprises numerical prediction methods, which are widely used for global ocean state prediction. These models obtain wave height, period, and other quantities by solving the wave spectral equations that describe ocean physical processes. For example, Simanesew [9] employed laboratory and numerical experiments to investigate the propagation of waves in both long- and short-crested wave fields in deep water. Lu [10] built a predictive model for the propagation of internal solitary waves (ISWs) in the southern Andaman Sea through deep learning using simulated data. Demetriou [11] proposed an alternative approach that combines meteorological and structural data to forecast significant wave heights in the coastal zone when training supervised machine learning models. Huang [12] proposed a decomposition method combined with a long short-term memory network to forecast significant wave heights. Tang [13] applied the particle swarm algorithm to a wave compensation system; the superiority of the applied method was verified, providing a new reference for subsequent research on wave compensation control systems. Son [14] calibrated and optimized the SWAN third-generation numerical wave model; the results show that the ST6 model has smaller root mean square errors and higher correlation coefficients than the default model for significant wave height prediction. Altunkaynak [15] proposed a spatial interpolation technique based on geostatistical theory called Slope Point Cumulative Semi-Variogram (SPCSV), which was used to predict significant wave heights at 22 stations in the Pacific Ocean off the west coast of the United States. Huchet [16] proposed a new deterministic wave prediction method that uses horizontal velocity profiles over the water column as boundary conditions for a dedicated nonlinear wave model. Çelik [17] presented an SVD-based fine-grained algorithm for decomposing raw data into relevant features with hierarchical energy, and developed the SVD-fuzzy model by combining the Adaptive Neuro-Fuzzy Inference System (ANFIS) with the proposed algorithm. Yang [18] set up 31 research coordinate points in the Bohai Sea for assessing the potential and trend of wave energy flux (WEF); the results provide ocean parameter characterization for wave energy design and deployment. Kim [19] presented a comprehensive study of deterministic real-time wave forecasting in directional seas, achieving a good balance between computational efficiency and model accuracy by using a wave model based on a Lagrangian description. Son [20] presented a procedure for improving wave forecasts in marginal seas by correcting numerical sea wind products (i.e., ERA5 wind reanalysis data around the Korean Peninsula in this case); the procedure can be applied to other marginal seas where the sea wind is strongly influenced by neighboring continents and is not well reproduced by numerical methods. The development of accurate, real-time weakly nonlinear multidirectional wave field prediction models, combined with control strategies that utilize the predictions to increase absorbed power, will enable significant reductions in the cost of wave energy [21].
However, it is difficult to describe the complex and variable ocean environment in a comprehensive and detailed way; in particular, the numerical prediction of waves under extreme ocean conditions remains unsatisfactory.
The second category is machine learning-related methods. These methods have been used for wave height prediction and have shown potential for predicting future wave heights by learning from long-term accurate wave height measurements. Zhan [22] proposed a stacked ensemble learning method to improve the prediction accuracy of wave height. Sadeghifar [23] examined the capability of artificial neural networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), M5P, and Random Forest (RF) soft computing methods for wave height prediction in the Persian Gulf. Meng [24] presented a bidirectional gated recurrent unit (BiGRU) network for predicting wave heights during tropical cyclones (TCs). Meng [25] proposed a new method for long-term accurate prediction of ocean wave trains using the long short-term memory (LSTM) method. Hao [26] proposed a deep neural network model, CBA-Net, for regional SWH forecasting based on the attention mechanism. Zhang [27] introduced a novel framework for ocean wave height sequence prediction called numerical long short-term memory (N-LSTM); the experimental results verify that N-LSTM effectively improves the accuracy of SWAN numerical wave height prediction for the Bohai Sea and Wheat Island. Adytia [28] proposed a deep learning-based downscaling method to predict significant wave heights in coastal areas from global wave forecast data. Wang [29] used the multivariate sine function decomposition neural network (MSFDNN) to forecast the monthly SWH for the next 10 years; the prediction results show that MSFDNN performs well in forecasting the monthly SWH. Dakar [30] presented an artificial neural network (ANN) system consisting of two sub-networks for predicting hourly significant wave heights at a specific wave measurement station on the central Mediterranean coast of Israel (Hadera). Li [31] proposed a deep neural network model, CLTS-Net, for multivariate time-series SWH prediction. Fu [32] proposed an innovative hybrid model to predict wave heights, combining a two-layer decomposition framework and long short-term memory (LSTM). Adytia [33] utilized the deep learning technique of BiLSTM for wave forecasting to save computational time and make accurate predictions. Jörges [34] developed a novel deep convolutional neural network (CNN) for 2D mixed data for spatial SWH prediction in the nearshore region of Norderney, Germany. Overall, the limitation of this category of methods is that their application has largely been restricted to the United States, Germany, or other specific coastal regions, and they may need to be used in conjunction with other models and observations to improve prediction accuracy.
The last category of methods is the hybrid method, which mixes two different methods together and has been the trend in recent years for wave height prediction. Ma [35] proposed a wave height forecasting algorithm combining the numerical weather prediction model WRF with the deep learning model WRF-CLSF (Convolution-LSTM-FC). Feng [36] compared the SWH prediction performance of RNN, LSTM, and GRU; the results show that the LSTM and GRU networks based on the gating mechanism outperform the traditional RNN network, and that the performance of the LSTM and GRU networks is comparable. Zhou [37] developed a two-dimensional (2D) significant wave height (SWH) forecast model for the South China Sea and East China Sea based on the ConvLSTM algorithm using WaveWatch III reanalysis data; the results show that the application of ConvLSTM in 2D wave forecasting has high accuracy and efficiency. Song [38] proposed a regional significant wave height prediction model with high temporal and spatial resolution based on the ConvLSTM algorithm. Altunkaynak [39] proposed a novel hybrid model combining singular spectrum analysis and fuzzy logic, based on previous work [15]; this method mixes mathematical decomposition techniques and soft computing methods and uses them successfully to predict significant wave height (SWH) data. A data-driven approach combining mathematical interpolation techniques and statistical analysis (EOF analysis) was also developed, with a focus on improving the efficiency and accuracy of the SWH monitoring network by optimizing buoy positions [40]; the results show that this approach enables effective monitoring of SWH while reducing the number of buoys required and the corresponding costs.
Particle swarm optimization (PSO) is an optimization algorithm based on swarm intelligence, which searches for the optimal solution by simulating the foraging behavior of a flock of birds. Guo [41] introduced a novel hermit crab optimization algorithm (HCOA); the results show that HCOA exhibits high accuracy and robustness on high-dimensional optimization problems. Guo [42] presented the twinning bare bones particle swarm optimization (TBBPSO) algorithm; the results demonstrate that the proposed method can provide high-accuracy results for various types of optimization problems. Guo [43] proposed BPSO-CM, whose results demonstrate that it can provide high-accuracy results for global optimization problems and can better capture the complex characteristics of ocean waves, thereby improving the performance of wave height prediction.
Current research on ocean wave prediction faces several limitations [44], including the scarcity and quality issues of data, the accuracy and adaptability of models, computational resource constraints, the complexity of multi-scale and multi-factor coupling, regional and seasonal variations, and the impact of human activities. These challenges highlight the need for improved data acquisition technologies, optimized model algorithms, enhanced computational capabilities, and a comprehensive understanding of multi-factor interactions and anthropogenic influences to achieve more accurate and reliable predictions.
The main contribution of this work can be summarized as follows:
Initially, this study introduces the slow failure particle swarm optimization (SFPSO) algorithm, which innovatively implements adaptive adjustment of particle search ranges based on iteration count. This approach effectively addresses the inherent limitations of traditional particle swarm optimization (PSO), notably its propensity for premature convergence to local optima due to restricted early-stage search ranges and diminished search precision in later stages.
Subsequently, the research progresses with the development of a long short-term memory (LSTM) neural network model, specifically tailored to forecast significant wave heights (SWHs) using extensive marine meteorological datasets. This advancement marks a notable leap in the field of predictive oceanography. In the concluding phase, the LSTM model undergoes a comprehensive optimization process, where the SFPSO algorithm is employed to refine its training parameters. This strategic optimization targets enhancing the LSTM’s predictive accuracy and efficiency in estimating SWH, thereby illustrating the synergistic potential of integrating SFPSO with sophisticated neural network models for applications in marine forecasting.
In the model training part, four distinct points, each situated in a unique marine environment, have been strategically selected. These locations encompass a spectrum of oceanic settings, ranging from enclosed seas to expansive open oceans, and extending from temperate zones to the polar regions. The diversity of these environments is crucial in ensuring the model’s robustness and adaptability across a wide array of maritime conditions. Additionally, the model encapsulates a broad range of latitudinal zones and oceanic conditions, resulting in a comprehensive data representation. This inclusiveness significantly enhances the predictive accuracy and generalizability of the model, providing it with a versatile applicability across varying oceanographic scenarios. The rest of this paper is organized as follows: Section 2 introduces the SFPSO-LSTM methods; Section 3 displays the experimental details and discussion; Section 4 presents the conclusion of this work.

2. Materials and Methods

2.1. Long Short-Term Memory

A traditional recurrent neural network (RNN) can handle simple time sequences effectively. However, RNNs struggle with more complex sequence problems: as sequences become longer, an RNN cannot retain much useful information, because its gradients are dominated by short-range dependencies while long-range gradients become small, making it difficult for the model to learn information at long distances. Gated recurrent unit (GRU) and long short-term memory (LSTM) models were therefore proposed to solve the vanishing gradient problem in RNNs by introducing gate mechanisms. Compared to the GRU model, the LSTM has a superior gate mechanism and thus handles long sequences better. The LSTM works through three main components: the forget gate, the input gate, and the output gate. The structure of the LSTM and the LSTM unit are shown in Figure 1.
The forget gate discards useless information from the current input and the previous state, as given by Equation (1).
$$p_t = [h_{t-1}, x_t], \qquad f_t = \sigma(W_f \cdot p_t + b_f), \qquad C_t^{(f)} = f_t \odot C_{t-1} \tag{1}$$
where $p_t$ is the input of the forget gate, consisting of the previous hidden state $h_{t-1}$ and the current input $x_t$; $W_f$ and $b_f$ are the weights and bias of the forget gate; $\sigma(\cdot)$ is the sigmoid activation function, an S-shaped curve with output in the range $[0,1]$; and $C_t^{(f)}$ is the cell state after passing through the forget gate.
The input gate updates the cell state by determining how much of the current information is admitted into the cell at the current moment, as given by Equation (2).
$$i_t = \sigma(W_i \cdot p_t + b_i), \qquad C_t^{(i)} = \tanh(W_C \cdot p_t + b_C), \qquad C_t = C_t^{(f)} + i_t \odot C_t^{(i)} \tag{2}$$
where $W_i$, $b_i$, and $i_t$ are the weights, bias, and memory factor of the input gate, respectively; $W_C$ and $b_C$ are the weights and bias of the candidate cell state; $\tanh(\cdot)$ is the tanh activation function; and $C_t^{(i)}$ and $C_t$ denote the candidate state produced by the input gate and the current cell state.
The output gate determines how much of the current cell's information is passed to the next unit, as given by Equation (3).
$$O_t = \sigma(W_o \cdot p_t + b_o), \qquad h_t = O_t \odot \tanh(C_t) \tag{3}$$
In Equation (3), $O_t$ is the output factor that controls how much information is output to the next unit; $W_o$ and $b_o$ represent the weights and bias of the output gate, respectively; and $\tanh(\cdot)$ is the tanh activation function.
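To connect the gate equations above to an implementation, the following is a minimal PyTorch sketch of an LSTM regressor for SWH, assuming the input shape described in Section 2.3 (windows of 24 hourly steps with 11 features). The class name, layer sizes, and the use of the last hidden state as the regression input are illustrative choices, not the exact configuration used in this paper.

```python
import torch
import torch.nn as nn

class SWHLSTM(nn.Module):
    """LSTM regressor mapping a 24-step, 11-feature window to one SWH value."""
    def __init__(self, n_features=11, hidden_units=60, n_layers=2, dropout=0.2):
        super().__init__()
        # The stacked LSTM layers implement the forget/input/output gates of Eqs. (1)-(3).
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_units,
                            num_layers=n_layers, batch_first=True,
                            dropout=dropout if n_layers > 1 else 0.0)
        self.head = nn.Linear(hidden_units, 1)   # regression head for SWH

    def forward(self, x):
        # x: (batch, 24, 11); the hidden state of the last time step summarizes the window
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

# Example: a batch of 8 windows covering the last 24 h of the 11 predictors
model = SWHLSTM()
dummy = torch.randn(8, 24, 11)
print(model(dummy).shape)   # torch.Size([8])
```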

2.2. Slow Failure Particle Swarm Optimization

Traditional particle swarm optimization (PSO) is a meta-heuristic optimization algorithm inspired by the foraging behavior of bird flocks. However, PSO suffers from a limited early search scope and inadequate late search refinement. To address these problems, we propose a slow failure particle swarm optimization (SFPSO) in which the search range adjusts itself according to the iteration count. In the early search phase, the particle velocity is given more weight in the position update, which allows each particle to cover a wider search space as it moves. As the number of iterations increases, SFPSO gradually reduces the share of the velocity, allowing the particles to obtain more accurate results in the later stage. The particles of SFPSO update their velocity and position using Equation (4). To illustrate the difference in search scope between PSO and SFPSO, a schematic diagram of the search ranges of the two algorithms is shown in Figure 2.
$$aa = Pbest_{t-1}^{(i)} - x_{t-1}^{(i)}, \qquad bb = gbest_{t-1} - x_{t-1}^{(i)}$$
$$v_t^{(i)} = w\, v_{t-1}^{(i)} + c_1 r_1\, aa + c_2 r_2\, bb$$
$$x_{t+1}^{(i)} = x_t^{(i)} + \left(1 + \frac{\sin(t)}{(t-2)^2 + \sin^2(t)}\right) v_t^{(i)} \tag{4}$$
where $v_t^{(i)}$ and $x_t^{(i)}$ denote the velocity and position of the $i$-th particle at iteration $t$, respectively; $Pbest_{t-1}^{(i)}$ and $gbest_{t-1}$ denote the personal best position of the $i$-th particle and the global best position of the swarm at iteration $t-1$; $w$ is the inertia weight; $c_1$ and $c_2$ are the learning factors for $Pbest$ and $gbest$; and $r_1$ and $r_2$ are random numbers in the range $[0,1]$, adding randomness to the update of the particle velocity.
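As a concrete illustration, the following NumPy sketch performs one SFPSO iteration for the whole swarm. The decay factor in the position update follows our reading of Equation (4), i.e., $1 + \sin(t)/((t-2)^2 + \sin^2(t))$, and the function name and default coefficient values ($w$, $c_1$, $c_2$) are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def sfpso_step(x, v, pbest, gbest, t, w=0.7, c1=1.5, c2=1.5):
    """One SFPSO iteration for a swarm of shape (n_particles, n_dims).

    Standard PSO velocity update plus a position step whose velocity share
    shrinks as the iteration t grows, giving a wide early search and a fine
    late search, following Equation (4).
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Slow-failure factor: larger than 1 in early iterations, tends to 1 as t grows.
    decay = 1.0 + np.sin(t) / ((t - 2.0) ** 2 + np.sin(t) ** 2)
    x_new = x + decay * v_new
    return x_new, v_new
```

After each step, the fitness of the new positions is evaluated and the personal bests and the global best are updated accordingly.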

2.3. SFPSO-LSTM

When facing different sequence problems, such as power load forecasting, natural language processing, and carbon emission prediction, the parameters of the LSTM must be set appropriately to obtain the best performance. However, when choosing the optimal LSTM training parameters, researchers typically tune them only by empirical judgment or extensive experimentation. The accuracy of the significant wave height prediction model is affected by these parameters. In particular, different parameters have different ranges, and it is impractical to train an LSTM model with all permutations of parameter values. To obtain better training parameters for the LSTM, we integrate SFPSO with LSTM. Four LSTM training parameters, namely the learning rate, batch size, number of LSTM layers, and number of hidden units, are optimized by SFPSO. The SFPSO-LSTM includes four main parts: data processing, searching for the optimal solution, training the model with the optimal solution, and predicting the results. The flowchart of the SFPSO-LSTM is shown in Figure 3.
The input data are a time series of length 24 with 11 dimensions, taking the relevant influencing factors of the last 24 h as input; the output is also a time series of length 24 with only one dimension, the significant wave height. Normalization mitigates the risk of certain features dominating others due to differing scales, ensuring an equal contribution from each feature to the learning process. This is particularly important when dealing with diverse input variables, as in this study, where variables such as wind components, temperature, and sea level pressure are measured in different units. By normalizing the data, the model is enabled to treat all input features uniformly, leading to more stable and effective training. These reasons underscore the necessity of normalization for achieving reliable and accurate predictions. The specific steps of SFPSO-LSTM are as follows (a minimal code sketch of these steps is given after the list):
  • Step 1: Normalize all the data within the range −1 to 1 by Equation (5). The normalization eliminates the influence of the scale between indicators and increases the speed of calculations.
    $$X_{norm} = \frac{2(X - X_{min})}{X_{max} - X_{min}} - 1 \tag{5}$$
    where $X_{norm}$ is the data point after normalization, and $X$, $X_{min}$, and $X_{max}$ denote a data point in a given dimension and the minimum and maximum values of that dimension, respectively.
  • Step 2: In the SFPSO process, the mean square error (MSE) is set as the fitness function for SFPSO. All particles continuously update their velocity and position within the range and thus search for the optimal possible combination of parameters. The fitness function of the SFPSO is as follows:
    $$F = \frac{1}{n}\sum_{i=1}^{n}(P_i - T_i)^2 \tag{6}$$
    where $n$ is the number of samples, and $P_i$ and $T_i$ denote the $i$-th predicted and true significant wave heights, respectively.
  • Step 3: The LSTM model is trained with the parameters searched by SFPSO.
  • Step 4: The results are predicted using a trained SFPSO-LSTM.
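To make the four steps concrete, the sketch below wires the normalization of Equation (5) and the MSE fitness of Equation (6) into an SFPSO search loop, reusing the `sfpso_step` function sketched in Section 2.2. The search bounds mirror the ranges in Table 1; the `fitness` callable stands in for training an LSTM with the candidate hyperparameters and returning its validation MSE, so the toy fitness in the usage line is purely illustrative.

```python
import numpy as np

def normalize(X):
    """Step 1: scale each column of X to [-1, 1] by Equation (5)."""
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    return 2 * (X - X_min) / (X_max - X_min) - 1

def sfpso_search(fitness, lower, upper, n_particles=5, n_iter=20, seed=0):
    """Steps 2-3: minimize `fitness` over the box [lower, upper] with SFPSO.

    In SFPSO-LSTM, fitness(p) trains an LSTM with the hyperparameters encoded
    in p (learning rate, batch size, number of layers, hidden units) and
    returns the validation MSE of Equation (6).
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper, size=(n_particles, lower.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(1, n_iter + 1):
        x, v = sfpso_step(x, v, pbest, gbest, t)   # Equation (4) update
        x = np.clip(x, lower, upper)               # keep particles in range
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest                                   # Step 3: best parameter set found

# Toy usage (replace the lambda with LSTM training plus validation MSE):
best_params = sfpso_search(lambda p: float(np.sum((p - 0.5) ** 2)),
                           lower=[1e-4, 16, 1, 10], upper=[5e-3, 128, 5, 120])
```

Step 4 then trains the final LSTM with `best_params` and predicts on the test set.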

3. Experiment and Results

3.1. Data Preparation

Waves are generated by a combination of various factors [36]. A total of 11 factors were selected to predict the wave height: the 10 m u-component of wind, 10 m v-component of wind, 2 m dewpoint temperature, 2 m temperature, mean sea level pressure, mean wave direction, mean wave period, sea surface temperature, significant height of combined wind waves and swell, surface pressure, and total precipitation. A heatmap of the correlation matrix is shown in Figure 4. Each cell in the heatmap corresponds to a correlation value between two variables, with the intensity of the color indicating the strength and direction of the correlation. The correlation values range from −1 to 1, where 1 indicates a perfect positive correlation, −1 indicates a perfect negative correlation, and 0 indicates no correlation.
The sea data used in this study are from the fifth-generation ECMWF reanalysis (ERA5) for the global climate and weather. ERA5 combines model data with observations from around the world through the laws of physics to form a globally complete and consistent dataset. To demonstrate the rigor of the experiment, four positions are selected to analyze the results predicted by the model. All the experimental data were divided into a training set and a test set in the ratio of 80% to 20%. Details of the four positions are listed below.
  • P1, latitude and longitude: 42.0, 29.0; date: 1 January 2023–31 May 2023; sampling method: hourly; 3625 sets of data
  • P2, latitude and longitude: 25.0, 165.0; date: 1 January 2023–31 May 2023; sampling method: hourly; 3625 sets of data
  • P3, latitude and longitude: −31.0, 74.0; date: 1 January 2023–31 May 2023; sampling method: hourly; 3625 sets of data
  • P4, latitude and longitude: 74.5, 37.5; date: 1 January 2023–31 May 2023; sampling method: hourly; 3625 sets of data
This diverse selection is aimed at testing the robustness of the SFPSO-LSTM model under varying conditions, which is crucial for practical applications where data diversity is often limited. The advantages of using data from these four points to train the SFPSO-LSTM model are manifold:
  • Diverse Environmental Conditions: The selected points offer a wide range of oceanic environments, from enclosed seas to open oceans, and from temperate to polar regions. This diversity ensures the model’s robustness and adaptability to various maritime conditions.
  • Comprehensive Data Representation: By encompassing different latitudinal zones and oceanic conditions, the model can be trained on a comprehensive dataset, enhancing its predictive accuracy and generalizability.
  • Enhanced Model Validation: The geographical diversity allows for extensive validation of the model across a range of conditions, ensuring that the model is not overfitted to a specific type of environment.
  • Global Applicability: Training the model on data from these varied locations increases the likelihood that it can be effectively applied to wave height prediction in other parts of the world’s oceans, thereby enhancing its global utility.
Overall, the selection of these points for model training is a strategic choice that balances geographic diversity with the representation of different oceanographic conditions, thereby aiming to create a versatile and globally applicable wave prediction model.
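For clarity on how the hourly ERA5 series are turned into training samples, here is a small sketch of the windowing and the chronological 80%/20% split described above. The paper's exact input/output arrangement (a length-24 output sequence) may differ; this sketch shows a simpler single-value-per-window variant, the function names are ours, and the choice of which of the 11 columns holds SWH is an assumption marked in the comments.

```python
import numpy as np

def make_windows(series, target_col, window=24, horizon=1):
    """Build samples from an hourly multivariate series of shape (n_hours, 11).

    Each sample uses the previous `window` hours of all 11 factors to predict
    the SWH value `horizon` hours ahead (1, 3, 6, or 12 in the experiments).
    """
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])                          # (24, 11) input block
        y.append(series[i + window + horizon - 1, target_col])  # SWH at t + horizon
    return np.asarray(X), np.asarray(y)

def split_80_20(X, y):
    """Chronological 80%/20% train/test split used for each position."""
    cut = int(0.8 * len(X))
    return (X[:cut], y[:cut]), (X[cut:], y[cut:])

# Example with random stand-in data for one position (3625 hourly records):
data = np.random.rand(3625, 11)
X, y = make_windows(data, target_col=8, horizon=3)   # column 8 assumed to hold SWH
(train_X, train_y), (test_X, test_y) = split_80_20(X, y)
```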

3.2. Results

To validate the performance of the SFPSO-LSTM model in predicting significant wave height (SWH), three time-series prediction models, the gated recurrent unit (GRU), long short-term memory (LSTM), and particle swarm optimized long short-term memory (PSO-LSTM), were chosen as the control group. The GRU and LSTM were trained with commonly used parameters, while the PSO-LSTM and SFPSO-LSTM utilized the same parameter settings. The parameters of the control and experimental groups are shown in Table 1. To guarantee the fairness of the experiments, all experiments were conducted in PyTorch.
The output SWH has four prediction horizons: 1 h, 3 h, 6 h, and 12 h. The R2 and RMSE as functions of the prediction time step are shown in Figure 5 (left and right, respectively). The test set prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM are shown in Figures 6–13.
In the experiments, the mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and r-squared (R2) are chosen for evaluating the performance of these algorithms. MAE, RMSE, MAPE, and R2 are calculated by Equations (7), (8), (9), and (10), respectively.
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|P_i - T_i\right| \tag{7}$$
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(P_i - T_i)^2} \tag{8}$$
$$MAPE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{P_i - T_i}{T_i}\right| \times 100\% \tag{9}$$
$$R^2 = 1 - \frac{\sum_{i=1}^{n}(P_i - T_i)^2}{\sum_{i=1}^{n}(T_i - \bar{T})^2} \tag{10}$$
where $n$ is the number of samples, $P_i$ and $T_i$ represent the predicted and true values, respectively, and $\bar{T}$ denotes the average of all true values.
Among these four evaluation indicators, the lower the MAE, RMSE, and MAPE indicators, the narrower the deviation of the overall predicted value from the true value, and the larger the R2, the higher the correlation between the predicted value and the true value. The results of the specific metrics for the four algorithms are shown in Table 2 and Table 3.
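The four metrics can be computed directly from the predicted and true series; the short helper below follows Equations (7)-(10), with the square root in RMSE implied by its definition. The function name and dictionary output are our own convention, not part of the paper's code.

```python
import numpy as np

def evaluate(pred, true):
    """MAE, RMSE, MAPE (in %), and R2 of Equations (7)-(10)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = pred - true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / true)) * 100.0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((true - true.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}
```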
Upon examination of Table 2, it is evident that the SFPSO-LSTM algorithm demonstrates superior predictive accuracy when compared to the GRU, LSTM, and PSO-LSTM algorithms across various forecasting horizons for significant wave height (SWH) at positions P1 and P2. This superiority is quantified through a comparative analysis of standard performance metrics, namely mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and the coefficient of determination (R2).
The SFPSO-LSTM algorithm consistently exhibits lower values of MAE, RMSE, and MAPE, alongside higher R2 values, indicative of its enhanced ability to generate predictions that closely align with the actual observations. For instance, at the 1-h forecast interval at position P1, SFPSO-LSTM achieves an MAE of 0.035, which is a reduction of approximately 52.7%, 30%, and 17.4% compared to the MAE values reported for GRU, LSTM, and PSO-LSTM, respectively. Similarly, the RMSE for SFPSO-LSTM stands at 0.043, reflecting a decrease of 57%, 40.3%, and 24.6% relative to the other three algorithms in the same order.
Furthermore, the SFPSO-LSTM algorithm’s MAPE of 7.61% at the 1-h prediction horizon for P1 signifies an improvement of more than 58.8% over GRU, over 48.9% over LSTM, and approximately 17.6% over PSO-LSTM. In terms of the coefficient of determination, SFPSO-LSTM’s R2 value of 0.988 surpasses those of GRU, LSTM, and PSO-LSTM by 5.6%, 2.3%, and 0.9%, respectively, at the same prediction interval and position, suggesting a significantly better fit to the variance in the observed data.
For the 1-h forecast at position P3, the SFPSO-LSTM algorithm achieves an MAE of 0.054, which represents a 12.90% improvement over the GRU algorithm. The RMSE is improved by 26.23% with a value of 0.090 for SFPSO-LSTM compared to GRU’s 0.122. In terms of MAPE, the improvement is 8.89%, with SFPSO-LSTM recording a value of 1.782% against GRU’s 1.956%. Moreover, the R² value for SFPSO-LSTM is 0.980, a 1.77% increase over GRU’s 0.963, indicating a more robust explanatory power regarding the variance in wave heights.
Starting with the 1-h forecast for P4, SFPSO-LSTM’s MAE is 0.045, which is lower than GRU’s 0.055 (an 18.18% improvement), LSTM’s 0.060 (a 25% improvement), and PSO-LSTM’s 0.054 (a 16.67% improvement). For RMSE, SFPSO-LSTM reports a value of 0.059, which is lower than GRU’s 0.069 (a 14.49% improvement), LSTM’s 0.071 (a 16.90% improvement), and PSO-LSTM’s 0.068 (a 13.24% improvement). In terms of MAPE, SFPSO-LSTM has a 2.591% error, which is better than GRU’s 3.142% (a 17.51% improvement), LSTM’s 3.409% (a 23.98% improvement), and PSO-LSTM’s 2.929% (an 11.54% improvement). Lastly, the R² value for SFPSO-LSTM is 0.991, which is higher than GRU’s 0.988 (a 0.30% improvement), LSTM’s 0.987 (a 0.40% improvement), and PSO-LSTM’s 0.988 (a 0.30% improvement).
Examining the longer forecasts, SFPSO-LSTM maintains its advantage with consistently lower error metrics and higher R2 values at the 3 h, 6 h, and 12 h predictions. The algorithm shows its robustness and stability over varying prediction horizons, underpinning its efficacy in capturing the dynamics of the time-series data for SWH.

3.3. Analyses

To visualize the performance of SFPSO-LSTM with respect to the output time step more intuitively, we plot the prediction time step on the horizontal axis against R2 and RMSE on the vertical axes, respectively. In Figure 5, the R2 values exhibit a high degree of predictive accuracy at the 1-h forecast across all positions, indicating a strong correlation between the predicted and actual SWH values. However, there is a discernible decrease in R2 values as the forecast interval extends to 3, 6, and 12 h, underscoring the increased difficulty in capturing the variability of SWH over longer temporal spans. This reduction in R2 is particularly pronounced at position P4, where the values exhibit a precipitous decline, suggesting a substantial diminution in the model’s explanatory power at extended forecasting horizons.
Conversely, the RMSE graph illustrates an inverse trend, where the errors are minimal at the 1-h forecast, denoting a close match between the forecasted and observed values. As the forecast horizon progresses, the RMSE values ascend for each position, which is indicative of escalating forecast errors. This increment in RMSE is especially marked for position P4, which experiences the largest errors at the 12-h forecast interval, reflecting the algorithm’s diminishing precision in longer-term predictions for this location.
In essence, the SFPSO-LSTM algorithm’s performance in predicting SWH shows a reduction in accuracy and an amplification of predictive errors as the length of the forecasting period increases, a pattern that aligns with the intrinsic complexities of time-series forecasting. This phenomenon is particularly evident at position P4, which may suggest that geographical or environmental factors at this position exacerbate the challenges inherent in predicting SWH over extended horizons.

3.4. Limitations and Future Recommendation

Despite the promising results, the proposed SFPSO-LSTM model has several limitations. First, the model’s performance is heavily dependent on the quality and quantity of the input data, which may not always be available or reliable. Second, while SFPSO improves the LSTM training process, it also introduces significant computational complexity and time requirements, making it less suitable for real-time applications. Third, the model’s evaluation was limited to data from four locations within a specific period, which may not fully capture the variability of ocean conditions across different regions and seasons. Lastly, the model primarily focuses on significant wave height (SWH) prediction and may not generalize well to other related oceanographic parameters without further modification and tuning.
Future research should address these limitations by expanding the dataset to include more diverse geographical locations and extended time periods to improve the model’s robustness and generalizability. Additionally, efforts should be made to reduce the computational complexity and training time of the SFPSO algorithm to enhance its applicability in real-time scenarios. Integrating other advanced optimization techniques and hybrid models could also be explored to further improve prediction accuracy. Moreover, extending the model to predict other important oceanographic and atmospheric parameters would provide a more comprehensive tool for marine safety and navigation.

4. Conclusions

A novel SFPSO-LSTM method for significant wave height prediction is proposed in this paper. The SFPSO-LSTM is a model that combines the SFPSO algorithm and the LSTM neural network. The LSTM method solves time sequence problems more efficiently than traditional neural networks. To enhance the prediction ability of the LSTM, its training parameters need to be optimized. In the early iterations of traditional PSO, the search range is small and the algorithm cannot explore the search space effectively; in the late iterations, it cannot accurately locate the optimal solution. Therefore, we propose SFPSO, which is dynamically adjusted according to the number of iterations, has a larger search space in the early search period, and obtains an accurate optimal solution in the late search period. Experiments were conducted with GRU, LSTM, PSO-LSTM, and SFPSO-LSTM. The eleven factors associated with SWH from ERA5 at four different points around the globe were selected as experimental data. We also chose SWH after 1 h, 3 h, 6 h, and 12 h as labels for the experiments. Four metrics, MAE, RMSE, MAPE, and R2, were used to evaluate the performance of these models. The results show that SFPSO-LSTM had a lower error in predicting SWH than the control group. In 16 experiments, SFPSO-LSTM obtained the best results in 13 trials; in the remaining three experiments, some evaluation metrics were not as good as those of the control group. The best RMSE of SFPSO-LSTM was 0.059, a decrease of 0.009 compared to the best value of 0.068 in the control group, while the R2 increased by 0.003 to 0.991 compared to the best value of 0.988 in the control group. Therefore, SFPSO-LSTM is a better tool for predicting SWH. Key findings of this work can be summarized as follows:
  • Enhanced Predictive Accuracy: The integration of the slow failure particle swarm optimization (SFPSO) algorithm with the long short-term memory (LSTM) neural network significantly improves the accuracy of significant wave height (SWH) predictions compared to traditional methods.
  • Adaptive Algorithm Efficiency: The SFPSO algorithm effectively addresses the limitations of traditional particle swarm optimization (PSO) by adaptively adjusting particle search ranges based on iteration count, reducing premature convergence and improving search precision.
  • Robust Model Performance: The LSTM model, optimized with SFPSO, demonstrates robust performance across diverse marine environments, ensuring reliable predictions in various oceanic conditions, including enclosed seas, open oceans, temperate zones, and polar regions.

Author Contributions

Conceptualization, J.G. and Z.Y.; methodology, B.S.; software, J.G. and Y.S.; validation, J.G., Z.Y. and B.S.; formal analysis, J.G.; investigation, J.G.; resources, J.G.; data curation, J.G. and Y.S.; writing—original draft preparation, J.G.; writing—review and editing, B.S.; visualization, J.G.; supervision, Y.S.; project administration, Y.S.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The research was conducted under the auspices of the Hosei International Fund (HIF) Foreign Scholars Fellowship; Natural Science Foundation of Hubei Province 2023AFB003, 2024AFB002; the School Youth Fund Program of Hubei University of Economics (XJZD202305); JSPS KAKENHI Grant Numbers JP22K12185.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Lou, R.; Wang, W.; Li, X.; Zheng, Y.; Lv, Z. Prediction of Ocean Wave Height Suitable for Ship Autopilot. IEEE Trans. Intell. Transp. Syst. 2022, 23, 25557–25566.
  2. Hlophe, T.; Taylor, P.H.; Kurniawan, A.; Orszaghova, J.; Wolgamot, H. Phase-resolved wave prediction in highly spread seas using optimised arrays of buoys. Appl. Ocean. Res. 2023, 130, 103435.
  3. Liao, Z.; Sun, T.; Al-Ani, M.; Jordan, L.B.; Li, G.; Wang, Z.; Belmont, M.; Edwards, C.; Zhan, S. Modelling and Control Tank Testing Validation for Attenuator Type Wave Energy Converter—Part II: Linear Noncausal Optimal Control and Deterministic Sea Wave Prediction Tank Testing. IEEE Trans. Sustain. Energy 2023, 14, 1758–1768.
  4. Zhang, C.; Chen, Z.; Zhao, C.; Chen, X.; Wei, Y.; He, J. Deterministic Sea Wave Prediction Based on Least Squares With Regularization Algorithm Using Coherent Microwave Radar. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4209809.
  5. Al-Ani, M.; Belmont, M.; De Paolo, T. Spectral Algorithm in Waves Profiling and Prediction from Radar Backscatter. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5104711.
  6. Al-Ani, M.; Belmont, M.; Christmas, J. Sea trial on deterministic sea waves prediction using wave-profiling radar. Ocean Eng. 2020, 207, 107297.
  7. Fisher, A.; Thomson, J.; Schwendeman, M. Rapid deterministic wave prediction using a sparse array of buoys. Ocean Eng. 2021, 228, 108871.
  8. Previsic, M.; Karthikeyan, A.; Lyzenga, D. In-Ocean Validation of a Deterministic Sea Wave Prediction (DSWP) System leveraging X-Band Radar to Enable Optimal Control in Wave Energy Conversion Systems. Appl. Ocean. Res. 2021, 114, 102784.
  9. Simanesew, A.; Trulsen, K.; Krogstad, H.E.; Nieto Borge, J.C. Surface wave predictions in weakly nonlinear directional seas. Appl. Ocean. Res. 2017, 65, 79–89.
  10. Lu, K.; Wang, J.; Zhang, M. Study on prediction of internal solitary waves propagation in the southern Andaman Sea. J. Oceanogr. 2021, 77, 607–613.
  11. Demetriou, D.; Michailides, C.; Papanastasiou, G.; Onoufriou, T. Coastal zone significant wave height prediction by supervised machine learning classification algorithms. Ocean Eng. 2021, 221, 108592.
  12. Huang, W.; Dong, S. Improved short-term prediction of significant wave height by decomposing deterministic and stochastic components. Renew. Energy 2021, 177, 743–758.
  13. Tang, G.; Lu, P.; Hu, X.; Men, S. Control system research in wave compensation based on particle swarm optimization. Sci. Rep. 2021, 11, 15316.
  14. Son, B.; Do, K. Optimization of SWAN Wave Model to Improve the Accuracy of Winter Storm Wave Prediction in the East Sea. J. Ocean. Eng. Technol. 2021, 35, 273–286.
  15. Altunkaynak, A. Prediction of significant wave height using spatial function. Ocean Eng. 2015, 106, 220–226.
  16. Huchet, M.; Babarit, A.; Ducrozet, G.; Gilloteaux, J.C.; Ferrant, P. Nonlinear deterministic sea wave prediction using instantaneous velocity profiles. Ocean Eng. 2021, 220, 108492.
  17. Çelik, A. Improving prediction performance of significant wave height via hybrid SVD-Fuzzy model. Ocean Eng. 2022, 266, 113173.
  18. Yang, H.; Wang, H.; Ma, Y.; Xu, M. Prediction of Wave Energy Flux in the Bohai Sea through Automated Machine Learning. J. Mar. Sci. Eng. 2022, 10, 1025.
  19. Kim, I.C.; Ducrozet, G.; Bonnefoy, F.; Leroy, V.; Perignon, Y. Real-time phase-resolved ocean wave prediction in directional wave fields: Enhanced algorithm and experimental validation. Ocean Eng. 2023, 276, 114212.
  20. Son, D.; Jun, K.; Kwon, J.I.; Yoo, J.; Park, S.H. Improvement of wave predictions in marginal seas around Korea through correction of simulated sea winds. Appl. Ocean. Res. 2023, 130, 103433.
  21. Hlophe, T.; Wolgamot, H.; Taylor, P.H.; Kurniawan, A.; Orszaghova, J.; Draper, S. Wave-by-wave prediction in weakly nonlinear and narrowly spread seas using fixed-point surface-elevation time histories. Appl. Ocean. Res. 2022, 122, 103112.
  22. Zhan, Y.; Zhang, H.; Li, J.; Li, G. Prediction Method for Ocean Wave Height Based on Stacking Ensemble Learning Model. J. Mar. Sci. Eng. 2022, 10, 1150.
  23. Sadeghifar, T.; Lama, G.F.; Sihag, P.; Bayram, A.; Kisi, O. Wave height predictions in complex sea flows through soft-computing models: Case study of Persian Gulf. Ocean Eng. 2022, 245, 110467.
  24. Meng, F.; Song, T.; Xu, D.; Xie, P.; Li, Y. Forecasting tropical cyclones wave height using bidirectional gated recurrent unit. Ocean Eng. 2021, 234, 108795.
  25. Meng, Z.F.; Chen, Z.; Khoo, B.C.; Zhang, A.M. Long-time prediction of sea wave trains by LSTM machine learning method. Ocean Eng. 2022, 262, 112213.
  26. Hao, P.; Li, S.; Yu, C.; Wu, G. A Prediction Model of Significant Wave Height in the South China Sea Based on Attention Mechanism. Front. Mar. Sci. 2022, 9, 895212.
  27. Zhang, X.; Li, Y.; Gao, S.; Ren, P. Ocean wave height series prediction with numerical long short-term memory. J. Mar. Sci. Eng. 2021, 9, 514.
  28. Adytia, D.; Saepudin, D.; Tarwidi, D.; Pudjaprasetya, S.R.; Husrin, S.; Sopaheluwakan, A.; Prasetya, G. Modelling of Deep Learning-Based Downscaling for Wave Forecasting in Coastal Area. Water 2023, 15, 204.
  29. Wang, H.; Fu, D.; Liu, D.; Xiao, X.; He, X.; Liu, B. Analysis and Prediction of Significant Wave Height in the Beibu Gulf, South China Sea. J. Geophys. Res. Ocean. 2021, 126, e2020JC017144.
  30. Dakar, E.; Fernández Jaramillo, J.M.; Gertman, I.; Mayerle, R.; Goldman, R. An artificial neural network based system for wave height prediction. Coast. Eng. J. 2023, 65, 309–324.
  31. Li, S.; Hao, P.; Yu, C.; Wu, G. CLTS-net: A more accurate and universal method for the long-term prediction of significant wave height. J. Mar. Sci. Eng. 2021, 9, 1464.
  32. Fu, Y.; Ying, F.; Huang, L.; Liu, Y. Multi-step-ahead significant wave height prediction using a hybrid model based on an innovative two-layer decomposition framework and LSTM. Renew. Energy 2023, 203, 455–472.
  33. Adytia, D.; Saepudin, D.; Pudjaprasetya, S.R.; Husrin, S.; Sopaheluwakan, A. A Deep Learning Approach for Wave Forecasting Based on a Spatially Correlated Wind Feature, with a Case Study in the Java Sea, Indonesia. Fluids 2022, 7, 39.
  34. Jörges, C.; Berkenbrink, C.; Gottschalk, H.; Stumpe, B. Spatial ocean wave height prediction with CNN mixed-data deep neural networks using random field simulated bathymetry. Ocean Eng. 2023, 271, 113699.
  35. Ma, J.; Xue, H.; Zeng, Y.; Zhang, Z.; Wang, Q. Significant wave height forecasting using WRF-CLSF model in Taiwan strait. Eng. Appl. Comput. Fluid Mech. 2021, 15, 1400–1419.
  36. Feng, Z.; Hu, P.; Li, S.; Mo, D. Prediction of Significant Wave Height in Offshore China Based on the Machine Learning Method. J. Mar. Sci. Eng. 2022, 10, 836.
  37. Zhou, S.; Xie, W.; Lu, Y.; Wang, Y.; Zhou, Y.; Hui, N.; Dong, C. ConvLSTM-Based Wave Forecasts in the South and East China Seas. Front. Mar. Sci. 2021, 8, 680079.
  38. Song, T.; Han, R.; Meng, F.; Wang, J.; Wei, W.; Peng, S. A significant wave height prediction method based on deep learning combining the correlation between wind and wind waves. Front. Mar. Sci. 2022, 9, 983007.
  39. Altunkaynak, A.; Çelik, A.; Mandev, M.B. Hourly significant wave height prediction via singular spectrum analysis and wavelet transform based models. Ocean Eng. 2023, 281, 114771.
  40. Çelik, A.; Altunkaynak, A. Optimal Significant Wave Height Monitoring Network Identification via Empirical Orthogonal Function Analysis with QR Column Pivoting Algorithm. J. Waterw. Port Coastal Ocean. Eng. 2023, 149, 04023018.
  41. Guo, J.; Zhou, G.; Yan, K.; Shi, B.; Di, Y.; Sato, Y. A novel hermit crab optimization algorithm. Sci. Rep. 2023, 13, 9934.
  42. Guo, J.; Shi, B.; Yan, K.; Di, Y.; Tang, J.; Xiao, H.; Sato, Y. A twinning bare bones particle swarm optimization algorithm. PLoS ONE 2022, 17, e0267197.
  43. Guo, J.; Zhou, G.; Di, Y.; Shi, B.; Yan, K.; Sato, Y. A Bare-bones Particle Swarm Optimization with Crossed Memory for Global Optimization. IEEE Access 2023, 11, 31549–31568.
  44. Zhang, J.; Zhao, X.; Jin, S.; Greaves, D. Phase-resolved real-time ocean wave prediction with quantified uncertainty based on variational Bayesian machine learning. Appl. Energy 2022, 324, 119711.
Figure 1. Structure of the LSTM.
Figure 2. Structure of the SFPSO.
Figure 3. Flowchart of SFPSO-LSTM.
Figure 4. Heatmap of correlation matrix for input variables.
Figure 5. The R2 (left) and RMSE (right) with time step for SFPSO-LSTM.
Figure 6. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 1 h and 3 h in P1.
Figure 7. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 6 h and 12 h in P1.
Figure 8. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 1 h and 3 h in P2.
Figure 9. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 6 h and 12 h in P2.
Figure 10. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 1 h and 3 h in P3.
Figure 11. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 6 h and 12 h in P3.
Figure 12. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 1 h and 3 h in P4.
Figure 13. The SWH prediction results of the GRU, LSTM, PSO-LSTM, and SFPSO-LSTM for 6 h and 12 h in P4.
Table 1. The parameters of the control and experimental groups.
Parameters | GRU | LSTM | PSO-LSTM | SFPSO-LSTM
Number of epochs | 500 | 500 | 500 | 500
Solver | Adam | Adam | Adam | Adam
Dropout probability | 0.2 | 0.2 | 0.2 | 0.2
Batch size | 64 | 64 | [16, 128] | [16, 128]
Number of hidden layer units | 60 | 60 | [10, 120] | [10, 120]
Number of layers | 2 | 2 | [1, 5] | [1, 5]
Initial learning rate | 0.001 | 0.001 | [0.0001, 0.005] | [0.0001, 0.005]
Population size | None | None | 5 | 5
Number of iterations | None | None | 20 | 20
Table 2. Values of MAE, RMSE, MAPE, and R2 for GRU, LSTM, PSO-LSTM, and SFPSO-LSTM at 1 h, 3 h, 6 h, and 12 h of P1 and P2. Best results are shown in bold.
Position | Predicted Time | Indicators | GRU | LSTM | PSO-LSTM | SFPSO-LSTM
P1 | 1 h | MAE | 0.074 | 0.050 | 0.043 | 0.035
P1 | 1 h | RMSE | 0.100 | 0.072 | 0.057 | 0.043
P1 | 1 h | MAPE | 18.447% | 14.888% | 9.246% | 7.610%
P1 | 1 h | R2 | 0.936 | 0.966 | 0.979 | 0.988
P1 | 3 h | MAE | 0.119 | 0.119 | 0.142 | 0.104
P1 | 3 h | RMSE | 0.168 | 0.024 | 0.182 | 0.130
P1 | 3 h | MAPE | 19.587% | 22.905% | 22.721% | 16.496%
P1 | 3 h | R2 | 0.817 | 0.845 | 0.787 | 0.891
P1 | 6 h | MAE | 0.197 | 0.210 | 0.168 | 0.131
P1 | 6 h | RMSE | 0.254 | 0.257 | 0.248 | 0.175
P1 | 6 h | MAPE | 25.807% | 28.565% | 20.758% | 20.472%
P1 | 6 h | R2 | 0.583 | 0.575 | 0.604 | 0.802
P1 | 12 h | MAE | 0.279 | 0.255 | 0.248 | 0.206
P1 | 12 h | RMSE | 0.345 | 0.335 | 0.322 | 0.277
P1 | 12 h | MAPE | 43.992% | 41.964% | 33.521% | 30.268%
P1 | 12 h | R2 | 0.229 | 0.274 | 0.330 | 0.503
P2 | 1 h | MAE | 0.047 | 0.034 | 0.028 | 0.019
P2 | 1 h | RMSE | 0.053 | 0.049 | 0.034 | 0.023
P2 | 1 h | MAPE | 2.701% | 1.973% | 1.574% | 1.060%
P2 | 1 h | R2 | 0.958 | 0.964 | 0.983 | 0.992
P2 | 3 h | MAE | 0.075 | 0.116 | 0.064 | 0.061
P2 | 3 h | RMSE | 0.096 | 0.156 | 0.076 | 0.087
P2 | 3 h | MAPE | 4.337% | 6.815% | 3.602% | 3.273%
P2 | 3 h | R2 | 0.861 | 0.636 | 0.914 | 0.888
P2 | 6 h | MAE | 0.129 | 0.121 | 0.123 | 0.105
P2 | 6 h | RMSE | 0.161 | 0.154 | 0.155 | 0.130
P2 | 6 h | MAPE | 7.687% | 7.232% | 7.081% | 5.661%
P2 | 6 h | R2 | 0.611 | 0.644 | 0.639 | 0.746
P2 | 12 h | MAE | 0.204 | 0.165 | 0.150 | 0.131
P2 | 12 h | RMSE | 0.256 | 0.211 | 0.194 | 0.165
P2 | 12 h | MAPE | 12.058% | 9.449% | 8.615% | 7.399%
P2 | 12 h | R2 | 0.022 | 0.335 | 0.434 | 0.593
Table 3. Values of MAE, RMSE, MAPE, and R2 for GRU, LSTM, PSO-LSTM, and SFPSO-LSTM at 1 h, 3 h, 6 h, and 12 h of P3 and P4. Best results are shown in bold.
Position | Predicted Time | Indicators | GRU | LSTM | PSO-LSTM | SFPSO-LSTM
P3 | 1 h | MAE | 0.062 | 0.063 | 0.059 | 0.054
P3 | 1 h | RMSE | 0.122 | 0.107 | 0.091 | 0.090
P3 | 1 h | MAPE | 1.956% | 2.073% | 1.994% | 1.782%
P3 | 1 h | R2 | 0.963 | 0.972 | 0.979 | 0.980
P3 | 3 h | MAE | 0.200 | 0.180 | 0.170 | 0.138
P3 | 3 h | RMSE | 0.330 | 0.256 | 0.244 | 0.226
P3 | 3 h | MAPE | 6.778% | 6.259% | 5.944% | 4.741%
P3 | 3 h | R2 | 0.731 | 0.838 | 0.853 | 0.874
P3 | 6 h | MAE | 0.274 | 0.285 | 0.266 | 0.252
P3 | 6 h | RMSE | 0.381 | 0.404 | 0.376 | 0.343
P3 | 6 h | MAPE | 9.853% | 10.163% | 9.657% | 9.079%
P3 | 6 h | R2 | 0.641 | 0.597 | 0.651 | 0.709
P3 | 12 h | MAE | 0.374 | 0.376 | 0.365 | 0.318
P3 | 12 h | RMSE | 0.519 | 0.533 | 0.485 | 0.487
P3 | 12 h | MAPE | 13.601% | 13.973% | 13.535% | 10.609%
P3 | 12 h | R2 | 0.334 | 0.299 | 0.418 | 0.414
P4 | 1 h | MAE | 0.055 | 0.060 | 0.054 | 0.045
P4 | 1 h | RMSE | 0.069 | 0.071 | 0.068 | 0.059
P4 | 1 h | MAPE | 3.142% | 3.409% | 2.929% | 2.591%
P4 | 1 h | R2 | 0.988 | 0.987 | 0.988 | 0.991
P4 | 3 h | MAE | 0.264 | 0.330 | 0.285 | 0.260
P4 | 3 h | RMSE | 0.330 | 0.416 | 0.361 | 0.327
P4 | 3 h | MAPE | 15.007% | 18.993% | 16.686% | 14.560%
P4 | 3 h | R2 | 0.716 | 0.548 | 0.659 | 0.721
P4 | 6 h | MAE | 0.463 | 0.472 | 0.440 | 0.449
P4 | 6 h | RMSE | 0.565 | 0.568 | 0.553 | 0.540
P4 | 6 h | MAPE | 27.143% | 27.897% | 26.549% | 26.256%
P4 | 6 h | R2 | 0.168 | 0.158 | 0.200 | 0.240
P4 | 12 h | MAE | 0.506 | 0.497 | 0.465 | 0.429
P4 | 12 h | RMSE | 0.600 | 0.586 | 0.577 | 0.575
P4 | 12 h | MAPE | 30.405% | 30.264% | 28.325% | 24.086%
P4 | 12 h | R2 | 0.060 | 0.103 | 0.132 | 0.136