Article

Post-Processing Maritime Wind Forecasts from the European Centre for Medium-Range Weather Forecasts around the Korean Peninsula Using Support Vector Regression and Principal Component Analysis

1 School of Software, Kwangwoon University, 20 Kwangwoon-ro, Nowon-gu, Seoul 01897, Republic of Korea
2 Ara Consulting & Technology, 30 Songdomirae-ro, Yeonsu-gu, Incheon 21990, Republic of Korea
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(8), 1360; https://doi.org/10.3390/jmse12081360
Submission received: 15 July 2024 / Revised: 2 August 2024 / Accepted: 8 August 2024 / Published: 9 August 2024
(This article belongs to the Special Issue Machine Learning Methodologies and Ocean Science)

Abstract

Accurate wind data are crucial for successful search and rescue (SAR) operations on the sea surface in maritime accidents, as survivors and debris tend to drift with the wind. Because maritime accidents frequently occur outside the range of wind stations, SAR operations rely heavily on wind forecasts generated by numerical models. However, numerical models encounter delays in generating results due to spin-up issues, and their predictions can exhibit inherent biases caused by geographical factors. To overcome these limitations, we paired the first 24 h of the 72-hour forecast from the ECMWF with the corresponding observations and then post-processed the forecast for the remaining 48 h. By effectively reducing the dimensionality of input variables comprising observation and forecast data using principal component analysis, we improved wind predictions with support vector regression. Our model achieved an average RMSE improvement of 16.01% compared to the original forecast from the ECMWF. Furthermore, it achieved an average RMSE improvement of 5.42% at locations without observation data by employing a model trained on data from the nearest wind station and applying an adaptive weighting scheme to the output of that model.

1. Introduction

Accurate wind data are crucial for successful search and rescue (SAR) operations when a maritime accident occurs. As wind impacts the drift patterns of survivors and debris, precise wind information is essential for predicting their locations [1,2,3]. Without reliable wind data, search efforts become less effective, potentially delaying survivor rescue and debris recovery.
However, maritime wind stations are sparsely distributed, making it difficult to obtain real-time wind data in many areas. As a result, numerical weather prediction (NWP) models that generate forecasts for all grid points are typically used to estimate wind conditions at the accident site [3,4]. These NWP models may not always provide precise wind predictions at the appropriate times for effective SAR operations since they have spin-up issues [5,6,7] and systematic biases [8,9,10]. We need more accurate wind prediction models to better support SAR efforts for maritime accidents.
As wind speed prediction plays a crucial role in SAR operations, air mobility, air pollution forecasting, and wind power generation, there have been numerous efforts to enhance its accuracy [11,12,13,14,15,16,17,18,19]. Many of these efforts have focused on improving predictions by post-processing NWP models. Sweeney et al. [20] compared seven adaptive approaches to post-processing wind speed forecasts over Ireland, demonstrating that combined forecasts improved accuracy by reducing the root-mean-squared error (RMSE) compared to traditional NWPs. Xu et al. [21] post-processed the Weather Research and Forecasting (WRF) model with a gradient boosting decision tree [22] to improve wind speed forecasts over the original WRF results. Phipps et al. [23] post-processed weather elements such as temperature, wind speed, and the u- and v-components of wind from weather ensembles of NWPs to enhance wind power forecasts. Duan et al. [24] proposed a graph-based wind speed prediction model that post-processes the WRF model. Bouallègue et al. [25] used machine learning methods to optimize the operational medium-range 10 m wind speed forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) by correcting past errors and reducing forecast uncertainties, resulting in a 10–15% improvement in RMSE across various methods. Zhang et al. [10] developed a deep learning model to improve real-time wind forecasts by using a spatiotemporal method for nonlinear mapping between the ECMWF forecasts and the fifth-generation ECMWF atmospheric reanalysis, resulting in reductions in wind speed and direction biases. These results suggest that post-processing NWP data to improve wind prediction accuracy could help in predicting the drift patterns of survivors and debris during maritime accidents, thereby enhancing the efficiency of SAR operations.
Supervised learning has been widely used for post-processing NWPs [10,20,21,24]. This machine learning technique creates a function to map input data to output data based on given instances of such mappings. The resulting function can then predict the output for new input data, and is referred to as a classifier if the output is categorical, or a regression function if the output is continuous. For instance, in our context, a classifier would predict whether or not strong winds will occur, whereas a regression function might predict wind speed in meters per second.
In machine learning, dimensionality reduction techniques construct an effective set of features from high-dimensional input data. As the dimension of input data increases, the amount of time or memory required by machine learning techniques can increase significantly. This phenomenon is referred to as the curse of dimensionality [26], which can be alleviated by dimensionality reduction techniques. These techniques include feature selection [27] and feature extraction [28]. Feature selection selects a subset of input variables, while feature extraction projects high-dimensional features to a lower-dimensional space.
In this paper, we predict the u- and v-components of wind by post-processing the forecasts from the ECMWF high-resolution model. We evaluate the first 24 h forecast from the ECMWF and apply post-processing techniques to the subsequent 48 h. First, we apply principal component analysis (PCA) [29] for feature extraction on wind observation and forecast data around the Korean Peninsula for the initial 24 h. We use these data to train the support vector regression (SVR) [30] model, which then makes wind predictions for the following 48 h. Furthermore, we devise an adaptive weighting scheme that dynamically combines predictions from locations without wind stations with those from the nearest wind station. This approach successfully improved the accuracy of predictions for locations without wind stations. Using observations to post-process forecasts from the ECMWF is a novel approach compared to previous studies. However, this method has the limitation that it cannot be applied to locations where no nearby wind stations exist.
The rest of this paper is organized as follows: In Section 2, we introduce the datasets and their features. In Section 3, we present the details of the PCA, SVR, and the adaptive weighting scheme. In Section 4, we present the experimental setup and results. Finally, in Section 5, we draw conclusions and discuss future research directions.

2. Data

Observation data were collected from seven offshore wind stations around the Korean Peninsula, each equipped with sensors that measure wind speed and direction. We convert these measurements into the u- and v-components of wind, as this decomposition is known to be beneficial for wind forecasting [31]. Figure 1 shows the seven stations we use: two are in the East Sea, two in the Yellow Sea, and the remaining three in the Korea Strait. Wind data from 1 June 2022 to 31 May 2023 at these locations are used for training and evaluating our scheme. Table 1 lists the details of these stations.
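For reference, a minimal sketch of this conversion, assuming the standard meteorological convention in which direction is the bearing the wind blows from, in degrees clockwise from north (the convention used by our sensors is not restated here, so this is an illustrative assumption):

```python
import numpy as np

def wind_to_uv(speed, direction_deg):
    """Convert wind speed and meteorological direction (degrees the wind
    blows FROM, clockwise from north) to u- (eastward) and v- (northward)
    components."""
    rad = np.deg2rad(direction_deg)
    return -speed * np.sin(rad), -speed * np.cos(rad)
```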
We post-process the ECMWF high-resolution forecasts issued at 00:00 UTC. Each forecast that we use predicts the u- and v-components of surface wind at hourly intervals, from 0 to 71 h. Every day at 00:00 UTC, we use the first 24 h (0 h–23 h) of the forecast released on the previous day together with the latest 24 h of observations (00:00 UTC to 23:00 UTC) to correct the last 48 h (24 h–71 h) of the ECMWF's 72-hour forecast. Table 2 lists the independent variables used in this study. To mitigate the dissimilarity between 1 January (day 1) and 31 December (day 365), we applied trigonometric functions to the cyclic data representing days (1–365).
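A sketch of this cyclic encoding (variables 1 and 2 in Table 2):

```python
import numpy as np

def encode_day_of_year(day):
    """Map day-of-year (1-365) onto the unit circle so that day 365 and
    day 1 become near neighbors, as in variables 1 and 2 of Table 2."""
    angle = 2.0 * np.pi * day / 365.0
    return np.sin(angle), np.cos(angle)
```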

3. Methods

We use a machine learning approach, in which features are extracted from the original dataset using principal component analysis (PCA), and then support vector regression (SVR) is used to correct the u- and v-components of wind data for each forecasting interval. The proposed method is evaluated by the root-mean-squared error (RMSE).

3.1. Nomenclature

The nomenclature used in this section is given below.
  • $(\mathbf{x}, y)$ is a tuple in which $\mathbf{x}$ contains the values of the $M$ (=101) random variables $X_m$ ($1 \le m \le M$), which correspond to the quantities listed in Table 2, and $y$ is the target value, which can be either the u- or v-component of wind at $t$ h ($24 \le t = x_{99} \le 71$).
  • $T = \{(\mathbf{x}_n, y_n)\}_{n=1}^{N}$ is a training set with $N$ instances, where $\mathbf{x}_n$ is the $M$-tuple and $y_n$ is the u- or v-component of the wind of the $n$-th instance.

3.2. Principal Component Analysis

A large number of input variables can significantly increase computational time and memory usage as well as cause overfitting, which degrades performance on unseen data. To resolve this issue, two types of dimensionality reduction techniques can be used: feature extraction and feature selection. Based on preliminary experiments, which are not covered in this paper, we chose PCA for feature extraction because it performed better than other dimensionality reduction techniques, such as various wrapper methods [32] and correlation-based feature selection [33], on our dataset. When raw input data have little classification power, feature extraction tends to be preferred over feature selection [27,34].
PCA is a feature extraction technique that uses orthogonal transformation to convert possibly correlated variables into principal components (PCs), which are linearly uncorrelated variables. It provides an informative view of the data by introducing a new coordinate system and reduces the dimensionality of the data. For example, PCA is used in wavelet denoising [35], extracting features from facial images [36], and constructing early warning systems for heavy rainfall [37] by discarding insignificant features from the feature space.
Let $\mathbf{w}^{(p)} = (w_1, w_2, \ldots, w_M)$ be the $p$-th PC and $\mathbf{x}_n = (x_1, x_2, \ldots, x_M)$ represent the variable values of the $n$-th instance in a training set $T$, where $M$ is the number of variables associated with $T$ and $x_m$ is the value of the $m$-th variable. The first PC $\mathbf{w}^{(1)}$ is computed to maximize variance:
$$\mathbf{w}^{(1)} = \underset{\|\mathbf{w}\| = 1}{\arg\max} \sum_{n} (\mathbf{x}_n \cdot \mathbf{w})^2.$$
The remaining PCs are subsequently constructed to maximize variance while being orthogonal to the previous components. The number of PCs is less than or equal to $M$, and the dimensionality of the data can be reduced by selecting the first $s$ ($< M$) PCs without significant loss of information. After selecting $s$ PCs, each $\mathbf{x}_n$ is transformed to $\hat{\mathbf{x}}_n = (\mathbf{w}^{(1)} \cdot \mathbf{x}_n, \mathbf{w}^{(2)} \cdot \mathbf{x}_n, \ldots, \mathbf{w}^{(s)} \cdot \mathbf{x}_n)$.
In general, the PCs are computed using the singular value decomposition (SVD) [38]. The SVD of an $m \times n$ matrix $X$ is a factorization of the form $X = U \Sigma V^{T}$, where $U$ is an $m \times r$ matrix containing the left singular vectors, $V$ is an $n \times r$ matrix containing the right singular vectors, and $\Sigma$ is an $r \times r$ diagonal matrix of singular values. The columns of $V$ are the PCs, which represent the directions of maximum variance in $X$. Typically, the time complexity of PCA is $O(M^2 N)$, where $M$ is the number of variables and $N$ is the number of instances. Detailed information on the computation of the SVD can be found in the works of Jolliffe [38] and Leskovec et al. [39]. An illustrative example of PCA is shown in Figure 2.
The most common criterion for choosing the value of $s$ is based on the cumulative percentage of total variation. Specifically, $s$ is determined to be as small as possible while ensuring that the percentage of variation accounted for by the first $s$ PCs exceeds a specified cutoff. Although a sensible cutoff typically lies between 70% and 90%, it can vary depending on the properties of the dataset [38]. In this study, we set the cutoff at 98%, which, based on our preliminary experiments, retains approximately 19 variables. Reducing the cutoff from 98% to 90% decreased the average number of PCs $s$ per wind station from 19.0 to 4.7, but significantly degraded the performance of wind data correction.
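In scikit-learn, which we use for our experiments (Section 4.1), this cumulative-variance criterion can be expressed directly; a minimal sketch with an illustrative random matrix standing in for our data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.randn(365, 101)  # placeholder for N instances x M = 101 variables
X_std = StandardScaler().fit_transform(X)

# A float passed as n_components keeps the smallest s whose cumulative
# explained variance ratio exceeds the cutoff (here 98%).
pca = PCA(n_components=0.98, svd_solver="full")
X_reduced = pca.fit_transform(X_std)
print(pca.n_components_, pca.explained_variance_ratio_.sum())
```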

3.3. Support Vector Regression

SVR is a machine learning algorithm that extends the principles of support vector machines [40] to regression problems. SVR aims to find a function that approximates the relationship between input variables and output variables by minimizing the prediction error, while also maintaining a model complexity that is as simple as possible. SVR is widely used in marine engineering, including applications such as predicting the maneuvering motion of an unmanned surface vehicle [41] and nonparametric modeling of ship dynamics [42].
The SVR maps the input data onto a high-dimensional feature space using a kernel function and then performs linear regression in that space. This approach enables SVR to effectively handle nonlinear relationships between the input and output variables. Commonly used kernels include linear, polynomial, and radial basis functions [43].
Another feature of SVR is the use of an epsilon tube, which ignores errors that are within a certain distance $\epsilon$ from the true value. This creates a "tube" around the regression line where errors are not penalized. SVR also manages the trade-off between achieving a low error rate on the training dataset and minimizing model complexity through the regularization parameter $C$. A large $C$ value tries to fit the training data as closely as possible, while a smaller $C$ value leads to a simpler model. The optimization problem for SVR can be formulated as follows:
$$\min_{\mathbf{w}, b} \; \frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \max\left(0, \, |y_i - (\mathbf{w} \cdot \mathbf{x}_i + b)| - \epsilon\right),$$
where $\mathbf{w}$ and $b$ are the parameters of the regression function, $C$ is the regularization parameter, $\epsilon$ is the width of the epsilon tube, $y_i$ is the target value, and $\mathbf{x}_i$ is the input vector. Figure 3 illustrates support vector regression with the epsilon tube. The central solid line indicates the regression function, representing the best fit within the epsilon tube. The shaded area around this line is the epsilon tube; points falling within it are considered well predicted, as the tube defines a margin of tolerance within which errors are not penalized.
In our study, we employ SVR to correct the u- and v-components of wind data for each forecasting interval. The input features for SVR include both observed and forecasted wind data. As standard practice, we standardized the input variables [44] by rescaling them to have a mean of zero and a standard deviation of one, ensuring equal contribution from each variable. We then performed PCA to reduce dimensionality, capturing the most significant variance while minimizing noise and computational complexity before training the SVR models. Figure 4 shows a flowchart of the wind prediction correction process using PCA and SVR.
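A minimal sketch of this processing chain using scikit-learn, with settings as described in Section 4.1 (`make_corrector` is an illustrative name, not part of our released code):

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def make_corrector() -> Pipeline:
    """Standardize -> PCA (98% cumulative variance) -> linear-kernel SVR."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=0.98, svd_solver="full")),
        ("svr", SVR(kernel="linear", max_iter=16000)),
    ])

# One model per wind component, trained on (x_n, u_n) and (x_n, v_n).
u_model = make_corrector()
v_model = make_corrector()
```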

3.4. Adaptive Weighting Scheme

In our study, we corrected the wind forecasts for the next 48 h by post-processing the observations and NWPs from the past 24 h. Therefore, it is not possible to create a wind data correction model for locations without observational data. In such situations, we used the model trained at the nearest wind station to correct the wind data at locations without observations. The wind station provides the model with the first 98 input variables (day, recent 24 h forecasts, and observations), while the remaining three variables (the target hour of correction and the forecasts for that time) are provided by the location without observations.
However, since the two locations are not exactly the same, we found that combining the original predictions with the corrections from the model trained at the nearest wind station yields better results. To achieve this, we devised an adaptive weighting scheme.
Cosine similarity is a metric that measures the degree to which the directions of two vectors align. It ranges from −1 (indicating opposite directions) to 1 (indicating perfectly aligned directions) and is calculated as follows:
$$S_C(\mathbf{x}_1, \mathbf{x}_2) := \cos(\theta) = \frac{\mathbf{x}_1 \cdot \mathbf{x}_2}{\|\mathbf{x}_1\| \, \|\mathbf{x}_2\|},$$
where $\mathbf{x}_1$ and $\mathbf{x}_2$ are the two vectors, $\mathbf{x}_1 \cdot \mathbf{x}_2$ is their dot product, and $\|\mathbf{x}_1\|$ and $\|\mathbf{x}_2\|$ are their magnitudes (or lengths).
The adaptive weighting scheme calculates the cosine similarity between the past 24-hour forecasts at the location without observations and those at the nearest wind station. Using this adaptive weighting scheme, the corrected u-component of wind, u c , is calculated as follows:
$$u_c = \begin{cases} (1 - w)\, u_o + w\, u_n & \text{if } w > 0, \\ u_o & \text{otherwise}, \end{cases}$$
where u o is the original forecast at the location without observations, u n is the corrected u-component of wind using the model from the nearest wind station, and w is the cosine similarity between the past 24-hour u-component forecasts at the location without observations and those at the nearest wind station. Each past forecast can be interpreted as a vector of length 24, allowing us to calculate the cosine similarity between two forecasts. The more similar these vectors are, the closer the correction will be to the post-processed value using observations from the nearest wind station. If the vectors differ significantly, the original forecast is used instead. The v-component of the wind at locations without observations can be determined in a similar manner. Figure 5 shows a flowchart of the wind prediction correction process at locations without observational data using the adaptive weighting scheme.
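Putting the pieces together, a sketch of the scheme for the u-component (function and variable names are illustrative):

```python
import numpy as np

def cosine_similarity(x1: np.ndarray, x2: np.ndarray) -> float:
    return float(np.dot(x1, x2) / (np.linalg.norm(x1) * np.linalg.norm(x2)))

def adaptive_correct(u_original: float, u_nearest: float,
                     fc_target: np.ndarray, fc_nearest: np.ndarray) -> float:
    """Blend the original forecast with the nearest-station correction.

    fc_target and fc_nearest are the past 24-hour u-component forecasts
    (length-24 vectors) at the target location and the nearest station.
    """
    w = cosine_similarity(fc_target, fc_nearest)
    return (1.0 - w) * u_original + w * u_nearest if w > 0 else u_original
```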

4. Results

4.1. Experimental Setup

We used the first three days of the high-resolution forecasts from the ECMWF, released daily at 00:00 UTC, for the seven locations shown in Figure 1. This dataset covers the period from June 2022 to May 2023. To evaluate the performance of wind forecast corrections, we used 12-fold cross-validation (CV), with each fold consisting of one month. For example, in the first validation, data from June 2022 to April 2023 was used as the training set, while data from May 2023 was used as the test set. This procedure was repeated for all months, and the results were averaged to produce a single performance estimate for each model.
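A sketch of this leave-one-month-out procedure, assuming a pandas DataFrame `df` indexed by time with target columns `u` and `v`, a list `feature_cols`, and a model factory `make_model` (for example, the pipeline sketched in Section 3.3); all names here are illustrative:

```python
import numpy as np
import pandas as pd

def monthly_cv_rmse(df: pd.DataFrame, make_model, feature_cols) -> float:
    """12-fold cross-validation in which each fold is one calendar month."""
    scores = []
    for month in df.index.to_period("M").unique():
        test_mask = df.index.to_period("M") == month
        train, test = df[~test_mask], df[test_mask]
        sq_err = np.zeros(len(test))
        for comp in ("u", "v"):  # separate model per wind component
            model = make_model().fit(train[feature_cols], train[comp])
            sq_err += (test[comp].to_numpy() - model.predict(test[feature_cols])) ** 2
        scores.append(np.sqrt(sq_err.mean()))  # combined u/v RMSE (Section 4.2)
    return float(np.mean(scores))
```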
We tested the performance of SVR both without any dimensionality reduction and preceded by PCA, which uses an orthogonal transformation to convert possibly correlated variables into linearly uncorrelated variables. By projecting high-dimensional features into a lower-dimensional space and introducing a new coordinate system, PCA reduces the number of variables in the dataset.
We also tested linear regression, random forest [45], and light gradient-boosting machine (LightGBM) [22]. Linear regression models the relationship between input variables and the target variable by fitting a linear equation to the observed data. Random forest is an ensemble learning method that constructs multiple decision trees during training and outputs the mean prediction of the individual trees, providing robustness against overfitting. LightGBM is a gradient-boosting framework that uses tree-based learning algorithms. Each of these techniques was evaluated to compare their effectiveness in correcting wind forecasts.
We implemented the machine learning techniques used in this study with scikit-learn [46]. All inputs were standardized, and hyperparameters were kept at their default settings, except for SVR, which used a linear kernel with a maximum of 16,000 iterations. Since conventional machine learning techniques only generate a single output, separate correction models were trained for the u- and v-components of the wind. All experiments were conducted on an Intel® Core™ i9-12900K processor.

4.2. Performance Criterion

We use the root-mean-squared error (RMSE) to evaluate the performance of wind correction. In this study, the errors in the u- and v-components of the wind are assessed simultaneously as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ (u_i - \hat{u}_i)^2 + (v_i - \hat{v}_i)^2 \right]},$$
where $N$ is the number of test cases, $u_i$ and $v_i$ are the true u- and v-components of the wind, and $\hat{u}_i$ and $\hat{v}_i$ are the predicted u- and v-components of the wind.
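Equivalently, over NumPy arrays (a sketch):

```python
import numpy as np

def combined_rmse(u, u_hat, v, v_hat):
    """RMSE over both wind components jointly, as defined above."""
    return float(np.sqrt(np.mean((u - u_hat) ** 2 + (v - v_hat) ** 2)))
```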

4.3. Comparative Analysis

First, we compared the performance of each machine learning model without using any dimensionality reduction methods. Table 3 shows the average RMSE values for the 24 h to 71 h forecasts, obtained using 12-fold cross-validation. The ECMWF high-resolution model served as the baseline, and we compared it with linear regression, random forest, LightGBM, and SVR. Each model's performance was evaluated at the stations in the East Sea, Yellow Sea, and Korea Strait. Without dimensionality reduction, LightGBM achieved the lowest average RMSE (2.81), the best among the compared techniques. Linear regression, random forest, and SVR also achieved average RMSE values below 2.92, an improvement of over 12% compared to the ECMWF baseline.
Next, Table 4 shows the results of wind correction using machine learning techniques after applying PCA for dimensionality reduction. Here, SVR achieved the best performance with the lowest average RMSE (2.78), improving on the ECMWF by 16.01% and outperforming LightGBM without PCA on average. Specifically, SVR improved the RMSE at all stations, with the average RMSE decreasing from 3.31 to 2.78. These findings highlight the robustness of SVR in refining wind forecasts, which is critical for applications like SAR operations. The performance differences based on the type of SVR kernel can be found in Appendix A, and the performance for wind direction is given in Appendix B.
Linear regression also showed improvement, with an RMSE of 2.82, which is over 3% better than the results before using PCA. The performance of the linear regression was not significantly worse than that of the best model, which is consistent with the findings of Bouallègue et al. [25]. LightGBM, which performed the best without PCA, showed an increase in RMSE of over 12% after dimensionality reduction. Additionally, the RMSE value for random forest became higher than the ECMWF. These tree-based learning models do not seem to perform well with dimensionality reduction through PCA, which is consistent with the experimental results of Moon et al. [37].

4.4. Analysis of PCA

Table 5 shows the number of input variables and the corresponding average RMSE values obtained by varying the PCA cutoff when correcting wind data using SVR. The cutoff values for PCA range from 0.90 to 0.99. The cutoff determines the cumulative percentage of variance that must be preserved in the data after dimensionality reduction. As the cutoff value increases, more input variables are retained, preserving more information from the original dataset, and the average RMSE generally decreases up to a certain point. The lowest RMSE value of 2.7801 is achieved at a cutoff of 0.98, indicating that this is the optimal cutoff value for this experiment. Therefore, all subsequent experiments were conducted using SVR with a linear kernel and PCA with a cutoff of 0.98, utilizing approximately 19 variables. With this setting, the model takes approximately 100 s to train on one year of data from a single location and about 2 s to correct the 24 h–71 h forecast for a single day (the source code is available at https://github.com/uramoon/wind-correct, accessed on 9 August 2024).

4.5. Monthly Performance Evaluation

Figure 6 illustrates the monthly RMSE values for wind forecasts from the ECMWF model and the improvements achieved through post-processing these forecasts using PCA and SVR for three regions: East Sea, Yellow Sea, and Korea Strait. The RMSE value for each region is the average of the RMSE values of the stations within that region.
In the East Sea, the RMSE values for forecasts from the ECMWF range from 2.52 in January to 7.22 in May. After applying PCA and SVR, the RMSE values show a significant reduction, ranging from 2.23 in January to 6.33 in May. This demonstrates a consistent improvement across all months, with the post-processed RMSE values being lower than the original forecasts from the ECMWF.
For the Yellow Sea, the ECMWF RMSE values are generally high, with values such as 5.36 in January and 5.07 in April, compared to those of the East Sea. The application of PCA and SVR reduces these values to 3.59 in January and 3.60 in April. This substantial decrease in RMSE highlights the effectiveness of the post-processing method in improving the accuracy of wind forecasts in the Yellow Sea.
In the Korea Strait, ECMWF RMSE values range from 2.23 in January to 4.12 in May. After post-processing, the RMSE values improve to a range of 2.06 in January to 3.66 in May. The reduction in RMSE values across all months indicates the robustness of the PCA and SVR approach in providing more accurate wind forecasts for the Korea Strait.
Overall, post-processing forecasts from the ECMWF with PCA and SVR consistently improved the RMSE values across all three regions and throughout the year. The reductions in RMSE values are evident in every month, suggesting that the post-processing method is effective in enhancing the accuracy of wind forecasts.

4.6. Analysis of Forecast Horizon

We corrected the 24 h–71 h portion of the 72 h forecast released the previous day, which aligns with the 0 h–48 h forecast released on the current day. Due to spin-up issues, the forecast for the current day is released several hours after 00:00. Therefore, our corrected forecast can be useful during this prediction gap. Furthermore, as shown in Figure 7, our corrected forecast is more accurate than the forecast released on the current day, making it highly beneficial for SAR operations.

4.7. Stations without Observational Data

So far, we have successfully post-processed the remaining 48 h forecasts by utilizing the first 24 h of forecast data and observational data at locations with observations. However, most maritime accidents occur in areas where wind observational data are not available, and we need to account for them. To address this, we use a model trained at the nearest wind station to the accident site, evaluating the similarity between the recent 24 h forecasts of both locations. We then correct the wind data using an adaptive weighting scheme.
Table 6 compares the results of different methods, assuming that each location has no observational data and using the model trained at the nearest station. In this table, ECMWF denotes the baseline NWPs for the target station. SVR represents the correction result obtained using the model trained at the nearest station. Averaging refers to the arithmetic mean of the ECMWF and SVR predictions. Adaptive weighting is the correction result obtained using an adaptive weighting scheme, which evaluates the cosine similarity between the forecasts of the target station and its nearest wind station.
In the East Sea, SVR produced the best results, but the adaptive weighting scheme also improved upon the baseline ECMWF. For the Yellow Sea, both SVR and Averaging showed higher RMSE values than the baseline, indicating a failure in correction; however, the adaptive weighting scheme successfully improved upon the baseline. Lastly, in the Korea Strait, the adaptive weighting scheme demonstrated the best results on average. The adaptive weighting scheme achieved an average RMSE improvement of 5.42%, outperforming the baseline at all locations. These promising results suggest that it can enhance NWP for SAR operations even at sites without observational data. The results of using the Euclidean norm instead of cosine similarity in the adaptive weighting scheme can be found in Appendix C.

5. Conclusions and Future Work

In this study, we explored various machine learning techniques to improve wind forecast accuracy in areas with or without observational data. Our primary methods included SVR for regression and PCA for dimensionality reduction. Through rigorous experimentation, we identified the optimal cutoff value for PCA, achieving an optimal balance between dimensionality reduction and information retention. This balance was crucial in ensuring that the most relevant features were preserved while minimizing computational complexity, thereby enhancing the performance of the wind prediction models.
Additionally, we introduced an adaptive weighting scheme to refine wind predictions for locations without observational data. This scheme utilized the cosine similarity between forecasts from target locations and their nearest wind stations to dynamically correct predictions. Our results demonstrated that this approach enhanced wind forecasts, which is essential for SAR operations.
Ensemble models are generally known to outperform single NWP models in terms of prediction accuracy [47]. For future research, we aim to improve wind prediction performance by post-processing various ensemble models. Additionally, preliminary experiments indicated that the performance of a simple artificial neural network was underwhelming. Therefore, our goal is to improve the correction model's performance by applying various neural networks, such as convolutional neural networks, recurrent neural networks, and Transformers.

Author Contributions

Conceptualization, D.-Y.K.; methodology, S.-H.M.; software, S.-H.M.; validation, S.-H.M., D.-Y.K. and Y.-H.K.; formal analysis, S.-H.M.; investigation, S.-H.M. and Y.-H.K.; resources, D.-Y.K. and Y.-H.K.; data curation, D.-Y.K.; writing—original draft preparation, S.-H.M.; writing—review and editing, D.-Y.K. and Y.-H.K.; visualization, S.-H.M.; supervision, Y.-H.K.; project administration, Y.-H.K.; funding acquisition, Y.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Korea Institute of Marine Science and Technology Promotion (KIMST), funded by the Ministry of Oceans and Fisheries, Korea (RS-2022-KS221629).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data will be made available upon reasonable request.

Acknowledgments

The present research was conducted with the support of a research grant from Kwangwoon University in 2020.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Performance Comparison of SVR Based on Kernel Types

The performance of wind correction with SVR may vary by kernel type. To investigate this, we conducted experiments comparing different kernel types, including linear, polynomial, and radial basis function (RBF) kernels. The polynomial kernel did not outperform the original ECMWF predictions. The comparison between the linear kernel and the RBF kernel is shown in Figure A1. While the RBF kernel outperforms the linear kernel in some cases, there are locations where the linear kernel provides a significant advantage, leading to an overall better performance.
Figure A1. Comparison of linear and RBF kernels in SVR.

Appendix B. Performance Comparison for Wind Direction

In the main text, our model is trained to predict the u- and v-components of wind, but accurate wind direction prediction is also important. To evaluate wind direction prediction, we computed the mean absolute error (MAE) as follows:
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \min\left( |\theta_i - \hat{\theta}_i|, \; |\theta_i - \hat{\theta}_i + 360^\circ|, \; |\theta_i - \hat{\theta}_i - 360^\circ| \right),$$
where $n$ is the number of observations, $\theta_i$ is the observed wind direction, and $\hat{\theta}_i$ is the predicted wind direction. Figure A2 compares the MAE values for wind direction between the ECMWF and SVR models. While the original forecasts from the ECMWF varied widely depending on the location, the SVR models showed less variation across locations, and SVR preceded by PCA exhibited the best average performance.
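A sketch of this wrap-around error, assuming directions are given in degrees within [0°, 360°):

```python
import numpy as np

def direction_mae(theta_true, theta_pred):
    """MAE for wind direction in degrees, taking the shorter way around the
    circle; equivalent to the equation above for directions in [0, 360)."""
    d = np.abs(np.asarray(theta_true) - np.asarray(theta_pred))
    return float(np.mean(np.minimum(d, 360.0 - d)))
```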
Figure A2. Comparison of MAE for wind direction prediction.

Appendix C. Euclidean Norm for Adaptive Weighting Scheme

The Euclidean norm measures vector similarity based on the magnitude of the vectors, while cosine similarity relies on the cosine of the angle between the vectors. We calculated the similarity between two vectors, x 1 and x 2 , using the Euclidean norm, as follows:
$$S_E(\mathbf{x}_1, \mathbf{x}_2) := 1 - \frac{2\, \|\mathbf{x}_1 - \mathbf{x}_2\|}{\|\mathbf{x}_1\| + \|\mathbf{x}_2\|},$$
where $\mathbf{x}_1$ and $\mathbf{x}_2$ are the two vectors, and $\|\mathbf{x}_1\|$ and $\|\mathbf{x}_2\|$ are their magnitudes or Euclidean norms. The more similar the recent 24 h forecasts of the target location are to those of the nearest wind station, the closer the Euclidean similarity is to 1. If the similarity is less than 0, the adaptive weighting scheme uses the uncorrected forecast for the target location. Figure A3 compares the RMSE values obtained with different similarity measures at locations without observations. While Euclidean similarity occasionally performs better, cosine similarity significantly outperforms it at certain locations, leading to better overall performance.
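A sketch matching the definition above (note that the equation itself is reconstructed from the surrounding description):

```python
import numpy as np

def euclidean_similarity(x1: np.ndarray, x2: np.ndarray) -> float:
    """1 when the vectors coincide; may drop below 0 for very dissimilar
    vectors, in which case the uncorrected forecast is kept."""
    return float(1.0 - 2.0 * np.linalg.norm(x1 - x2)
                 / (np.linalg.norm(x1) + np.linalg.norm(x2)))
```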
Figure A3. Performance comparison based on similarity measures in adaptive weighting scheme.

References

  1. Breivik, Ø.; Allen, A.A. An operational search and rescue model for the Norwegian Sea and the North Sea. J. Mar. Syst. 2008, 69, 99–113.
  2. Zhang, J.; Teixeira, Â.P.; Guedes Soares, C.; Yan, X. Probabilistic modelling of the drifting trajectory of an object under the effect of wind and current for maritime search and rescue. Ocean Eng. 2017, 129, 253–264.
  3. Nam, Y.W.; Cho, H.Y.; Kim, D.Y.; Moon, S.H.; Kim, Y.H. An Improvement on Estimated Drifter Tracking through Machine Learning and Evolutionary Search. Appl. Sci. 2020, 10, 8123.
  4. Zhang, X.; Cheng, L.; Zhang, F.; Wu, J.; Li, S.; Liu, J.; Chu, S.; Xia, N.; Min, K.; Zuo, X.; et al. Evaluation of multi-source forcing datasets for drift trajectory prediction using Lagrangian models in the South China Sea. Appl. Ocean Res. 2020, 104, 102395.
  5. Mecklenburg, S.; Joss, J.; Schmid, W. Improving the nowcasting of precipitation in an Alpine region with an enhanced radar echo tracking algorithm. J. Hydrol. 2000, 239, 46–68.
  6. Ulmer, F.G.; Balss, U. Spin-up time research on the weather research and forecasting model for atmospheric delay mitigations of electromagnetic waves. J. Appl. Remote Sens. 2016, 10, 016027.
  7. Short, C.J.; Petch, J. Reducing the spin-up of a regional NWP system without data assimilation. Q. J. R. Meteorol. Soc. 2022, 148, 1623–1643.
  8. Xu, W.; Liu, P.; Cheng, L.; Zhou, Y.; Xia, Q.; Gong, Y.; Liu, Y. Multi-step wind speed prediction by combining a WRF simulation and an error correction strategy. Renew. Energy 2021, 163, 772–782.
  9. Laloyaux, P.; Kurth, T.; Dueben, P.D.; Hall, D. Deep learning to estimate model biases in an operational NWP assimilation system. J. Adv. Model. Earth Syst. 2022, 14, e2022MS003016.
  10. Zhang, W.; Jiang, Y.; Dong, J.; Song, X.; Pang, R.; Guoan, B.; Yu, H. A deep learning method for real-time bias correction of wind field forecasts in the Western North Pacific. Atmos. Res. 2023, 284, 106586.
  11. Wang, X.; Guo, P.; Huang, X. A Review of Wind Power Forecasting Models. Energy Procedia 2011, 12, 770–778.
  12. Chrit, M.; Sartelet, K.; Sciare, J.; Pey, J.; Nicolas, J.B.; Marchand, N.; Freney, E.; Sellegri, K.; Beekmann, M.; Dulac, F. Aerosol sources in the western Mediterranean during summertime: A model-based approach. Atmos. Chem. Phys. 2018, 18, 9631–9659.
  13. Chrit, M.; Majdi, M. Improving wind speed forecasting for urban air mobility using coupled simulations. Adv. Meteorol. 2022, 2022, 2629432.
  14. Chrit, M.; Majdi, M. Using objective analysis for the assimilation of satellite-derived aerosol products to improve PM2.5 predictions over Europe. Atmosphere 2022, 13, 763.
  15. Chrit, M. Reconstructing urban wind flows for urban air mobility using reduced-order data assimilation. Theor. Appl. Mech. Lett. 2023, 13, 100451.
  16. Jiao, X.; Zhang, D.; Song, D.; Mu, D.; Tian, Y.; Wu, H. Wind Speed Prediction Based on VMD-BLS and Error Compensation. J. Mar. Sci. Eng. 2023, 11, 1082.
  17. Wan, A.; Gong, Z.; Wei, C.; AL-Bukhaiti, K.; Ji, Y.; Ma, S.; Yao, F. Multistep Forecasting Method for Offshore Wind Turbine Power Based on Multi-Timescale Input and Improved Transformer. J. Mar. Sci. Eng. 2024, 12, 925.
  18. Chrit, M.; Majdi, M. Operational wind and turbulence nowcasting capability for advanced air mobility. Neural Comput. Appl. 2024, 36, 10637–10654.
  19. Sun, X.; Liu, H. Multivariate short-term wind speed prediction based on PSO-VMD-SE-ICEEMDAN two-stage decomposition and Att-S2S. Energy 2024, 305, 132228.
  20. Sweeney, C.P.; Lynch, P.; Nolan, P. Reducing errors of wind speed forecasts by an optimal combination of post-processing methods. Meteorol. Appl. 2013, 20, 32–40.
  21. Xu, W.; Ning, L.; Luo, Y. Wind speed forecast based on post-processing of numerical weather predictions using a gradient boosting decision tree algorithm. Atmosphere 2020, 11, 738.
  22. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A highly efficient gradient boosting decision tree. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017.
  23. Phipps, K.; Lerch, S.; Andersson, M.; Mikut, R.; Hagenmeyer, V.; Ludwig, N. Evaluating ensemble post-processing for wind power forecasts. Wind Energy 2022, 25, 1379–1405.
  24. Duan, Z.; Liu, H.; Li, Y.; Nikitas, N. Time-variant post-processing method for long-term numerical wind speed forecasts based on multi-region recurrent graph network. Energy 2022, 259, 125021.
  25. Bouallègue, Z.B.; Cooper, F.; Chantry, M.; Düben, P.; Bechtold, P.; Sandu, I. Statistical modeling of 2-m temperature and 10-m wind speed forecast errors. Mon. Weather Rev. 2023, 151, 897–911.
  26. Theodoridis, S.; Koutroumbas, K. Pattern Recognition, 4th ed.; Academic Press: Cambridge, MA, USA, 2008.
  27. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. ACM Comput. Surv. 2017, 50, 1–45.
  28. Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10, 66–71.
  29. Abdi, H.; Williams, L.J. Principal component analysis. WIREs Comput. Stat. 2010, 2, 433–459.
  30. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
  31. Bastos, B.Q.; Cyrino Oliveira, F.L.; Milidiú, R.L. Componentnet: Processing u- and v-components for spatio-temporal wind speed forecasting. Electr. Power Syst. Res. 2021, 192, 106922.
  32. Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324.
  33. Hall, M.A. Correlation-Based Feature Selection for Machine Learning. Ph.D. Thesis, University of Waikato, Hamilton, New Zealand, 1999.
  34. Abe, S. Feature selection and extraction. In Support Vector Machines for Pattern Classification; Springer: London, UK, 2010; pp. 331–341.
  35. Yang, R.; Ren, M. Wavelet denoising using principal component analysis. Expert Syst. Appl. 2011, 38, 1073–1076.
  36. Cavalcanti, G.D.; Ren, T.I.; Pereira, J.F. Weighted modular image principal component analysis for face recognition. Expert Syst. Appl. 2013, 40, 4971–4977.
  37. Moon, S.H.; Kim, Y.H.; Lee, Y.H.; Moon, B.R. Application of machine learning to an early warning system for very short-term heavy rainfall. J. Hydrol. 2019, 568, 1042–1054.
  38. Jolliffe, I.T. Principal Component Analysis; Springer: New York, NY, USA, 2002.
  39. Leskovec, J.; Rajaraman, A.; Ullman, J.D. Mining of Massive Datasets, 2nd ed.; Cambridge University Press: Cambridge, UK, 2014.
  40. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  41. Xu, P.; Cao, Q.; Shen, Y.; Chen, M.; Ding, Y.; Cheng, H. Predicting the Motion of a USV Using Support Vector Regression with Mixed Kernel Function. J. Mar. Sci. Eng. 2022, 10, 1899.
  42. Jiang, L.; Zhang, Z.; Lu, L.; Shang, X.; Wang, W. Nonparametric Modelling of Ship Dynamics Using Puma Optimizer Algorithm-Optimized Twin Support Vector Regression. J. Mar. Sci. Eng. 2024, 12, 754.
  43. Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
  44. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C. Data Mining: Practical Machine Learning Tools and Techniques, 4th ed.; Morgan Kaufmann: Cambridge, UK, 2016.
  45. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  46. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  47. Bouallègue, Z.B.; Weyn, J.A.; Clare, M.C.A.; Dramsch, J.; Dueben, P.; Chantry, M. Improving medium-range ensemble weather forecasts with hierarchical ensemble transformers. Artif. Intell. Earth Syst. 2024, 3, e230027.
Figure 1. Locations of the 7 offshore wind stations around the Korean Peninsula.
Figure 2. Illustration of PCA for two variables: ECMWF forecast u-comp. 1 h and observation u-comp. 1 h.
Figure 3. Support vector regression with epsilon tube.
Figure 4. Flowchart of the wind prediction correction using PCA and SVR.
Figure 5. Flowchart of the wind prediction correction using the adaptive weighting scheme.
Figure 6. Monthly RMSE comparison of ECMWF and SVR preceded by PCA.
Figure 7. RMSE comparison of wind prediction. ECMWF (previous day) and its post-processed version with SVR cover the 24 h–71 h prediction period, while ECMWF (current day) covers the same time period with the 0 h–48 h forecast.
Table 1. Details of the seven offshore wind stations around the Korean Peninsula.

| Region | Name | ID | Latitude | Longitude | Missing Values | Wind Speed u (avg ± std) | Wind Speed v (avg ± std) |
|---|---|---|---|---|---|---|---|
| East Sea | Donghae | 22,105 | 37.54 | 130.00 | 1.53% | 0.65 ± 4.11 | −0.17 ± 5.21 |
| East Sea | Uljin | 22,190 | 36.91 | 129.87 | 1.47% | 1.05 ± 3.99 | −0.05 ± 5.13 |
| Yellow Sea | Sinangageocho | 61 | 33.94 | 124.59 | 1.36% | 0.03 ± 5.34 | −1.96 ± 6.65 |
| Yellow Sea | Gageodo | 22,297 | 34.03 | 125.21 | 10.64% | −0.73 ± 3.71 | −1.83 ± 5.46 |
| Korea Strait | Namhaedongbu | 25 | 34.22 | 128.42 | 5.13% | −0.06 ± 5.71 | −1.98 ± 4.31 |
| Korea Strait | Gyoboncho | 42 | 34.70 | 128.31 | 5.55% | 0.79 ± 4.13 | −0.94 ± 3.97 |
| Korea Strait | Geojedo | 22,104 | 34.77 | 128.90 | 1.06% | 0.01 ± 4.96 | −2.20 ± 4.52 |
Table 2. List of independent variables.

| No. | Variable | Range |
|---|---|---|
| 1 | sin(2π · day/365) | [−1, 1] |
| 2 | cos(2π · day/365) | [−1, 1] |
| 3–26 | u-component of ECMWF (0 h–23 h) | (−∞, ∞) |
| 27–50 | v-component of ECMWF (0 h–23 h) | (−∞, ∞) |
| 51–74 | u-component of observation (0 h–23 h) | (−∞, ∞) |
| 75–98 | v-component of observation (0 h–23 h) | (−∞, ∞) |
| 99 | Target hour t to be corrected | [24, 71] |
| 100 | u-component of ECMWF (t h) | (−∞, ∞) |
| 101 | v-component of ECMWF (t h) | (−∞, ∞) |
Table 3. Comparison of RMSE values for different machine learning models without dimensionality reduction techniques. The best values are shown in bold.

| Region | Station | ECMWF | Linear Regression | Random Forest | LightGBM | SVR |
|---|---|---|---|---|---|---|
| East Sea | 22,105 | 3.4266 | 3.3096 | 3.3032 | **3.1676** | 3.2374 |
| East Sea | 22,190 | 3.5883 | 3.3407 | 3.4393 | **3.1665** | 3.2830 |
| Yellow Sea | 61 | 3.3347 | **2.3438** | 2.4804 | 2.4117 | 2.3468 |
| Yellow Sea | 22,297 | 4.3479 | 3.6003 | 3.3795 | **3.3412** | 3.6327 |
| Korea Strait | 25 | 2.3360 | **2.2475** | 2.2951 | 2.2745 | 2.2714 |
| Korea Strait | 42 | 2.7496 | 2.2931 | 2.3282 | **2.2887** | 2.3215 |
| Korea Strait | 22,104 | 3.3867 | 3.2436 | 3.1451 | **3.0194** | 3.1555 |
| Average | | 3.3100 | 2.9112 | 2.9101 | **2.8099** | 2.8926 |
Table 4. Comparison of RMSE values for different machine learning techniques preceded by PCA. The best values are shown in bold.

| Region | Station | ECMWF | Linear Regression | Random Forest | LightGBM | SVR |
|---|---|---|---|---|---|---|
| East Sea | 22,105 | 3.4266 | 3.1764 | 3.6859 | 3.3020 | **3.1136** |
| East Sea | 22,190 | 3.5883 | 3.2120 | 3.7831 | 3.4203 | **3.1627** |
| Yellow Sea | 61 | 3.3347 | 2.3351 | 3.7248 | 3.1136 | **2.2832** |
| Yellow Sea | 22,297 | 4.3479 | 3.4678 | 3.7484 | 3.4730 | **3.4656** |
| Korea Strait | 25 | 2.3360 | 2.1790 | 3.2168 | 2.8556 | **2.1650** |
| Korea Strait | 42 | 2.7496 | 2.2377 | 3.0468 | 2.7290 | **2.2178** |
| Korea Strait | 22,104 | 3.3867 | 3.1393 | 3.5508 | 3.2264 | **3.0526** |
| Average | | 3.3100 | 2.8211 | 3.5367 | 3.1600 | **2.7801** |
Table 5. Average number of retained input variables and average RMSE for different PCA cutoff values.

| Cutoff | 0.90 | 0.91 | 0.92 | 0.93 | 0.94 | 0.95 | 0.96 | 0.97 | 0.98 | 0.99 |
|---|---|---|---|---|---|---|---|---|---|---|
| No. of input variables | 4.67 | 5.08 | 6.00 | 7.00 | 8.00 | 9.58 | 11.00 | 14.00 | 19.00 | 32.08 |
| Average RMSE | 4.0682 | 3.8830 | 3.7955 | 3.6382 | 3.3818 | 3.0959 | 2.9011 | 2.7909 | 2.7801 | 2.7883 |
Table 6. RMSE values for different correction methods at locations without observations. Averaging denotes the arithmetic mean of the results from ECMWF and SVR. The best values are shown in bold.

| Region | Station | ECMWF | SVR | Averaging | Adaptive Weighting |
|---|---|---|---|---|---|
| East Sea | 22,105 | 3.4266 | **2.9333** | 3.0350 | 3.2031 |
| East Sea | 22,190 | 3.5883 | **3.0673** | 3.2424 | 3.3729 |
| Yellow Sea | 61 | 3.3347 | 4.1225 | 3.4839 | **3.2906** |
| Yellow Sea | 22,297 | 4.3479 | 4.9351 | 4.4380 | **4.1347** |
| Korea Strait | 25 | 2.3360 | 2.5513 | 2.2213 | **2.1223** |
| Korea Strait | 42 | 2.7496 | 2.6546 | **2.6423** | 2.6481 |
| Korea Strait | 22,104 | 3.3867 | 3.2860 | 3.1539 | **3.1432** |
| Average | | 3.3100 | 3.3643 | 3.1738 | **3.1307** |
