Article

Sea Fog Dissipation Prediction in Incheon Port and Haeundae Beach Using Machine Learning and Deep Learning

1 Underwater Survey Technology 21, Incheon 21999, Korea
2 Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
3 Korea Hydrographic and Oceanographic Agency, Busan 49111, Korea
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(15), 5232; https://doi.org/10.3390/s21155232
Submission received: 8 June 2021 / Revised: 19 July 2021 / Accepted: 29 July 2021 / Published: 2 August 2021
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract

Sea fog is a natural phenomenon that reduces visibility for manned vehicles and vessels that rely on the visual interpretation of traffic. Fog clearance, also known as fog dissipation, is a relatively under-researched area compared with fog prediction. In this work, we first analyzed meteorological observations related to fog dissipation in Incheon port (one of the most important ports for the South Korean economy) and Haeundae beach (the most populated and famous resort beach near Busan port). Next, we modeled fog dissipation as two separate tasks, classification and regression, using nine machine learning and three deep learning techniques. In general, the applied methods demonstrated high prediction accuracy, with extra trees and recurrent neural nets performing best in the classification task and feed-forward neural nets in the regression task.

1. Introduction

Sea fog is an important meteorological phenomenon, much like wind and precipitation. It influences the daily operations of air, sea, and land transportation, and its negative economic effects have been estimated [1]. Fog impedes visibility and is conducive to traffic crashes. A study in the USA demonstrated that, depending on the state, weather can contribute to over 21% of all crashes, and fog together with snow and rain contributed to 31,514 traffic crashes between 2000 and 2007 [2].
Sea fog generally forms as advection fog, when relatively moist and warm air moves over the sea surface and its temperature falls below the dew point [3]. Sea fog can also form onshore and drift over the sea surface, or be a mix of onshore and offshore fog [4]. In South Korea, fog occurs mostly on the west coast. It has a seasonal pattern, with most fog occurring during the monsoon season in June and July [5].
Fog prediction is mostly performed using either numerical or machine learning methods [6,7]. Neural networks have been used for fog prediction for a long time with varied success. Unlike in other domains, fog data are relatively scarce, whereas neural networks work best in a data-abundant regime [8,9,10]. Furthermore, fog occurs infrequently, which leads to data imbalance, with most data falling in the non-fog class.
While fog prediction is a well-studied [11] and established area, fog dissipation is scarcely covered in the literature. Fog dissipation refers to the clearing of fog from the air and the resulting improvement in visibility. Some numerical approaches have been suggested to model fog dissipation. One way in which dissipation happens is when the fog droplets grow larger and fall to the ground [12]. Predicting fog dissipation is important for flight planning [13] and cargo working times [14].
In this paper, our goal is to find suitable machine learning and deep learning algorithms for sea fog dissipation data. The structure of this paper is as follows: in Section 2, we analyze the weather data and present the predictive modeling method; Section 3 introduces the classification and regression models used for sea fog dissipation prediction; in Section 4, we summarize the prediction results obtained with each model; lastly, in Section 5, the results of Section 4 are further discussed.

2. Weather Data and Prediction Modelling

2.1. Data Sources and Preprocessing

Data from two sites were used: Incheon port and Haeundae beach. The observation data were obtained from the Korea Meteorological Administration (KMA) and the Korea Hydrographic and Oceanographic Agency (KHOA). Incheon is a strategic port on the west coast of the Korean Peninsula facing the Yellow Sea, while Haeundae beach is a popular beach in the southeast of the peninsula (Figure 1).
For Incheon, the observation period spans over seven years, from 1 January 2012 to 31 May 2019, and for Haeundae, over five years, from 1 January 2014 to 31 July 2019. The objective of our study was to develop a sea fog dissipation prediction model to be operated on the KHOA system. The amount of sunlight in the collected data was not considered in the preliminary feature selection because it is not observable in real time in the KHOA system. Moreover, cumulative precipitation was excluded because it was confirmed to decrease sea fog prediction performance when added to the training dataset [15]. Out of the total number of variables available, seven were selected as base features: air temperature, sea surface pressure, relative humidity, sea surface temperature, visibility, u-component wind, and v-component wind. Using these features, we created additional features: air and sea temperature difference (ASTD), dew point temperature (DT), air and dew point temperature difference (T_DT), and sea surface temperature and dew point temperature difference (sst_DT). The given and created features are listed in Table 1. We resampled 1-min observations into 10-min observations. Sea surface temperature is available once an hour, and its value was therefore assigned to the six corresponding 10-min observations.
The air and sea temperature difference, the air and dew point temperature difference, and the sea surface temperature and dew point temperature difference are all derived by simple subtraction. The dew point temperature was calculated using the formula suggested in [16] as follows:
$$DT = air\_temp - \left(\frac{100 - humidity}{5}\right)\left(\frac{air\_temp + 273.15}{300}\right)^{2}$$
where $DT$ is the dew point temperature, $air\_temp$ is the air temperature, and $humidity$ is the relative humidity.
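As an illustration, the derived features can be computed as in the following Python sketch (assuming a pandas DataFrame with the column names of Table 1; the sign conventions of the difference features are our assumptions):

```python
import pandas as pd

def add_derived_features(df: pd.DataFrame) -> pd.DataFrame:
    """Append the four derived features to a frame of base observations.

    Assumes columns named as in Table 1 (air_temp, humidity, sea_temp);
    the sign conventions of the differences are illustrative assumptions.
    """
    out = df.copy()
    # Dew point temperature from air temperature (degC) and relative humidity (%)
    out["DT"] = out["air_temp"] - ((100 - out["humidity"]) / 5) * \
                ((out["air_temp"] + 273.15) / 300) ** 2
    out["ASTD"] = out["air_temp"] - out["sea_temp"]    # air minus sea surface temperature
    out["T_DT"] = out["air_temp"] - out["DT"]          # air minus dew point temperature
    out["sst_DT"] = out["sea_temp"] - out["DT"]        # sea surface minus dew point temperature
    return out
```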
For a given time $T_i$ in the observed period and a feature set $f_i = [air\_temp_i, sea\_air\_pre_i, humidity_i, sea\_temp_i, vis_i, u_i, v_i, ASTD_i, DT_i, T\_DT_i, sst\_DT_i]$, we created the past one-hour feature vector $v1_i = [f_i, f_{i-1}, \ldots, f_{i-5}, f_{i-6}]$ and the past three-hour feature vector $v3_i = [f_i, f_{i-1}, \ldots, f_{i-35}, f_{i-36}]$. These feature vectors were then used separately to observe their effect on training. The same set of features was used for all models, as described in Section 2.2.
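A minimal sketch of building such lagged feature vectors from the 10-min resampled data (assuming a pandas DataFrame with a regular 10-min index; the function name is illustrative):

```python
import pandas as pd

FEATURES = ["air_temp", "sea_air_pre", "humidity", "sea_temp", "vis",
            "u", "v", "ASTD", "DT", "T_DT", "sst_DT"]

def make_past_feature_vector(df_10min: pd.DataFrame, n_lags: int) -> pd.DataFrame:
    """Concatenate the current 10-min observation with its n_lags predecessors.

    n_lags = 6 corresponds to the past one-hour vector v1; larger values give
    the longer vectors described above.  Assumes a regular 10-min index.
    """
    frames = [df_10min[FEATURES]]
    for lag in range(1, n_lags + 1):
        frames.append(df_10min[FEATURES].shift(lag).add_suffix(f"_t-{lag}"))
    return pd.concat(frames, axis=1).dropna()   # drop rows without a full history
```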

2.2. Dissipation as a Classification and Regression Task

Dissipation is the natural end of fog. Once the fog clears and visibility returns to normal ranges, the fog is said to have dissipated. We define fog as visibility less than or equal to 1100 m. Visibility may drop below or rise above this threshold at any point within a given consecutive period. Owing to this variability, fog dissipation does not always occur immediately after visibility rises above the 1100 m threshold. Therefore, fog dissipation prediction is also performed for cases in which the visibility is over 1100 m but is preceded by a fog period.
In this work, dissipation prediction is formulated as a binary classification problem (dissipate or non-dissipate) and as a regression problem. To this end, we grouped all fog cases that occur within close proximity (within one, two, or three hours, depending on the prediction period) into grouped fog periods $[T_i, T_{i+k}]$ consisting of $k$ cases. The period $[T_i, T_{i+k}]$ consists mostly of continual fog cases with intermittent, short non-fog cases.
For the binary classification task, each case from the $[T_i, T_{i+k}]$ period is labeled as either 1 or 0 depending on whether the fog dissipates within 60, 120, or 180 min from that point. The grouping period depends on the prediction period: for prediction horizons of 60, 120, or 180 min, the grouping period is 60, 120, or 180 min, respectively. In Figure 2, visibility dropped below 1100 m at 20, 30, 40, 120, 130, 140, and 150 min, and these cases are labelled as fog. When the prediction period is 60 min, and hence a grouping period of the same length is used, the mentioned fog occurrences are grouped into the 20–40 and 120–150 min intervals. However, if a prediction period, and hence a grouping period, of 120 min is used, these two intervals and the interval between them are all grouped together into the 20–150 min interval. Then, the data from these intervals are included in the training.
As can be seen, the longer the grouping period, the more cases were grouped and the more non-fog cases were included. In practice, this also increases the amount of training data for longer periods, although not linearly with the period length, because we removed grouped cases with any missing observations in the base features. In Figure 2, if any base features from the 20–40 min interval are missing, then only this interval is excluded from the training, provided the 120–150 min interval has no missing values. In contrast, when grouping and predicting for 120 min, the whole 20–150 min interval is excluded if any base features are missing.
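The grouping and labelling procedure can be sketched as follows (a simplified illustration rather than the exact implementation; the helper names and the convention that True denotes dissipation within the horizon are our own choices):

```python
import pandas as pd

def group_fog_periods(fog_times: pd.DatetimeIndex, window_min: int) -> list:
    """Group fog observations (visibility <= 1100 m) that lie within
    `window_min` of each other, as described for the grouping step above.
    `fog_times` is assumed to be a sorted, non-empty 10-min index."""
    groups, current = [], [fog_times[0]]
    for t in fog_times[1:]:
        if (t - current[-1]) <= pd.Timedelta(minutes=window_min):
            current.append(t)              # still inside the same grouped fog period
        else:
            groups.append(current)         # gap exceeds the window: close the group
            current = [t]
    groups.append(current)
    return groups

def dissipates_within(case_time, group_end, horizon_min: int) -> bool:
    """Label a case True if the grouped fog period clears within `horizon_min`
    of it (dissipation taken as one 10-min step after the last fog case)."""
    ttd = (group_end - case_time) / pd.Timedelta(minutes=1) + 10
    return ttd <= horizon_min
```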
The regression task aims to predict the time to dissipation for each case in the grouped $[T_i, T_{i+k}]$ period. All cases in the grouped period share the same dissipation time $T_{i+k+1}$, and their times to dissipation decrease monotonically: $\{10k \text{ min}, 10(k-1) \text{ min}, \ldots, 20 \text{ min}, 10 \text{ min}\}$. Thus, for the $j$-th case of the grouped period ($j = 1, \ldots, k$), its time to dissipation is
$$ttd_j = (k - j + 1) \times 10 \ \text{min}$$
Formally, given a prediction period $t \in \{60, 120, 180\}$ min (i.e., 6, 12, or 18 ten-minute intervals), each case $i$ of a grouped fog period $[T_i, T_{i+k}]$, which consists of fog cases within the prediction-time proximity $t$ of each other, is labelled $L_i^t$ as
$$L_i^t = \begin{cases} \text{True}, & ttd_i > t \\ \text{False}, & \text{otherwise} \end{cases}$$
where $L_i^t = \text{True}$ indicates that the fog is maintained (does not dissipate) within $t$ minutes, and $L_i^t = \text{False}$ indicates that it dissipates within $t$ minutes.
In general, the later a case occurs within the grouped fog period $[T_i, T_{i+k}]$, the sooner its time to dissipation. Given that the longest period over which we aimed to group fog cases was three hours, we grouped fog cases occurring within three hours of each other into distinct grouped fog periods. In practice, grouping over even longer periods resulted in more non-fog cases being included in the training and was therefore of less interest.
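As a small illustration of the regression target, the following sketch enumerates the times to dissipation for one grouped fog period (one plausible reading of the formula above; the function name is ours):

```python
def times_to_dissipation(k: int) -> list:
    """Regression targets (minutes) for the k cases of a grouped fog period,
    read from the monotonically decreasing sequence above: the j-th case
    (j = 1, ..., k) is (k - j + 1) * 10 minutes away from dissipation."""
    return [(k - j + 1) * 10 for j in range(1, k + 1)]

# Example: a grouped fog period with k = 4 ten-minute cases
print(times_to_dissipation(4))   # [40, 30, 20, 10]
```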

2.3. Data Characteristics

Data came from both Incheon port and Haeundae beach (Busan) in the Republic of Korea. Incheon is located in the northwest and Haeundae beach in the southeast. These geographical differences play a role in the distribution of the base features and in fog frequency. Fog cases represent 0.8% and 2.8% of all observations for Incheon port and Haeundae beach, respectively. Therefore, the resulting number of selected data points is small despite the relatively long observation periods. Furthermore, during the observation period from 1 January 2012 to 31 May 2019 for Incheon and from 1 January 2014 to 31 July 2019 for Haeundae beach, one or more of the 11 features were sometimes missing, which further reduced the size of the selected data. In such cases, the entire grouped observation period was removed, even if only one or a few observations had missing values.
Among the base features (10-min averages), air temperature, sea surface pressure, humidity, and sea surface temperature were slightly higher in Incheon port than in Haeundae beach during the observation period (Table 2). The derived features, such as the air and sea temperature difference (ASTD), dew point temperature (DT), air and dew point temperature difference (T_DT), and sea surface temperature and dew point temperature difference (sst_DT), demonstrated similar characteristics. There were two optical visibility meters: the VAISALA system with a limit of 20,000 m and the AANDERAA system with a limit of 3000 m. Visibility was capped at 3000 m for both sites, and we decided to use the visibility values of the VAISALA system after comparing them with CCTV images of both sites.
Weather conditions are related to each other, and despite a high variance in each individual feature, there is a considerable correlation among some features, as shown in Figure 3. Air temperature has a strong correlation with sea surface temperature, air and sea temperature difference (ASTD), and sea surface temperature and dew point temperature difference (sst_DT). Air temperature is a component of the latter two and, therefore, a high correlation is expected. There are strong and negative correlations between air temperature and sea surface pressure, and sea surface temperature and dew point temperature differences (sst_DT). Again, the latter feature is derived from air temperature, and therefore a high correlation is inevitable.
There were other correlations, both positive and negative, among the rest of the features at both Incheon port and Haeundae beach. Despite these highly correlated features, all of them were included in the training because of the small number of features available. For the same reason, we could not choose a linear model as the learning model, and the list of seven base features was extended with derived features, such as the air and sea temperature difference and the dew point temperature. Overall, as weather conditions are highly interdependent, the observed correlations are inevitable.
The counts of the final selected data are summarized in Table 3. Despite the shorter observation period, more data are available for Haeundae than for Incheon, since fog was more frequent at Haeundae beach during the observation period. Moreover, Haeundae beach had more missing values, and therefore more data points were removed as a result. A similar effect was observed when the past three-hour features were selected instead of the past one-hour features: the number of missing values increased, and more data points were removed as a result.
As the grouping period is extended, the probability of fog dissipation also increases, which is reflected in a growing share of data in the dissipation class relative to the non-dissipation class. A grouped fog period contains observations with visibility less than or equal to 1100 m (allowing for the 10% error of the optical visibility equipment) that are within one (two or three) hours of each other, together with the other observations in between them. At both studied sites, if not grouped, consecutive fog periods had a median duration of 0.67 and 0.33 h for Incheon and Haeundae, respectively (Figure 4). However, visibility fluctuates around the threshold, and fog tends to clear and return before final dissipation. Therefore, fog periods were grouped within one-, two-, and three-hour intervals, and the median duration increased to 1.33, 1.67, and 2.17 h in Incheon port and 1.17, 1.67, and 2.17 h in Haeundae beach, respectively. On average, Haeundae has longer fog durations than Incheon.
The time to dissipation is a continuous quantity that we model for prediction. Figure 5 shows its distribution for both sites; it was calculated by grouping fog cases that occur within 3 h of each other. The distribution is skewed, with half of the fog dissipating within 2.5 h in Incheon and within 5.33 h in Haeundae beach. Haeundae beach had a longer time to dissipation on average than Incheon port and was more difficult to predict, as discussed in Section 4.

3. Classification and Regression Algorithms

Each of the methods discussed below was used for both classification and regression. Models were selected on the basis of the simultaneous availability of a classifier and a regressor (AdaBoost, bagging, extra trees, gradient boosting, random forest, k-nearest neighbors, and decision tree), excluding linear and multiclass models. In addition, the set of nine machine learning models was completed by replacing Scikit-learn's HistGradientBoosting with the widely used LightGBM model. Finally, since our goal was also to test the predictive performance of deep learning models, we selected three widely used architectures: the FFNN, CNN, and RNN. Unless otherwise mentioned, the hyperparameters used are the defaults of Scikit-learn version 0.21; they are given in Appendix A.

3.1. k-Nearest Neighbors (k-NN)

k-NN is a non-parametric classifier that uses the distances between a data point and its k closest neighbors to decide its class [17]. The most represented class in the neighborhood of the data point is chosen as its class. The algorithm needs an appropriate number of neighbors to be selected for classification. Depending on the value of k, the algorithm may run faster or slower and, most importantly, may or may not choose an appropriate class.
k-NN regression predicts the value of the output variable using a local average. We used the default settings and set the number of nearest neighbors to 7.
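A minimal sketch of this setup with Scikit-learn (placeholder variable names; the training calls are shown as comments because the data pipeline of Section 2 is assumed):

```python
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Sketch of the k-NN setup described above: 7 neighbours, remaining
# hyperparameters left at the Scikit-learn defaults.  X_train, y_label (binary
# dissipation label) and y_ttd (time to dissipation) are placeholders for the
# feature matrix and targets built in Section 2.
knn_cls = KNeighborsClassifier(n_neighbors=7)
knn_reg = KNeighborsRegressor(n_neighbors=7)
# knn_cls.fit(X_train, y_label)
# knn_reg.fit(X_train, y_ttd)
```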

3.2. Decision Tree (DT)

The decision tree classifier is one of the widely used early classification algorithms in data mining. The model derives from the work on classification and regression trees (CART) [18]. The decision tree is built by splitting the data on the basis of the Gini impurity, calculated as
$$I_g(p) = 1 - \sum_{i=1}^{J} p_i^2$$
where $I_g$ is the Gini impurity, $J$ is the number of classes (here two, $p \in \{0, 1\}$), and $p_i$ is the fraction of items with label $i$ (sea fog dissipation or not). Splitting continues at each node until the minimum number of items remains; that node then becomes a leaf, and splitting is discontinued there.
In the case of regression, a similar tree construction algorithm is employed, with the mean squared error as the splitting criterion. For both classification and regression, we did not change the default hyperparameters.

3.3. Support Vector Machine (SVM)

The SVM represents another class of discriminative classifiers, which separate data points by constructing a hyperplane in a high-dimensional space. The method was originally proposed in [19]; we used the C-Support Vector Classification implementation provided in Scikit-learn [20].
In practice, the support vector machine performed the worst among the algorithms for both classification and regression. We spent some time trying to tune it, but the model itself turned out not to be well suited to the task at hand. The only hyperparameter we decided to change was the gamma option, the kernel coefficient for the 'radial basis function', 'polynomial', and 'sigmoid' kernels, for which we chose 'scale' in the Scikit-learn package.
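A minimal Scikit-learn sketch of this configuration (kernel and gamma as described above, other settings default):

```python
from sklearn.svm import SVC, SVR

# Sketch of the SVM configuration described above: the RBF kernel with
# gamma='scale'; all other hyperparameters are left at their defaults.
svm_cls = SVC(kernel="rbf", gamma="scale")
svm_reg = SVR(kernel="rbf", gamma="scale")
```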

3.4. Bagging and Boosting Ensemble Models

3.4.1. Random Forest (RF)

The random forest classifier is a randomized ensemble of decision trees [21]. Each decision tree is trained separately on a random subsample of the training data drawn with replacement, and the final decision is made on the basis of the average of the trees in the ensemble.
In the experiments, we found that setting the number of estimators to 100 gave good performance, and we left the other hyperparameters unchanged for both classification and regression. Where possible, the same hyperparameter values were applied across models to keep the comparison consistent.
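A minimal Scikit-learn sketch of this configuration:

```python
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Sketch of the random forest setup described above: 100 estimators, all other
# hyperparameters left at the Scikit-learn defaults.
rf_cls = RandomForestClassifier(n_estimators=100)
rf_reg = RandomForestRegressor(n_estimators=100)
```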

3.4.2. Extremely Randomized Trees (ET)

The extremely randomized trees classifier is another type of randomized tree ensemble, very similar to the random forest classifier, except that several candidate decision thresholds for each split are drawn at random and the best among them is chosen as the split threshold [22].
Similarly to the random forest, we left the default hyperparameters of extremely randomized classifiers and regressors unchanged.

3.4.3. Bagging

The bagging classifier is very similar to the random forest classifier, except that it is a meta-classifier that can build its ensemble from any base classifier, not only a decision tree. Given a base estimator, it fits several estimators to random subsamples of the data drawn with replacement. The final decision is based on the average of all estimators' predictions [23].
For both tasks, we chose the decision tree as the base classifier. Regression is achieved in a similar fashion to that of classification.

3.4.4. AdaBoost (AB)

The AdaBoost classifier is a representative classifier for model boosting, which, unlike bagging, relies on a set of weak classifiers. Each classifier is trained successively and not in parallel, concentrating on the error of the previous one. Such an ensemble of weak classifiers then produces a cascading effect with a net-positive effect on the accuracy of the final result [24].

3.4.5. Gradient Boosting (GB)

Gradient boosting can be seen as another boosting algorithm, but it is more general than the AdaBoost classifier. The main difference between the AdaBoost classifier and gradient boosting is that the latter identifies the shortcomings of each weak classifier by its gradient [25,26]. This difference is expressed in the particular loss function of each classifier.
The learning rate was set to 0.1, and the remaining parameters were set to default.

3.4.6. Light GBM (LGBM)

While there are many boosting classifiers based on decision trees, they have the disadvantage of long training times. In this regard, the LightGBM classifier is reported to be up to 20 times faster to train than comparable algorithms while providing almost the same accuracy [27]. This is achieved by gradient-based one-side sampling and exclusive feature bundling.
Similar to other cases, the classification and regression models used the same hyperparameters, and no change was made to the default hyperparameters.
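A minimal sketch using the LightGBM Python package (default hyperparameters, as described above):

```python
import lightgbm as lgb

# Sketch of the LightGBM models with default hyperparameters, as described above.
lgbm_cls = lgb.LGBMClassifier()
lgbm_reg = lgb.LGBMRegressor()
```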

3.5. Neural Network-Based Architectures

3.5.1. Feed-Forward Neural Network (FFNN)

The simplest neural network in terms of architecture is the feed-forward neural network, which stacks layers of perceptrons on top of one another [28]. Such a simple architecture has proven effective for most classification tasks. When past time steps are required, they can be incorporated by concatenation with the current time step. For both tasks, we concatenated the base features of the past time steps with those of the current time step. The network then had three hidden layers of 512, 256, and 256 neurons, each followed by batch normalization and a ReLU activation layer. The final layer was a logistic regressor/classifier with a sigmoid activation function. For regression, the output was used as it was, while for classification, outputs equal to or greater than 0.5 were treated as ones, and as zeros otherwise.
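A sketch of this architecture in TensorFlow/Keras is given below (layer sizes as described above; the optimizer, loss, and input size are illustrative assumptions, shown for the classification head):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_ffnn(n_inputs: int) -> tf.keras.Model:
    """Sketch of the FFNN described above: three dense layers (512, 256, 256),
    each followed by batch normalization and ReLU, and a sigmoid output unit.
    The optimizer and loss are assumptions not stated in the text."""
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = inputs
    for units in (512, 256, 256):
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# e.g., 11 features for the current step plus 6 past 10-min steps (one-hour setting)
model = build_ffnn(n_inputs=11 * 7)
```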

3.5.2. Convolutional Neural Network (CNN)

One of the earliest versions of the CNN was designed to recognize hand-written digits [29]. Although originally designed mostly for image-related tasks, the CNN has been adapted to a broad range of tasks, such as text classification, spatiotemporal data analysis, and weather forecasting.
We modeled our classification and regression tasks as convolutions over time steps, with each time step having 11-dimensional features (seven base features and four derived features). Similarly to the FFNN, we concatenated the time steps but did not flatten them; thus, each input resembled a single-channel image. There were two convolution layers with 512 and 256 output filters, each followed by a ReLU activation layer. The output of the final layer was passed to a perceptron with the same activation as in the FFNN.
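A TensorFlow/Keras sketch of this architecture (filter counts as described above and kernel size 2 as in Appendix A; optimizer, loss, and input size are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(n_steps: int, n_features: int = 11) -> tf.keras.Model:
    """Sketch of the CNN described above: stacked time steps treated as a
    single-channel image, two convolution layers with 512 and 256 filters
    (kernel size 2, as in Appendix A), each followed by ReLU, and a sigmoid
    output unit.  The optimizer and loss are assumptions."""
    inputs = tf.keras.Input(shape=(n_steps, n_features, 1))
    x = layers.Conv2D(512, kernel_size=2, activation="relu")(inputs)
    x = layers.Conv2D(256, kernel_size=2, activation="relu")(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_cnn(n_steps=7)   # current step plus six past 10-min steps
```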

3.5.3. Recurrent Neural Network (RNN)

Composed of LSTM cells, the RNN is able to capture and remember long-term dependencies [30]. Therefore, unlike the FFNN and CNN, the RNN is well suited to time-series data such as fog dissipation [31]. Unlike in the other neural network architectures, the inputs are not concatenated in the RNN. Instead, each time step is an input with 11 features; the past one-hour features represent 6 past time steps, while 3 and 6 h of past features represent 18 and 36 past time steps, respectively.
Two recurrent LSTM layers are stacked. Each recurrent layer passes its output forward, with the last recurrent layer returning only the final LSTM cell's output. Each LSTM cell has 64 neurons, and the output of the last recurrent layer is passed to a perceptron, just as in the other two architectures, with the same activation function for the classification and regression tasks.
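A TensorFlow/Keras sketch of this architecture (two LSTM layers of 64 units as described above; optimizer, loss, and input length are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_rnn(n_steps: int, n_features: int = 11) -> tf.keras.Model:
    """Sketch of the stacked-LSTM model described above: two recurrent layers
    of 64 units, the second returning only its final output, followed by a
    sigmoid unit.  The optimizer and loss are assumptions."""
    inputs = tf.keras.Input(shape=(n_steps, n_features))
    x = layers.LSTM(64, return_sequences=True)(inputs)
    x = layers.LSTM(64)(x)                      # returns only the last time step
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

model = build_rnn(n_steps=7)    # e.g., the past one-hour setting (current + 6 steps)
```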

3.6. Evaluation

For the binary classification task, we evaluated each model's performance using the critical success index (CSI), precision, recall, and F1 scores. The CSI is a verification measure of categorical forecast performance, equal to the number of correct event forecasts (hits) divided by the number of hits plus misses plus false alarms. Precision is the ratio of correctly predicted positive observations to all predicted positive observations, while recall is the ratio of correctly predicted positive observations to all observed events. The F1 score is the harmonic mean of precision and recall. Given a confusion matrix, the scores are calculated as follows:
$$\mathrm{CSI} = \frac{TP}{TP + FP + FN}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
where TP—true positive, TN—true negative, FP—false positive, and FN—false negative.
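These scores can be computed directly from the binary predictions, as in the following sketch (assuming 1 denotes the positive class and that no denominator is zero):

```python
import numpy as np

def classification_scores(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """CSI, precision, recall, and F1 from binary labels, following the
    definitions above.  A sketch; assumes 0/1 arrays with 1 as the positive
    (event) class and non-degenerate inputs."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "CSI": tp / (tp + fp + fn),
        "Precision": precision,
        "Recall": recall,
        "F1": 2 * precision * recall / (precision + recall),
    }
```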
For the regression task, we evaluated each model's performance using the mean squared error (MSE), root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R2). The MSE measures the average of the squared errors. Taking the square root of the MSE yields the RMSE, which has the same units as the quantity being estimated. The MAE is the arithmetic average of the absolute errors. R2 represents the proportion of the variance of y that is explained by the independent variables in the model. The scores are calculated as follows:
$$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$
$$RMSE = \sqrt{MSE}$$
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$$
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}$$
where $y_i$ is the ground truth, $\hat{y}_i$ is the predicted value at index $i$, and $\bar{y}$ is the mean of all ground-truth values.
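A short sketch computing these scores with scikit-learn's metric functions:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def regression_scores(y_true, y_pred) -> dict:
    """MSE, RMSE, MAE, and R2 following the definitions above (a sketch
    using scikit-learn's metric functions)."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "MAE": mean_absolute_error(y_true, y_pred),
        "R2": r2_score(y_true, y_pred),
    }
```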

4. Results

For the experiments, we used an ordinary desktop computer running Microsoft Windows 10, with a 2.90 GHz dual-core Intel CPU, an NVIDIA GeForce GTX 1660 GPU accelerator, and two 8 GB DDR4 RAM modules. We ran each model five times and report their median performance here.

4.1. Classification Results

When modeled as a classification task, the models showed consistent performance over the prediction times of one, two, and three hours, as shown in Table 4, Table 5 and Table 6, respectively. As the prediction time is extended, the prediction of dissipation becomes more accurate. Another dimension along which improvement is achieved is the number of past time steps: more features resulted in better accuracy for all models, with the three-hour features outperforming the one-hour ones. The performance of the models over the prediction periods was averaged and ranked, and the rankings derived in this way are relatively consistent across the different numbers of past time steps.
Among the studied sites, the predictions for Incheon port, measured by the median CSI score over the models, were about 11% higher than those for Haeundae beach. The prediction accuracy improved as the prediction period was extended. The most accurate predictions came from some of the ensemble and neural network models. Extremely randomized trees and random forest were the strongest among the ensemble methods, while the RNN was the best among the neural network-based classifiers.
The SVM performed the worst among all of the models. Its performance for one-hour prediction was below a random guess based on the class distribution of the training data. The SVM classifies on the basis of separating planes in high dimensions, and it seems that, for fog dissipation within one hour, it could not find an accurate plane with the radial basis function (RBF) kernel. Other kernels with different gamma and penalty values did not show any improvement over the default RBF kernel.
Another classifier that performed worse than others but better than SVM was the gradient boosting-based ensemble model. Gradient boosting builds an ensemble sequentially by using weak classifiers with each next one building on the error of the previous one. While tuning it for better performance, the number of estimators appeared to be the most important among other hyperparameters. The current results were calculated using an ensemble of 10,000 estimators.
Other non-neural network-based models demonstrated reasonable results with their default settings in the Scikit-learn package [32]. The neural networks were tuned on the number of layers and the number of neurons per layer. Among the three neural network-based architectures, the RNN was the best, except when used with the base features alone. With only the base features, there are no past time steps to recur over, and the model therefore cannot learn from them. On the contrary, when past time steps are input to the model, fog dissipation is usually best captured by the recurrent nets. Overall, compared with extremely randomized trees, the RNN demonstrated relatively higher performance for Haeundae beach than for Incheon port.
In general, the longer the period of prediction, the more accurate the models’ performances and the less the difference between them. Using only base features is also effective in predicting fog dissipation with most of the models. Fog dissipation in Incheon port is more predictable than in Haeundae beach, although slightly more data are available for the latter. This could be due to the meteorological conditions in Haeundae beach being more complex than those in Incheon port.
On the basis of the highest prediction performance among the base, 1H, and 3H features, we plotted a comparison graph of the prediction performance of the learning models, selected by CSI score (Figure 6). Looking at the overall performance indices, most of the models showed very good PAG (post agreement, i.e., precision) performance, but POD (probability of detection, i.e., recall) performance varied considerably between Incheon and Haeundae. Among the tree-based models, the POD performance of the ET algorithm was the best. Treating sea fog dissipation prediction as a classification problem showed that algorithms such as the FFNN, RNN, RF, and ET are superior to the other models.

4.2. Regression Results

For the regression task, we grouped fog cases occurring within three hours of each other. We then fit models to predict the time to dissipation for each case within the grouped fog period. As in the classification case, we experimented with the base features and the past one- and three-hour features. Unlike in the classification case, a larger number of features did not always result in more accurate predictions. For a number of tree-based ensemble models, such as bagging and random forest, performance with only the base features was on par with their past three-hour feature models, as can be observed in Table 7.
Among the neural network-based models, the FFNN was the most accurate, with the convolutional and recurrent architectures being much less accurate. Given three hours of past features, the FFNN achieved an R2 of 0.99 and 0.97 for Incheon port and Haeundae beach, respectively, which ranks it as the best model. Given the base and past one-hour features, it ranked second-best for Incheon port and fourth and second, respectively, for Haeundae beach. The recurrent neural nets demonstrated the worst performance (an R2 of 0.04) among the models given only the base features, which is not surprising since the prediction was then made with only a single LSTM cell at each recurrent layer. Overall, except for the FFNN, both the RNN and CNN ranked much worse than the other models when compared with their ranking in the classification case.
Extremely random trees performed the best in most of the cases, just as in the classification case. Other tree-based ensemble models, such as bagging, random forest, and Light GBM, performed on par with each other. The worst performance was demonstrated by AdaBoost and SVR, with k-nearest neighbors performing slightly better. Overall, the regression results appeared to be quite accurate, and the dissipation time was predicted within reasonable accuracy ranges.
Looking at the median overall performance of the regression models for sea fog dissipation, the R2 values for Incheon and Haeundae appear similar, but the MSE, RMSE, and MAE for Incheon were less than half of those for Haeundae (Figure 7). Noteworthy here is that the RNN model, whose classification performance did not differ much across the feature experiments, differed markedly in prediction performance among the regression models: the R2 difference between the model using the base features and the model using the 3H features was 0.84. This confirms once again that time-lagged data should be used when the RNN model is employed as a regression model.

5. Discussion

The tree-based ensemble models, together with neural network-based models, demonstrated relatively high performance on both classification and regression tasks. In the classification task, higher prediction accuracy was achieved when the period of prediction lengthened from one hour to three hours. This was partly expected as more of the cases fell under the dissipation case rather than non-dissipation as the prediction period was extended.
The length of the fog duration rose as the prediction period increased, since we grouped fog cases that were within the prediction time frame. In Incheon port, the median duration of fog was initially 0.67 h; this doubled to 1.33 h under the one-hour prediction regime and rose to 1.67 and 2.17 h under the two- and three-hour prediction regimes, respectively. For Haeundae beach, the median fog duration started at 0.33 h and then rose to 1.17, 1.67, and 2.17 h as the prediction period was extended. At both sites, as we grouped fogs that were within one, two, and three hours of each other, the median fog duration rose, but not at the same pace as the prediction time frame. This influenced the distribution of the non-dissipation and dissipation classes. For Incheon under the one-hour prediction regime, 70% of the cases fell within the non-dissipation class and 30% within the dissipation class. When the prediction period was extended to three hours and fog cases within three hours of each other were grouped, the distribution of the prediction classes changed, with 45% of the data falling within the non-dissipation class and the rest within the dissipation class. Under the three-hour prediction regime, the cases remaining in the non-dissipation class were the most extreme ones: outliers that occurred under more differentiated weather conditions. This should make detecting non-dissipation cases easier than under the one- or two-hour regimes, and a similar argument should hold for Haeundae beach.
For the classification task, the high predictive performance of the ET model is thought to stem from its extremely randomized split selection, which generalizes the decision boundary and mitigates the slight overfitting caused by the small amount of training data. The low predictive performance of the SVM and AB algorithms is considered to be due to these models not being sufficiently optimized with the default hyperparameters.
For the regression task, the three-hour prediction regime was selected as the basis for grouping fog cases that happened within this time frame. This made the median and maximum time to dissipation 2.50 and 38.50 h for Incheon, and 5.33 and 53.66 h for Haeundae beach, respectively. As the predictions were made for grouped fog cases under this time frame, we do not know how the results would be if other time frames were chosen. Each model’s performance varied in terms of the number of features used, but the increase in the accuracy was not always achieved as more time steps were included in the input. The increase was positive for neural network-based models, k-nearest neighbors, SVR, and some tree-based ensemble models. An increase is expected when the models learn to condition on the past and perform more accurately. This was most evident in the case of recurrent and convolutional neural nets. However, in the case of the random forest and decision tree, more features confused the models with a net drop in R2. Similar observations for the same classifiers were observed in the classification case under a one-hour prediction regime. Overall, more features meant better performance for most of the cases for classification and regression tasks.
Despite relatively few base features and high intercorrelation for some of them, the outcome of the experiments demonstrated accurate results due to the use of machine learning models. Higher performance may be achieved by more weather data included in the base features and a longer observation time span. The latter is especially beneficial in the case of neural network-based models, as they thrive on large amounts of training data. Another dimension for improvement is to include other models with different architectures, such as transformers [33], logistic or linear regression, lazy learning (e.g., one of the time series prediction models) [34], and/or other machine learning methods.

6. Conclusions

In this work, we addressed fog dissipation prediction in Incheon port and Haeundae beach of the Korean peninsula. Fog dissipation is a relatively under-researched area, with more research available on fog prediction and forecasting. This causes the current research results to be less comparable with previous research, and no benchmarking datasets were used to compare the results.
The Korean peninsula is in East Asia, and its weather contrasts markedly from north to south and from east to west owing to its geography. The two studied sites, Incheon port and Haeundae beach, which are important for the Korean economy and recreation, are located in the northwest and southeast of the country, respectively. Fog is more frequent at Haeundae beach and lasts longer than at Incheon port. Through the experiments, we found that this was also reflected in less accurate predictions for Haeundae beach than for Incheon port.
Our results demonstrate high prediction accuracy when dissipation prediction is modeled as classification and regression tasks. For the classification task, the CSI scores of the best models ranged from 0.82 to 0.96, depending on the prediction horizon. The score was higher when the prediction period was longer and when more past time steps were included. Regression accuracy also improved when past time steps were included, although not for all models. The best model's R2 ranged from 0.93 to 0.99 for Incheon port and from 0.93 to 0.97 for Haeundae beach, depending on the past time steps used for prediction, as shown in Table 7.

Author Contributions

Conceptualization, J.H.H. and Y.T.K.; methodology, J.H.H.; software, J.H.H., Y.T.H. and H.S.J.; validation, J.H.H. and H.S.J.; formal analysis, J.H.H. and Y.T.K.; investigation, J.H.H. and Y.T.K.; resources, Y.T.K. and K.J.K.; data curation, J.H.H. and K.J.K.; writing—original draft preparation, J.H.H. and K.J.K.; writing—review and editing, J.H.H. and Y.T.K.; visualization, J.H.H.; supervision, K.J.K., S.J.K. and Y.T.K.; project administration, K.J.K., S.J.K. and Y.T.K.; funding acquisition, S.J.K. and Y.T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the research project of Korea Hydrographic and Oceanographic Agency (KHOA tender notice 2021-21).

Data Availability Statement

The data used for this study are available on request from the KHOA corresponding author. The rest of the weather data can be found at https://data.kma.go.kr/, accessed on 1 August 2021.

Acknowledgments

We deeply appreciate two anonymous reviewers and editors. Owing to their invaluable comments and suggestions, the original manuscript was highly enhanced.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Model Hyperparameters

CLS indicates classification model and REG indicates regression model.
Table A1. Hyperparameters of Sea Fog Dissipation Prediction Models.
Model NameIncheonHaeundae
ParameterCLS 1HCLS2HCLS 3HREGCLS 1HCLS 2HCLS 3HREG
FFNN
 num_layers33
 units[521,256][521,256]
CNN
 num_layers22
 kernel_size22
 units[521,256][521,256]
RNN
 num_layers22
 units6464
KNN
 n_neighbors76568886
 weightsdistanceuniformuniformuniformdistancedistancedistanceuniform
SVM
 C4565687168716571
 kernelrbfrbfrbflinearrbflinearrbflinear
RF
 max_depth6513862919592954
 max_features0.590.750.570.560.850.860.560.80
 min_samples_split44524422
 n_estimators178219139365451460365423
ET
 max_depth4065328256799643
 max_features0.750.590.910.830.621.000.850.66
 min_samples_split3134833913
 n_estimators358359427339163128223219
AdaBoost
 algorithmSAMME.RSAMME.RSAMME.R SAMME.RSAMME.RSAMME.R
 learning_rate0.940.690.850.590.990.840.800.45
 n_estimators4124703705949915010275
GB
 max_depth12244291011932
 max_features0.530.730.710.530.520.720.530.89
 min_samples_split2696142661416
 n_estimators386250498414400324414388
 subsample0.600.930.810.740.760.820.740.79
Bagging
 bootstrapFalseFalseFalseFalseFalseFalseFalseFalse
 bootstrap_featuresTrueFalseTrueTrueFalseTrueFalseTrue
 max_features0.690.520.630.520.810.860.620.53
 max_samples0.800.850.850.800.620.970.830.98
DT
 criterionentropyentropyginifriedman_mseginiginiginimse
 max_depth24667475434351494354117
 max_features0.920.750.760.510.860.920.820.95
 min_samples_split52555572
 splitterrandombestrandomrandomrandombestrandombest
LGMB
 bagging_fraction0.960.850.830.730.960.950.800.95
 feature_fraction0.620.680.850.660.960.860.630.72
 learning_rate0.200.200.200.050.200.200.050.10
 num_leaves2430242021233026

References

  1. Zheng, G.; Yang, L. Solutions of fog and haze weather from the perspective of economic development. Ecol. Econ. 2015, 31, 34–38. [Google Scholar]
  2. Ahmed, M.M.; Abdel-Aty, M.; Lee, J.; Yu, R. Real-time assessment of fog-related crashes using airport weather data: A feasibility analysis. Accid. Anal. Prev. 2014, 72, 309–317. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, S.-P.; Xie, S.-P.; Liu, Q.-Y.; Yang, Y.-Q.; Wang, X.-G.; Ren, Z.-P. Seasonal Variations of Yellow Sea Fog: Observations and Mechanisms. J. Clim. 2009, 22, 6758–6772. [Google Scholar] [CrossRef]
  4. Koračin, D.; Businger, J.A.; Dorman, C.; Lewis, J.M. Formation, Evolution, and Dissipation of Coastal Sea Fog. Boundary-Layer Meteorol. 2005, 117, 447–478. [Google Scholar] [CrossRef]
  5. Cho, Y.-K.; Kim, M.-O.; Kim, B.-C. Sea Fog around the Korean Peninsula. J. Appl. Meteorol. 2000, 39, 2473–2479. [Google Scholar] [CrossRef]
  6. Zhou, B.; Du, J. Fog Prediction from a Multimodel Mesoscale Ensemble Prediction System. Weather. Forecast. 2010, 25, 303–322. [Google Scholar] [CrossRef]
  7. Bartok, J.; Babič, F.; Bednár, P.; Paralič, J.; Kováč, J.; Bartokova, I.; Hluchý, L.; Gera, M. Data mining for fog prediction and low clouds detection. Comput. Inform. 2013, 31, 1441–1464. [Google Scholar]
  8. Kipfer, K. Fog Prediction with Deep Neural Networks. Master’s Thesis, ETH Zurich, Zurich, Switzerland, 2017. [Google Scholar] [CrossRef]
  9. Colabone, R.D.O.; Ferrari, A.L.; Tech, A.R.B.; Vecchia, F.A.D.S. Application of Artificial Neural Networks for Fog Forecast. J. Aerosp. Technol. Manag. 2015, 7, 240–246. [Google Scholar] [CrossRef]
  10. Liu, Y.; Racah, E.; Prabhat; Correa, J.; Khosrowshahi, A.; Lavers, D.; Kunkel, K.; Wehner, M.; Collins, W. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv 2016, arXiv:1605.01156. [Google Scholar]
  11. Shrivastava, G.; Karmakar, S.; Kowar, M.K.; Guhathakurta, P. Application of Artificial Neural Networks in Weather Forecasting: A Comprehensive Literature Review. Int. J. Comput. Appl. 2012, 51, 17–29. [Google Scholar] [CrossRef]
  12. Dupont, J.-C.; Haeffelin, M.; Protat, A.; Bouniol, D.; Boyouk, N.; Morille, Y. Stratus–Fog Formation and Dissipation: A 6-Day Case Study. Boundary-Layer Meteorol. 2012, 143, 207–225. [Google Scholar] [CrossRef] [Green Version]
  13. Dietz, S.J.; Kneringer, P.; Mayr, G.J.; Zeileis, A. Low-visibility forecasts for different flight planning horizons using tree-based boosting models. Adv. Stat. Clim. Meteorol. Oceanogr. 2019, 5, 101–114. [Google Scholar] [CrossRef]
  14. Kim, D.-H.; Song, J.-Y.; Kang, I.-Y.; Lee, A.-H. Development and Evaluation of Prototype System for Harbor Container Delivery & Cargo Work Automation. Int. J. Precis. Eng. Manuf. 2010, 11, 865–871. [Google Scholar]
  15. Korea Hydrographic and Oceanographic Agency. Improvement and Expansion of Sea Fog Forecasting Services for Navigation Safety. 2018; pp. 4–47 (In Korean). Available online: https://librarian.nl.go.kr/LI/contents/L20101000000.do?viewKey=555556287&viewType=AH1&typeName=%EC%9D%BC%EB%B0%98%EB%8F%84%EC%84%9C (accessed on 1 August 2021).
  16. Lawrence, M.G. The Relationship between Relative Humidity and the Dew Point Temperature in Moist Air: A Simple Conversion and Applications. Bull. Am. Meteorol. Soc. 2005, 86, 225–233. [Google Scholar] [CrossRef]
  17. K-Nearest Neighbors Algorithm. Available online: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm (accessed on 1 May 2021).
  18. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Wadsworth & Brooks/Cole Advanced Books & Software: Monterey, CA, USA, 1984. [Google Scholar]
  19. Cortes, C.; Vapnik, V. Support-vector network. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  20. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 27. [Google Scholar] [CrossRef]
  21. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  22. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef] [Green Version]
  23. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  24. Schapire, R.; Freund, Y. A Decision-Theoretic Generalization of on-Line Learning and an Application to Boosting. J. Comput. Syst. Sci. 1997, 55, 119–139. [Google Scholar]
  25. A Gentle Introduction to Gradient Boosting. Available online: http://en.wikipedia.org/wiki/Gradient_boosting (accessed on 1 May 2021).
  26. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  27. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. Light GBM: A Highly Efficient Gradient Boosting Decision Tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3149–3157. [Google Scholar]
  28. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  30. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  31. Time Series Forecasting. Available online: https://www.tensorflow.org/tutorials/structured_data/time_series (accessed on 1 May 2021).
  32. Supervised Learning. Available online: https://scikit-learn.org/stable/supervised_learning.html#supervised-learning (accessed on 1 May 2021).
  33. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4 December 2017; pp. 6000–6010. [Google Scholar]
  34. Bontempi, G.; Ben Taieb, S.; Le Borgne, Y.-A. Machine Learning Strategies for Time Series Forecasting. In Business Intelligence; Aufaure, M.A., Zimányi, E., Eds.; Springer: Berlin, Heidelberg, 2013; Available online: https://doi.org/10.1007/978-3-642-36318-4_3 (accessed on 1 May 2021).
Figure 1. Incheon port and Haeundae beach labeled with white circles (image source: Google Maps).
Figure 2. Illustration of fog and non-fog cases over the 200-min interval.
Figure 3. Pearson correlation matrix of the 11 features for (a) Incheon port and (b) Haeundae beach.
Figure 4. Fog duration in (a) Incheon port and (b) Haeundae beach.
Figure 5. Time to dissipation in (a) Incheon port and (b) Haeundae beach.
Figure 6. Performance comparison graph of the classification models with the highest CSI performance within the feature experiment: (a) Incheon port; (b) Haeundae beach.
Figure 7. Performance comparison graph of the regression models with the highest R2 performance within the feature experiment.
Table 1. Eleven features (seven base features and four derived features).

| Item | Variable Name | Unit of Measurement | Note | Observation Frequency |
| Time | Date | timestamp | n/a | n/a |
| Air temperature | air_temp | °C | feature 1 | 1 min |
| Sea surface pressure | sea_air_pre | hPa | feature 2 | 1 min |
| Relative humidity | humidity | % | feature 3 | 1 min |
| Sea surface temperature | sea_temp | °C | feature 4 | 1 h |
| Visibility | vis | m | feature 5 | 1 min |
| U wind (10 m) | u | m/s | feature 6 | 1 min |
| V wind (10 m) | v | m/s | feature 7 | 1 min |
| Air and sea temperature difference | ASTD | °C | feature 8 | 1 min |
| Dew point temperature | DT | °C | feature 9 | 1 min |
| Air and dew point temperature difference | T_DT | °C | feature 10 | 1 min |
| Sea surface temperature and dew point temperature difference | sst_DT | °C | feature 11 | 1 min |
| Dissipation | L | [0,1] | label | n/a |
| Time to dissipation | ttd | minutes | continuous target | 10 min |
Table 2. Data statistics for Incheon and Haeundae.
Variables(a) Incheon (1 January 2012–31 May 2019)(b) Haeundae (1 January 2014–31 July 2019)
AverageMedianStdMinMaxAverageMedianStdMinMax
air_temp11.2810.407.84−6.6027.5018.3819.104.600.5029.30
sea_air_pre1012.211012.307.16987.701035.501009.131029.105.83992.501029.10
humidity96.9198.804.5546.2099.9092.15100.009.5220.40100.00
sea_temp10.808.306.611.1025.9017.2628.73.7111.3028.70
vis683.91541.00642.6628.403000.001048.64750.00923.2710.003000.00
u−0.81−0.791.57−7.996.04−0.20−0.083.26−11.6111.98
v−0.35−0.461.82−7.739.870.07−0.043.36−14.0212.76
ASTD0.491.003.37−14.709.601.121.202.84−12.608.80
DT10.7910.048.00−12.7927.3316.9717.665.14−9.3826.55
T_DT0.500.180.830.0112.151.410.782.20−0.0020.85
sst_DT0.02−9.143.51−9.1421.690.28−0.173.54−8.311.36
ttd231.270.00278.530.002310.00484.40320.00521.320.003220.00
Table 3. Input data counts for (a) Incheon (1 January 2012 to 31 May 2019) and (b) Haeundae (1 January 2014 to 31 July 2019), with base, past one-hour, and past three-hour features, for predicting fog dissipation within 1, 2, and 3 h. n/diss and diss denote non-dissipation and dissipation, respectively.

(a) Incheon
| Features | 1 h n/diss | 1 h diss | 1 h total | 2 h n/diss | 2 h diss | 2 h total | 3 h n/diss | 3 h diss | 3 h total |
| base features | 4598 (69%) | 1975 (31%) | 6573 | 3891 (57%) | 3024 (43%) | 6915 | 3247 (45%) | 3962 (55%) | 7209 |
| 1-h features | 4332 (70%) | 1850 (30%) | 6182 | 3641 (55%) | 2872 (45%) | 6513 | 3007 (44%) | 3768 (56%) | 6775 |
| 3-h features | 3755 (70%) | 1630 (30%) | 5385 | 3123 (55%) | 2540 (45%) | 5663 | 2575 (43%) | 3331 (57%) | 5906 |

(b) Haeundae
| Features | 1 h n/diss | 1 h diss | 1 h total | 2 h n/diss | 2 h diss | 2 h total | 3 h n/diss | 3 h diss | 3 h total |
| base features | 6848 (75%) | 2204 (25%) | 9052 | 7320 (71%) | 2918 (29%) | 10,238 | 7294 (66%) | 3658 (34%) | 10,952 |
| 1-h features | 6474 (75%) | 2080 (25%) | 8554 | 6908 (71%) | 2765 (29%) | 9673 | 6830 (66%) | 3492 (34%) | 10,322 |
| 3-h features | 5886 (76%) | 1876 (24%) | 7762 | 6228 (71%) | 2566 (29%) | 8794 | 6119 (65%) | 3222 (35%) | 9341 |
Table 4. Performance of classification models (one-hour).
Model Name IncheonHaeundae
FeaturesCSIPAGPODF1CSIPAGPODF1
FFNNbase63.5875.8580.5177.7455.569.3974.3871.38
1H65.7680.0578.6579.3558.1272.9573.0873.51
3H73.8984.585.2884.9870.2983.0683.282.56
CNNbase53.0464.3578.9969.3244.6953.7869.6161.78
1H61.2373.9178.1175.9551.5668.5468.5168.04
3H72.0982.0685.5883.7862.7875.9877.8777.13
RNNbase60.7273.277.4775.5651.1563.4573.0267.68
1H72.5585.5283.5184.0965.5677.9180.5379.2
3H81.3687.6189.2689.7379.6787.5389.8788.68
k-NNbase43.4971.7452.0360.6233.9665.8541.2750.7
1H43.4173.1851.6260.5438.2469.6646.6355.32
3H53.3977.9562.8869.6147.6174.9255.7364.51
DTbase48.1365.0264.9464.9844.4161.7861.2261.5
1H42.6961.216059.8434.852.6751.251.64
3H47.4564.3164.7264.3635.8853.152.5352.82
SVMbase18.9970.5320.7631.9210.482.5410.6618.84
1H18.1875.6118.9230.7711.1975.7111.5420.13
3H25.2177.527.340.2714.8479.7115.225.85
ABbase45.4266.4559.3762.4630.9758.0539.2347.29
1H50.4370.6663.7867.0536.4958.8148.853.47
3H61.0779.5272.0975.8348.2570.2958.6765.09
Baggingbase62.6183.2471.6577.0151.1282.8957.1467.65
1H58.2582.5266.7673.6241.785.1244.7158.86
3H66.7690.871.4780.0745.3689.1946.9362.41
RFbase69.387.9276.4681.8751.4686.9355.7867.96
1H69.1590.7375.1481.7646.3588.7448.863.34
3H74.9395.4578.8385.6752.5991.5254.6768.93
ETbase76.0488.9183.6786.3961.3888.5566.6776.07
1H81.1692.0286.4989.662.8191.8466.1177.16
3H82.6193.5787.4290.4868.9193.3873.8781.59
GBbase32.0572.2936.5848.5116.6772.5517.9128.57
1H33.4972.9137.8450.1820.0968.6721.6333.46
3H39.1778.0444.4856.2924.1171.3325.8738.85
LGBMbase57.4181.465.772.9441.881.2746.2658.96
1H60.0585.0268.6575.0440.9581.345.6758.1
3H72.1690.7177.9183.8349.2784.5854.1366.02
Median ofModelsbase55.3873.3769.6271.2844.9470.9355.4462.01
1H58.9279.1367.374.1541.7174.1848.858.87
3H68.1183.0675.3181.0349.3382.4657.8766.06
Table 5. Performance of classification models (two-hour).
Model Name IncheonHaeundae
FeaturesCSIPAGPODF1CSIPAGPODF1
FFNNbase78.4087.8887.6087.8973.6883.1286.3084.85
1H81.0589.4890.6189.5371.4979.8086.6283.38
3H86.7493.0891.9392.9083.9091.0791.4291.25
CNNbase67.9977.6484.1380.9457.1863.9583.0572.75
1H77.6688.1985.7487.4370.6680.8784.0982.81
3H85.3091.6291.9392.0777.3490.5187.1387.22
RNNbase75.1586.1784.7985.8170.5182.7883.3982.71
1H83.6092.1090.6191.0781.3288.7788.9789.70
3H89.9895.0394.0994.7387.8292.8894.3593.51
k-NNbase66.3882.3276.6979.7954.6479.7963.1870.67
1H66.9782.4677.9180.2157.2381.1765.8272.80
3H75.6688.8483.4686.1469.8287.5578.1782.23
DTbase68.4582.2380.3381.2755.5171.2873.9771.39
1H65.6179.1779.3079.2450.3465.6866.1866.97
3H66.7881.4778.7480.0855.3770.4170.5771.28
SVMbase51.3666.9968.6067.8617.7980.8718.4930.21
1H47.5970.3358.9664.4919.4376.8720.9832.54
3H55.6375.4668.1171.4924.5884.3125.9339.47
ABbase58.0875.1370.5873.4839.0465.8149.3256.15
1H65.1779.8677.9178.9144.7966.1457.1461.87
3H75.2686.6084.8485.8858.3777.8769.9873.72
Baggingbase82.3490.9589.7590.3273.0893.0178.0884.44
1H81.5990.0089.3989.8665.6492.9469.0879.25
3H85.8292.5291.7392.3770.9294.9773.6882.99
RFbase85.1492.8291.9091.9772.2692.9576.7183.90
1H87.8794.1693.2293.5473.0696.4475.0584.44
3H91.3895.9895.4795.5079.8196.1881.8788.77
ETbase88.7594.2093.8894.0478.4893.4583.0587.94
1H91.5095.6495.6595.5684.0696.3986.8091.34
3H93.7396.4896.2696.7686.4296.4388.8992.72
GBbase56.3273.7970.4172.0627.8182.8429.6243.52
1H56.2173.9470.0971.9629.1277.4532.0145.11
3H62.3879.3874.4176.8334.0681.5336.6550.81
LGBMbase77.7988.3387.1187.5158.7789.1563.7074.03
1H80.1689.4688.7088.9960.7089.0866.1875.54
3H87.3993.3793.5093.2771.3593.6475.2483.28
Median ofModelsbase72.1484.1284.4683.8158.7082.2774.1473.97
1H78.9288.0788.4388.2263.1381.3069.3577.40
3H84.8791.5691.3491.8271.3990.3077.2983.31
Table 6. Performance of classification models (three-hour).
Model Name IncheonHaeundae
FeaturesCSIPAGPODF1CSIPAGPODF1
FFNNbase84.2192.4590.7991.4378.6585.4890.5788.05
1H86.6792.8692.3192.8679.5487.7688.7088.60
3H93.0196.5596.1096.3890.5595.3995.0495.04
CNNbase75.0686.2985.7585.7564.4271.7086.0778.36
1H84.4593.1490.0591.5776.9084.1189.9986.94
3H90.3295.3795.0594.9285.8693.0992.7192.39
RNNbase81.7990.0990.0489.9874.5583.0089.3485.42
1H90.1195.0994.6994.8087.3394.0692.7093.24
3H94.2397.4697.4597.0392.7395.5796.9096.23
k-NNbase75.6085.9086.7686.1163.0881.8273.0977.36
1H76.4888.4085.2886.6765.4084.7874.1179.08
3H85.2192.7891.9092.0180.9592.6185.7489.47
DTbase79.3888.9788.9088.5167.6381.7779.7880.69
1H75.5784.4486.2186.0957.7073.8573.9673.18
3H77.3286.9087.4187.2159.6076.2473.4974.68
SVMbase58.6471.0074.9173.9323.3675.5225.2737.87
1H61.2371.2681.1775.9628.7678.8031.1944.67
3H67.6975.8986.6680.7342.5482.8347.2959.69
ABbase68.7280.1582.6081.4642.9666.7254.7860.10
1H73.9784.8185.5485.0451.2271.6163.5267.74
3H84.2091.6791.9091.4265.1881.8075.9778.92
Baggingbase88.9592.7195.0894.1580.7793.2486.6189.36
1H89.4792.3396.0294.4477.1195.0780.9787.08
3H91.3194.8296.4095.4680.0996.4781.7188.94
RFbase89.8993.7796.4794.6881.2294.0085.6689.64
1H93.0495.6797.4896.3985.1696.4287.4191.99
3H94.6896.0398.2097.2787.2197.6489.9293.17
ETbase92.6495.4297.1096.1883.7794.4488.1191.17
1H95.1296.8698.1497.5090.7997.3192.1395.18
3H95.6497.3298.3597.7791.9898.0394.2695.82
GBbase65.5775.4285.8879.2033.6276.1837.7050.32
1H68.4575.4588.0681.2737.7978.7842.4954.85
3H72.8779.1789.0684.3144.2385.0748.0661.34
LGBMbase84.1389.8193.4491.3868.0689.3773.6380.99
1H88.6791.5795.8994.0072.3692.4776.3983.96
3H91.8594.7397.0095.7582.7995.0685.8990.58
Median ofModelsbase80.6689.1889.0389.3067.8483.2181.2880.84
1H85.7292.0391.7192.3174.3686.1578.8385.29
3H90.7094.6495.6595.1282.0394.3385.8190.13
Table 7. Performance of regression models.
Model Name IncheonHaeundae
FeaturesMSERMSEMAER2MSERMSEMAER2
FFNNbase540274470.9319,149138910.93
1H307155360.9617,787133800.93
3H113534180.99713084350.97
CNNbase37,8941951480.51184,1694293140.31
1H29,2361711330.61166,7204082900.36
3H18,2131351000.8091,6323032180.66
RNNbase62,5902501680.19255,3955053750.04
1H38,0001951370.50128,8443592590.51
3H23,0451521180.7533,4671831310.88
k-NNbase14,719121780.8186,3972941760.68
1H12,426111740.84118,5343442100.55
3H12,382111640.8694,4413071730.65
DTbase11,040105510.8641,457204800.84
1H19,704140610.7458,394242900.78
3H16,206127560.8274,997274990.72
SVMbase54,0802331710.30191,4514383090.28
1H45,0712121570.41179,9374243060.31
3H40,6802021520.55163,3824042920.40
ABbase34,9461871520.55212,1444613600.20
1H31,8701791450.58202,9464503520.22
3H35,4201881500.61188,6804343330.30
Baggingbase565375480.9318,885137790.93
1H704884570.9125,154159910.90
3H610378540.9325,020158930.91
RFbase573876480.9319,064138800.93
1H712684580.9124,644157910.91
3H613778540.9325,054158920.91
ETbase347859340.9614,605121680.95
1H268152310.96942797530.96
3H231148290.97801690450.97
GBbase24,5581571220.68113,1953362450.58
1H18,6051361060.75114,9883392480.56
3H17,5931331050.8194,9243082220.65
LGBMbase913596720.8837,3831931410.86
1H772288640.9034,9721871310.87
3H594277530.9333,5021831180.88
Median ofModelsbase13,118114750.8364,7842511590.76
1H15,176123690.8089,5102961710.66
3H12,382111630.8664,5542541180.76