Article

A Novel Intelligent Prediction Model for the Containerized Freight Index: A New Perspective of Adaptive Model Selection for Subseries

1 School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China
2 Ping An Life Insurance Company of China Ltd., Shandong Branch, Jinan 250001, China
3 Business School, Shandong Normal University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Systems 2024, 12(8), 309; https://doi.org/10.3390/systems12080309
Submission received: 2 July 2024 / Revised: 3 August 2024 / Accepted: 17 August 2024 / Published: 19 August 2024

Abstract:
The prediction of the containerized freight index has important economic and social significance. Previous research has mostly applied sub-predictors directly for integration, which cannot be optimized for different datasets. To fill this research gap and improve prediction accuracy, this study innovatively proposes a new prediction model based on adaptive model selection and multi-objective ensemble to predict the containerized freight index. The proposed model comprises the following four modules: adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble. First, an adaptive data preprocessing module is established based on a novel modal decomposition technology that can effectively reduce the impact of perturbations in historical data on the prediction model. Second, a new model library consisting of four basic predictors is constructed to predict the subseries. Then, an adaptive model selection module is established based on LASSO feature selection to choose valid predictors for each subseries. Because different predictors can produce different effects on a given subseries, the weights of the predictors must be reconsidered to obtain better prediction results. Therefore, a multi-objective artificial vulture optimization algorithm is introduced into the multi-objective ensemble module, which can effectively improve the accuracy and stability of the prediction model. In addition, an important finding is that the proposed model adaptively selects different predictors as the extracted data features vary across datasets, and it is common for multiple models, or no model, to be selected for a subseries. The proposed model demonstrates superior forecasting performance in the real freight market, achieving average MAE, RMSE, MAPE, IA, and TIC values of 9.55567, 11.29675, 0.44222%, 0.99787, and 0.00268, respectively, across four datasets. These results indicate that the proposed model has excellent predictive ability and robustness.

1. Introduction

Containerization has provided a favorable environment for international trade, and specialization and technological advances have simplified international shipping [1]. The China containerized freight index (CCFI) was released by the Shanghai Shipping Exchange in 1998 to objectively reflect changes in Chinese container freight rates. As the number of shipping containers continues to increase, the containerized freight index in China has a significant impact on the world economy [2]. In recent years, an increasing number of scholars have focused on predicting the containerized freight index [3]. Forecasting models that can accurately and stably predict the containerized freight index are becoming increasingly important and have considerable economic significance [4].
A traditional econometric model is a theoretical structure that describes the relationships between relevant economic variables. Jeon et al. [5] employed system dynamics to reflect the supply-side and demand-side drivers of the market and proposed a one-step prediction method by utilizing the cyclical system dynamics approach. Koyuncu and Tavacıoğlu [6] applied Holt–Winters smoothing and the Seasonal Autoregressive Integrated Moving Average (SARIMA) to forecast the Shanghai Containerized Freight Index (SCFI), which revealed that the SARIMA model has excellent prediction performance in short-term forecasting. Kawasaki et al. [7] discussed the application of the SARIMA and Vector Autoregression (VAR) models in predicting container trade volume. Hirata and Matsuda [8] used the SARIMA and Long Short-Term Memory (LSTM) methods to predict SCFI, which demonstrated that the performance of the SARIMA model is better than that of LSTM in predicting cargo volumes along the western and eastern routes of Japan. However, limited data for building models and unrealistic assumptions regarding future conditions have led to the poor performance of traditional economic models in short-term forecasting.
Artificial intelligence models are established based on mathematical algorithms to create prediction models that can use specific algorithms to analyze data and learn from data patterns to generate models. In recent years, deep learning models have been widely applied to time-series forecasting. Shankar et al. [9] used LSTM to predict container throughput. Kim and Choi [10] used Gate Recurrent Units (GRUs) and LSTM to construct a prediction model for CCFI, which illustrated the improvement of the artificial intelligence models compared to the Autoregressive Integrated Moving Average (ARIMA) in terms of prediction accuracy. Dasari and Bhukya [11] analyzed the capability of Convolutional Neural Networks (CNNs) and CNN-LSTM in automatic feature extraction, which demonstrated that CNNs have a unique advantage in feature extraction. Khaksar Manshad et al. [12] discovered a novel time-series link prediction method based on evolutionary computation and irregular cellular learning automata. Swathi et al. [13] observed the satisfactory performance of LSTM model-based sentiment analysis in predicting time-series data. Xiao et al. [14] used LSTM and ensemble learning technology to predict the China Coastal Bulk Coal Freight Index. Liang et al. [15] transformed time-series data into network structures, selected relevant features with the Maximum Relevance, Minimum Redundancy (mRMR) method and applied Support Vector Machine (SVM), Deep Neural Network (DNN), and LSTM models to improve prediction accuracy. However, training artificial intelligence models requires a substantial amount of data for learning, and the quality and quantity of the data have a significant impact on performance.
Numerous models have been employed in the prediction field, and the decomposition ensemble prediction model is one of the most commonly used. Because the containerized freight index has strong nonlinearity and volatility, some scholars have proposed the use of data decomposition technology to decompose complex time series, which has the potential to enhance both the accuracy and stability of predictions. There are many data decomposition methods, such as wavelet packet decomposition [16], wavelet soft-threshold denoising [17], Ensemble Empirical Mode Decomposition (EEMD) [18], Complementary Ensemble Empirical Mode Decomposition (CEEMD) [19], Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) [20], Variational Mode Decomposition (VMD) [21], and the Discrete Fourier Transform [22]. Specifically, Bai et al. [23] decomposed raw stock prices based on Multivariate Empirical Mode Decomposition (MEMD) and the Neighborhood Rough Set (NRS). Rezaei et al. [24] discussed the performance of Empirical Mode Decomposition (EMD) and CEEMD, and Liu et al. [25] analyzed the results of Interval Discrete Wavelet Transform (IDWT), Interval Empirical Mode Decomposition (IEMD), and Interval VMD (IVMD) under different models. Nguyen and Phan [26] applied EEMD to decompose time-series data and achieved excellent performance. Yang et al. [27] introduced VMD and Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN) into the prediction model, which illustrated that data preprocessing technology has the benefit of denoising. Jaseena and Kovoor [28] applied the discrete wavelet transform algorithm to decompose raw data, which proved that Discrete Wavelet Transformation (DWT) is effective in enhancing the prediction model. Li et al. [29] considered the robustness of the decomposition algorithm to improve the accuracy of the prediction model. Da Silva et al. [30] presented an ensemble learning model that incorporates variational mode decomposition and singular spectrum analysis. Fang [31] applied the EMD method to a backpropagation (BP) neural network and ARIMA and used the resulting EMD-BP and EMD-ARIMA models to predict the containerized freight index. Chen et al. [32] decomposed the CCFI into multiple subseries (IMFs) using the EMD method and predicted each IMF using the gray method and the Autoregressive Moving Average (ARMA) method separately, exploring the benefits of gray wave forecasting methods based on periodic fluctuations.
There have also been many research achievements in predictors and ensemble schemes. For example, Tian and Chen [33] used a reinforced LSTM model to predict decomposed modes, and a modified sparrow search algorithm was applied to optimize the hyperparameters of the LSTM model. Zhao et al. [34] proposed a hybrid deep learning model based on CNN, VMD, and GRUs and validated the model using wind power data. Niu et al. [35] proposed a novel decomposition ensemble model and compared its accuracy with that of simple Recurrent Neural Networks (RNNs), GRUs, and LSTM, which indicated that the EEMD-RNN model can improve accuracy in time-series forecasting. Altan et al. [36] used LSTM to predict subseries, and the prediction results were integrated using Multi-objective Gray Wolf Optimization (MOGWO), which indicated that the ensemble scheme is able to capture the inherent features of the time series and achieve more accurate prediction results than individual prediction models. Joseph et al. [37] predicted subseries using a bi-directional LSTM network. Luo and Zhang [38] demonstrated that Bi-directional LSTM (BiLSTM) can analyze the intrinsic characteristics of longer temporal-frequency information and that the attention mechanism can assign more reasonable weights to parameters, which can effectively enhance the accuracy of prediction results. Yang et al. [39] proved that recursive empirical mode decomposition and LSTM perform well on time-series data. Yu et al. [40] presented a new model based on LSTM and a time-series cross-correlation network, which achieved excellent prediction results in a wind power field. Yang et al. [41] employed Extreme Learning Machine (ELM) series models to establish a novel forecasting model and conducted experimental simulations, showing that the ensemble model can achieve excellent performance. Wang et al. [42] determined VMD parameters using Kullback–Leibler divergence and applied an LSTM-based attention module to predict stock prices. Hao et al. [43] introduced an advanced prediction model for air pollutant levels that integrates time-series reconstruction, sub-model simulation, weight optimization, and combination techniques to improve both prediction accuracy and stability in environmental management.
By examining the pertinent literature on prediction models and the containerized freight index, these studies show that the decomposition ensemble model has many advantages in containerized freight index prediction, and an increasing number of scholars are focusing on it. However, many problems still exist in forecasting the Containerized Freight Index (CFI). This subsection summarizes some of the limitations of CFI forecasting.
  • From the above literature analysis, most of the studies directly use the same type of predictor without focusing on the diversity of the prediction models. In theory, the diversity of predictors could improve the robustness of the prediction model so that the model can be stabilized and continuously operated. Hence, this paper suggests introducing a novel model library to bolster the stability of the prediction model, building upon existing frameworks.
  • Most previous studies selected the current optimal predictors for each subseries, but these predictors do not guarantee that the prediction results are optimal on a global level. Thus, this paper introduces the concept of valid predictors to enhance the overall predictive capability of the model.
To address the gaps identified above, this paper introduces a novel intelligent prediction model. This model is designed to enhance the prediction performance of the CFI by adopting a new approach to adaptive model selection for subseries. The proposed model comprises the following four key modules: adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble. First, an adaptive data preprocessing module is introduced based on Sequential VMD (SVMD) technology to mitigate the impact of fluctuations and noise in the original data series. Next, a new model library module is constructed, which consists of four prediction models, namely Outlier Robust Extreme Learning Machine (ORELM), BP, the Group Method of Data Handling (GMDH), and the Adaptive-network-based Fuzzy Inference System (ANFIS), to offer a diversity of predictors for one subseries. These four predictors have good nonlinear fitting abilities and can ensure the prediction accuracy of the proposed model. This study introduces an adaptive model selection module that can efficiently select the appropriate predictor to improve integral prediction effectiveness. Finally, the multi-objective artificial vulture optimization algorithm is introduced into the multi-objective ensemble module, which enhances the accuracy and stability of the prediction model.
In this study, four experiments and one discussion are designed and conducted on four different datasets (Southeast Asia Freight Index, Dalian Containerized Freight Index, SCFI, and CCFI) to verify the superiority of the proposed model. At the same time, eleven benchmark models are built to confirm the superiority of this prediction model. The experimental results indicate that this model significantly outperforms the comparison models across all datasets, with MAPE values of 0.545154, 0.572040, 0.536753, and 0.114937%. This demonstrates the effectiveness of the model selection module in adapting to different datasets. Therefore, the proposed prediction model can serve as a powerful tool for the forecasting of container freight indices. The contributions and innovations of the proposed novel intelligent prediction model are summarized as follows:
  • Developing a creative prediction model for CFI. Although many studies have applied the decomposition ensemble framework to predict the CFI, the processing of decomposed subseries is inadequate and needs further improvement. Therefore, to fill existing gaps, this study innovatively proposes a prediction model for CFI that incorporates the following four modules: adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble. In addition, note that to achieve effective integral prediction, the number of predictors is not limited by applying the adaptive selection strategy.
  • Devising an innovative adaptive data preprocessing module. It is clear from the analysis of previous studies that VMD is widely used in the decomposition ensemble model, but this study innovatively applies SVMD technology to the decomposition ensemble model for the CFI. Compared with VMD technology, SVMD can adaptively decompose the raw data and determine the number of decomposition layers (K), which can effectively ensure the objectivity of the decomposition process.
  • Establishing a model library to predict the decomposed subseries. Unlike most previous studies, which did not focus on the variety of predictors for subseries prediction, a new model library module is constructed, incorporating the following four models: ORELM, BP, GMDH, and ANFIS. It can not only greatly improve the diversity of predictors but also effectively advance the nonlinear fitting ability of the prediction model.
  • Introducing the idea of model validity to adaptive model selection. The adaptive model selection module is designed to obtain valid predictors for each subseries in this study. In this module, Least Absolute Shrinkage and Selection Operator (LASSO) technology is introduced to conduct adaptive model selection, selecting the valid predictors for the subseries from the perspective of advancing the effectiveness of integral forecasting. In contrast to scholars who choose the best predictor for one subseries locally, the number of predictors is not restricted, which means that for one subseries, multiple models may be selected, and it is common that no model is chosen. Thus, adaptive model selection chooses predictors that can effectively enhance the prediction performance from the perspective of the prediction model rather than using the greedy idea to select the optimal current predictor.
  • Designing the multi-objective ensemble module based on the Multi-objective Artificial Vulture Optimization Algorithm (MOAVOA). The multi-objective ensemble is proposed to integrate the prediction values of subseries, which can concurrently fulfill the requirements of both prediction accuracy and stability. By introducing archive grid and leader selection mechanisms, the search ability of MOAVOA is greatly boosted, which can effectively advance the prediction accuracy and robustness of the prediction model.
The remainder of this paper is organized as follows: Section 2 describes the preliminary methods. Section 3 introduces the proposed model. Section 4 presents the empirical results and verifies the contribution of this study. Section 5 discusses the results, and Section 6 summarizes the whole paper.

2. Preliminary Methods

A novel intelligent prediction model based on an adaptive selection scheme for CFI is established in this study. It incorporates the following four modules: adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble.

2.1. Adaptive Data Preprocessing Module

In contrast to EMD [44], VMD employs an iterative search to determine the optimal solution for the variational model [45]. The VMD model seeks the mode components, along with their corresponding center frequencies. SVMD can continuously extract modes without requiring a predefined number of modes, and its computational complexity is significantly lower than that of VMD [46]. The decomposition process of SVMD is as follows:
Assuming that the input signal $f(t)$ is decomposed into two signals, namely the $L$th mode $u_L(t)$ and the residual signal $f_r(t)$, $f(t)$ can be expressed as Equation (1).
$$ f(t) = u_L(t) + f_r(t) \tag{1} $$
The residual signal $f_r(t)$ comprises the previously extracted modes and the unprocessed part of the signal $f_u(t)$, as expressed in Equation (2).
$$ f_r(t) = \sum_{i=1}^{L-1} u_i(t) + f_u(t) \tag{2} $$
The $L$th mode $u_L(t)$ is compact around its center frequency $\omega_L$, which is enforced by minimizing the criterion in Equation (3).
$$ J_1 = \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_L(t) \right] e^{-j\omega_L t} \right\|_2^2 \tag{3} $$
At frequencies where $u_L(t)$ has a significant component, the energy of the residual signal should be minimized. The frequency response of the corresponding filter is given in Equation (4).
$$ \hat{\beta}_L(\omega) = \frac{1}{\alpha (\omega - \omega_L)^2} \tag{4} $$
Equation (5) minimizes the spectral overlap between the $L$th mode and the residual signal as follows:
$$ J_2 = \left\| \beta_L(t) * f_r(t) \right\|_2^2 \tag{5} $$
Equation (6) defines an appropriate constraint to avoid the possibility that the $L$th mode duplicates any of the first $L-1$ modes.
$$ \hat{\beta}_i(\omega) = \frac{1}{\alpha (\omega - \omega_i)^2} \tag{6} $$
The added criterion is expressed in Equation (7).
$$ J_3 = \sum_{i=1}^{L-1} \left\| \beta_i(t) * u_L(t) \right\|_2^2 \tag{7} $$
In Equation (8), $f(t)$ is reconstructed from the $L$th mode, the unprocessed signal, and the previously extracted modes.
$$ f(t) = u_L(t) + f_u(t) + \sum_{i=1}^{L-1} u_i(t) \tag{8} $$
Therefore, extracting the $L$th mode is equivalent to the constrained minimization in Equation (9).
$$ \min_{u_L, \omega_L, f_r} \; \{ \alpha J_1 + J_2 + J_3 \} \quad \text{subject to} \quad u_L(t) + f_r(t) = f(t) \tag{9} $$
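To make the sequential-extraction idea concrete, the sketch below peels one narrow-band mode at a time off the residual until little signal energy remains. It is a deliberately simplified stand-in for SVMD: instead of solving the variational problem in Equation (9), it isolates a fixed band around the dominant spectral peak. The function name and the `bandwidth` and `energy_ratio` parameters are illustrative assumptions, not part of the original method.

```python
import numpy as np

def extract_modes(signal, fs=1.0, bandwidth=0.05, energy_ratio=0.05, max_modes=10):
    """Sequentially peel narrow-band modes off a signal, SVMD-style:
    find the dominant spectral peak of the residual, keep a narrow band
    around it as the next mode, and stop once the residual energy is small.
    (A simplified illustration, not the full variational SVMD problem.)"""
    residual = signal.astype(float).copy()
    total_energy = np.sum(residual ** 2)
    modes = []
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    for _ in range(max_modes):
        if np.sum(residual ** 2) < energy_ratio * total_energy:
            break                                     # stopping criterion
        spectrum = np.fft.rfft(residual)
        center = freqs[np.argmax(np.abs(spectrum))]   # dominant centre frequency
        mask = np.abs(freqs - center) <= bandwidth    # narrow band around it
        mode = np.fft.irfft(spectrum * mask, n=len(signal))
        modes.append(mode)
        residual = residual - mode                    # update the residual signal
    return modes, residual

# Two well-separated tones: each should be recovered as its own mode.
t = np.linspace(0, 10, 1000, endpoint=False)
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
modes, res = extract_modes(x, fs=100.0, bandwidth=0.5)
```

Unlike VMD, the loop above needs no predefined number of modes; it keeps extracting until the residual energy threshold is met, mirroring the adaptive behavior that motivates the choice of SVMD in this paper.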

2.2. Model Library Module

The model library module was developed to facilitate the incorporation of multiple predictors for subseries, including ORELM, BP neural network, GMDH, and ANFIS.

2.2.1. ORELM

ORELM is a feedforward neural network proposed by Zhang and Luo [47]. The calculation method for predicted values is expressed as follows:
$$ f_L(x_j) = \sum_{i=1}^{L} \beta_i \cdot h_i(x_j) = \sum_{i=1}^{L} \beta_i \cdot g(\omega_i \cdot x_j + b_i) \tag{10} $$
The matrix form of Equation (10) can be expressed as follows:
$$ T = H \cdot \beta \tag{11} $$
The procedure for computing the weight matrix $\beta$ is expressed as follows:
$$ \hat{\beta} = H^{\dagger} \cdot Y = (H^T H)^{-1} \cdot H^T \cdot Y \tag{12} $$
The optimization function of ORELM is expressed as follows:
$$ \min_{\beta} \; C \cdot \| e \|_1 + \| \beta \|_2^2 \quad \text{s.t.} \quad Y - H \cdot \beta = e \tag{13} $$
Equation (13) is solved using the augmented Lagrangian method, which is expressed as follows:
$$ L_u(e, \beta, \lambda) = \| e \|_1 + \frac{1}{C} \| \beta \|_2^2 + \lambda^T (Y - H\beta - e) + \frac{u}{2} \| Y - H\beta - e \|_2^2 \tag{14} $$
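A minimal numpy sketch of an ORELM-style training loop is given below: the augmented Lagrangian of Equation (14) is minimized by alternating a closed-form ridge-like update for the output weights, a soft-threshold update for the error vector (the exact minimizer of the $\ell_1$ term), and a multiplier update. The hidden-layer size, penalty settings, and toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the closed-form minimizer of the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def orelm_fit(X, y, n_hidden=30, C=100.0, u=1.0, iters=100, seed=0):
    """Outlier-robust ELM sketch: random hidden layer, then an
    augmented-Lagrangian solve of  min ||e||_1 + (1/C)||beta||_2^2
    s.t. y - H beta = e  (cf. Eqs. (13)-(14)); simplified illustration."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # random hidden-layer features
    e = np.zeros(len(y)); lam = np.zeros(len(y))
    A = H.T @ H + (2.0 / (C * u)) * np.eye(n_hidden)
    for _ in range(iters):
        beta = np.linalg.solve(A, H.T @ (y - e + lam / u))   # beta update
        e = soft(y - H @ beta + lam / u, 1.0 / u)            # e update
        lam = lam + u * (y - H @ beta - e)                   # multiplier update
    return W, b, beta

def orelm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a noisy sine with a few large outliers; the l1 error term absorbs them.
rng = np.random.default_rng(1)
X = np.linspace(0, 3, 200).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + 0.01 * rng.normal(size=200)
y[::40] += 5.0                                      # inject outliers
W, b, beta = orelm_fit(X, y)
pred = orelm_predict(X, W, b, beta)
```

Because the $\ell_1$ loss lets the error vector $e$ absorb the few large residuals, the fitted curve stays close to the underlying sine instead of being dragged toward the outliers, which is the property that motivates ORELM in the model library.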

2.2.2. BP Neural Network

The BP neural network is a multi-layer neural network and is one of the most extensively utilized models [48]. The BP network consists of an input layer, a hidden layer, and an output layer.
$$ \begin{cases} H = f(W \cdot X + b_h) \\ O = f(V \cdot H + b_o) \end{cases} \tag{15} $$
where $X \in \mathbb{R}^{p \times 1}$ represents the input data, $W \in \mathbb{R}^{d \times p}$ and $V \in \mathbb{R}^{q \times d}$ represent weight matrices, $b_h \in \mathbb{R}^{d \times 1}$ and $b_o \in \mathbb{R}^{q \times 1}$ represent biases, and $f(x)$ represents the sigmoid activation function.
In the backpropagation process, the loss function is expressed as follows:
$$ J(Y, O) = \frac{1}{2} \sum_{k=1}^{q} (Y_k - O_k)^2 \tag{16} $$
where $Y = [Y_1, Y_2, \ldots, Y_q]^T$ and $O = [O_1, O_2, \ldots, O_q]^T$.
Using the gradient descent method with learning rate $\epsilon$, the iterative formulas for $V$, $b_o$, $W$, and $b_h$ are expressed as follows:
$$ V \leftarrow V - \epsilon \cdot \frac{\partial J}{\partial V}, \quad b_o \leftarrow b_o - \epsilon \cdot \frac{\partial J}{\partial b_o}, \quad W \leftarrow W - \epsilon \cdot \frac{\partial J}{\partial W}, \quad b_h \leftarrow b_h - \epsilon \cdot \frac{\partial J}{\partial b_h} \tag{17} $$
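The forward pass, squared-error loss, and gradient updates above can be sketched directly in numpy. Layer sizes, the learning rate, and the target value are illustrative choices, not parameters used in the paper.

```python
import numpy as np

# Minimal BP network sketch mirroring Eqs. (15)-(17): one hidden layer,
# sigmoid activations, squared-error loss, plain gradient-descent updates.
rng = np.random.default_rng(0)
p, d, q = 3, 5, 1                               # input, hidden, output dims
X = rng.normal(size=(p, 1))
Y = np.array([[0.7]])                           # target inside sigmoid's range
W, b_h = rng.normal(size=(d, p)), np.zeros((d, 1))
V, b_o = rng.normal(size=(q, d)), np.zeros((q, 1))
f = lambda s: 1.0 / (1.0 + np.exp(-s))          # sigmoid activation
eps = 0.5                                       # learning rate

for _ in range(500):
    H = f(W @ X + b_h)                          # forward pass, Eq. (15)
    O = f(V @ H + b_o)
    dO = (O - Y) * O * (1 - O)                  # backprop through output layer
    dH = (V.T @ dO) * H * (1 - H)               # backprop through hidden layer
    V -= eps * dO @ H.T;  b_o -= eps * dO       # Eq. (17) updates
    W -= eps * dH @ X.T;  b_h -= eps * dH

loss = 0.5 * float(np.sum((Y - O) ** 2))        # Eq. (16)
```

The `O * (1 - O)` and `H * (1 - H)` factors are the sigmoid derivatives, which is where the chain rule of backpropagation enters the updates of Equation (17).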

2.2.3. GMDH

GMDH is a heuristic model for complex nonlinear systems [49]. The GMDH algorithm uses simple regression equations to obtain estimates of the output variables. Assume that the raw data consist of an observation vector ($y$) and $N$ related feature (column) vectors ($x_1, x_2, \ldots, x_N$). The original equation can be represented as the quadratic regression polynomial in Equation (18):
$$ z = A + B u + C v + D u^2 + E v^2 + F u v \tag{18} $$
where $A, B, C, D, E$, and $F$ represent the model parameters; $u$ and $v$ represent a pair of input variables; and $z$ represents the fitted value of $y$.
The GMDH model generation process is described as follows.
Let the input variable set be $X = \{x_1, x_2, \ldots, x_N\}$.
The output variables of the quadratic regression polynomial are obtained: $u$ and $v$ are extracted from the input variable set $X$ and entered into Equation (18) to obtain the column of predicted values $z(u, v)$.
The calculation results $z(u, v)$ are added to the output variable set $Z$.
The variables with strong predictive ability in $Z$ are identified. There are $N(N-1)/2$ variables in the output variable set $Z$. In general, the performance metric should also include a penalty to reduce network complexity; for example, candidates are retained only if the performance indicator is below a predefined threshold, or the optimal number of retained columns in $Z$ is specified.
Input variables are then fed to the next network layer. Columns $z_1, z_2, \ldots, z_k$ are calculated from $x_1, x_2, \ldots, x_N$, where $k$ represents the overall number of preserved columns.
Then, the possibility of further enhancing the model is evaluated: the minimum values of the performance indicators acquired in the current iteration are compared with those achieved in the preceding iteration. If an improvement is achieved, let $X = \{z_1, z_2, \ldots, z_k\}$; otherwise, the iteration is terminated and network generation is completed.
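One GMDH generation can be sketched as follows: fit the quadratic polynomial of Equation (18) for every pair of inputs, score the candidates, and keep the best few as inputs to the next layer. For brevity this sketch scores candidates by in-sample MSE, whereas GMDH implementations typically use a separate validation split or a complexity penalty; function names and data are illustrative.

```python
import numpy as np
from itertools import combinations

def quad_features(u, v):
    """Design matrix for the quadratic polynomial of Eq. (18)."""
    return np.column_stack([np.ones_like(u), u, v, u**2, v**2, u*v])

def gmdh_layer(X_cols, y, keep=4):
    """One GMDH generation: fit Eq. (18) for every variable pair,
    rank the candidate outputs by MSE, and keep the best `keep` columns."""
    candidates = []
    for i, j in combinations(range(X_cols.shape[1]), 2):
        Phi = quad_features(X_cols[:, i], X_cols[:, j])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # fit A..F
        z = Phi @ coef
        candidates.append((np.mean((y - z) ** 2), z))
    candidates.sort(key=lambda c: c[0])
    Z = np.column_stack([z for _, z in candidates[:keep]])
    return Z, candidates[0][0]

# Target depends quadratically on two of four inputs; one layer recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = 1.0 + 2.0 * X[:, 0] * X[:, 1] + X[:, 0] ** 2
Z, best_mse = gmdh_layer(X, y)
```

Feeding `Z` back in as the next layer's `X_cols` repeats the generation step described above until the best score stops improving.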

2.2.4. ANFIS

ANFIS represents an innovative fuzzy inference system structure that integrates fuzzy logic and neural networks [50]. Assume that x and y are inputs and z is the output of the fuzzy inference system. Common rules include two fuzzy if–then rules, as outlined below.
Rule 1:    If x is A1 and y is B1, then z = p1x + q1y + r1;
Rule 2:    If x is A2 and y is B2, then z = p2x + q2y + r2.
The ANFIS calculation process is described as follows.
Fuzzification. This layer converts the input variables into degrees of membership for each fuzzy set. $x$ and $y$ are input nodes. $A_i$ ($B_i$) is a linguistic value associated with the node function, and $U_{A_i}(x)$ ($U_{B_i}(y)$) represents the degree to which $x$ ($y$) satisfies $A_i$ ($B_i$). Typically, the range of $U_{A_i}(x)$ ($U_{B_i}(y)$) is $(0, 1]$. The membership degrees are calculated as expressed in Equation (19):
$$ U_{A_i}(x) = e^{-\left( \frac{x - c_i}{a_i} \right)^2}, \quad U_{B_i}(y) = e^{-\left( \frac{y - c_{i+2}}{a_{i+2}} \right)^2} \tag{19} $$
where $i = 1, 2$; $c_i$ represents the center of the Gaussian function; and $a_i$ represents the width of the Gaussian function. $a_i$ and $c_i$ are parameters that can be optimized.
Rule applicability. The output of each node results from the multiplication of its input signals and represents the firing strength of the corresponding rule.
$$ W_i = U_{A_i}(x) \cdot U_{B_i}(y) \tag{20} $$
Normalized applicability.
$$ \overline{W}_i = \frac{W_i}{\sum_i W_i} \tag{21} $$
TSK output layer. This layer calculates the output of each rule.
$$ O_i = \overline{W}_i \cdot f_i = \overline{W}_i \cdot (p_i x + q_i y + r_i) \tag{22} $$
where $x$ and $y$ are the input nodes; $\overline{W}_i$ is the normalized applicability; and $p_i$, $q_i$, and $r_i$ are the fuzzy rule parameters.
Summation. This layer sums the rule outputs to obtain the total output $R$ of the ANFIS network.
$$ R = \sum_i O_i = \sum_i \overline{W}_i \cdot (p_i x + q_i y + r_i) \tag{23} $$
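The five-layer computation above can be traced end to end for the two-rule system of Rules 1 and 2. The Gaussian widths, centres, and TSK consequent parameters below are arbitrary illustrative values; in ANFIS they would be tuned during training.

```python
import numpy as np

def anfis_forward(x, y, a, c, pqr):
    """Forward pass of a two-rule ANFIS (Eqs. (19)-(23)): Gaussian
    memberships, rule firing strengths, normalization, TSK consequents,
    and the final summation."""
    UA = np.exp(-(((x - c[:2]) / a[:2]) ** 2))   # Layer 1: memberships of A1, A2
    UB = np.exp(-(((y - c[2:]) / a[2:]) ** 2))   # Layer 1: memberships of B1, B2
    W = UA * UB                                  # Layer 2: rule applicability, Eq. (20)
    Wbar = W / W.sum()                           # Layer 3: normalization, Eq. (21)
    f = pqr[:, 0] * x + pqr[:, 1] * y + pqr[:, 2]   # Layer 4: TSK consequents
    return float(np.sum(Wbar * f))               # Layer 5: weighted sum, Eq. (23)

a = np.array([1.0, 1.0, 1.0, 1.0])    # Gaussian widths (illustrative)
c = np.array([0.0, 1.0, 0.0, 1.0])    # Gaussian centres (illustrative)
pqr = np.array([[1.0, 2.0, 0.5],      # Rule 1: z = x + 2y + 0.5
                [0.5, 1.0, 1.0]])     # Rule 2: z = 0.5x + y + 1
R = anfis_forward(0.5, 0.5, a, c, pqr)   # both rules fire equally here
```

At $x = y = 0.5$ the two rules have equal firing strengths, so the output is simply the average of the two consequents, 1.875, which makes the normalization step of Equation (21) easy to verify by hand.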

2.3. Adaptive Model Selection Module

In this study, LASSO feature selection technology is applied in the adaptive model selection module. Through its sparsity-inducing penalty, LASSO can mitigate the high model complexity caused by high-dimensional data.
By using LASSO regression, the coefficients of unimportant features are shrunk to zero. Assuming that the original objective function is $J(\beta)$, the objective function after adding the LASSO penalty, $Q(\beta)$, is expressed as follows:
$$ Q(\beta) = J(\beta) + \lambda \| \beta \|_1 \tag{24} $$
where $\beta$ is the parameter vector to be optimized.
Usually, $Q(\beta)$ is not continuously differentiable; therefore, the gradient descent method cannot be used for optimization. The coordinate axis descent method is an alternative optimization technique, and its procedure is presented in Algorithm 1.
Algorithm 1 The coordinate axis descent method
  • Require:
    • Loss function J ( θ ) of the original model.
  • Ensure:
    • The optimized model.
    • Set the initial point $\theta^{(0)} = (\theta_1^{(0)}, \theta_2^{(0)}, \ldots, \theta_p^{(0)})$ and the iteration number $k = 1$.
    • At the $k$th iteration, all parameters except $\theta_j^{(k)}$ are fixed, and $\theta_j^{(k)}$ is optimized. The iterative formula for $\theta_j^{(k)}$ is shown in Equation (25).
      $$ \theta_j^{(k)} = \arg\min_{\theta_j} J(\theta_1^{(k)}, \ldots, \theta_{j-1}^{(k)}, \theta_j, \theta_{j+1}^{(k-1)}, \ldots, \theta_p^{(k-1)}) \tag{25} $$
      where $j = 1, 2, \ldots, p$.
    • If the change in $\theta^{(k)}$ relative to the previous iteration is sufficiently small, the result has converged and the iteration is terminated. Otherwise, continue iterating with $k \leftarrow k + 1$.
      return The optimized model.
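For the least-squares LASSO objective, the coordinate-wise minimization of Equation (25) has a closed form: soft-thresholding of the per-coordinate correlation. The sketch below applies it to the model-selection setting of this module, where each feature column plays the role of one sub-predictor's forecast for a subseries and the predictors receiving nonzero weights are the "valid" ones. The data and the $\lambda$ value are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for LASSO (Algorithm 1 with soft-thresholding):
    minimizes (1/2n)||y - X b||^2 + lam * ||b||_1 one coordinate at a time."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # residual excluding feature j
            rho = X[:, j] @ r_j / n                   # correlation with feature j
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Treat each column as one sub-predictor's forecast of a subseries; LASSO
# keeps only the predictors that help (here, columns 0 and 2 by construction).
rng = np.random.default_rng(0)
preds = rng.normal(size=(200, 4))
target = 0.8 * preds[:, 0] + 0.6 * preds[:, 2] + 0.05 * rng.normal(size=200)
w = lasso_cd(preds, target, lam=0.05)
selected = [i for i in range(4) if abs(w[i]) > 1e-6]
```

Because the soft-threshold sets small correlations exactly to zero, uninformative predictors drop out entirely, which is precisely the "multiple models or no model may be selected" behavior that the adaptive model selection module relies on.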

2.4. Multi-Objective Ensemble Module

The MOAVOA is an innovative multi-objective intelligent optimization technique [51]. Because this algorithm is characterized by high computational speed and high accuracy, this study adopts the latest MOAVOA.
Following the formation of the initial population, the fitness of the solutions is evaluated. The optimal and suboptimal solutions are selected, and the other solutions are moved toward the best solution using Equation (26).
$$ R(i) = \begin{cases} \mathrm{BestVulture}_1 & \text{if } p_i = L_1 \\ \mathrm{BestVulture}_2 & \text{if } p_i = L_2 \end{cases} \tag{26} $$
where $L_1 \in [0, 1]$ and $L_2 \in [0, 1]$ represent hyperparameters that are given before the search operation, with $L_1 + L_2 = 1$.
The likelihood of selecting the best solution is determined using the roulette-wheel approach in Equation (27), and an optimal solution is chosen for each group.
$$ P_i = \frac{F_i}{\sum_{i=1}^{n} F_i} \tag{27} $$
Inspired by hungry vultures, the model is represented by Equation (28).
$$ F = (2 \times rand_1 + 1) \times z \times \left( 1 - \frac{Iteration_i}{Iteration_{max}} \right) \tag{28} $$
where $z \in [-1, 1]$, $h \in [-2, 2]$, $rand_i \in [0, 1]$, and $Iteration_i$ and $Iteration_{max}$ represent the current and maximum numbers of iterations, respectively.
$rand_{p_1} \in [0, 1]$ is a random number. If $rand_{p_1}$ is greater than $P_1$, exploration follows Equation (29); if $rand_{p_1}$ is less than $P_1$, exploration follows Equation (30).
$$ P(i+1) = R(i) - | X \times R(i) - P(i) | \times F \tag{29} $$
$$ P(i+1) = R(i) - F + rand_2 \times ((ub - lb) \times rand_3 + lb) \tag{30} $$
$rand_{p_2} \in [0, 1]$ is a random number; if $rand_{p_2}$ is greater than $P_2$, the strategy of Equation (31) is used; if $rand_{p_2}$ is less than $P_2$, the strategy of Equation (32) is used.
$$ P(i+1) = D(i) \times (F + rand_4) - R(i) + P(i) \tag{31} $$
$$ P(i+1) = R(i) - R(i) \times \frac{rand_5 \times P(i)}{2\pi} \times (\sin(P(i)) + \cos(P(i))) \tag{32} $$
$rand_{p_3} \in [0, 1]$ is a random number; if $rand_{p_3}$ is greater than $P_3$, the scheme of Equation (33) is used; if $rand_{p_3}$ is less than $P_3$, the scheme of Equation (34) is used.
$$ P(i+1) = \frac{BV_1(i) + BV_2(i)}{2} - \frac{1}{2} \left( \frac{BV_1(i) \times P(i)}{BV_1(i) - P(i)^2} + \frac{BV_2(i) \times P(i)}{BV_2(i) - P(i)^2} \right) \tag{33} $$
$$ P(i+1) = R(i) - | d(t) | \times F \times Levy(d) \tag{34} $$
where $BV_1(i)$ and $BV_2(i)$ are the optimal vultures of the first and second groups in the current iteration, respectively.
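The role MOAVOA plays in this paper, searching for ensemble weights that are good on two objectives at once, can be illustrated with a much simpler stand-in: sample candidate weight vectors, score each on accuracy (MAE) and stability (standard deviation of the errors), keep the non-dominated archive, and pick a leader from it. Random sampling replaces the vulture movement operators of Equations (26) to (34) here purely for illustration; all names and data are assumptions.

```python
import numpy as np

def pareto_ensemble_weights(preds, target, n_candidates=2000, seed=0):
    """Stand-in for the MOAVOA ensemble step: sample simplex weight
    vectors, evaluate accuracy (MAE) and stability (error std), keep the
    non-dominated archive, and return the archive member with best MAE."""
    rng = np.random.default_rng(seed)
    W = rng.dirichlet(np.ones(preds.shape[1]), size=n_candidates)
    errors = target[:, None] - preds @ W.T          # (n_samples, n_candidates)
    mae = np.abs(errors).mean(axis=0)               # objective 1: accuracy
    std = errors.std(axis=0)                        # objective 2: stability
    archive = [k for k in range(n_candidates)       # non-dominated candidates
               if not np.any((mae <= mae[k]) & (std <= std[k]) &
                             ((mae < mae[k]) | (std < std[k])))]
    best = min(archive, key=lambda k: mae[k])       # leader selection by MAE
    return W[best], mae[best]

# Three sub-predictors of the same series with different noise levels.
rng = np.random.default_rng(1)
t = np.arange(300)
truth = np.sin(0.1 * t)
preds = np.column_stack([truth + 0.1 * rng.normal(size=300),
                         truth + 0.3 * rng.normal(size=300),
                         truth + 0.3 * rng.normal(size=300)])
w, best_mae = pareto_ensemble_weights(preds, truth)
```

The archive of non-dominated solutions and the leader-selection step mirror the mechanisms that, per the paper, boost MOAVOA's search ability; the resulting weighted ensemble outperforms the noisier individual predictors.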

3. Proposed Method

This study establishes a new intelligent prediction model based on an adaptive selection strategy for CFI, incorporating the following four steps: (1) adaptive data preprocessing, (2) construction of the model library and subseries prediction, (3) adaptive model selection, and (4) multi-objective ensemble (Figure 1).
Step 1: Adaptive data preprocessing.
SVMD has been successfully applied in many fields and is not affected by a preset number of decomposed modes. Therefore, in this study, SVMD was selected to decompose the CFI. Multiple modes can be obtained from the raw data through the adaptive data preprocessing module, which can significantly reduce the impact of data noise on the prediction accuracy.
Step 2: Construction of the model library and subseries prediction.
Owing to the strong nonlinearity of the containerized freight index, it is challenging for the single-predictor model to predict CFI trends; thus, the model library module is introduced into the prediction model. The model library contains the following four predictors: ORELM, BP, GMDH, and ANFIS. These predictors have a strong nonlinear fitting ability, and each predictor has unique characteristics, which gives the proposed model a high prediction accuracy.
Step 3: Adaptive model selection.
Because different models have different prediction abilities for different subseries, an adaptive model selection module is essential. The LASSO feature selection algorithm achieves feature dimensionality reduction by introducing an L1 norm penalty term. In this study, the LASSO feature selection algorithm is used for adaptive model selection, which can effectively extract key impact factors and select the valid predictors for each subseries.
Step 4: Multi-objective ensemble.
The multi-objective ensemble is an important step in the prediction model that combines the prediction values of multiple subseries into a final prediction result. As different subseries prediction values may contribute to the final result in different proportions, weights are assigned to the subseries prediction values. Because the MOAVOA has a strong global search ability, it is used to optimize these weights in this study.

4. Empirical Results and Analysis

This section presents the empirical results of the proposed model. Section 4.1 introduces the data selection and performance metrics for the prediction results. Section 4.2 describes the experimental design used to demonstrate the experimental objectives. In Section 4.3, the results of the adaptive model selection module are discussed and analyzed. In Section 4.4, an experiment is designed to compare a single artificial neural network with the proposed model. In Section 4.5, an experiment is designed to compare the equal-weight method with the proposed model. In Section 4.6, an experiment is designed to compare different intelligent ensemble methods with the proposed model. These experiments were performed in MATLAB R2022b on a Windows 10 machine with an Intel Core i5-10210U CPU.

4.1. Data Description and Performance Metrics

The Southeast Asia Freight Index (SEAFI), Dalian Containerized Freight Index (DCFI), SCFI, and CCFI were selected to verify the model proposed in this study. These container freight indices have important applications in measuring the level of container transportation markets worldwide. The Southeast Asia CFI provides key insights into shipping trends in Southeast Asia and aids businesses and policymakers in making informed logistics and trade decisions. The Dalian CFI reflects shipping activities in Northeast China, helping businesses manage costs and optimize supply chains in this strategic area. The Shanghai CFI monitors shipping status throughout Shanghai and serves as a crucial indicator of global shipping trends and economic health. The China CFI offers a comprehensive overview of China’s container shipping market, covering multiple ports and helping businesses engaged in international trade with China forecast shipping expenses and adjust logistics strategies. The data are derived from https://x.qianzhan.com/xdata/ (accessed on 8 March 2023). The SEAFI dataset has 288 samples, the DCFI dataset has 288 samples, the SCFI dataset has 480 samples, and the CCFI dataset has 960 samples. The datasets were partitioned into three segments, namely training, validation, and testing sets, with an allocation ratio of 12:3:1. The data features and segmentation of all datasets are shown in Figure 2. Training samples from different datasets were used to train the prediction model, and the empirical results were compared to confirm the advantages of the proposed model. Specifically, SEAFI had 216 training samples, DCFI had 216, SCFI had 360, and CCFI had 720. Table 1 presents the descriptive statistics for the four datasets. As Table 1 shows, the descriptive statistics of the datasets differ; the datasets therefore have different characteristics, which allows the robustness of the proposed model to be verified.
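The 12:3:1 allocation can be made concrete with a short sketch; the counts it produces match the training-set sizes stated above (216, 216, 360, and 720).

```python
# 12:3:1 split: 12/16 training, 3/16 validation, 1/16 testing.
def split_sizes(n, ratio=(12, 3, 1)):
    total = sum(ratio)
    train = n * ratio[0] // total
    valid = n * ratio[1] // total
    return train, valid, n - train - valid

sizes = {name: split_sizes(n)
         for name, n in [("SEAFI", 288), ("DCFI", 288),
                         ("SCFI", 480), ("CCFI", 960)]}
```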
MAE, RMSE, MAPE, IA, and TIC were chosen to assess model performance because they each offer unique insights and are widely used in predictions [41]. MAE measures the average error magnitude in the same units as the data, making it easy to understand. RMSE emphasizes larger errors by squaring them, which is useful when large errors require more attention. MAPE provides the error as a percentage, making it easy to compare different datasets. IA measures both the size and direction of errors, with values ranging from 0 to 1, indicating agreement between the observed and predicted values. TIC compares forecasting performances across different scales and balances different error types, which is particularly helpful in economic forecasting. In addition, the stability of the prediction model can be determined by assessing the standard deviation of the prediction results [52]. A smaller standard deviation suggests that the model exhibits greater robustness and that the prediction results are more reliable. These metrics provide a comprehensive evaluation of model accuracy and error characteristics. The mathematical equations for these metrics are listed in Table 2.
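For readers implementing the evaluation, the five indicators can be computed as follows. These are the common textbook forms; Table 2 gives the exact equations used in the paper, so treat this as a sketch rather than the authoritative definitions.

```python
import numpy as np

def metrics(y, yhat):
    """MAE, RMSE, MAPE (percent), index of agreement (IA), and Theil's
    inequality coefficient (TIC) in their standard forms."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    err = yhat - y
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y)) * 100.0
    ia = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(yhat - y.mean()) + np.abs(y - y.mean())) ** 2)
    tic = rmse / (np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(yhat ** 2)))
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "IA": ia, "TIC": tic}

m = metrics([100, 110, 120], [101, 108, 123])
```

Lower MAE, RMSE, MAPE, and TIC indicate better accuracy, while IA closer to 1 indicates better agreement, matching how the results tables are read below.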

4.2. Experimental Design

Eleven comparative models were constructed to demonstrate the predictive performance of the proposed model, and four comparative experiments were conducted to verify its superiority. Experiment I presents the selection results of the adaptive model selection module on different datasets, demonstrating the necessity of this module in the proposed prediction model. In Experiment II, four sub-predictors (ORELM, BP, GMDH, and ANFIS) were used as comparison models to verify the necessity and effectiveness of the adaptive data preprocessing and multi-objective ensemble modules, showing that the proposed model outperforms a single neural network model. To further demonstrate the superiority of the multi-objective ensemble, Experiment III compares the proposed model with equal-weight ensemble models. In Experiment IV, the proposed model is compared with other multi-objective ensemble methods, showing that it outperforms these alternatives as well.

4.3. Experiment I: Results of the Adaptive Model Selection Module

A comprehensive assessment of the results of the adaptive model selection process across different datasets was conducted. The model library, comprising ORELM, BP, GMDH, and ANFIS, showed a robust nonlinear fitting capability. By employing consistent hyperparameters across all the models, the influence of model-specific parameters was minimized, ensuring a fair comparison. The necessity of selecting models individually for each dataset stems from their distinct predictive capabilities. This is further exemplified by the varying contributions of the different dataset modes to the final prediction results, necessitating adaptive model selection for optimal performance. The outcomes of the adaptive model selection module for different datasets are shown in Table 3.
In the SEAFI dataset, ORELM and ANFIS are effective for mode 1, while ORELM, BP, and ANFIS are suited for mode 2. Interestingly, mode 3 requires no model, indicating that its exclusion may enhance forecasting accuracy. For the DCFI dataset, ORELM performed well for mode 1. BP and ANFIS were preferred for mode 2, and ORELM and ANFIS were suitable for mode 3. The results on the SCFI dataset demonstrate ORELM’s efficacy for mode 1, that of BP and ANFIS for mode 2, and that of a combination of ORELM and GMDH for mode 3. Finally, in the CCFI dataset, ORELM, BP, and ANFIS were effective for mode 1, and the same combination worked well for mode 2, with GMDH being the choice for mode 3. Owing to the use of adaptive model selection techniques, the proposed model achieved better predictive results than the comparison models. These results underscore the importance of adaptive model selection tailored to each dataset’s characteristics, demonstrating that different models and their combinations contribute variably to the prediction accuracy, depending on the data modes.

4.4. Experiment II: Comparison of Single Artificial Neural Network Model and Proposed Model

In this experiment, the comparative outcomes of a single artificial neural network and the proposed model were analyzed. The selected single artificial neural network models were ORELM, BP, GMDH, and ANFIS. The comparative outcomes between the single artificial neural network and the proposed model are displayed in Table 4. Visualizations of both the single artificial neural network and the proposed model are presented in Figure 3.
The experimental results compare the performance of single artificial neural network models (ORELM, BP, GMDH, and ANFIS) and that of the proposed model in predicting containerized freight indices across multiple datasets. The proposed model consistently outperforms the single models, demonstrating significantly lower MAE, RMSE, MAPE, and TIC values. For instance, for the SEAFI dataset, the proposed model achieved an MAE of 27.635876, an RMSE of 32.146729, and an MAPE of 0.545154%. The BP model, which performed the worst, recorded an MAE of 151.781720, an RMSE of 178.337512, and an MAPE of 3.015159%. This stark contrast highlights the superior predictive accuracy and reduced error rates of the proposed model compared with single neural network models. For the other datasets, the proposed model continued to demonstrate superior performance. On the DCFI dataset, it achieved an MAE of 5.122103, an RMSE of 6.506787, and an MAPE of 0.572040%, whereas the ANFIS model performed poorly, with an MAE of 59.555643, an RMSE of 71.965793, and an MAPE of 6.865999%. Similarly, for the SCFI dataset, the proposed model had an MAE of 4.469254, an RMSE of 5.254923, and an MAPE of 0.536753%, significantly outperforming the ANFIS model’s MAE of 34.211761, RMSE of 39.326185, and MAPE of 4.063924%. For the CCFI dataset, the proposed model shows outstanding performance, with an MAE of 0.995431, RMSE of 1.278544, and MAPE of 0.114937%, compared with the BP model’s MAE of 10.592830, RMSE of 13.355790, and MAPE of 1.198346%.
These results indicate that the proposed model not only offers better accuracy but also greater reliability across various datasets, making it a superior choice for predicting containerized freight indices. Owing to the use of adaptive data preprocessing techniques, the proposed model achieved better predictive results than the comparison models. This consistent superiority across multiple performance metrics and datasets underscores the robustness and effectiveness of the proposed model for handling complex predictive tasks in freight index forecasting.

4.5. Experiment III: Comparison of Equal-Weight Method and the Proposed Model

In this section, we present a comparative experiment to assess the equal-weight strategy against the proposed model. The following comparison models were chosen: ADP-ORELM-EW, ADP-BP-EW, ADP-GMDH-EW, and ADP-ANFIS-EW, where ADP is an adaptive data preprocessing operation using SVMD and EW is the equal-weight method. All models used the same hyperparameters to mitigate the impact of the model parameters on the prediction outcomes. The comparative outcomes between the simple ensemble model and the proposed model are presented in Table 5, and a visual presentation of the comparative outcomes is presented in Figure 4.
The proposed model demonstrates superior prediction performance on the SEAFI, DCFI, SCFI, and CCFI datasets, with MAE values of 27.635876, 5.122103, 4.469254, and 0.995431, respectively. In comparison, ADP-BP-EW records the highest MAE of 113.873704 for SEAFI, ADP-ANFIS-EW records the worst MAE of 10.902471 for DCFI, and ADP-GMDH-EW exhibits the poorest MAE values of 6.653787 and 1.877313 for SCFI and CCFI, respectively. In terms of RMSE, the proposed model achieves 32.146729, 6.506787, 5.254923, and 1.278544 for the SEAFI, DCFI, SCFI, and CCFI datasets, respectively. In contrast, ADP-GMDH-EW shows the poorest RMSE values, reaching 245.850597 for SEAFI, 7.532368 for SCFI, and 2.411816 for CCFI, while ADP-ANFIS-EW shows the worst RMSE of 13.981185 for DCFI. For MAPE, the proposed model attained values of 0.545154%, 0.572040%, 0.536753%, and 0.114937% for SEAFI, DCFI, SCFI, and CCFI, respectively. Comparatively, ADP-BP-EW recorded the worst MAPE of 2.219730% on SEAFI, ADP-ANFIS-EW recorded the worst MAPE of 1.224123% on DCFI, and ADP-GMDH-EW showed the poorest MAPE values of 0.784850% and 0.215563% on SCFI and CCFI, respectively.
The experimental results highlight the benefits and advantages of the proposed model, particularly in its ability to significantly outperform alternative models across multiple performance indicators such as MAE, RMSE, and MAPE. The strength of the proposed model lies in its adaptive model selection module and multi-objective ensemble module, which collectively enhance its predictive accuracy and robustness. These components enable the model to select the most suitable models dynamically and optimally combine them, leading to more precise and reliable predictions. This adaptability is crucial for handling the varying characteristics and complexities of different datasets. Owing to the use of multi-objective ensemble techniques, the proposed model achieved better predictive results than the comparison models. Moreover, compared with traditional equal-weight ensemble models, the proposed model shows a significant improvement. This further demonstrates the superiority of multi-objective ensemble methods, providing a more nuanced and effective approach for predictive modeling. Its superior performance across diverse datasets underscores the versatility and potential of the model for broad applicability to various predictive tasks.

4.6. Experiment IV: Comparison of Different Intelligent Ensemble Methods and the Proposed Model

The multi-objective ensemble module plays a vital role in consolidating the outputs of multiple predictors to produce the final prediction value. In this section, a comparison experiment designed to explore the influence of various multi-objective ensemble schemes on the final prediction value is presented. ADP-LASSO-MOGWO, ADP-LASSO-MODA, and ADP-LASSO-MOPSO were selected for comparison with the proposed model, and the adaptive model selection module was established based on LASSO technology. All models used the same hyperparameters to mitigate the impact of the model parameters on the prediction outcomes.
Because each intelligent optimization scheme has different ensemble capabilities for different datasets, it is necessary to analyze the different ensemble schemes separately for each dataset. The performance of each intelligent optimization scheme is shown in Table 6. A visualization of each intelligent optimization scheme is shown in Figure 5.
In the SEAFI dataset, the proposed model excels, with an MAE of 27.635876, an RMSE of 32.146729, and an MAPE of 0.545154%. In contrast, ADP-LASSO-MOGWO performed poorly, with MAE, RMSE, and MAPE values of 29.484178, 34.217756, and 0.583031%, respectively. ADP-LASSO-MODA showed the worst RMSE of 34.409024. For the DCFI dataset, the proposed model demonstrated strong performance, achieving an MAE of 5.122103, RMSE of 6.506787, and MAPE of 0.572040%. Conversely, ADP-LASSO-MODA underperformed, with corresponding values of 5.525988, 7.271350, and 0.612448%, while ADP-LASSO-MOGWO had the highest MAE and MAPE values of 5.568783 and 0.632117%, respectively. On the SCFI dataset, the proposed model again shows superior results, with an MAE of 4.469254, RMSE of 5.254923, and MAPE of 0.536753%. Meanwhile, ADP-LASSO-MODA lags behind, recording corresponding values of 4.868545, 5.737525, and 0.588101%. Finally, on the CCFI dataset, the proposed model achieved outstanding performance, with an MAE of 0.995431, RMSE of 1.278544, and MAPE of 0.114937%. Conversely, ADP-LASSO-MODA yielded less favorable results, with corresponding values of 1.142011, 1.440428, and 0.131289%.
The experimental results indicate that the proposed model significantly outperforms its counterparts in predictive accuracy across multiple datasets. This performance is underscored by lower MAE, RMSE, MAPE, IA, and TIC values, suggesting that the model delivers more precise and reliable forecasts. The experimental results prove that the proposed model exhibits superior predictive capabilities compared to other multi-objective ensemble models. Owing to the use of multi-objective ensemble techniques based on the MOAVOA, the proposed model achieved better predictive results than the comparison models. The benefits of the proposed model include enhanced predictive accuracy, which can lead to better decision making and resource allocation. The robustness and consistency of the model across various datasets also highlight its adaptability and potential applicability to a wide range of predictive tasks. In addition, the model effectively enhanced the overall predictive performance, indicating its ability to efficiently handle complex data patterns.

5. Further Discussion

This section discusses the necessity of the adaptive model selection and multi-objective ensemble modules in the proposed model. To further verify the contribution of each module to the overall model, Equation (35) is used to analyze the improvement percentage of point predictions [41]. Here, Indicator_1 and Indicator_2 are the statistical indicator values of the comparison model and the proposed model, respectively. The larger the P_indicator value, the better the performance of the proposed prediction model relative to the comparison model. Table 7 lists the improvement percentages of the proposed model across the four datasets.
P_indicator = (Indicator_1 − Indicator_2) / Indicator_1
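As a quick numerical check of the improvement percentage, using the CCFI MAPE values reported earlier for ADP-LASSO-MODA (the comparison model) and the proposed model:

```python
def improvement(indicator_cmp, indicator_prop):
    """P_indicator = (Indicator1 - Indicator2) / Indicator1, where
    Indicator1 is the comparison model's score and Indicator2 the
    proposed model's."""
    return (indicator_cmp - indicator_prop) / indicator_cmp

# CCFI MAPE: ADP-LASSO-MODA (0.131289%) vs. proposed model (0.114937%)
p = improvement(0.131289, 0.114937)   # roughly a 12.5% relative gain
```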
A comparison of the proposed model with the EW models (ADP-ORELM-EW, ADP-BP-EW, ADP-GMDH-EW, and ADP-ANFIS-EW) shows that the P_indicator values are all positive, which indicates that the proposed model outperforms the models with EW. Specifically, the P_indicator values of the proposed model against ADP-GMDH-EW for RMSE and TIC reach 0.869243 and 0.868656, respectively, on the SEAFI dataset. Therefore, the adaptive model selection module in the proposed model is more significant for the CFI. A comparison of the proposed model with models employing other intelligent optimization algorithms (ADP-LASSO-MOGWO, ADP-LASSO-MODA, and ADP-LASSO-MOPSO) shows that the P_indicator values are again all positive, demonstrating that the proposed model surpasses the models using other intelligent optimization algorithms. The difference in P_indicator values between the proposed model and these models is relatively small for most indicators and datasets. However, the P_indicator values of the proposed model against ADP-LASSO-MODA for the RMSE and TIC indicators reach 0.105147 and 0.105402, respectively, on the DCFI dataset. Hence, the multi-objective ensemble module in the proposed model is more effective than other intelligent optimization algorithms for the CFI.
Comparative analysis shows that the proposed model consistently outperforms other models, thanks to its effective adaptive model selection and multi-objective ensemble modules. These components significantly improve the forecasting performance of the model, making it superior to models with equal-weight ensembles and those using other intelligent optimization algorithms.

6. Conclusions

Accuracy and stability are extremely important for a prediction model to reflect the status of the container market. However, the CFI exhibits strong nonlinearity, which presents a severe test for the prediction model. In addition, the VMD decomposition scheme requires the number of decomposition layers (K) to be set in advance, and prediction performance may be poor when the choice of K is unreasonable. Numerous studies have overlooked the influence of the model library module on the accuracy of prediction models and have not considered the effects of both adaptive model selection and multi-objective ensembles in enhancing the robustness of the models. Therefore, this study proposes a novel selection ensemble prediction model with adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble modules from the perspective of adaptive model selection.
The primary findings of this study are outlined as follows:
  • The proposed decomposition ensemble prediction model has better prediction ability and robustness. On the CCFI dataset, the MAE, RMSE, MAPE, IA, and TIC of the proposed model are 0.995431, 1.278544, 0.114937%, 0.999878, and 0.000731, respectively; the standard deviation of the proposed model is 1.288266.
  • A novel model library module consists of ORELM, BP, GMDH, and ANFIS, which can provide multiple predictors for different modes. For the models that do not use the model library, the MAE values for ADP-ORELM-EW, ADP-BP-EW, ADP-GMDH-EW, and ADP-ANFIS-EW are 1.172282, 1.306912, 1.877313, and 1.150139, respectively; and the MAE value of the proposed model is 0.995431. This illustrates the importance of the novel model library module for prediction accuracy.
  • The adaptive model selection module can effectively identify an appropriate predictor by optimizing overall prediction performance. It can dynamically choose the most suitable predictor for distinct datasets, aligning with the decomposed subseries. Comparison experiments demonstrated that the adaptive model selection module improves the prediction reliability of the proposed model.
  • A novel multi-objective ensemble module based on the artificial vulture optimization algorithm was designed. It provides better ensemble results than MOGWO, MODA, and MOPSO. For models using other intelligent optimization algorithms, the MAPE values for ADP-LASSO-MOGWO, ADP-LASSO-MODA, and ADP-LASSO-MOPSO are 0.120120%, 0.131289%, and 0.120066%, respectively, and the MAPE value of the proposed model is 0.114937%. This illustrates that the novel multi-objective ensemble module is effective in improving the accuracy of the prediction model.
  • A new intelligent prediction model is proposed, which comprises the following four components: adaptive data preprocessing, model library, adaptive model selection, and multi-objective ensemble modules. The experimental analysis results prove that the proposed model demonstrates outstanding accuracy and robustness, and it also provides a new perspective to understand the decomposition ensemble model for other fields.
Despite the excellent performance of the proposed decomposition ensemble model in the container prediction field, there are still some limitations that require further investigation. This study primarily focuses on historical data analysis and does not consider other factors that influence the container freight index. Moreover, combining historical CFI data with related policy analyses is another potential research direction. By comprehensively considering policy changes and their impact on container transportation, the accuracy and reliability of the prediction models can be further enhanced. In addition, future research should explore the application of advanced technologies to enhance the generalization capabilities of prediction models and their ability to adapt to complex environmental changes.

Author Contributions

W.Y.: conceptualization, methodology, project administration, software, supervision, writing—original draft, writing—review and editing, and funding acquisition. H.Z.: data curation, formal analysis, investigation, methodology, software, validation, visualization, writing—original draft, and writing—review and editing. S.Y.: conceptualization, methodology, and writing—review and editing. Y.H.: funding acquisition, methodology, supervision, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 72101138 and 72301157), the Humanities and Social Science Fund of the Ministry of Education of the People’s Republic of China (Grant No. 21YJCZH198), the Shandong Provincial Natural Science Foundation, China (Grant Nos. ZR2021QG034 and ZR2022QG036), the Social Science Planning Project of Shandong Province (Grant Nos. 22DJJJ24, 24DGLJ09), and the Shandong Province Higher Educational Youth Innovation Team Development Program (Grant No. 2021RW020).

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

Mr. Sibo Yang is an employee of Ping An Life Insurance Company of China Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CFI: Containerized freight index
SEAFI: Southeast Asia Freight Index
DCFI: Dalian Containerized Freight Index
SCFI: Shanghai Containerized Freight Index
CCFI: China Containerized Freight Index
VAR: Vector autoregression
ARIMA: Autoregressive integrated moving average
SARIMA: Seasonal autoregressive integrated moving average
SVR: Support vector regression
LSSVR: Least squares support vector regression
GP: Genetic programming regression
TF-DPSO: Transfer prediction model guided by discrete particle swarm optimization
EMD-BP: BP model based on EMD
EMD-ARIMA: ARIMA model based on EMD
BDI: Baltic Dry Index
EMD: Empirical mode decomposition
EEMD: Ensemble empirical mode decomposition
CEEMD: Complementary ensemble empirical mode decomposition
ICEEMDAN: Improved complete ensemble empirical mode decomposition with adaptive noise
VMD: Variational mode decomposition
SVMD: Sequential variational mode decomposition
LSTM: Long short-term memory
ORELM: Outlier robust extreme learning machine
BP: Backpropagation
GMDH: Group method of data handling
ANFIS: Adaptive-network-based fuzzy inference system
ADP: Adaptive data preprocessing module with SVMD
MOAVOA: Multi-objective artificial vulture optimization algorithm
MOGWO: Multi-objective gray wolf optimization
MODA: Multi-objective dragonfly algorithm
MOPSO: Multi-objective particle swarm optimization
ADP-ORELM-EW: Equal-weight strategy based on ADP and ORELM
ADP-BP-EW: Equal-weight strategy based on ADP and BP
ADP-GMDH-EW: Equal-weight strategy based on ADP and GMDH
ADP-ANFIS-EW: Equal-weight strategy based on ADP and ANFIS
ADP-LASSO-MOGWO: MOGWO ensemble based on ADP and LASSO
ADP-LASSO-MODA: MODA ensemble based on ADP and LASSO
ADP-LASSO-MOPSO: MOPSO ensemble based on ADP and LASSO

References

  1. Feng, X.; Song, R.; Yin, W.; Yin, X.; Zhang, R. Multimodal Transportation Network with Cargo Containerization Technology: Advantages and Challenges. Transp. Policy 2023, 132, 128–143. [Google Scholar] [CrossRef]
  2. Tu, X.; Yang, Y.; Lin, Y.; Ma, S. Analysis of Influencing Factors and Prediction of China’s Containerized Freight Index. Front. Mar. Sci. 2023, 10, 1245542. [Google Scholar] [CrossRef]
  3. Yu, F.; Xiang, Z.; Wang, X.; Yang, M.; Kuang, H. An Innovative Tool for Cost Control under Fragmented Scenarios: The Container Freight Index Microinsurance. Transp. Res. Part E Logist. Transp. Rev. 2023, 169, 102975. [Google Scholar] [CrossRef]
  4. Munim, Z.H.; Schramm, H.J. Forecasting Container Freight Rates for Major Trade Routes: A Comparison of Artificial Neural Networks and Conventional Models. Marit. Econ. Logist. 2021, 23, 310–327. [Google Scholar] [CrossRef]
  5. Jeon, J.W.; Duru, O.; Yeo, G.T. Modelling Cyclic Container Freight Index Using System Dynamics. Marit. Policy Manag. 2020, 47, 287–303. [Google Scholar] [CrossRef]
  6. Koyuncu, K.; Tavacıoğlu, L. Forecasting Shanghai Containerized Freight Index by Using Time Series Models. Mar. Sci. Technol. Bull. 2021, 10, 426–434. [Google Scholar] [CrossRef]
  7. Kawasaki, T.; Matsuda, T.; Lau, Y.y.; Fu, X. The Durability of Economic Indicators in Container Shipping Demand: A Case Study of East Asia–US Container Transport. Marit. Bus. Rev. 2022, 7, 288–304. [Google Scholar] [CrossRef]
  8. Hirata, E.; Matsuda, T. Forecasting Shanghai Container Freight Index: A Deep-Learning-Based Model Experiment. J. Mar. Sci. Eng. 2022, 10, 593. [Google Scholar] [CrossRef]
  9. Shankar, S.; Ilavarasan, P.V.; Punia, S.; Singh, S.P. Forecasting Container Throughput with Long Short-Term Memory Networks. Ind. Manag. Data Syst. 2020, 120, 425–441. [Google Scholar] [CrossRef]
  10. Kim, D.; Choi, J.S. Estimation Model for Freight of Container Ships Using Deep Learning Method. J. Korean Soc. Mar. Environ. Saf. 2021, 27, 574–583. [Google Scholar] [CrossRef]
  11. Dasari, C.M.; Bhukya, R. Explainable Deep Neural Networks for Novel Viral Genome Prediction. Appl. Intell. 2022, 52, 3002–3017. [Google Scholar] [CrossRef]
  12. Khaksar Manshad, M.; Meybodi, M.R.; Salajegheh, A. A New Irregular Cellular Learning Automata-Based Evolutionary Computation for Time Series Link Prediction in Social Networks. Appl. Intell. 2021, 51, 71–84. [Google Scholar] [CrossRef]
  13. Swathi, T.; Kasiviswanath, N.; Rao, A.A. An Optimal Deep Learning-Based LSTM for Stock Price Prediction Using Twitter Sentiment Analysis. Appl. Intell. 2022, 52, 13675–13688. [Google Scholar] [CrossRef]
  14. Xiao, W.; Xu, C.; Liu, H.; Liu, X. A Hybrid LSTM-Based Ensemble Learning Approach for China Coastal Bulk Coal Freight Index Prediction. J. Adv. Transp. 2021, 2021, 5573650. [Google Scholar] [CrossRef]
  15. Liang, X.; Wang, Y.; Yang, M. Systemic Modeling and Prediction of Port Container Throughput Using Hybrid Link Analysis in Complex Networks. Systems 2024, 12, 23. [Google Scholar] [CrossRef]
  16. Li, M.; Yang, Y.; He, Z.; Guo, X.; Zhang, R.; Huang, B. A Wind Speed Forecasting Model Based on Multi-Objective Algorithm and Interpretability Learning. Energy 2023, 269, 126778. [Google Scholar] [CrossRef]
  17. Wang, Y.; Xu, H.; Song, M.; Zhang, F.; Li, Y.; Zhou, S.; Zhang, L. A Convolutional Transformer-based Truncated Gaussian Density Network with Data Denoising for Wind Speed Forecasting. Appl. Energy 2023, 333, 120601. [Google Scholar] [CrossRef]
  18. Yang, F.; Fu, X.; Yang, Q.; Chu, Z. Decomposition Strategy and Attention-Based Long Short-Term Memory Network for Multi-Step Ultra-Short-Term Agricultural Power Load Forecasting. Expert Syst. Appl. 2024, 238, 122226. [Google Scholar] [CrossRef]
19. Ding, Y.; Chen, Z.; Zhang, H.; Wang, X.; Guo, Y. A Short-Term Wind Power Prediction Model Based on CEEMD and WOA-KELM. Renew. Energy 2022, 189, 188–198.
20. Li, G.; Zhong, X. Parking Demand Forecasting Based on Improved Complete Ensemble Empirical Mode Decomposition and GRU Model. Eng. Appl. Artif. Intell. 2023, 119, 105717.
21. Liu, Y.; Huang, S.; Tian, X.; Zhang, F.; Zhao, F.; Zhang, C. A Stock Series Prediction Model Based on Variational Mode Decomposition and Dual-Channel Attention Network. Expert Syst. Appl. 2024, 238, 121708.
22. Zhang, Z.; Wang, J.; Wei, D.; Luo, T.; Xia, Y. A Novel Ensemble System for Short-Term Wind Speed Forecasting Based on Two-Stage Attention-Based Recurrent Neural Network. Renew. Energy 2023, 204, 11–23.
23. Bai, J.; Guo, J.; Sun, B.; Guo, Y.; Bao, Q.; Xiao, X. Intelligent Forecasting Model of Stock Price Using Neighborhood Rough Set and Multivariate Empirical Mode Decomposition. Eng. Appl. Artif. Intell. 2023, 122, 106106.
24. Rezaei, H.; Faaljou, H.; Mansourfar, G. Stock Price Prediction Using Deep Learning and Frequency Decomposition. Expert Syst. Appl. 2021, 169, 114332.
25. Liu, J.; Wang, P.; Chen, H.; Zhu, J. A Combination Forecasting Model Based on Hybrid Interval Multi-Scale Decomposition: Application to Interval-Valued Carbon Price Forecasting. Expert Syst. Appl. 2022, 191, 116267.
26. Nguyen, T.H.T.; Phan, Q.B. Hourly Day Ahead Wind Speed Forecasting Based on a Hybrid Model of EEMD, CNN-Bi-LSTM Embedded with GA Optimization. Energy Rep. 2022, 8, 53–60.
27. Yang, W.; Hao, M.; Hao, Y. Innovative Ensemble System Based on Mixed Frequency Modeling for Wind Speed Point and Interval Forecasting. Inf. Sci. 2023, 622, 560–586.
28. Jaseena, K.U.; Kovoor, B.C. Decomposition-Based Hybrid Wind Speed Forecasting Model Using Deep Bidirectional LSTM Networks. Energy Convers. Manag. 2021, 234, 113944.
29. Li, X.; Zhang, X.; Zhang, C.; Wang, S. Forecasting Tourism Demand with a Novel Robust Decomposition and Ensemble Framework. Expert Syst. Appl. 2024, 236, 121388.
30. Da Silva, R.G.; Moreno, S.R.; Ribeiro, M.H.D.M.; Larcher, J.H.K.; Mariani, V.C.; dos Santos Coelho, L. Multi-Step Short-Term Wind Speed Forecasting Based on Multi-Stage Decomposition Coupled with Stacking-Ensemble Learning Approach. Int. J. Electr. Power Energy Syst. 2022, 143, 108504.
31. Fang, L. Establishment of Shipping Container Price Prediction Model for International Trade. In Proceedings of the 2022 6th International Symposium on Computer Science and Intelligent Control (ISCSIC), Beijing, China, 11–13 November 2022; pp. 331–335.
32. Chen, Y.; Liu, B.; Wang, T. Analysing and Forecasting China Containerized Freight Index with a Hybrid Decomposition–Ensemble Method Based on EMD, Grey Wave and ARMA. Grey Syst. Theory Appl. 2021, 11, 358–371.
33. Tian, Z.; Chen, H. A Novel Decomposition-Ensemble Prediction Model for Ultra-Short-Term Wind Speed. Energy Convers. Manag. 2021, 248, 114775.
34. Zhao, Z.; Yun, S.; Jia, L.; Guo, J.; Meng, Y.; He, N.; Li, X.; Shi, J.; Yang, L. Hybrid VMD-CNN-GRU-Based Model for Short-Term Forecasting of Wind Power Considering Spatio-Temporal Features. Eng. Appl. Artif. Intell. 2023, 121, 105982.
35. Niu, X.; Ma, J.; Wang, Y.; Zhang, J.; Chen, H.; Tang, H. A Novel Decomposition-Ensemble Learning Model Based on Ensemble Empirical Mode Decomposition and Recurrent Neural Network for Landslide Displacement Prediction. Appl. Sci. 2021, 11, 4684.
36. Altan, A.; Karasu, S.; Zio, E. A New Hybrid Model for Wind Speed Forecasting Combining Long Short-Term Memory Neural Network, Decomposition Methods and Grey Wolf Optimizer. Appl. Soft Comput. 2021, 100, 106996.
37. Joseph, L.P.; Deo, R.C.; Prasad, R.; Salcedo-Sanz, S.; Raj, N.; Soar, J. Near Real-Time Wind Speed Forecast Model with Bidirectional LSTM Networks. Renew. Energy 2023, 204, 39–58.
38. Luo, J.; Zhang, X. Convolutional Neural Network Based on Attention Mechanism and Bi-LSTM for Bearing Remaining Life Prediction. Appl. Intell. 2022, 52, 1076–1091.
39. Yang, Y.; Fan, C.; Xiong, H. A Novel General-Purpose Hybrid Model for Time Series Forecasting. Appl. Intell. 2022, 52, 2212–2223.
40. Yu, R.; Sun, Y.; Li, X.; Yu, J.; Gao, J.; Liu, Z.; Yu, M. Time Series Cross-Correlation Network for Wind Power Prediction. Appl. Intell. 2023, 53, 11403–11419.
41. Yang, S.; Yang, W.; Wang, X.; Hao, Y. A Novel Selective Ensemble System for Wind Speed Forecasting: From a New Perspective of Multiple Predictors for Subseries. Energy Convers. Manag. 2023, 294, 117590.
42. Wang, J.; Cui, Q.; Sun, X.; He, M. Asian Stock Markets Closing Index Forecast Based on Secondary Decomposition, Multi-Factor Analysis and Attention-Based LSTM Model. Eng. Appl. Artif. Intell. 2022, 113, 104908.
43. Hao, Y.; Zhou, Y.; Gao, J.; Wang, J. A Novel Air Pollutant Concentration Prediction System Based on Decomposition-Ensemble Mode and Multi-Objective Optimization for Environmental System Management. Systems 2022, 10, 139.
44. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995.
45. Dragomiretskiy, K.; Zosso, D. Variational Mode Decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544.
46. Nazari, M.; Sakhaei, S.M. Successive Variational Mode Decomposition. Signal Process. 2020, 174, 107610.
47. Zhang, K.; Luo, M. Outlier-Robust Extreme Learning Machine for Regression Problems. Neurocomputing 2015, 151, 1519–1527.
48. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature 1986, 323, 533–536.
49. Oh, S.K.; Pedrycz, W. The Design of Self-Organizing Polynomial Neural Networks. Inf. Sci. 2002, 141, 237–258.
50. Jang, J.S. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685.
51. Khodadadi, N.; Soleimanian Gharehchopogh, F.; Mirjalili, S. MOAVOA: A New Multi-Objective Artificial Vultures Optimization Algorithm. Neural Comput. Appl. 2022, 34, 20791–20829.
52. Hao, Y.; Wang, X.; Wang, J.; Yang, W. A New Perspective of Wind Speed Forecasting: Multi-Objective and Model Selection-Based Ensemble Interval-Valued Wind Speed Forecasting System. Energy Convers. Manag. 2024, 299, 117868.
Figure 1. Framework of the proposed prediction model.
Figure 2. Data features and data segmentation of all datasets.
Figure 3. Visualization of the single artificial neural network and the proposed model.
Figure 4. Visual presentation of the simple ensemble model with the proposed model.
Figure 5. Visualization of each intelligent optimization scheme on different datasets.
Table 1. Descriptive statistical values for each dataset.

Site | Dataset | Number of Samples | Mean | Std | Min | Max | Kurtosis | Skewness
SEAFI | Training | 216 | 1108.69 | 1153.18 | 351.08 | 4997.93 | 6.05 | 2.78
SEAFI | Validation | 54 | 5096.20 | 1388.78 | 3896.11 | 8100.88 | −0.14 | 1.16
SEAFI | Testing | 18 | 5052.77 | 266.43 | 4426.25 | 5427.59 | 0.10 | −0.59
SEAFI | All Samples | 288 | 2102.85 | 2081.37 | 351.08 | 8100.88 | −0.12 | 1.10
DCFI | Training | 216 | 1033.50 | 253.85 | 489.46 | 1528.54 | −0.65 | −0.33
DCFI | Validation | 54 | 885.22 | 78.62 | 708.98 | 1003.87 | −0.84 | −0.33
DCFI | Testing | 18 | 872.02 | 57.85 | 800.95 | 977.94 | −0.98 | 0.57
DCFI | All Samples | 288 | 995.60 | 232.26 | 489.46 | 1528.54 | −0.50 | 0.05
SCFI | Training | 360 | 1016.22 | 265.62 | 400.43 | 1583.18 | −0.30 | −0.21
SCFI | Validation | 90 | 814.02 | 76.41 | 646.59 | 956.63 | −0.59 | −0.35
SCFI | Testing | 30 | 844.16 | 83.69 | 723.93 | 976.52 | −1.47 | 0.15
SCFI | All Samples | 480 | 967.56 | 248.10 | 400.43 | 1583.18 | −0.21 | 0.22
CCFI | Training | 720 | 1048.76 | 112.30 | 712.58 | 1335.86 | 0.48 | −0.54
CCFI | Validation | 180 | 790.92 | 63.18 | 632.36 | 891.32 | 0.07 | −0.89
CCFI | Testing | 60 | 872.72 | 58.38 | 776.92 | 1023.02 | 0.35 | 0.83
CCFI | All Samples | 960 | 989.41 | 145.95 | 632.36 | 1335.86 | −0.75 | −0.27
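The split-wise statistics above can be reproduced from the raw index series. The paper does not state which skewness/kurtosis estimators were used, so the plain moment-based estimators below (with excess kurtosis and sample standard deviation) are an assumption; the helper name is illustrative only.

```python
import numpy as np

def describe(series):
    """Table 1-style descriptive statistics for one split of an index series.
    Skewness and kurtosis use the plain central-moment estimators; kurtosis
    is reported as excess kurtosis (normal distribution -> 0)."""
    x = np.asarray(series, dtype=float)
    m = x.mean()
    d = x - m
    m2 = np.mean(d ** 2)                       # population variance
    return {
        "n": x.size,
        "mean": m,
        "std": x.std(ddof=1),                  # sample standard deviation
        "min": x.min(),
        "max": x.max(),
        "skewness": np.mean(d ** 3) / m2 ** 1.5,
        "kurtosis": np.mean(d ** 4) / m2 ** 2 - 3,
    }
```

Calling `describe` on the training, validation, and testing slices of each dataset yields one row of Table 1 per call.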
Table 2. Evaluation metrics and calculation methods.

Evaluation Metric | Equation
MAE | $MAE = \frac{1}{N}\sum_{i=1}^{N}\left|F_i - A_i\right|$
RMSE | $RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(F_i - A_i\right)^2}$
MAPE | $MAPE = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{A_i - F_i}{A_i}\right| \times 100\%$
IA | $IA = 1 - \sum_{i=1}^{N}\left(F_i - A_i\right)^2 \Big/ \sum_{i=1}^{N}\left(\left|A_i - \bar{A}\right| + \left|F_i - \bar{A}\right|\right)^2$
TIC | $TIC = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(F_i - A_i\right)^2} \Big/ \left(\sqrt{\frac{1}{N}\sum_{i=1}^{N}A_i^2} + \sqrt{\frac{1}{N}\sum_{i=1}^{N}F_i^2}\right)$
Std | $Std = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(E_i - \bar{E}\right)^2}$
Note: $A_i$ and $F_i$ are the actual and predicted values, respectively; $E_i$ and $\bar{E}$ are the loss value and the average loss value, respectively; and $N$ is the number of samples.
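The accuracy metrics in Table 2 can be computed directly from the actual series A and the forecast F. The following numpy sketch mirrors the equations above (Std, which is defined on the loss series, is omitted):

```python
import numpy as np

def evaluate(actual, predicted):
    """Compute the Table 2 accuracy metrics for one test set."""
    A = np.asarray(actual, dtype=float)
    F = np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(F - A))
    rmse = np.sqrt(np.mean((F - A) ** 2))
    mape = np.mean(np.abs((A - F) / A)) * 100          # in percent
    # Index of agreement: 1 is a perfect forecast
    ia = 1 - np.sum((F - A) ** 2) / np.sum(
        (np.abs(A - A.mean()) + np.abs(F - A.mean())) ** 2)
    # Theil inequality coefficient: 0 is a perfect forecast
    tic = np.sqrt(np.mean((F - A) ** 2)) / (
        np.sqrt(np.mean(A ** 2)) + np.sqrt(np.mean(F ** 2)))
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "IA": ia, "TIC": tic}
```

Lower MAE, RMSE, MAPE, and TIC and higher IA indicate better forecasts, which is how Tables 4–6 should be read.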
Table 3. Results of the adaptive model selection module.

Site | Mode | ORELM | BP | GMDH | ANFIS
SEAFI | Mode 1 | Y | N | N | Y
SEAFI | Mode 2 | Y | Y | N | Y
SEAFI | Mode 3 | N | N | N | N
DCFI | Mode 1 | Y | N | N | N
DCFI | Mode 2 | N | Y | N | Y
DCFI | Mode 3 | Y | N | N | Y
SCFI | Mode 1 | Y | N | N | N
SCFI | Mode 2 | N | Y | N | Y
SCFI | Mode 3 | Y | N | Y | N
CCFI | Mode 1 | Y | Y | N | Y
CCFI | Mode 2 | Y | Y | N | Y
CCFI | Mode 3 | N | N | Y | N
Note: Y indicates that the model is selected, and N indicates that the model is not selected in the specified dataset.
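Table 3 shows the Y/N outcome of the Lasso-based adaptive model selection for each decomposed mode. As a rough illustration of how such flags can be produced, the sketch below fits a Lasso on the stacked predictions of the library models for one subseries and keeps the predictors whose coefficients survive the L1 penalty. The paper's exact feature construction and penalty value are not given in this excerpt, so `lam` and the coordinate-descent details are assumptions.

```python
import numpy as np

def lasso_select(P, y, lam=0.1, n_iter=500):
    """P: (n_samples, n_models) matrix of subseries predictions from the
    model library; y: target subseries. Returns one Y/N flag per model,
    based on which Lasso coefficients remain nonzero."""
    n, m = P.shape
    w = np.zeros(m)
    for _ in range(n_iter):                       # cyclic coordinate descent
        for j in range(m):
            r = y - P @ w + P[:, j] * w[j]        # partial residual for model j
            rho = P[:, j] @ r / n
            z = P[:, j] @ P[:, j] / n
            # soft-thresholding: small contributions are shrunk to exactly 0
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return ["Y" if abs(c) > 1e-8 else "N" for c in w]
```

Because the penalty can zero out every coefficient, this mechanism naturally produces rows like SEAFI Mode 3 in Table 3, where no predictor is selected; conversely, several predictors can be retained for one mode.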
Table 4. Comparison results between the single neural network models and the proposed model.

Dataset | Model | MAE | RMSE | MAPE (%) | IA | TIC | Std
SEAFI | ORELM | 123.205880 | 140.452904 | 2.424671 | 0.919028 | 0.014032 | 92.916048
SEAFI | BP | 151.781720 | 178.337512 | 3.015159 | 0.898137 | 0.017885 | 101.466952
SEAFI | GMDH | 101.303133 | 129.772530 | 2.042018 | 0.914612 | 0.012805 | 132.178997
SEAFI | ANFIS | 88.548893 | 108.999677 | 1.778814 | 0.949963 | 0.010741 | 107.657257
SEAFI | The proposed model | 27.635876 | 32.146729 | 0.545154 | 0.996079 | 0.003177 | 33.074908
DCFI | ORELM | 34.481033 | 52.631409 | 3.884173 | 0.784448 | 0.030240 | 53.596355
DCFI | BP | 42.923168 | 62.901634 | 4.867715 | 0.744500 | 0.035711 | 63.428099
DCFI | GMDH | 41.831904 | 58.162482 | 4.778038 | 0.747546 | 0.032934 | 56.944095
DCFI | ANFIS | 59.555643 | 71.965793 | 6.865999 | 0.668320 | 0.040178 | 59.503584
DCFI | The proposed model | 5.122103 | 6.506787 | 0.572040 | 0.996557 | 0.003723 | 6.694206
SCFI | ORELM | 23.464746 | 31.746372 | 2.775890 | 0.962969 | 0.018749 | 32.102956
SCFI | BP | 29.696249 | 36.681646 | 3.521435 | 0.956465 | 0.021458 | 35.331475
SCFI | GMDH | 28.687338 | 32.469763 | 3.459923 | 0.958508 | 0.018983 | 29.524676
SCFI | ANFIS | 34.211761 | 39.326185 | 4.063924 | 0.947827 | 0.023011 | 38.185880
SCFI | The proposed model | 4.469254 | 5.254923 | 0.536753 | 0.998968 | 0.003098 | 5.344411
CCFI | ORELM | 10.240264 | 13.011621 | 1.161968 | 0.987359 | 0.007452 | 12.731295
CCFI | BP | 10.592830 | 13.355790 | 1.198346 | 0.986741 | 0.007624 | 13.215159
CCFI | GMDH | 10.203530 | 13.001215 | 1.162990 | 0.987694 | 0.007428 | 13.078269
CCFI | ANFIS | 10.084362 | 12.768399 | 1.140371 | 0.987910 | 0.007308 | 12.697646
CCFI | The proposed model | 0.995431 | 1.278544 | 0.114937 | 0.999878 | 0.000731 | 1.288266
Table 5. Comparison results between models with the equal-weight strategy and the proposed model.

Dataset | Model | MAE | RMSE | MAPE (%) | IA | TIC | Std
SEAFI | ADP-ORELM-EW | 105.563264 | 129.581138 | 2.113399 | 0.946112 | 0.012936 | 79.910906
SEAFI | ADP-BP-EW | 113.873704 | 135.872823 | 2.219730 | 0.927242 | 0.013582 | 79.910906
SEAFI | ADP-GMDH-EW | 90.683083 | 245.850597 | 1.765016 | 0.834359 | 0.024187 | 249.904473
SEAFI | ADP-ANFIS-EW | 39.377648 | 47.877909 | 0.779728 | 0.991010 | 0.004721 | 43.485246
SEAFI | The proposed model | 27.635876 | 32.146729 | 0.545154 | 0.996079 | 0.003177 | 33.074908
DCFI | ADP-ORELM-EW | 5.899039 | 7.304674 | 0.667163 | 0.995566 | 0.004178 | 7.482022
DCFI | ADP-BP-EW | 7.638733 | 9.157015 | 0.873128 | 0.993064 | 0.005230 | 8.735189
DCFI | ADP-GMDH-EW | 10.556177 | 12.258772 | 1.193767 | 0.986967 | 0.007025 | 12.398004
DCFI | ADP-ANFIS-EW | 10.902471 | 13.981185 | 1.224123 | 0.983172 | 0.008015 | 14.024307
DCFI | The proposed model | 5.122103 | 6.506787 | 0.572040 | 0.996557 | 0.003723 | 6.694206
SCFI | ADP-ORELM-EW | 4.796620 | 5.436566 | 0.581625 | 0.998887 | 0.003204 | 5.488654
SCFI | ADP-BP-EW | 4.841608 | 5.680261 | 0.583479 | 0.998781 | 0.003349 | 5.772341
SCFI | ADP-GMDH-EW | 6.653787 | 7.532368 | 0.784850 | 0.997834 | 0.004445 | 7.512794
SCFI | ADP-ANFIS-EW | 4.984599 | 5.613779 | 0.597748 | 0.998811 | 0.003310 | 5.706959
SCFI | The proposed model | 4.469254 | 5.254923 | 0.536753 | 0.998968 | 0.003098 | 5.344411
CCFI | ADP-ORELM-EW | 1.172282 | 1.434608 | 0.135113 | 0.999848 | 0.000820 | 1.438433
CCFI | ADP-BP-EW | 1.306912 | 1.617994 | 0.150218 | 0.999805 | 0.000925 | 1.619969
CCFI | ADP-GMDH-EW | 1.877313 | 2.411816 | 0.215563 | 0.999565 | 0.001379 | 2.414805
CCFI | ADP-ANFIS-EW | 1.150139 | 1.415673 | 0.132375 | 0.999850 | 0.000809 | 1.426747
CCFI | The proposed model | 0.995431 | 1.278544 | 0.114937 | 0.999878 | 0.000731 | 1.288266
Table 6. Comparison results between different intelligent ensemble schemes and the proposed model.

Dataset | Model | MAE | RMSE | MAPE (%) | IA | TIC | Std
SEAFI | ADP-LASSO-MOGWO | 29.484178 | 34.217756 | 0.583031 | 0.995536 | 0.003382 | 35.151828
SEAFI | ADP-LASSO-MODA | 29.438289 | 34.409024 | 0.582734 | 0.995620 | 0.003400 | 35.385542
SEAFI | ADP-LASSO-MOPSO | 28.929883 | 33.154539 | 0.572354 | 0.995787 | 0.003276 | 34.088452
SEAFI | The proposed model | 27.635876 | 32.146729 | 0.545154 | 0.996079 | 0.003177 | 33.074908
DCFI | ADP-LASSO-MOGWO | 5.568783 | 6.935399 | 0.632117 | 0.996041 | 0.003974 | 6.767971
DCFI | ADP-LASSO-MODA | 5.525988 | 7.271350 | 0.612448 | 0.995762 | 0.004162 | 7.469158
DCFI | ADP-LASSO-MOPSO | 5.434726 | 6.905829 | 0.606732 | 0.996156 | 0.003952 | 7.096348
DCFI | The proposed model | 5.122103 | 6.506787 | 0.572040 | 0.996557 | 0.003723 | 6.694206
SCFI | ADP-LASSO-MOGWO | 4.657254 | 5.596515 | 0.563780 | 0.998825 | 0.003298 | 5.654825
SCFI | ADP-LASSO-MODA | 4.868545 | 5.737525 | 0.588101 | 0.998758 | 0.003382 | 5.814262
SCFI | ADP-LASSO-MOPSO | 4.615076 | 5.340995 | 0.554456 | 0.998929 | 0.003150 | 5.403378
SCFI | The proposed model | 4.469254 | 5.254923 | 0.536753 | 0.998968 | 0.003098 | 5.344411
CCFI | ADP-LASSO-MOGWO | 1.061382 | 1.375900 | 0.120120 | 0.999861 | 0.000787 | 1.359917
CCFI | ADP-LASSO-MODA | 1.142011 | 1.440428 | 0.131289 | 0.999846 | 0.000823 | 1.451698
CCFI | ADP-LASSO-MOPSO | 1.048825 | 1.284413 | 0.120066 | 0.999877 | 0.000734 | 1.294984
CCFI | The proposed model | 0.995431 | 1.278544 | 0.114937 | 0.999878 | 0.000731 | 1.288266
Table 7. The $P_{indicator}$ results of the comparison models and the proposed model on different datasets.

The proposed model vs. ADP-ORELM-EW
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.738206 | 0.131706 | 0.068249 | 0.150860 | 0.272255
RMSE | 0.751918 | 0.109230 | 0.033411 | 0.108785 | 0.250836
MAPE | 0.742048 | 0.142579 | 0.077150 | 0.149327 | 0.277776
IA | 0.052813 | 0.000995 | 0.000081 | 0.000031 | 0.013480
TIC | 0.754425 | 0.108974 | 0.033036 | 0.108883 | 0.251329

The proposed model vs. ADP-BP-EW
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.757311 | 0.329456 | 0.076907 | 0.238333 | 0.350502
RMSE | 0.763406 | 0.289420 | 0.074880 | 0.209797 | 0.334376
MAPE | 0.754405 | 0.344839 | 0.080081 | 0.234867 | 0.353548
IA | 0.074238 | 0.003517 | 0.000187 | 0.000074 | 0.019504
TIC | 0.766105 | 0.288100 | 0.075020 | 0.209737 | 0.334740

The proposed model vs. ADP-GMDH-EW
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.695248 | 0.514777 | 0.328314 | 0.469757 | 0.502024
RMSE | 0.869243 | 0.469214 | 0.302354 | 0.469883 | 0.527674
MAPE | 0.691133 | 0.520811 | 0.316108 | 0.466807 | 0.498715
IA | 0.193826 | 0.009716 | 0.001136 | 0.000314 | 0.051248
TIC | 0.868656 | 0.469995 | 0.302998 | 0.469819 | 0.527867

The proposed model vs. ADP-ANFIS-EW
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.298184 | 0.530189 | 0.103388 | 0.134512 | 0.266568
RMSE | 0.328569 | 0.534604 | 0.063924 | 0.096865 | 0.255990
MAPE | 0.300841 | 0.532694 | 0.102041 | 0.131738 | 0.266828
IA | 0.005114 | 0.013614 | 0.000157 | 0.000028 | 0.004728
TIC | 0.327138 | 0.535508 | 0.064027 | 0.096925 | 0.255899

The proposed model vs. ADP-LASSO-MOGWO
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.062688 | 0.080212 | 0.040367 | 0.062137 | 0.061351
RMSE | 0.060525 | 0.061801 | 0.061036 | 0.070758 | 0.063530
MAPE | 0.064965 | 0.095042 | 0.047939 | 0.043149 | 0.062774
IA | 0.000545 | 0.000517 | 0.000143 | 0.000018 | 0.000306
TIC | 0.060758 | 0.063069 | 0.060665 | 0.070913 | 0.063851

The proposed model vs. ADP-LASSO-MODA
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.061227 | 0.073088 | 0.082014 | 0.128352 | 0.086170
RMSE | 0.065747 | 0.105147 | 0.084113 | 0.112386 | 0.091848
MAPE | 0.064488 | 0.065979 | 0.087312 | 0.124552 | 0.085583
IA | 0.000461 | 0.000798 | 0.000210 | 0.000032 | 0.000375
TIC | 0.065651 | 0.105402 | 0.083855 | 0.112384 | 0.091823

The proposed model vs. ADP-LASSO-MOPSO
Indicator | SEAFI | DCFI | SCFI | CCFI | Average
MAE | 0.044729 | 0.057523 | 0.031597 | 0.050908 | 0.046189
RMSE | 0.030397 | 0.057783 | 0.016115 | 0.004569 | 0.027216
MAPE | 0.047523 | 0.057178 | 0.031930 | 0.042718 | 0.044837
IA | 0.000292 | 0.000403 | 0.000039 | 0.000001 | 0.000184
TIC | 0.030329 | 0.058027 | 0.016420 | 0.004582 | 0.027340
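The $P_{indicator}$ values in Table 7 are consistent with the absolute relative difference between a benchmark's metric and the proposed model's metric, e.g. for SEAFI, |105.563264 − 27.635876| / 105.563264 ≈ 0.738206 (MAE vs. ADP-ORELM-EW) and |0.946112 − 0.996079| / 0.946112 ≈ 0.052813 (IA). A one-line sketch of this interpretation:

```python
def p_indicator(m_compare, m_proposed):
    """Relative improvement of the proposed model over a benchmark.
    Consistent with Table 7: for error metrics (MAE, RMSE, MAPE, TIC) the
    proposed value is smaller, for IA it is larger, so the absolute
    relative difference covers both cases."""
    return abs(m_compare - m_proposed) / m_compare

# Reproduce two Table 7 entries (SEAFI, the proposed model vs. ADP-ORELM-EW)
p_mae = p_indicator(105.563264, 27.635876)   # MAE improvement, ~0.738206
p_ia = p_indicator(0.946112, 0.996079)       # IA improvement, ~0.052813
```

Larger $P_{indicator}$ values therefore indicate a larger gap in favor of the proposed model against that benchmark.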

Share and Cite

MDPI and ACS Style

Yang, W.; Zhang, H.; Yang, S.; Hao, Y. A Novel Intelligent Prediction Model for the Containerized Freight Index: A New Perspective of Adaptive Model Selection for Subseries. Systems 2024, 12, 309. https://doi.org/10.3390/systems12080309
