Entropy
  • Article
  • Open Access

29 March 2022

Forecasting Network Interface Flow Using a Broad Learning System Based on the Sparrow Search Algorithm

1 College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
3 College of Mechanical Engineering, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.

Abstract

In this paper, we propose a broad learning system based on the sparrow search algorithm. First, to avoid the complicated manual parameter tuning process and obtain the best combination of hyperparameters, the sparrow search algorithm is used to optimize the shrinkage coefficient (r) and regularization coefficient (λ) of the broad learning system, improving the prediction accuracy of the model. Second, the broad learning system is used to build a network interface flow forecasting model: the flow values in the time period [T − 11, T] are used as the features for the traffic at moment T + 1, and the hyperparameters output in the previous step are fed into the network to train the broad learning system network traffic prediction model. Finally, to verify model performance, this paper trains the prediction model on two public network flow datasets and on real traffic data from an enterprise cloud platform switch interface, and compares the proposed model with the broad learning system, long short-term memory, and other methods. The experiments show that the prediction accuracy of this method is higher than that of the other methods, with the moving average reaching 97%, 98%, and 99% on the three datasets, respectively.

1. Introduction

The number of cloud platform users has increased in tandem with the development of internet technologies. In the context of high concurrency and limited cloud platform resources, how to allocate resources reasonably is one of the problems studied by cloud platform managers [1]. Forecasting the traffic of cloud platform network interfaces is an effective way to achieve reasonable resource allocation: by predicting the future traffic of each interface, its resource demand can be judged, and resources can be allocated and the network planned accordingly, so that resources are allocated dynamically with the number of requests and load balancing is achieved [2]. However, with the rapid increase in cloud platform access, researchers extracting internet traffic features for network traffic modeling and prediction must not only consider complex characteristics such as nonlinearity and multi-scale behavior, but also face decreasing prediction accuracy and increasing resource consumption as the data scale grows. Therefore, research on high-speed, high-efficiency, and high-precision network traffic prediction methods can not only further optimize network resource provisioning, planning, and network security, but is also highly significant for the development of the internet and its integration with other industries.
Network traffic forecasting belongs to the field of time series forecasting. Flow prediction methods include traditional statistical analysis [3,4] and machine learning. Traditional statistical analysis uses statistical and mathematical methods to estimate the future trend of internet traffic; common models include the autoregressive integrated moving average (ARIMA) model [5] and the generalized autoregressive conditional heteroskedasticity (GARCH) model [6,7]. Machine learning methods can be divided into deep learning and classical methods such as support vector machines (SVM) [8,9]. In recent years, deep neural networks have been widely used in many fields due to their good feature extraction ability [10,11,12], and they have also become a common tool in network traffic prediction. Miguel [13] used ensembles of artificial neural networks to predict long-term internet flow, presenting four network flow prediction models based on ensembles of time-lagged feedforward networks (TLFNs) and demonstrating their superiority by comparison with the classical Holt–Winters approach. Nie [14] fused deep belief networks and Gaussian models: the low-pass components of network traffic that describe its long-range dependence were extracted using the discrete wavelet transform, and deep belief networks were then learned from these components to build prediction models. Fang [15] used graph convolutional neural networks and long short-term memory (LSTM) to capture the spatial and temporal aspects, respectively, of a single cell in a cellular network and build a prediction model. Zhang [16] proposed a spatio-temporal graph convolutional gated recurrent unit (GC-GRU) model that captures the spatial features of network traffic using a graph convolutional network (GCN) and further processes the spatio-temporal features using gated recurrent units (GRU) to improve prediction performance.
Deep models usually have complex structures and many parameters, so they require repeated iterations to train, consuming long training times and substantial computational resources. Chen [17] proposed a new shallow neural network based on random vector functional-link neural networks (RVFLNN) [18,19,20], named the broad learning system (BLS). It contains only one hidden layer, consisting of feature mapping nodes and enhancement nodes, which reduces the complexity of the neural network while retaining some feature-capturing ability. Chen also proposed an incremental learning algorithm to calculate the output weights of newly added hidden layer nodes in the BLS, so the model can be trained in a shorter time while achieving good accuracy [17,21,22]. Since its proposal, BLS has received much attention and developed rapidly [23,24]. Chen's experiment using BLS to predict short-term wind power demonstrated that it performs well in time-series prediction [25]. However, as with other neural networks, the hyperparameters of the BLS have a large impact on its accuracy, and researchers usually need to train the model repeatedly to tune them. This manual tuning not only consumes much time and effort but also wastes resources such as the electricity spent on repeated training. Automatic hyperparameter optimization methods, represented by swarm intelligence optimization algorithms such as ant colony algorithms [26] and particle swarm algorithms [27], have been developed as a result. In recent years, there have been numerous studies on hyperparameter optimization with swarm intelligence algorithms. Zhou [28] improved the grey wolf algorithm and used it to optimize hyperparameters such as the kernel parameters of support vector machines. Xu [29] used the whale optimization algorithm to optimize the learning rate, training time, and the number of nodes in the two hidden layers of a BiLSTM_Attention model to maximize its performance. However, the ant colony and particle swarm algorithms suffer from problems such as easily falling into local optima and slow convergence. The sparrow search algorithm (SSA) [30] is a new swarm intelligence optimization algorithm with good stability, strong global search ability, and fast convergence, and it has therefore attracted extensive attention and research [31,32]. Tian [33] used SSA to optimize the hyperparameters of LSTM networks. Gai [34] used SSA to compute the best learning rate and batch size of deep belief networks. Song [35] used SSA to optimize the penalty parameter and kernel function parameters of the least squares support vector machine (LSSVM) to improve its prediction accuracy and generalization ability.
To establish a fast and accurate network traffic prediction model, this paper applies BLS to network traffic prediction. At the same time, to quickly select the optimal hyperparameters and reduce their influence on the accuracy of the BLS, this paper combines SSA with BLS: SSA searches for the optimal combination of hyperparameters, which is then used to train the BLS and build the network traffic prediction model.
The remainder of this paper is organized as follows. Section 2 introduces the relevant methods, including the BLS and the SSA. Section 3 introduces the proposed broad learning system based on the sparrow search algorithm (SSA-BLS). Section 4 presents our experiments: the SSA-BLS model is trained on two public datasets and on real traffic data collected from the switch interface of an enterprise cloud platform, and its performance is compared with that of other models. Section 5 summarizes our work, presents the limitations of the current approach, and briefly describes future work.

3. Broad Learning System Based on the Sparrow Search Algorithm (SSA-BLS)

To minimize the impact of network hyperparameters and improve the accuracy of network flow forecasting, this paper employs the sparrow search algorithm to optimize the hyperparameters of the broad learning system, namely the shrinkage coefficient (r) and the regularization coefficient (λ), and uses the resulting optimal hyperparameters to train the model. We name this method the Sparrow Search Algorithm-Broad Learning System (SSA-BLS); the algorithm proceeds in five steps.
Step 1: Parameter initialization. Determine the parameters of SSA, such as the explorer proportion and the population size. Determine the ranges of the shrinkage coefficient (r) and regularization coefficient (λ), and generate p groups of initial hyperparameters (p is the population size) as the initial positions of the sparrows. The sparrow population is expressed as:
$$X = \begin{bmatrix} r_1 & \lambda_1 \\ r_2 & \lambda_2 \\ \vdots & \vdots \\ r_p & \lambda_p \end{bmatrix}$$
where r and λ are randomly generated. They are the hyperparameters to be optimized.
Step 2: Choose the root mean square error (RMSE) of the BLS's predictions as the objective function. The p sets of initial hyperparameters generated in the first step are used to train the BLS and obtain the initial objective values. The objective function of the i-th sparrow is calculated as follows:
$$f_i = \sqrt{\frac{\sum_{j=1}^{n} \left( \hat{y}_j^i - y_j \right)^2}{n}}, \quad i = 1, 2, \ldots, p$$
where $\hat{y}_j^i$ is the predicted value of the $j$-th sample of the BLS trained with the $i$-th set of hyperparameters, $y_j$ is the true value of the $j$-th sample, and $n$ is the number of training samples. The smaller $f_i$ is, the better.
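As a concrete illustration of Step 2, a minimal sketch in Python follows. Here `BLS` stands for any broad learning system implementation with a scikit-learn-style `fit`/`predict` interface and `shrinkage`/`regularization` constructor arguments; these names are assumptions for illustration, not an interface specified in the paper.

```python
import numpy as np

def fitness(r, lam, X, y):
    """RMSE objective f_i for one candidate (r, lambda) pair.

    BLS is a hypothetical broad-learning-system class with a
    scikit-learn-style fit/predict interface (an assumption).
    """
    model = BLS(shrinkage=r, regularization=lam)  # hypothetical constructor
    model.fit(X, y)
    y_hat = model.predict(X)                      # predictions on the n training samples
    return np.sqrt(np.mean((y_hat - y) ** 2))     # the RMSE formula for f_i above
```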
Step 3: Input the objective function into SSA and execute the algorithm, updating the sparrow population and the objective values according to its update rules so as to optimize the hyperparameters of the BLS.
Step 4: When the predefined number of iterations is reached, the optimization is complete, and the minimum value of the objective function is output:
$$f_m = \min\left( [f_1, f_2, \ldots, f_p] \right)$$
where $m$ is the index of the sparrow with the minimum objective value. The hyperparameters obtained by SSA are then:
$$[r, \lambda] = x_m = [r_m, \lambda_m]$$
Step 5: Feed the hyperparameters r and λ obtained in the previous step into the BLS, then train it to build the network flow prediction model.
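Putting the five steps together, the outer optimization loop could be sketched as follows. The search ranges are those given in Section 4.2, `fitness` is the function sketched above, and the position update is deliberately simplified to a drift-toward-the-best rule with noise; it stands in for, but is not, the full producer/scrounger/scout update of SSA [30].

```python
import numpy as np

# Search ranges for r and lambda, taken from Section 4.2.
R_RANGE = (0.09, 0.999999)
LAM_RANGE = (2.0 ** -35, 2.0 ** -30)

def ssa_bls(X, y, pop_size=50, n_iter=5, seed=0):
    """Steps 1-4: search for the best (r, lambda) pair for the BLS."""
    rng = np.random.default_rng(seed)
    # Step 1: p random (r, lambda) pairs as initial sparrow positions.
    pop = np.column_stack([
        rng.uniform(*R_RANGE, pop_size),
        rng.uniform(*LAM_RANGE, pop_size),
    ])
    # Step 2: initial objective values.
    f = np.array([fitness(r, lam, X, y) for r, lam in pop])
    for _ in range(n_iter):  # Steps 3-4: iterate until the budget is spent
        best = pop[np.argmin(f)]
        # Simplified update: drift toward the best sparrow, plus small noise
        # scaled by the current population spread (not the full SSA rules).
        step = rng.uniform(0.0, 1.0, (pop_size, 1)) * (best - pop)
        noise = rng.normal(0.0, 0.01, pop.shape) * (np.ptp(pop, axis=0) + 1e-12)
        pop = pop + step + noise
        pop[:, 0] = np.clip(pop[:, 0], *R_RANGE)
        pop[:, 1] = np.clip(pop[:, 1], *LAM_RANGE)
        f = np.array([fitness(r, lam, X, y) for r, lam in pop])
    r_m, lam_m = pop[np.argmin(f)]  # x_m, the position with minimal f_m
    return r_m, lam_m
```

The returned pair (r_m, λ_m) is then used to train the final BLS prediction model (Step 5).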
The SSA-BLS flow chart is given in Figure 4.
Figure 4. The algorithm flow chart of SSA-BLS.

4. Experimentation

All experimental programs are developed in Python 3.8; the main packages used include NumPy 1.21 and pandas 1.3, and the deep learning models are implemented in PyTorch. The experimental environment is Windows 10, an Intel(R) Core(TM) i5-1135G7 CPU @ 2.40 GHz, and 16.0 GB RAM.

4.1. Datasets

The experiments use the core network traffic dataset of European cities and the UK academic backbone network traffic dataset.
The core network traffic dataset of European cities: traffic from a private ISP serving 11 European cities. The traffic, in bits, on a transatlantic link was collected from 06:57 on 7 June 2005 to 11:17 on 31 July 2005, with a sampling interval of five minutes.
The UK academic backbone traffic dataset: aggregated traffic from the UK academic network backbone, collected from 09:30 on 19 November 2004 to 11:11 on 27 January 2005, with a sampling interval of five minutes.
In the core network traffic dataset of European cities, we use the data from 1 July to 25 July 2005 as the training set and the data from 26 July to 28 July as the test set; in the UK academic backbone traffic dataset, the training set is the data from 1 January to 24 January 2005 and the test set is the data from 25 January to 27 January.
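As a concrete illustration, these date-based splits can be expressed with pandas slicing on a datetime-indexed series; the file and column names below are placeholders for illustration, not names from the paper.

```python
import pandas as pd

# Hypothetical file and column names; the paper does not specify them.
flow = pd.read_csv("european_core_traffic.csv",
                   parse_dates=["timestamp"],
                   index_col="timestamp")["bits"]

train = flow.loc["2005-07-01":"2005-07-25"]  # training set (inclusive slice)
test = flow.loc["2005-07-26":"2005-07-28"]   # test set
```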

4.2. Parameters and Evaluation Indicators

The SSA-BLS parameters are chosen as follows: the population size is 50, the explorer proportion is 20%, and the maximum number of iterations is 5; the number of windows in the feature mapping layer is 10, the number of nodes within each window is 10, and the number of enhancement nodes is 50; the shrinkage coefficient (r) and regularization coefficient (λ) are searched in the ranges [0.09, 0.999999] and [2^−35, 2^−30], respectively.
Mean squared error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and moving average (MA) are used as evaluation indicators. These indicators are calculated as follows:
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}$$
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{y}_i - y_i \right|$$
$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right|$$
$$\mathrm{MA} = 1 - \mathrm{MAPE} = \left( 1 - \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right| \right) \times 100\%$$
where $n$ is the number of samples, $y_i$ is the true value, and $\hat{y}_i$ is the predicted output. The smaller the MSE, RMSE, MAE, and MAPE, the better, while an MA closer to 100% indicates better prediction performance.
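For reference, these five indicators map directly onto a few lines of NumPy; a minimal sketch follows (note that MAPE assumes no zero true values):

```python
import numpy as np

def metrics(y_true, y_pred):
    """Compute the five evaluation indicators of Section 4.2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true))  # undefined if any y_true is zero
    ma = (1.0 - mape) * 100.0             # MA, in percent
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "MA": ma}
```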

4.3. Results and Discussion

The flow values in [T − 11, T] were input into the SSA-BLS to predict the flow at moment T + 1 in the experiment. We compare SSA-BLS with the similarly structured BLS, the Extreme Learning Machine (ELM) [37], Stochastic Configuration Networks (SCN) [38], RVFLNN, dRVFL (deep RVFL, a variant of RVFL) [39], and the LSTM [40] commonly used in network traffic prediction. Each model is run 100 times independently, the prediction metrics are computed for each run, and the averages are taken as the final result for each model. The parameters of each model are as follows: the values of r and λ for BLS are automatically selected from {0.1, 0.5, 0.9, 0.99, 0.9999, 0.99999} and {2^−30, 2^−20, 2^−10, 0.5, 1, 5, 10}, respectively, and the remaining parameters are the same as those of SSA-BLS; the SCN has at most 250 hidden layer nodes, a training tolerance of 0.001, and at most 100 candidate nodes; the regularization factor of RVFL is 1 × 10^−3 and its hidden layer has 100 nodes; the dRVFL parameters are the same as those of RVFL; the hidden layer of ELM contains 200 nodes, and the mixing coefficient for distance and dot-product input activations is 1.0; the LSTM contains 3 hidden layers, each with 12 blocks, and is trained with a learning rate of 1 × 10^−2, a batch size of 64, and 15 epochs. Table 1 and Table 2 show the prediction performance of the different models on the two datasets.
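As an illustration of the input construction described above, the sliding window of 12 lagged values ([T − 11, T]) per target (T + 1) can be built as follows; the window length is inferred from the paper's notation.

```python
import numpy as np

def make_windows(series, window=12):
    """Turn a 1-D flow series into (X, y) training pairs: the flow
    values in [T-11, T] are the features, the flow at T+1 the target."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window)])
    y = series[window:]  # one target per window
    return X, y
```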
Table 1. Experimental results of a core network traffic dataset in European cities.
Table 2. Experimental results of UK academic backbone network traffic dataset.
Figure 5 shows the predicted and true values of the SSA-BLS model and the other models on the test sets of the public datasets. Moreover, to further validate the prediction accuracy of the SSA-BLS model, the model is applied to a private traffic dataset derived from the real incoming traffic of the switch interfaces of an enterprise from 5 October to 18 October 2021. We use the data from 5 October to 16 October 2021 as training data and the data from 17 October to 18 October as test data.
Figure 5. (a) Prediction results for a core network traffic dataset in European cities; (b) UK academic backbone network traffic dataset forecast results.
Since the sampling interval of the enterprise switch interface traffic data is irregular, the data are first resampled: the average interface traffic within each 5-min window is calculated, and windows with no traffic data are filled with the previous value. Meanwhile, the original data contain large abnormal traffic values; to lessen their influence, the data are smoothed with a spectral smoother. Table 3 shows the experimental results.
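A sketch of this resampling with pandas, assuming the raw samples form a Series with a DatetimeIndex; the spectral smoothing step is omitted because the paper does not specify its implementation.

```python
import pandas as pd

def resample_5min(raw: pd.Series) -> pd.Series:
    """Map irregular switch-interface samples onto a 5-minute grid.

    Windows containing data get their mean; empty windows are
    forward-filled with the previous value, as described in the text.
    """
    regular = raw.resample("5min").mean()  # average traffic per 5-min window
    return regular.ffill()                 # fill empty windows with prior value
```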
Table 3. Enterprise cloud platform switch interface traffic data set experimental results.
Figure 6 shows the predicted and true values of SSA-BLS compared with the other models on the test data of the private dataset. It is clear from Table 1, Table 2 and Table 3 that the hyperparameters strongly influence BLS: with poor hyperparameters, the prediction performance of BLS degrades. The results show that the SSA-BLS model has better prediction accuracy than the other models on both the UK academic backbone network traffic dataset and the enterprise cloud platform switch interface traffic dataset, and its prediction performance on the European urban core network traffic dataset is only slightly below that of SCN. By providing BLS with optimal hyperparameters through SSA, the SSA-BLS model captures the temporal characteristics of the traffic better, and its network traffic prediction capability improves considerably compared with the original BLS model.
Figure 6. Enterprise cloud platform switch interface traffic dataset prediction results.
Meanwhile, this paper uses BLS for network traffic prediction because its flat, broadly expanding network structure gives it a short training time. The running time of BLS within SSA-BLS is the main factor affecting the total time of SSA-BLS. To verify the time-consumption advantage of the SSA-BLS model, we compare the running time of BLS with the running time of one LSTM epoch on the three datasets; the experimental results are shown in Figure 7. In Figure 7, dataset 1, dataset 2, and dataset 3 are the UK academic backbone network traffic dataset, the European urban core network traffic dataset, and the enterprise cloud platform switch interface traffic dataset, respectively. The results show that BLS completes training in a shorter time, and the larger the data volume, the greater its advantage.
Figure 7. BLS and LSTM runtime.

5. Conclusions, Limitations, and Future Research

Predicting future traffic on cloud platform interfaces can assist the cloud platform in provisioning resources and planning the network, and it is an effective way to achieve dynamic resource allocation and load balancing as request volumes change. In this paper, we propose a model named SSA-BLS to predict network interface traffic. The model uses SSA to optimize two hyperparameters of BLS, quickly obtaining the optimal hyperparameter combination and enhancing the performance of BLS. At the same time, the model uses BLS to capture the temporal features of the traffic while keeping the training time of the prediction model short. We apply SSA-BLS to short-term network traffic prediction, selecting two public network traffic datasets and a real dataset of network switch interface traffic from an enterprise cloud platform for the experiments. Finally, we compare SSA-BLS with other models; the experiments show that SSA-BLS selects better hyperparameters, bringing the network traffic prediction accuracy above 97%.
Currently, most network traffic prediction models impose a strict sampling interval on the training data, requiring the samples to be equally spaced. Sometimes, frequent sampling is required to obtain more fine-grained data. However, long-term frequent sampling increases resource consumption, and it is difficult to obtain equally spaced data because packet loss during network transmission is inevitable. Therefore, future research will address prediction modeling for non-equally spaced sampled data, to relax the spacing requirement and improve the generalizability of the model.

Author Contributions

Project administration, S.L.; Writing—original draft, X.L.; Writing—review & editing, P.Z. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Key R&D Program under grant no. 2020YFB1713300, and by the Higher Education Project of Guizhou Province under grants no. [2020]005 and [2020]009.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, S.; Chana, I. A Survey on Resource Scheduling in Cloud Computing: Issues and Challenges. J. Grid Comput. 2016, 14, 217–264.
  2. Katris, C.; Daskalaki, S. Comparing Forecasting Approaches for Internet Traffic. Expert Syst. Appl. 2015, 42, 8172–8183.
  3. Yang, J.; Xiao, X.; Mao, S.; Rao, C.; Wen, J. Grey Coupled Prediction Model for Traffic Flow with Panel Data Characteristics. Entropy 2016, 18, 454.
  4. Vo, N.; Ślepaczuk, R. Applying Hybrid ARIMA-SGARCH in Algorithmic Investment Strategies on S&P500 Index. Entropy 2022, 24, 158.
  5. Zhong-Da, T.; Shu-Jiang, L.; Yan-Hong, W.; Xiang-Dong, W. Network Traffic Prediction Based on ARIMA with Gaussian Process Regression Compensation. J. Beijing Univ. Posts Telecommun. 2017, 40, 65.
  6. Kim, S. Forecasting Internet Traffic by Using Seasonal GARCH Models. J. Commun. Netw. 2011, 13, 621–624.
  7. Kim, M. Network Traffic Prediction Based on INGARCH Model. Wirel. Netw. 2020, 26, 6189–6202.
  8. Alekseeva, D.; Stepanov, N.; Veprev, A.; Sharapova, A.; Lohan, E.S.; Ometov, A. Comparison of Machine Learning Techniques Applied to Traffic Prediction of Real Wireless Network. IEEE Access 2021, 9, 159495–159514.
  9. Wang, Q.; Fan, A.; Shi, H. Network Traffic Prediction Based on Improved Support Vector Machine. Int. J. Syst. Assur. Eng. Manag. 2017, 8, 1976–1980.
  10. Nabipour, M.; Nayyeri, P.; Jabani, H.; Mosavi, A.; Salwana, E.; Shahab, S. Deep Learning for Stock Market Prediction. Entropy 2020, 22, 840.
  11. Liu, F.; Liu, B.; Sun, C.; Liu, M.; Wang, X. Deep Belief Network-Based Approaches for Link Prediction in Signed Social Networks. Entropy 2015, 17, 2140–2169.
  12. Huang, Z.; Xia, J.; Li, F.; Li, Z.; Li, Q. A Peak Traffic Congestion Prediction Method Based on Bus Driving Time. Entropy 2019, 21, 709.
  13. Miguel, M.L.F.; Penna, M.C.; Nievola, J.C.; Pellenz, M.E. New Models for Long-Term Internet Traffic Forecasting Using Artificial Neural Networks and Flow Based Information. In Proceedings of the 2012 IEEE Network Operations and Management Symposium, Maui, HI, USA, 16–20 April 2012; pp. 1082–1088.
  14. Nie, L.; Jiang, D.; Yu, S.; Song, H. Network Traffic Prediction Based on Deep Belief Network in Wireless Mesh Backbone Networks. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–5.
  15. Fang, L.; Cheng, X.; Wang, H.; Yang, L. Mobile Demand Forecasting via Deep Graph-Sequence Spatiotemporal Modeling in Cellular Networks. IEEE Internet Things J. 2018, 5, 3091–3101.
  16. Zhang, K.; Zhao, X.; Li, X.; You, X.; Zhu, Y. Network Traffic Prediction via Deep Graph-Sequence Spatiotemporal Modeling Based on Mobile Virtual Reality Technology. Wirel. Commun. Mob. Comput. 2021, 2021, 2353875.
  17. Chen, C.P.; Liu, Z. Broad Learning System: An Effective and Efficient Incremental Learning System without the Need for Deep Architecture. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 10–24.
  18. Pao, Y.-H.; Takefuji, Y. Functional-Link Net Computing: Theory, System Architecture, and Functionalities. Computer 1992, 25, 76–79.
  19. Pao, Y.-H.; Park, G.-H.; Sobajic, D.J. Learning and Generalization Characteristics of the Random Vector Functional-Link Net. Neurocomputing 1994, 6, 163–180.
  20. Igelnik, B.; Pao, Y.-H. Stochastic Choice of Basis Functions in Adaptive Function Approximation and the Functional-Link Net. IEEE Trans. Neural Netw. 1995, 6, 1320–1329.
  21. Chen, C.P.; Wan, J.Z. A Rapid Learning and Dynamic Stepwise Updating Algorithm for Flat Neural Networks and the Application to Time-Series Prediction. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 62–72.
  22. Gong, X.; Zhang, T.; Chen, C.P.; Liu, Z. Research Review for Broad Learning System: Algorithms, Theory, and Applications. IEEE Trans. Cybern. 2021, 1–29.
  23. Jin, J.-W.; Chen, C.P. Regularized Robust Broad Learning System for Uncertain Data Modeling. Neurocomputing 2018, 322, 58–69.
  24. Chen, C.P. Broad Learning System and Its Structural Variations. In Proceedings of the 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 13–15 September 2018; pp. 000011–000012.
  25. Chen, C.P.; Liu, Z.; Feng, S. Universal Approximation Capability of Broad Learning System and Its Structural Variations. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 1191–1204.
  26. Gambardella, M.; Martinoli, M.B.A.; Stützle, R.P.T. Ant Colony Optimization and Swarm Intelligence. In 5th International Workshop; Springer: Berlin/Heidelberg, Germany, 2006.
  27. Figueiredo, E.M.; Ludermir, T.B.; Bastos-Filho, C.J. Many Objective Particle Swarm Optimization. Inf. Sci. 2016, 374, 115–134.
  28. Zhou, Z.; Zhang, R.; Wang, Y.; Zhu, Z.; Zhang, J. Color Difference Classification Based on Optimization Support Vector Machine of Improved Grey Wolf Algorithm. Optik 2018, 170, 17–29.
  29. Xu, X.; Liu, C.; Zhao, Y.; Lv, X. Short-Term Traffic Flow Prediction Based on Whale Optimization Algorithm Optimized BiLSTM_Attention. Concurr. Comput. Pract. Exp. 2022, e6782.
  30. Xue, J.; Shen, B. A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
  31. Zhang, C.; Ding, S. A Stochastic Configuration Network Based on Chaotic Sparrow Search Algorithm. Knowl.-Based Syst. 2021, 220, 106924.
  32. Zhang, F.; Sun, W.; Wang, H.; Xu, T. Fault Diagnosis of a Wind Turbine Gearbox Based on Improved Variational Mode Algorithm and Information Entropy. Entropy 2021, 23, 794.
  33. Tian, Z.; Chen, H. A Novel Decomposition-Ensemble Prediction Model for Ultra-Short-Term Wind Speed. Energy Convers. Manag. 2021, 248, 114775.
  34. Gai, J.; Zhong, K.; Du, X.; Yan, K.; Shen, J. Detection of Gear Fault Severity Based on Parameter-Optimized Deep Belief Network Using Sparrow Search Algorithm. Measurement 2021, 185, 110079.
  35. Song, C.; Yao, L.; Hua, C.; Ni, Q. A Water Quality Prediction Model Based on Variational Mode Decomposition and the Least Squares Support Vector Machine Optimized by the Sparrow Search Algorithm (VMD-SSA-LSSVM) of the Yangtze River, China. Environ. Monit. Assess. 2021, 193, 363.
  36. Devarapalli, R.; Sinha, N.K.; Rao, B.V.; Knypinski, L.; Lakshmi, N.J.N.; Márquez, F.P.G. Allocation of Real Power Generation Based on Computing over All Generation Cost: An Approach of Salp Swarm Algorithm. Arch. Electr. Eng. 2021, 70, 337–349.
  37. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme Learning Machine: Theory and Applications. Neurocomputing 2006, 70, 489–501.
  38. Wang, D.; Li, M. Stochastic Configuration Networks: Fundamentals and Algorithms. IEEE Trans. Cybern. 2017, 47, 3466–3479.
  39. Shi, Q.; Katuwal, R.; Suganthan, P.N.; Tanveer, M. Random Vector Functional Link Neural Network Based Ensemble Deep Learning. Pattern Recognit. 2021, 117, 107978.
  40. Zhuo, Q.; Li, Q.; Yan, H.; Qi, Y. Long Short-Term Memory Neural Network for Network Traffic Prediction. In Proceedings of the 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, China, 24–26 November 2017; pp. 1–6.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
