Article

A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

1 College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua 321004, China
2 School of Astronautics and Aeronautics, University of Electronic Science and Technology of China, Chengdu 611731, China
3 Department of Electrical, Computer, Software, and Systems Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
* Authors to whom correspondence should be addressed.
Sensors 2018, 18(1), 233; https://doi.org/10.3390/s18010233
Submission received: 16 November 2017 / Revised: 15 December 2017 / Accepted: 11 January 2018 / Published: 15 January 2018
(This article belongs to the Collection Smart Communication Protocols and Algorithms for Sensor Networks)

Abstract

Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as test data. The root mean squared error (RMSE) and mean absolute percentage error (MAPE) are employed to evaluate the performance of the prediction models. The experimental results show that among the three tested algorithms the NAPSO-SVM method has the best prediction precision and the smallest prediction errors, and that it is an effective method for predicting the dynamic measurement errors of sensors.

1. Introduction

Today, sensors are widely used in the real world [1,2,3,4], and sensor error is one of the key criteria for evaluating the quality of measurement results. With the development of modern measurement technology, dynamic measurement has gradually become the mainstream of modern measurement techniques.
As an effective way to improve measurement accuracy and reduce measurement errors, real-time error correction of sensors has been widely used in dynamic measurements. Predicting the dynamic measurement error is useful for correcting the errors of sensors. Dynamic measurement errors of sensors are difficult to model with traditional mathematics because they have four features [5]: they are time-varying, random, correlated and dynamic. Because of this complexity, predicting dynamic errors has been a popular research field [6,7,8,9].
In recent years, several modeling methods have been used to predict dynamic errors, such as grey theory, Bayesian networks and neural networks. Every method has its own advantages and drawbacks. The harmonic analysis method is suitable for modeling periodic sequences, but not random sequences [10]. Bayesian networks are a prediction modeling method; however, they require a prior distribution and independent samples, which are difficult to obtain in real systems [11]. A grey theory model can be constructed from only a few samples, but it can only depict a monotonically increasing or decreasing process [12,13]. Artificial neural networks have good non-linear mapping performance; however, they have disadvantages such as over-fitting and easily falling into a local minimum [14,15,16].
Support vector machine (SVM) adopts structural risk minimization to improve its generalization ability [17]. It can better solve problems involving nonlinear data and small samples, and it has been widely applied to function fitting problems. However, the generalization ability of SVM depends heavily on appropriate parameters, and the model's parameters have a large influence on the precision of its predictions [18]. Thus, many optimization algorithms have been adopted to optimize the SVM parameters [19,20,21], such as the particle swarm optimization (PSO) algorithm, the genetic algorithm (GA) and the glowworm swarm optimization (GSO) algorithm. These methods have limitations: particle swarm optimization and the genetic algorithm fall into local extremes easily [22], and the glowworm swarm optimization algorithm has low convergence precision and a slow convergence speed [23]. The NAPSO algorithm is an improved particle swarm optimization algorithm based on a natural selection strategy and a simulated annealing mechanism; these two methods are used to improve the global search ability and convergence speed. In this study, a method for predicting the dynamic measurement errors of sensors based on an NAPSO-optimized support vector machine is proposed.
The rest of the paper is organized as follows: in Section 2, a detailed overview of the SVM algorithm is provided. Then, in Section 3, the PSO and NAPSO algorithms and the process of optimization are described briefly. Section 4 reports on a simulation of the dynamic measurement error prediction model. The results of experiments are discussed in Section 5. Conclusions are drawn in the last section.

2. SVM Algorithm

2.1. SVM

SVM is a machine learning method based on statistical learning theory developed in the mid-1990s. The basic idea of SVM is that the data of the input space Rn are mapped to a high-dimensional feature space F by a nonlinear mapping, and the linear regression is then carried out in that high-dimensional feature space [24].
For a given training dataset {(xi, yi), i = 1, 2, …, n}, xi is an n-dimensional input vector and yi is the corresponding output value; φ(x) is the nonlinear mapping from the input space Rn to the high-dimensional feature space F:
$$\mathbb{R}^n \to F : x \mapsto \varphi(x) \tag{1}$$
The regression function of SVM is formulated as follows:
$$f(x) = [\omega \cdot \varphi(x)] + b, \quad \omega \in \mathbb{R}^m,\ b \in \mathbb{R} \tag{2}$$
where ω is the weight vector and b is the threshold. The main goal of the SVM is to find the optimal ω; the weight vector ω can be expressed as a linear combination of the training examples. Thus, Equation (2) takes the form:
$$f(x) = \sum_{i=1}^{n} \alpha_i x_i^{T} x + b, \quad b \in \mathbb{R} \tag{3}$$
where the αi are Lagrangian multipliers. In the feature space, Equation (3) can be expressed as follows:
$$f(x) = \sum_{i=1}^{n} \alpha_i \varphi(x_i)^{T} \varphi(x) + b, \quad b \in \mathbb{R} \tag{4}$$
Introducing the kernel function K(xi, xj):
$$K(x_i, x_j) = \varphi(x_i)^{T} \varphi(x_j) \tag{5}$$
Finally, the SVM regression function is formulated as:
$$f(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x) + b \tag{6}$$

2.2. Kernel Function

The kernel function is a key concept of SVM; the performance of SVM mainly depends on it. As shown in Equation (1), the kernel function establishes a relation between the input space Rn and the high-dimensional feature space F. Different choices of kernel function construct different regression models:
$$K(x_i, x_j) = \varphi(x_i)^{T} \cdot \varphi(x_j) \tag{7}$$
Common kernel functions include the polynomial kernel, the linear kernel, the Fourier kernel and the radial basis function (RBF) kernel. The kernel function parameters have a direct influence on the complexity of the function; the RBF kernel has the advantages of fewer parameters and good performance. Thus, the RBF kernel function is used in this paper.
The RBF kernel function is expressed as follows:
$$K(x_i, x_j) = \exp\left\{-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right\} \tag{8}$$
where σ is the width coefficient of the kernel function.
The SVM parameters determine both its generalization ability and its learning ability; the penalty coefficient C and the RBF kernel width σ have a direct impact on the accuracy and efficiency of the SVM prediction model. C adjusts the balance between generalization and empirical error. When C is larger, the model's complexity increases and it easily falls into the "over-fitting" phenomenon, but if C is too small, the model's complexity is reduced and it easily falls into "under-fitting". The value of σ affects the complexity of the sample data distribution in the feature space. In this paper, the NAPSO algorithm is used to optimize these two parameters to achieve better prediction results, as illustrated by the sketch below.
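To make the role of these two parameters concrete, the following minimal Python sketch fits an RBF-kernel SVR under two (C, σ) settings. This is not the authors' Matlab/Libsvm code: scikit-learn's SVR and the toy sine data are illustrative assumptions, and scikit-learn parameterizes the RBF kernel by gamma = 1/(2σ²).

```python
# Illustrative sketch (not the paper's Matlab/Libsvm code): how the penalty
# coefficient C and kernel width sigma trade off fit against smoothness.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 120).reshape(-1, 1)
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(120)  # noisy toy signal

for C, sigma in [(1.0, 1.0), (1000.0, 0.05)]:
    gamma = 1.0 / (2.0 * sigma ** 2)  # scikit-learn's gamma vs. the paper's sigma
    model = SVR(kernel="rbf", C=C, gamma=gamma).fit(x, y)
    err = np.sqrt(np.mean((model.predict(x) - y) ** 2))
    print(f"C={C}, sigma={sigma}: training RMSE = {err:.4f}")
# A large C with a narrow kernel chases the noise ("over-fitting");
# a small C with a wide kernel under-fits.
```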

3. SVM Parameters Optimization Based on NAPSO

3.1. PSO

Particle swarm optimization was proposed by Eberhart and Kennedy in 1995 [25]; it was derived from research on the preying behavior of bird flocks. When a flock of birds searches an area for food at random, and there is only one piece of food in the area, the most effective and simple strategy is to follow the bird that is closest to the food.
In a PSO algorithm, every single solution is a particle in the search space. Each particle has a fitness value, which is determined by an optimization function, as well as its own velocity and position. The velocity and position of each particle are updated using the particle's best position and the global best position, according to the following expressions:
$$v_{i,d}(t+1) = \omega v_{i,d}(t) + c_1 r_1 \left[ pbest_{i,d} - x_{i,d}(t) \right] + c_2 r_2 \left[ gbest_d - x_{i,d}(t) \right] \tag{9}$$
$$x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1) \tag{10}$$
In the d-dimensional space, t is the iteration number, vi,d(t) is the velocity of particle i at iteration t, vi,d(t + 1) is the velocity of particle i at iteration t + 1, xi,d(t) is the position of particle i at iteration t, xi,d(t + 1) is the position of particle i at iteration t + 1, and ω is the inertia weight. c1 is the cognition learning factor, c2 is the social learning factor, r1 and r2 are random numbers uniformly distributed in [0,1], pbest is the personal best position of particle i, and gbest is the global best position of the particle swarm.
The initial position and velocity of each particle are randomly generated and are updated according to Equations (9) and (10) until a satisfactory solution is found. In the PSO algorithm, each particle moves toward its pbest and the gbest, and this movement produces fast convergence. However, the fast convergence also makes the update of each particle depend too much on its pbest and the gbest, which makes the algorithm fall into local optima and converge prematurely. Therefore, in this paper, an improved PSO algorithm (NAPSO) is used to optimize the parameters of SVM. A minimal sketch of one PSO update step follows.
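The sketch below is a direct, hedged translation of Equations (9) and (10) for a whole swarm; the NumPy layout and velocity bounds are illustrative choices, not from the paper.

```python
# Minimal sketch of one PSO iteration: Equations (9) and (10) applied
# to a swarm x (particles x dims) with velocities v.
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, vmin=-1.0, vmax=1.0):
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Equation (9)
    v = np.clip(v, vmin, vmax)                                 # bound velocities
    x = x + v                                                  # Equation (10)
    return x, v
```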

3.2. NAPSO

The NAPSO algorithm is an improved PSO algorithm based on the methods of natural selection and simulated annealing. In the NAPSO algorithm, the simulated annealing mechanism is used to improve the ability of the algorithm to jump out of a local optimum trap, while the natural selection method is employed to accelerate the rate of convergence.
The NAPSO algorithm starts with a set of random velocities and positions. Before the iterations begin, each particle's personal best position and the global best position are calculated by the fitness function. Each particle then updates its velocity and position by Equations (9) and (10) at each iteration.
After updating a particle's velocity, position l and fitness value f′, the particle moves to a random position l₁ in its neighborhood and computes the new fitness value f₁ there. The movement is expressed as follows:
$$l_1 = l + r_3 \times (v_{max} - v_{min}) \times r_1 \tag{11}$$
where r₃ is a d-dimensional vector of random numbers distributed in [0,1], vmax is the maximum velocity, and vmin is the minimum velocity.
When f₁ > gbest, we keep the position l. When f₁ < gbest and f₁ > f′, the new position l₁ replaces the position l according to the simulated annealing operation, which is expressed as follows:
$$l = \begin{cases} l_1, & \text{if } \exp\left(-(f_1 - f')/T\right) > r_4 \\ l, & \text{if } \exp\left(-(f_1 - f')/T\right) \le r_4 \end{cases} \tag{12}$$
where r₄ is a random number uniformly distributed in [0,1] and T is the simulated annealing temperature.
Each particle uses the simulated annealing operation to determine whether to accept the new position, and then updates the particle’s pbest and gbest by its position. The simulated annealing operation can significantly enhance the ability of the algorithm to jump out of the local optimum trap. At the end of each iteration, all particles have been ranked by their fitness values, from best to worst, and using the better half to replace the other half. In this way, the stronger adaptability particles are saved. Finally, the NAPSO algorithm is terminated by the satisfaction of a termination criterion.
The pseudocode of the NAPSO algorithm is presented as Algorithm 1.
Algorithm 1: NAPSO
Input: ω, c1, c2, T
Output: gbest
Initialization: x, pbest, gbest
while t < maximum number of iterations and gbest > minimum fitness do
  for each particle do
    update the velocity v, position l, and fitness f′
    find a new position l₁ in the neighborhood and calculate its fitness value f₁
    if1 (f₁ < gbest) then
      if2 (f₁ − f′ < 0) then
        accept the new position l₁
      else if2
        accept the new position l₁ by the simulated annealing operation
      end if2
    else if1
      accept the old position l
    end if1
    update pbest, gbest and the simulated annealing temperature T
  end for
  rank all particles by their fitness values; use the better half to replace the other half
  t = t + 1
end while
return gbest
The simulated annealing operation slows the rate of convergence, thus increasing the convergence time, while the natural selection operation reduces the diversity of the samples. However, the two operations compensate for each other: the simulated annealing operation increases sample diversity, and the natural selection operation speeds up convergence. Together, they ensure the convergence rate of the algorithm while enhancing its ability to jump out of local optimum traps. A runnable sketch of this loop follows.
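The following Python sketch is one hypothetical, runnable reading of Algorithm 1 for a minimization problem. The search bounds, swarm size and initial temperature are illustrative assumptions (the cooling factor 0.9 follows Step 6 in Section 3.3 below).

```python
# Hedged sketch of Algorithm 1: PSO updates + neighborhood move (Eq. (11))
# + simulated-annealing acceptance (Eq. (12)) + natural selection.
import numpy as np

def napso(fitness, dim=2, n=30, iters=100, w=0.9, c1=2.0, c2=2.0,
          vmin=-1.0, vmax=1.0, T=5000.0, t_min=1.0, lo=0.01, hi=100.0):
    x = np.random.uniform(lo, hi, (n, dim))
    v = np.random.uniform(vmin, vmax, (n, dim))
    f = np.array([fitness(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()
    gbest, gbest_f = x[f.argmin()].copy(), f.min()
    for _ in range(iters):
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    vmin, vmax)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        # neighborhood move, Equation (11)
        x1 = np.clip(x + np.random.rand(n, dim) * (vmax - vmin)
                     * np.random.rand(n, dim), lo, hi)
        f1 = np.array([fitness(p) for p in x1])
        # accept l1 outright if better, else with SA probability, Equation (12)
        accept = (f1 < gbest_f) & ((f1 < f)
                                   | (np.exp(-(f1 - f) / T) > np.random.rand(n)))
        x[accept], f[accept] = x1[accept], f1[accept]
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if f.min() < gbest_f:
            gbest, gbest_f = x[f.argmin()].copy(), f.min()
        # natural selection: the better half replaces the worse half
        order = np.argsort(f)
        half = n // 2
        x[order[half:]] = x[order[:half]]
        v[order[half:]] = v[order[:half]]
        f[order[half:]] = f[order[:half]]
        T = max(0.9 * T, t_min)  # cool the annealing temperature (Step 6)
    return gbest, gbest_f
```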

3.3. Optimization Process

The NAPSO algorithm is applied to optimize the SVM parameters C and σ as follows:
Step 1:
Initialize the NAPSO algorithm: set the number of particles, the particles' velocities and positions, and the other parameters. Because the search space is 2-dimensional, the position of each particle contains two variables. Set T to be the simulated annealing temperature; the initial T is 5000 °C, and the lower limit of T is 1 °C. Calculate the fitness value of each particle. The fitness evaluation function is defined as follows:
$$J = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2 \tag{13}$$
where Yi is the actual value, Ŷi is the predicted value and n is the number of training samples.
Step 2:
According to the fitness value of each particle, set the personal best position pbest and the global best position gbest.
Step 3:
Update the position l and velocity of each particle and evaluate the fitness value f′. Then, randomly find a new position l₁ in the neighborhood of the particle and calculate the new fitness value f₁ of the new position.
Step 4:
Calculate the difference between the fitness value f′ and the new fitness value f₁: Δf = f₁ − f′.
Step 5:
When f₁ ≥ gbest, keep the original position l. When Δf > 0 and f₁ < gbest, accept the new position l₁ according to Equation (12); if Δf < 0 and f₁ < gbest, replace the original position with the new position. Then, update pbest and gbest.
Step 6:
When every particle has been updated, rank all of the particles according to their fitness values, employ the better half of the particles' information to replace the other half, and update the temperature: T = T × 0.9.
Step 7:
If the number of iterations equals the maximum number of iterations, or gbest is less than or equal to the minimum fitness, output the two variables of gbest; otherwise, return to Step 2. A sketch of wiring this procedure to the SVM is given below.
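As a hedged illustration of Steps 1–7, the sketch below uses the napso() function from the previous sketch to search for the SVR penalty C and kernel width σ that minimize the fitness of Equation (13) on the training samples. scikit-learn's SVR stands in for the Libsvm toolbox used in the paper, and the array names are hypothetical.

```python
# Illustrative wiring of NAPSO to the SVM parameter search (Steps 1-7).
import numpy as np
from sklearn.svm import SVR

def make_fitness(X_train, y_train):
    def fitness(params):
        C, sigma = params
        model = SVR(kernel="rbf", C=C, gamma=1.0 / (2.0 * sigma ** 2))
        pred = model.fit(X_train, y_train).predict(X_train)
        return np.mean((y_train - pred) ** 2)  # Equation (13)
    return fitness

# usage, assuming the napso() sketch and reconstructed training data:
# (C_opt, sigma_opt), _ = napso(make_fitness(X_train, y_train), dim=2)
```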

4. Experiments

4.1. Data Description

In this paper, two cases are considered to illustrate the effectiveness of the proposed method. The data of case 1 is a dynamic error sequence derived from the measurement error of an angular instrument rotating anticlockwise (speed 2 r/min), based on standard-value interpolation at room temperature; the error sequence contains 240 samples in total. In case 2, the measurement error sequence of a length grating contains 334 samples in total. A high-precision Hewlett Packard 5529A laser interferometer was used as the calibration system; the A-quad-B pulse from the encoder of the grating length gauge triggers the laser interferometer to carry out dynamic data acquisition, and the software package of the HP laser interferometer is used for data acquisition. The speed of the target mirror is 21.1 mm/s, and the experiment was repeated five times.

4.2. Preprocessing

The two datasets are both one-dimensional, so in order to achieve better prediction results and extract more information from the data, the two one-dimensional series must be converted to multi-dimensional data [26,27]. Assuming p is the dimension of the input vector, the reconstructed samples are listed in Table 1.
According to the reconstruction method listed in Table 1, in case 1 the dimension p is 16 and the number of reconstructed samples is 224; the first 124 samples are selected for training and the final 100 samples for testing, so the ratio of training samples to testing samples is 1.24:1. In case 2, the dimension p is 20 and the number of reconstructed samples is 315; the first 200 samples are selected as training data and the rest are used as testing data, so the ratio of training samples to testing samples is 1.73:1.
The data are then preprocessed by normalization, after which parameter optimization is performed and the model is trained. A sketch of this reconstruction and split is shown below.
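The following short sketch implements the Table 1 reconstruction and the case 1 split. The sine series is a stand-in (the real error sequences are not reproduced here), and min–max scaling is one common choice for the normalization step.

```python
# Sketch of the Table 1 reconstruction: a 1-D error sequence becomes
# p-dimensional input vectors with the next sample as the target.
import numpy as np

def reconstruct(series, p):
    """Rows: [X(i), ..., X(i+p-1)] -> target X(i+p), as in Table 1."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = np.array(series[p:])
    return X, y

series = np.sin(np.linspace(0, 20, 240))   # stand-in for the 240 errors of case 1
X, y = reconstruct(series, p=16)           # 240 - 16 = 224 reconstructed samples
X = (X - X.min()) / (X.max() - X.min())    # simple min-max normalization
X_train, X_test = X[:124], X[124:]         # 124 training / 100 testing samples
y_train, y_test = y[:124], y[124:]
```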

4.3. Evaluation Indices

To further evaluate the predictions of the NAPSO-SVM model, the root mean square error (RMSE) and the mean absolute percentage error (MAPE) are used as evaluation indices. They are defined as follows:
$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2} \tag{14}$$
$$\text{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{Y_i - \hat{Y}_i}{Y_i}\right| \tag{15}$$
where Yi is the actual value, Ŷi is the predicted value and n is the number of samples. A direct translation of these two indices follows.
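The two indices translate directly into code; this sketch assumes NumPy arrays of actual and predicted values.

```python
# Direct translations of Equations (14) and (15).
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))     # Equation (14)

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true))  # Equation (15)
```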
The NAPSO algorithm is used to determine the penalty coefficient C and the RBF kernel width σ. The SVM model is built from the training samples and the optimal parameters. To show the performance of the proposed method, particle swarm optimization and glowworm swarm optimization are also implemented.

4.4. GSO Algorithm

Glowworm swarm optimization (GSO) is a newer swarm intelligence optimization algorithm, proposed by Krishnanand and Ghose [28]. GSO has some apparent differences from PSO; GSO is appropriate for capturing multiple optima of multimodal functions. It simulates the luciferin glow of glowworms, which exchange information by comparing luciferin values.
As in the PSO algorithm, in the GSO algorithm each single solution is a glowworm in the search space. Each glowworm has its own luciferin level and neighborhood range. The luciferin value measures the quality of the solution, and the neighborhood range determines the search scope of the glowworm. According to a probabilistic mechanism, each glowworm moves toward a neighbor whose luciferin level is higher than its own.
Apart from initialization, the GSO algorithm has three phases. In the first phase, each glowworm updates its luciferin according to the following rule:
$$l_i(t+1) = (1 - \rho)\, l_i(t) + \gamma J_i(t+1) \tag{16}$$
where li(t) is the luciferin level of glowworm i at iteration t, ρ is the luciferin decay constant, γ is the luciferin enhancement constant, and Ji(t + 1) is the fitness value of the objective function for glowworm i at iteration t + 1.
In the second phase, each glowworm finds neighbors that have a luciferin level higher than its own. The set of neighbors of glowworm i at iteration t is expressed as follows:
$$N_i(t) = \left\{ j : d_{ij}(t) < r_d^i(t);\ l_i(t) < l_j(t) \right\} \tag{17}$$
where dij(t) is the Euclidean distance between glowworms i and j at iteration t, and r_d^i(t) is the variable neighborhood range of glowworm i at iteration t. The probability of glowworm i moving toward its neighbor j is calculated by the following equation:
$$p_{ij}(t) = \frac{l_j(t) - l_i(t)}{\sum_{k \in N_i(t)} \left[ l_k(t) - l_i(t) \right]} \tag{18}$$
Then, the rule of the glowworm movements is given by Equation (19):
$$x_i(t+1) = x_i(t) + s \left[ \frac{x_j(t) - x_i(t)}{\left\| x_j(t) - x_i(t) \right\|} \right] \tag{19}$$
where xi(t) is the position of glowworm i at iteration t, ‖xj(t) − xi(t)‖ is the distance between glowworms i and j at iteration t, and s represents the step size.
In the third phase, each glowworm updates its neighborhood range by the following equation:
$$r_d^i(t+1) = \min\left\{ r_s,\ \max\left\{ 0,\ r_d^i(t) + \beta \left( n_t - \left| N_i(t) \right| \right) \right\} \right\} \tag{20}$$
where β and nt are constant parameters, and rs is the bound on the neighborhood range (r_d^i(t) ≤ r_s). A compact sketch of one GSO iteration is given below.
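The following sketch covers one GSO iteration over Equations (16)–(20), assuming maximization of the objective; the parameter values are illustrative defaults, not the paper's settings.

```python
# Compact sketch of one GSO iteration (Equations (16)-(20)).
import numpy as np

def gso_step(x, luci, r_d, J, rho=0.4, gamma=0.6, s=0.03,
             beta=0.08, n_t=5, r_s=3.0):
    """x: positions (n x dim); luci: luciferin levels; r_d: neighborhood
    ranges; J: fitness values at the current positions."""
    luci = (1.0 - rho) * luci + gamma * J                    # Equation (16)
    new_x = x.copy()
    for i in range(len(x)):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = np.where((d < r_d[i]) & (luci > luci[i]))[0]  # Equation (17)
        if nbrs.size > 0:
            p = luci[nbrs] - luci[i]
            j = np.random.choice(nbrs, p=p / p.sum())        # Equation (18)
            step = x[j] - x[i]
            new_x[i] = x[i] + s * step / np.linalg.norm(step)  # Equation (19)
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - nbrs.size)))  # Eq. (20)
    return new_x, luci, r_d
```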

5. Results

Our study used the Libsvm toolbox v3.21 (written by Chih-Jen Lin); our algorithms are coded in Matlab 2010a, and all experiments were run on a machine with 4 GB of RAM and a 3.20 GHz Intel Core i5 processor under the Windows 7 operating system. In case 1, the prediction results of the three models are shown in Figure 1, Figure 2 and Figure 3, respectively.
To make a fair comparison, the maximum number of generations, the population size, the minimum fitness value, the range of gains, the dimension of the search space and the initial positions are identical for all the algorithms. The maximum number of generations is 100, the minimum fitness value is 0.1, the population size is 100, and the dimension of the search space is 2. The parameters of NAPSO were set as follows: the inertia weight ω = 0.9, the acceleration constants c1 and c2 are both 2, the initial temperature is 10,000 °C, the lower limit of the temperature is 1 °C, the maximum velocity is 1, and the minimum velocity is −1. In the GSO algorithm, the light absorption coefficient is 50, the minimum attractiveness is 0.8, the maximum attractiveness is 1.0, and the initial step size factor is 0.5. The PSO algorithm uses the same inertia weight and acceleration constants as the NAPSO algorithm.
Figure 4 presents the comparison results of predicted residuals by the three models. The MAPE value and RMSE value of the three models are listed in Table 2.
In Figure 1, for the NAPSO-SVM model, the curve of predicted values is very close to the curve of actual values. In Figure 2, the predicted value curve often lags the actual value curve. In Figure 3, the predicted value curve is close to the actual value curve, but many points still fall outside the range of the actual curve. Compared with the obvious hysteresis of the PSO-SVM and GSO-SVM models, the hysteresis between the predicted and actual value curves of the NAPSO-SVM model is negligible, and its predictions are closer to the actual values. The NAPSO-SVM model thus outperforms the PSO-SVM and GSO-SVM models: its prediction performance is better than that of the GSO-SVM model, and its accuracy is much better than that of the PSO-SVM model.
The residual curves of the three models are shown in Figure 4. The prediction residuals of the PSO-SVM model are large, ranging from −11″ to 8″; the prediction residuals of the GSO-SVM model are smaller than those of the PSO-SVM model but still relatively large, ranging from −8″ to 6″. The prediction residuals of the NAPSO-SVM model are smaller than the others and flatter, ranging from −5″ to 4″. These results show that the dynamic measurement error prediction ability of the NAPSO-SVM model is better than that of the PSO-SVM and GSO-SVM models, and that the NAPSO algorithm is an effective method for parameter optimization. To further verify the abilities of the three models, Table 2 lists a comparison of their prediction accuracy indices.
In Table 2, the MAPE and RMSE values of the NAPSO-SVM model are smaller than those of the PSO-SVM and GSO-SVM models. The MAPE value is approximately 0.0744 for the NAPSO-SVM model, compared with approximately 0.2423 and 0.1493 for the PSO-SVM and GSO-SVM models, respectively. Furthermore, the RMSE value is 0.1879 for the NAPSO-SVM model, whereas the RMSE values of the PSO-SVM and GSO-SVM models are 0.4710 and 0.3128, respectively. In summary, the results of Table 2 are in accord with Figure 4, and the NAPSO-SVM model has the best dynamic measurement error prediction ability among the three methods.
In case 2, the parameters of each algorithm are essentially the same as in the previous case, and the prediction results of the three models are shown in Figure 5, Figure 6 and Figure 7. Figure 8 shows a comparison of the residuals predicted by the three models. The MAPE and RMSE values of the three models are listed in Table 3.
With a ratio of training samples to testing samples of approximately 1.73, Figure 5, Figure 6 and Figure 7 show that, for all models, there is hysteresis between the predicted and actual value curves at four points (testing samples 248, 249, 299 and 300). Apart from these four points, the prediction curve of the NAPSO-SVM model is the closest to the actual value curve and is approximately the same as it. However, unlike in case 1, the prediction results of the PSO-SVM model are nearly the same as those of the GSO-SVM model, although the prediction curves of both models still lag behind the actual value curve.
In Figure 8, the distribution of the four hysteresis points is easy to see; the prediction residual errors are very large at these points. Apart from these four points, the prediction residuals of the NAPSO-SVM model are the smallest among the three models, ranging from −9.9086 to 11.3979 mm. The prediction residuals of the GSO-SVM model range from −9.3916 to 11.8487 mm, and those of the PSO-SVM model range from −10.7265 to 14.0340 mm. The prediction residual range of the NAPSO-SVM model is almost equal to that of the GSO-SVM model, and better than that of the PSO-SVM model.
As Table 3 shows, the MAPE value is approximately 0.3840 for the NAPSO-SVM model, compared with approximately 0.5377 and 0.4403 for the PSO-SVM and GSO-SVM models. The NAPSO-SVM model has the smallest RMSE value of the three algorithms, 0.8015, while the RMSE values of the PSO-SVM and GSO-SVM models are 0.8209 and 0.8356, respectively. The prediction ability of the NAPSO-SVM model is better than that of the other models; the GSO-SVM model has the worst RMSE value, and the PSO-SVM model has the worst MAPE value.
Although both cases use actual measurement data, Gaussian noise (SNR = 15 dB) was added to the data of case 2 to further test the NAPSO-SVM model. With this noise, the MAPE values of the NAPSO-SVM, PSO-SVM and GSO-SVM models are 0.4239, 0.6035 and 0.5839, respectively, and the corresponding RMSE values are 0.9839, 0.9868 and 0.9867. The NAPSO-SVM model still performs best.
The results of the two cases show that the NAPSO-SVM model has the best prediction accuracy among the three methods. This indicates that the NAPSO algorithm has a better global search capability than the other two algorithms; the reason is that the updates of the positions and velocities of the particles in the PSO algorithm depend too heavily on the current best particle. Compared with the PSO algorithm, the NAPSO algorithm uses simulated annealing and a natural selection mechanism, so it can more easily jump out of local optimum traps and search the whole space for the global optimal solution.

6. Conclusions

Dynamic measurement has been a hot area of research for several years, and dynamic measurement error prediction is a useful method for improving sensor measurement accuracy. In this study, a method of dynamic measurement error prediction based on NAPSO-optimized SVM parameters is proposed. To improve the prediction accuracy, the NAPSO algorithm is used to optimize the SVM parameters so as to avoid the "over-fitting" and "under-fitting" problems of SVM. The results of the two cases show that, compared with the PSO-SVM and GSO-SVM models, the NAPSO-SVM model has better prediction accuracy. The proposed method provides a new way of predicting sensors' dynamic measurement errors and has definite application value in dynamic measurement. However, like standard PSO, NAPSO has intrinsic randomness. In the future, we plan to study more effective methods for further improving the prediction results.

Acknowledgments

The work was supported in part by the National Natural Science Foundation of China (Nos. 51305407, 61571104, and 61071124), the General Project of Scientific Research of the Education Department of Liaoning Province (No. L20150174), the Program for New Century Excellent Talents in University (No. NCET-11-0075), the Fundamental Research Funds for the Central Universities (Nos. ZYGX2017KYQD170, N150402003, N120804004 and N130504003), and the State Scholarship Fund (201208210013).

Author Contributions

M.J. and L.J. were the main designers of the proposed scheme and wrote the paper. F.L. focused on the research and part of the paper writing; D.J. and H.S. revised the paper and analyzed the results.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cheng, S.; Cai, Z.; Li, J.; Gao, H. Extracting Kernel Dataset from Big Sensory Data in Wireless Sensor Networks. IEEE Trans. Knowl. Data Eng. 2017, 29, 813–827.
2. Jiang, D.; Li, W.; Lv, H. An energy-efficient cooperative multicast routing in multi-hop wireless networks for smart medical applications. Neurocomputing 2017, 220, 160–169.
3. Jiang, D.; Wang, Y.; Han, Y.; Lv, H. Maximum connectivity-based channel allocation algorithm in cognitive wireless networks for medical applications. Neurocomputing 2017, 220, 41–51.
4. Jiang, D.; Xu, Z.; Li, W.; Yao, C.; Lv, Z.; Li, T. An energy-efficient multicast algorithm with maximum network throughput in multi-hop wireless networks. J. Commun. Netw. 2016, 18, 713–724.
5. Yakovlev, V.T. Method of Determining Dynamic Measurement Error Due to Oscillographs. Meas. Tech. 1987, 30, 331–334.
6. Chen, Y. Area-Efficient Fixed-Width Squarer with Dynamic Error-Compensation Circuit. IEEE Trans. Circuits Syst. II 2015, 62, 851–855.
7. Jiang, D.; Nie, L.; Lv, Z.; Song, H. Spatio-temporal Kronecker compressive sensing for traffic matrix recovery. IEEE Access 2016, 4, 3046–3053.
8. Yakovlev, V.T.; Collins, E.; Flores, K.; Pershad, P.; Stemkovski, M.; Stephenson, L. Statistical Error Model Comparison for Logistic Growth of Green Algae (Raphidocelis subcapitata). Appl. Math. Lett. 2017, 64, 213–222.
9. Cheng, S.; Cai, Z.; Li, J.; Fang, X. Drawing Dominant Dataset from Big Sensory Data in Wireless Sensor Networks. In Proceedings of the 34th Annual IEEE International Conference on Computer Communications, Kowloon, Hong Kong, China, 26 April–1 May 2015; pp. 531–539.
10. Trapero, J.; Kourentzes, N.; Martin, A. Short-term solar irradiation forecasting based on dynamic harmonic regression. Energy 2015, 84, 289–295.
11. Yang, H.; Zhang, L.; Zhou, J.; Fei, Y.; Peng, D. Modelling of dynamic measurement error for parasitic time grating sensor based on Bayesian principle. Opt. Precis. Eng. 2015, 24, 2523–2531.
12. Ge, L.; Zhao, W.; Zhao, S.; Zhou, J. Novel error prediction method of dynamic measurement lacking information. J. Test. Eval. 2012, 40.
13. Jiang, D.; Xu, Z.; Lv, Z. A multicast delivery approach with minimum energy consumption for wireless multi-hop networks. Telecommun. Syst. 2016, 62, 771–782.
14. He, Z.; Cai, Z.; Cheng, S.; Wang, X. Approximate Aggregation for Tracking Quantiles and Range Countings in Wireless Sensor Networks. Theor. Comput. Sci. 2015, 607, 381–390.
15. Barbounis, T.G.; Theocharis, J.B. A locally recurrent fuzzy neural network with application to the wind speed prediction using spatial correlation. Neurocomputing 2007, 70, 1525–1542.
16. Cheng, S.; Cai, Z.; Li, J. Curve Query Processing in Wireless Sensor Networks. IEEE Trans. Veh. Technol. 2015, 64, 5198–5209.
17. Sasikala, S.; Balamurugan, S.; Geetha, S. A Novel Memetic Algorithm for Discovering Knowledge in Binary and Multi Class Predictions Based on Support Vector Machine. Appl. Soft Comput. 2016, 49, 407–422.
18. Malvoni, M.; De Giorgi, M.G.; Congedo, P.M. Photovoltaic Forecast Based on Hybrid PCA–LSSVM Using Dimensionality Reduced Data. Neurocomputing 2016, 211, 72–83.
19. Jiang, D.; Xu, Z.; Liu, J.; Zhao, W. An optimization-based robust routing algorithm to energy-efficient networks for cloud computing. Telecommun. Syst. 2016, 63, 89–98.
20. Jiang, D.; Zhang, P.; Lv, Z.; Song, H. Energy-efficient multi-constraint routing algorithm with load balancing for smart city applications. IEEE Intern. Things J. 2016, 3, 1437–1447.
21. Jiang, D.; Huo, L.; Lv, Z.; Song, H.; Qin, W. A joint multi-criteria utility-based network selection approach for vehicle-to-infrastructure networking. IEEE Trans. Intell. Transp. Syst. 2017.
22. Zhong, Y.; Ning, J.; Zhang, H. Multi-agent Simulated Annealing Algorithm based on Particle Swarm Optimisation Algorithm. Int. J. Comput. Appl. Technol. 2012, 43, 335–342.
23. Zainal, N.; Zain, A.; Radzi, N. Glowworm Swarm Optimization (GSO) for optimization of machining parameters. J. Intell. Manuf. 2016, 27, 998–1006.
24. Lentka, Ł.; Smulko, J.M.; Ionescu, R.; Granqvist, C.G.; Kish, L.B. Determination of gas mixture components using fluctuation enhanced sensing and the LS-SVM regression algorithm. Metrol. Meas. Syst. 2015, 22, 341–350.
25. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the First IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
26. Li, J.; Cheng, S.; Cai, Z.; Yu, J.; Wang, C.; Li, Y. Approximate Holistic Aggregation in Wireless Sensor Networks. ACM Trans. Sens. Netw. 2017, 13, 11:1–11:24.
27. Jiang, M.; Luo, J.; Jiang, D.; Xiong, J.; Song, H.; Shen, J. A Cuckoo Search-support Vector Machine Model for Predicting Dynamic Measurement Errors of Sensors. IEEE Access 2016, 4, 5030–5037.
28. Krishnanand, K.N.; Ghose, D. Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell. 2009, 3, 87–124.
Figure 1. Predicted results of the NAPSO-SVM (case 1).
Figure 2. Predicted results of the PSO-SVM (case 1).
Figure 3. Predicted results of the GSO-SVM (case 1).
Figure 4. Comparison of three models for predicted residuals (case 1).
Figure 5. Predicted results of the NAPSO-SVM (case 2).
Figure 6. Predicted results of the PSO-SVM (case 2).
Figure 7. Predicted results of the GSO-SVM (case 2).
Figure 8. Comparison of the predicted residuals of the three models (case 2).
Table 1. Reconstructed samples.

Input                               Output
X(1), X(2), …, X(p)                 X(p+1)
X(2), X(3), …, X(p+1)               X(p+2)
…                                   …
X(n−p), X(n−p+1), …, X(n−1)         X(n)
Table 2. Comparison of the index values among the three models (case 1).

Model        MAPE      RMSE
NAPSO-SVM    0.0744    0.1879
PSO-SVM      0.2423    0.4710
GSO-SVM      0.1493    0.3128
Table 3. Comparison of the index values among the three models (case 2).

Model        MAPE      RMSE
NAPSO-SVM    0.3840    0.8015
PSO-SVM      0.5377    0.8209
GSO-SVM      0.4403    0.8356
