Article

Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks

1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
2 Key Laboratory of Complex Systems Modeling and Simulation of Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
3 School of Electronic and Information Engineering, Ningbo University of Technology, Ningbo 315211, China
4 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74464, USA
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(4), 787; https://doi.org/10.3390/s17040787
Submission received: 15 November 2016 / Revised: 15 March 2017 / Accepted: 24 March 2017 / Published: 6 April 2017
(This article belongs to the Special Issue Topology Control in Emerging Sensor Networks)

Abstract
Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, the fundamental component of MSE, measures the similarity of two subsequences of a time series with either zero or one, with no in-between values, which causes sudden changes in entropy values even if the time series embraces only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). The proposed method FMSE is capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.

1. Introduction

Time series analysis and forecasting are sometimes essential methods for conquering topology control and traffic control issues that exist in sensor networks. For example, time series analysis and forecasting have been used in optimizing energy-efficient topology organization of WSNs [1,2], in mitigating congestion problems in WSNs [3,4], in detecting faults and anomalies in multi-sensor networks [5,6], and in studying spatiotemporal dynamics of distributed sensor networks [7]. However, one field that has received relatively little attention so far is the measurement of the predictability or complexity of time series generated by sensor networks. In our opinion, measurement of time series complexity and predictability is a fundamental part of any complete solution to the challenges that lie in wireless sensor networks. Here, the complexity of a time series is regarded as the difficulty of predicting its future patterns.
While several types of entropies for measuring the complexity of time series have been extensively studied in the literature, almost all of them focus solely on the single-symbol properties of time series characteristics [8], probably leading to the omission of a large amount of information. In paper [9], the authors demonstrated that multiscale entropies provide new information about time series. In this paper, we further propose a novel method based on multiscale entropies that is capable of discriminating the difference between noise and interference in empirical time series. The proposed method is evaluated on both synthetic and real time series, and all of the results demonstrate that it significantly improves the reliability and stability of measuring the complexity and predictability of time series. Our method could be applied in various ways to help researchers conquer challenges in the field of sensor networks.
The rest of the paper is organized as follows: related works are introduced in Section 2. Section 3 introduces the preliminaries of sample entropy, multiscale entropy and composite multiscale entropy. The flexible multiscale entropy is proposed in Section 4 and evaluated in Section 5. Finally, we conclude the paper in Section 6 and future work is presented in Section 7.

2. Related Works

As mentioned in the introduction, there have been many works in the literature on applying time series analysis and forecasting methods to conquer challenges in sensor network organization, monitoring and anomaly detection. Furthermore, in many cases, such solutions are based on the analysis of time series similarity and complexity. In paper [10], the authors presented a novel anomaly detection method based on analyzing similarities of time series in sensor networks. Kasetty et al. proposed a framework for the classification of time series collected by sensors [11]. Regarding the measurement of time series complexity, several entropy-based metrics have been proposed, e.g., approximate entropy [12] and sample entropy [13]. These two entropy metrics quantify the degree of regularity of a time series by evaluating the occurrence of repetitive patterns. Unlike approximate entropy, sample entropy excludes self-matching; it converges faster and depends less on the length of the time series than approximate entropy. Previous works have reported that sample entropy is an effective and efficient method to gain insights into various signals, such as electroencephalography signals [14] and heart rate time series [15].
However, sample entropy analyzes a time series only at a single time scale. Hence, it fails to capture the long-range dependence of a time series. In response to this problem, Costa proposed multiscale entropy (MSE) to measure the structural complexity of a time series over different time scales [16,17]. Structural complexity refers to "meaningful structural richness" [18], incorporating correlations over multiple spatiotemporal scales. In the context of structural complexity, neither completely predictable signals nor completely unpredictable signals are truly complex, since both can be described very compactly [17].
MSE has been widely applied in many fields. Costa used it to analyze the complexity of biological and physical signals [16,17]. Ge et al. applied MSE theory to electroencephalograph (EEG) signal detection [19], finding that the entropy value changes markedly when people are in different sleep stages and that the change of MSE is consistent with the physiological mechanism of brain activity. Zhang et al. studied rolling bearing fault detection with MSE, proposed a new metric named multiscale entropy mean deviation, and applied it to bush fault diagnosis and prediction [20]. Xie [21] applied MSE to the analysis of geophysical observation signals and proposed the concepts of local multiscale entropy and generalized entropy spectrum; these theories were applied to various kinds of complicated geophysical signal analysis, digging out more signal characteristics and information. The authors of paper [9] applied MSE to the study of network traffic characteristics and demonstrated that MSE has obvious advantages over information entropy and the self-similar parameter. MSE has also been applied to many other areas, such as wireless mobile traffic analysis [22], diagnosis of early Alzheimer's disease and detection of mild cognitive impairment [23], electromyography (EMG) signal detection [24], crude oil prices [25], the modulation of complexity by mood states [26], soil transect data [27], financial time series [28], rainfall-runoff relationships [29], and rotor fault diagnosis [30].
In the coarse-graining process of MSE analysis applied to a time series with N points, only the first coarse-grained time series is used to calculate entropy values. As a result, the number of points decreases from N to N/τ at time scale τ. The problem is that, when the length of the original or the coarse-grained time series is shorter than, for example, 750 points, the deviation of the estimated entropy values rises very quickly as the number of data points decreases. A large deviation of the estimated entropy values considerably reduces the reliability and stability in discriminating time series generated by different systems, or by the same system under different conditions. Currently, there are several solutions to this problem. One type of solution replaces the coarse-graining process. Liu and Wei et al. employed an adaptive resampling procedure in place of the coarse-graining process in MSE, which reduces the variation of entropy values caused by the length limitation of signals [31]. More recently, the authors proposed multivariate empirical mode decomposition enhanced multivariate multiscale entropy (MEMD-enhanced MMSE) to evaluate the balance stability of vibration shoes; the evaluation is significantly improved with MEMD-enhanced MMSE compared with the original MSE [32]. Wu et al. reported a modified MSE (MMSE) algorithm, which replaces the coarse-graining procedure of MSE with a moving-average procedure [33]; this study showed that the MMSE algorithm is more reliable than the conventional MSE in the analysis of short time series. The authors of paper [34] applied the modified multiscale entropy (MMSE) to study the computer operating behavior of human beings and found that a retiree group exhibits higher complexity than student and worker groups. Another type of solution improves the coarse-graining procedure. Wu et al. proposed composite multiscale entropy (CMSE) [35] and refined composite multiscale entropy (RCMSE) [36], which significantly reduce the deviation of the estimated entropy values by averaging the entropy values of all τ coarse-grained time series instead of using only the first one. Niu et al. studied the characteristics of stock indices using CMSE and confirmed that CMSE is better than MSE in stability and reliability [37]. They also adopted CMSE to demonstrate the effectiveness of a financial time series agent-based model they proposed [38]. CMSE has been applied to many other signals, such as bistable laminated plate signals [39] and magnetoencephalography recordings. More recently, the composite coarse-graining procedure has also been adopted to reduce the length dependence of multiscale permutation entropy [40,41].
However, sample entropy-based methods, including MSE and CMSE, measure the similarity of two subsequences of a time series with either zero or one, with no in-between values. These methods may therefore output sudden changes in entropy values even if the time series embraces only small differences. This problem becomes especially severe when the time series is short. To address this problem, we propose flexible multiscale entropy, which measures the similarity of two subsequences with full-range values from zero to one and thus decreases the fluctuation of entropy values. To demonstrate the effectiveness of the proposed method, we carry out experiments with both synthetic and real time series.

3. Preliminaries

3.1. Sample Entropy

Sample entropy is now widely used for measuring the complexity of time series. It reflects the conditional probability that two sequences that are similar for m points remain similar when one more consecutive point is added to each sequence. As an improvement over approximate entropy, sample entropy avoids self-matching in the template matching process. This improvement enables sample entropy to reflect the complexity of a data sequence more accurately and to be largely independent of the time series length [13]. The calculation steps of sample entropy are as follows:
  • Step 1: Given a time series containing N data points {x(i) | 1 ≤ i ≤ N}, construct the m-dimensional vector sequences X_m(i) = [x(i), x(i+1), ..., x(i+m−1)], 1 ≤ i ≤ N−m+1, where m is called the pattern length in the rest of the paper.
  • Step 2: Define d[X_m(i), X_m(j)] as the distance between the two vectors, which equals the maximum absolute difference between their corresponding elements:
    d[X_m(i), X_m(j)] = ‖X_m(i) − X_m(j)‖ = max_{0 ≤ k ≤ m−1} |x(i+k) − x(j+k)|    (1)
    where 0 ≤ k ≤ m−1 and 1 ≤ i, j ≤ N−m+1.
  • Step 3: Set the similarity criterion r. The probability that the other vectors are similar to vector X_m(i) is defined as B_i^m(r):
    B_i^m(r) = (1 / (N − m − 1)) · num{ d[X_m(i), X_m(j)] < r },  1 ≤ i, j ≤ N − m, i ≠ j    (2)
    where num{·} counts the similar vectors, i.e., the vectors whose distance to X_m(i) is smaller than r.
  • Step 4: Calculate the average of B_i^m(r), denoted B^m(r), which indicates the probability that two vectors will match for m points:
    B^m(r) = (1 / (N − m)) Σ_{i=1}^{N−m} B_i^m(r)    (3)
  • Step 5: Set the pattern length to m + 1 and calculate A_i^m(r):
    A_i^m(r) = (1 / (N − m − 1)) · num{ d[X_{m+1}(i), X_{m+1}(j)] < r },  1 ≤ i, j ≤ N − m, i ≠ j    (4)
  • Step 6: Calculate the average of A_i^m(r), denoted A^m(r), which represents the probability that two vectors will match for m + 1 points:
    A^m(r) = (1 / (N − m)) Σ_{i=1}^{N−m} A_i^m(r)    (5)
  • Step 7: Calculate the sample entropy:
    SampEn(m, r) = lim_{N→∞} { −ln [ A^m(r) / B^m(r) ] }    (6)
In the actual calculation process, the following formula is often used:
SampEn(m, r, N) = −ln [ A^m(r) / B^m(r) ]    (7)
Conventionally, m = 2 and r = 0.15, meaning the similarity criterion is 0.15 × SD, where SD is the standard deviation of the original time series. Compared to approximate entropy, sample entropy has better consistency. Since no self-match is included in the vector matching process, sample entropy describes the complexity of a time series more accurately and is largely independent of the data sequence length [13].
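The seven steps above can be sketched in code. The following is a minimal Python/NumPy sketch, not the authors' implementation: the names `sample_entropy` and `count_matches` are ours, and the normalization constants of Equations (2)–(5) are dropped because they cancel in the ratio A^m(r)/B^m(r) of Equation (7).

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy sketch with the conventional m = 2, r = 0.15 * SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)  # similarity criterion as a fraction of SD
    n = len(x)

    def count_matches(length):
        # All templates of the given pattern length (Step 1).
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance of template i to every template (Eq. (1)).
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d < tol) - 1  # subtract the self-match
        return count

    b = count_matches(m)       # matches of length m (Steps 3-4)
    a = count_matches(m + 1)   # matches of length m + 1 (Steps 5-6)
    if a <= 0 or b <= 0:
        return np.inf          # no matches: entropy is undefined/infinite
    return -np.log(a / b)      # Step 7, Equation (7)
```

As a sanity check, a regular signal such as a sine wave should yield a much lower entropy than white noise of the same length.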

3.2. Composite Multiscale Entropy

Composite multiscale entropy (CMSE) was proposed in paper [35]. CMSE improves the stability of the calculation by using a composite averaging method, whose effectiveness was verified on two types of artificial noise signals and a real vibration data set. The "coarse-graining" process of CMSE differs from that of MSE. For every scale factor τ, the given time series {x(i) | 1 ≤ i ≤ N} is transformed to:
y_k^(τ) = { y_{k,1}^(τ), y_{k,2}^(τ), ..., y_{k,P}^(τ) },  P = ⌊(N − k + 1) / τ⌋,  1 ≤ k ≤ τ    (8)
The specific transformation formula is as follows:
y_{k,j}^(τ) = (1/τ) Σ_{i=(j−1)τ+k}^{jτ+k−1} x_i,  1 ≤ j ≤ P,  1 ≤ k ≤ τ    (9)
In the CMSE algorithm, for every scale factor τ, the CMSE value is calculated as the mean of the sample entropy values of the τ coarse-grained series:
CMSE(x, τ, m, r) = (1/τ) Σ_{k=1}^{τ} SampEn(y_k^(τ), m, r)    (10)
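Equations (8)–(10) can be illustrated with a short Python sketch (our own, under the assumption of a NumPy environment; `_sampen` is a compact stand-in for the sample entropy routine of Section 3.1, and the tolerance is fixed from the standard deviation of the original series):

```python
import numpy as np

def _sampen(y, m, r_tol):
    """Compact sample entropy with an absolute tolerance r_tol."""
    n = len(y)
    def matches(length):
        t = np.array([y[i:i + length] for i in range(n - length)])
        c = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)
            c += np.sum(d < r_tol) - 1  # exclude the self-match
        return c
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, tau, k):
    """k-th coarse-grained series at scale tau (Equations (8)-(9)):
    averages of consecutive, non-overlapping windows of length tau
    starting at offset k (1-based), giving P = floor((N-k+1)/tau) points."""
    x = np.asarray(x, dtype=float)
    p = (len(x) - k + 1) // tau
    start = k - 1
    return x[start:start + p * tau].reshape(p, tau).mean(axis=1)

def cmse(x, tau, m=2, r=0.15):
    """CMSE at scale tau: mean sample entropy over the tau
    coarse-grained series (Equation (10))."""
    tol = r * np.std(x)  # tolerance fixed from the original series
    return float(np.mean([_sampen(coarse_grain(x, tau, k), m, tol)
                          for k in range(1, tau + 1)]))
```

For example, `coarse_grain([1, 2, 3, 4, 5, 6], tau=2, k=1)` averages the pairs (1,2), (3,4), (5,6), while `k=2` shifts the windows by one point to (2,3), (4,5).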

4. Flexible Multiscale Entropy

As can be seen from Equation (2), sample entropy measures the similarity of two subsequences with either one or zero. If the distance between two vectors is less than r, the two vectors are regarded as similar; in other words, their similarity is assigned the value one. Otherwise, their similarity is zero. Such a similarity metric has the problem that the similarity value jumps between one and zero whenever the distance between two vectors crosses the threshold r. As a result, entropy values are considerably impacted even if the time series embraces only small differences. This problem becomes especially severe when the time series is short, e.g., less than 750 points.
To this end, we propose flexible multiscale entropy (FMSE), which measures the similarity of two subsequences with full-range values from zero to one and thus decreases the fluctuation of entropy values. Before introducing the calculation procedure of FMSE, it is necessary to point out the main difference between the calculations of FMSE and MSE. In the traditional MSE analysis, sample entropy is calculated as shown in Equation (7), in which B^m(r) and A^m(r) are calculated as given in Equations (3) and (5). In FMSE, however, the calculation of A^m(r) differs from that of sample entropy; to distinguish it from A^m(r), we define C^m(f) in FMSE. The calculation process of FMSE is as follows:
  • Step 1: Incorporate the idea of composite coarse-graining from CMSE. For every scale factor τ, transform the original time series {x(i) | 1 ≤ i ≤ N} into the new time series y_k^(τ) according to Equations (8) and (9).
  • Step 2: For each time series y_k^(τ), calculate B^m(r) as in Equations (1)–(3). Note that, in this calculation, the length of the time series is P, since the new time series has been coarse-grained from the original one.
  • Step 3: Calculate C_i^m(f). Differently from the similarity accumulating function in sample entropy, we define a new accumulative function s(Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)), a piecewise function that prevents the similarity of vectors from changing suddenly between 0 and 1:
    s(Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)) = 0,  if d[Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)] ≥ f;
    s(Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)) = 1 − d[Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)] / f,  if d[Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)] < f    (11)
    where f is called the flexible similarity criterion; it is a proportion, usually set to 0.2, of the standard deviation of the original time series.
The ratio of similar vectors for pattern length m + 1 is as follows:
C_i^m(f) = (1 / (P − m − 1)) Σ_{j=1, j≠i}^{P−m} s(Y_{k,m+1}^(τ)(i), Y_{k,m+1}^(τ)(j)),  1 ≤ i ≤ P − m    (12)
  • Step 4: Calculate C^m(f) as follows:
    C^m(f) = (1 / (P − m)) Σ_{i=1}^{P−m} C_i^m(f)    (13)
  • Step 5: Calculate the improved sample entropy for the coarse-grained time series y_k^(τ):
    FSampEn(y_k^(τ), m, r, f) = −ln [ C^m(f) / B^m(r) ]    (14)
  • Step 6: Calculate the flexible multiscale entropy:
    FMSE(x, τ, m, r, f) = (1/τ) Σ_{k=1}^{τ} FSampEn(y_k^(τ), m, r, f)    (15)
Figure 1 shows a time series {x(i) | 1 ≤ i ≤ 5} that illustrates the calculation process of FMSE. The black dashed lines around x(1) and x(2) represent x(1) ± r × sd and x(2) ± r × sd, respectively, where sd stands for the standard deviation of the time series and r is the similarity criterion, typically set between 0.1 and 0.2. In the following, we take the case m = 1 as the example. To compute the FMSE for this case, we need to obtain B^1(r) and C^1(f). Consider the one-point pattern for x(1): we need to find all the points that match x(1), i.e., the points that fall between the black dashed lines x(1) ± r × sd. In this example, x(3) is the only point that satisfies the requirement, so B_1^1(r) = 1/3. Similarly, B_i^1(r) = 1/3 for each of the points x(i) (2 ≤ i ≤ 4), and thus B^1(r) = 1/3. To compute C^1(f), we need to consider sequences with pattern length m + 1, that is, 2 in this case. As shown in Equations (11) and (12), we introduce a flexible factor f in the computation of C^1(f). The red dashed lines in Figure 1 represent x(1) ± f × sd. Consider the two-point pattern (x(1), x(2)): we find that the pattern (x(3), x(4)) matches it; we also find that (x(4), x(5)) matches (x(2), x(3)). The accumulation of similar patterns has also been improved in our method: for the patterns (x(3), x(4)) and (x(1), x(2)), we measure the similarity as 1 − d[(x(1), x(2)), (x(3), x(4))]/f instead of 1. In this way, the method avoids the similarity of patterns changing suddenly between 0 and 1. After obtaining C^1(f), we then compute the FMSE according to Equations (14) and (15).

5. Experiment and Evaluation

In this section, the proposed FMSE will be evaluated against MSE and CMSE through two synthetic noise signals and a set of real vibration data collected using sensors by the Case Western Reserve University (CWRU) Bearing Data Center [42].

5.1. Synthetic Noise Time Series

FMSE is first evaluated using two synthetic noise signals, including white noise and 1/f noise. Since the length of time series is a factor that influences the performance of MSE analysis, four different lengths of time series are used in the experiment, which are N = 1000, N = 2000, N = 4000, N = 10,000, respectively. For each type and each length of noise signal, one hundred independent time series samples are used to calculate the MSE, CMSE and FMSE values. Examples of MSE, CMSE and FMSE values for white noise signals are shown in Figure 2, Figure 3 and Figure 4, respectively.
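The paper does not specify how its noise samples were generated; a common recipe for both signal types is sketched below (the function names are ours, and the 1/f noise is produced by spectral shaping of white noise, one of several standard constructions):

```python
import numpy as np

def white_noise(n, rng):
    """Uncorrelated zero-mean, unit-variance Gaussian samples."""
    return rng.standard_normal(n)

def one_over_f_noise(n, rng):
    """1/f (pink) noise via spectral shaping: scale the FFT of white
    noise by 1/sqrt(frequency) so the power spectrum falls off as 1/f,
    then transform back and normalize to unit standard deviation."""
    w = rng.standard_normal(n)
    spectrum = np.fft.rfft(w)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]          # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)
    x = np.fft.irfft(spectrum, n)
    return x / np.std(x)
```

With a fixed seed (e.g. `np.random.default_rng(0)`), one hundred independent samples of each length can be drawn by repeated calls, mirroring the experimental setup described above.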
From each of the three figures, it is easy to see that the curve of entropy values of the white noise time series with more points is smoother than that with fewer points. This indicates that the variance of the entropy values increases as the length of the time series decreases. Comparing the three figures, it is clear that the curve of MSE values fluctuates the most among the three metrics. In fact, FMSE has better stability than CMSE, although the improvement is not easily observed in the figures. For this reason, we will present further evidence of the improvement of FMSE over CMSE later, by comparing the coefficients of variation of entropy values calculated with FMSE and CMSE.
Examples of MSE, CMSE and FMSE values for 1/f noise signals are shown in Figure 5, Figure 6 and Figure 7, respectively. Unlike white noise, the 1/f noise signal is time-correlated, so its entropy values should theoretically remain the same across different time scales.
From the figures, we find two similar trends. First, the curve of entropy values associated with more points is smoother than that associated with fewer points, indicating that the variance of the entropy values increases as the length of the time series decreases. Second, the curve of MSE values again fluctuates the most among the three metrics. Moreover, at large scales, it is clearer for 1/f noise than for white noise that FMSE shows better stability than CMSE. For the time series with 1000 points, when the scale factor exceeds 20, MSE and CMSE decline markedly. In contrast, the FMSE values show no sustained increase or decrease, which is in line with the characteristics of a 1/f noise sequence, and the fluctuation of the FMSE values is much smaller than that of the MSE and CMSE values.
From the entropy curves of both the white noise and 1/f noise time series, we can see that FMSE performs better than MSE and CMSE, especially when the time series is short. In other words, FMSE tolerates short time series better than the other two metrics, and is thus able to measure the complexity of time series more accurately.
Next, the convergence of the entropy values estimated from one hundred independent noise signals is examined. It is reasonable to assume that entropy values estimated for different samples generated by the same noise function should be convergent; in other words, the lower the dispersion of the estimations across samples from the same noise function, the better. Since the mean value of FMSE differs from that of MSE and CMSE, we use the coefficient of variation instead of the standard deviation to measure the convergence. The coefficient of variation (CV) is defined as the ratio of the standard deviation σ to the mean μ:
C_v = σ / μ    (16)
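Equation (16) in code form, for completeness (a trivial helper; the function name is ours):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = sigma / mu (Equation (16)), using the population
    standard deviation over a set of entropy estimates."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values) / np.mean(values))
```

Applied to the one hundred entropy estimates per noise type, length and scale factor, this yields the CV curves discussed below.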
The lower the CV of estimations for samples generated by the same noise, the better the performance is. The CVs of white noise with two different data lengths (N = 1000 and 10,000) are shown in Figure 8 and Figure 9, respectively. As shown in the figures, the CVs of estimations for time series with 1000 points are much larger than those of 10,000 points. It is obvious that the CVs of FMSE values are always the lowest among the three metrics. Furthermore, as the scale factor increases, the improvement in CV of FMSE over the other two becomes larger.
Figure 10 and Figure 11 show the CVs of 1/f noise with two different data lengths (N = 1000 and 10,000). From the two figures, it is found that the performance of FMSE is also better than the other two, but with a smaller improvement in comparison with that of white noise.
From the results for both white noise and 1/f noise, CVs of FMSE are smaller than those of the other two metrics. The decrease in CV becomes more significant when the scale factor increases. The improvement in CV indicates that the FMSE measures the complexity of time series with a higher stability and reliability than the other two metrics.
Table 1 provides the CVs of the three entropy metrics under different scale factors. As can be seen from the table, the CVs of FMSE are smaller at each scale factor. For white noise signals with 1000, 2000, 4000 and 10,000 points, the aggregate improvement in CV of FMSE over MSE and CMSE is around 50% and 30%, respectively. For 1/f noise signals, the aggregate improvement in CV of FMSE over MSE is higher than 40%. In three of the four cases, the improvement of FMSE over CMSE is larger than 15%; for the 1/f noise with 2000 points, the improvement is nearly 10%, which is also considerable. Note that, due to space limitations, Table 1 shows the CVs for only six of the 40 scale factors.

5.2. Real Vibration Data

In this section, FMSE is evaluated using real vibration data obtained from the Case Western Reserve University (CWRU) Bearing Data Center [42]. The bearing test rig that generated the vibration data was composed of a two-horsepower motor, a torque transducer, a dynamometer, and control electronics. The vibration data sets were collected with fault diameters of 7 mils (one mil is one thousandth of an inch) and a motor speed of 1772 rpm. The data cover six conditions: normal state, ball fault, inner race fault, and outer race faults located at 3, 6 and 12 o'clock. The data were collected at a rate of 48,000 samples per second for drive end bearing faults.
In the evaluation, the vibration data is divided into 6 groups based on the 6 conditions. For each group, we divided the data into non-overlapping time series. After division, each group includes about 240 time series and each has a length of 2000. We calculated the mean of MSE, CMSE and FMSE values, respectively, for each group, and the scale factor is from 1 to 40.
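The grouping described above amounts to slicing each condition's recording into fixed-length, non-overlapping windows. A sketch (the helper name is ours; the window length of 2000 matches the text):

```python
import numpy as np

def split_into_windows(signal, length=2000):
    """Split a long recording into non-overlapping time series of a
    fixed length, dropping the incomplete tail. Returns an array of
    shape (num_windows, length)."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) // length
    return signal[:n * length].reshape(n, length)
```

Each row of the result is then fed to the MSE, CMSE and FMSE routines for scale factors 1 through 40, and the per-group means and CVs are computed over the rows.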
The means of the MSE, CMSE and FMSE values of the vibration signals are shown in Figure 12. From Figure 12, we can see that the mean of the MSE values is very close to that of CMSE. The mean of the FMSE values is higher than that of MSE and CMSE, but its trend is similar, which means that FMSE fully reflects the complexity changes of the time series.
It is also reasonable to assume that the entropy values of different samples generated by the same system under the same condition should be convergent. Thus, we also use the CV to evaluate the performance of MSE, CMSE and FMSE. Figure 13 and Table 2 show the CVs of MSE, CMSE and FMSE for the vibration signals at all scale factors. We can see from Figure 13 that the CVs of the FMSE values are smaller than those of the MSE and CMSE values at every scale factor; the superiority of FMSE is especially significant at large scale factors. Table 2 provides the CVs for some scale factors. In the table, the fault class column lists the six conditions: N means normal, B means ball fault, I means inner race fault, and O3, O6 and O12 mean outer race faults located at 3, 6 and 12 o'clock, respectively. The last column is the total decrease of the CVs obtained by FMSE against MSE and CMSE over all scale factors. From the table, we can see that FMSE has lower CVs at all scale factors for each condition of the vibration signals. The total decrease in the CVs of the FMSE values against the MSE and CMSE values reaches up to 68.87% and 26.87%, and is at least 47.7% and 18.45%, respectively.
In this section, we evaluate the FMSE using synthetic noise signals and real vibration data. As can be seen, all the results show that FMSE has a lower coefficient of variation, or better stability and reliability, in measuring the structural complexity of time series, compared to MSE and CMSE.

6. Conclusions

Measurement of time series complexity and predictability is sometimes the cornerstone of applying time series analysis to solving topology and traffic control problems in sensor networks. In this paper, we have introduced the entropy metrics used to measure the complexity and predictability of time series. The existing entropy metrics become limited when the empirical time series is short. To this end, we have proposed flexible multiscale entropy (FMSE) to measure the complexity of time series. FMSE introduces a new function for measuring and accumulating the similarity between time series patterns; this new accumulative function avoids the similarity of time series patterns changing suddenly between 0 and 1. The proposed FMSE is evaluated with both synthetic noise signals and real vibration data, and the results show that it has better reliability and stability in measuring the complexity of time series. The proposed method FMSE is useful for improving the performance of topology and traffic control techniques that rely on time series analysis.

7. Future Work

Our work could be extended in several directions. The first is to further analyze time series generated by sensor networks with the proposed method, for example, to investigate the spatiotemporal characteristics of time series in sensor networks, e.g., how time and space scales affect the behavioral patterns of sensor networks. Based on an understanding of the spatiotemporal characteristics of sensor networks, we plan to build a situational awareness model for monitoring the running status of sensor networks. We will use this model to analyze behavior patterns of sensor networks, monitor and predict network connectivity, detect anomalies in sensor networks, and mitigate congestion problems in sensor networks. Furthermore, analyzing time series generated by sensor networks in real time and more energy-efficiently is an important part of our future work as well.

Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable comments and to the editors for their work that improved this paper. This work was supported by NSF of Zhejiang under grant Nos. LY17F020030, LY16F020018, and NSF of China under grant Nos. 61300211, 61572163, 61300033, and National Key Technology Research and Development Program of China under grant No. 2014BAK14B04.

Author Contributions

Renjie Zhou, Jian Wan, Bo Guan and Naixue Xiong conceived of the main idea of the method proposed in the paper. Chen Yang and Wei Zhang implemented the proposed method as a Java program; they also collected the experimental data and evaluated the performance of the proposed method on it. Renjie Zhou and Chen Yang prepared the manuscript. Jian Wan, Bo Guan, Wei Zhang and Naixue Xiong read, polished and approved the final manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Analytical FMSE Results for White Noises

In this appendix, we provide the analytical derivations of FMSE for white noise with a Gaussian distribution. To make the derivation tractable, we assume the noise is Gaussian with only linear correlations. Throughout, we use P(·) for probability distributions and p(·) for probability density functions.
For the case m = 1, since a data point of white noise is uncorrelated with its preceding data points, FMSE equals the negative natural logarithm of the weighted probability that the distance between any two data points lies within the tolerance f. Then we have:
$$ P_f\left(\left|y_j^{\tau}-y_i^{\tau}\right|\le f\right)=\int_{-\infty}^{+\infty}\left\{\int_{y_i^{\tau}-f}^{y_i^{\tau}+f}p\!\left(y_j^{\tau}\right)dy_j^{\tau}\right\}p\!\left(y_i^{\tau}\right)dy_i^{\tau}-\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{y_i^{\tau}-f}^{y_i^{\tau}+f}p\!\left(y_j^{\tau}\right)\left|y_i^{\tau}-y_j^{\tau}\right|dy_j^{\tau}\right\}p\!\left(y_i^{\tau}\right)dy_i^{\tau} \tag{A1} $$
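As a sanity check, the weighted probability above can be estimated by direct Monte Carlo sampling over pairs of independent Gaussian points. This is only a sketch, assuming numpy; the values σ_τ = 1 and f = 0.5 are illustrative choices, not parameters fixed by the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_tau, f, n = 1.0, 0.5, 1_000_000   # illustrative tolerance and sample size

# Independent Gaussian points of a white noise series at scale tau.
yi = rng.normal(0.0, sigma_tau, n)
yj = rng.normal(0.0, sigma_tau, n)
d = np.abs(yi - yj)

# Weighted probability: full credit at d = 0, linearly discounted credit
# up to the tolerance f, zero credit beyond it.
p_f = np.mean(np.where(d <= f, 1.0 - d / f, 0.0))
fmse_m1 = -np.log(p_f)   # FMSE at m = 1 for white noise
print(p_f, fmse_m1)      # p_f comes out close to 0.1396 for these parameters
```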
Without loss of generality, we consider a white noise with a Gaussian distribution of mean zero and standard deviation σ. The coarse-grained white noise time series still has a mean of zero, but its standard deviation decreases from σ to σ_τ:
$$ \sigma_\tau=\frac{\sigma}{\sqrt{\tau}} \tag{A2} $$
where τ is the time scale and σ_τ denotes the standard deviation of the coarse-grained white noise time series. Then, the first part of Equation (A1) can be derived as follows:
$$ \int_{-\infty}^{+\infty}\left\{\int_{y_i^{\tau}-f}^{y_i^{\tau}+f}p\!\left(y_j^{\tau}\right)dy_j^{\tau}\right\}p\!\left(y_i^{\tau}\right)dy_i^{\tau}=\frac{1}{2\sigma_\tau\sqrt{2\pi}}\int_{-\infty}^{+\infty}\left\{\operatorname{erf}\!\left(\frac{t+f}{\sigma\sqrt{2/\tau}}\right)-\operatorname{erf}\!\left(\frac{t-f}{\sigma\sqrt{2/\tau}}\right)\right\}e^{-t^2\tau/2\sigma^2}\,dt \tag{A3} $$
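The relation σ_τ = σ/√τ used above can be verified empirically. The sketch below (assuming numpy and the standard non-overlapping-window coarse-graining of MSE) averages a white noise series at scale τ = 4 and compares the resulting standard deviation with σ/√τ:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, tau, n = 1.0, 4, 1_000_000

x = rng.normal(0.0, sigma, n)                       # white noise
y = x[: n - n % tau].reshape(-1, tau).mean(axis=1)  # coarse-grain: window means

print(y.mean(), y.std(), sigma / np.sqrt(tau))      # mean ~ 0, std ~ 0.5
```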
Next, we derive the second part of Equation (A1). To keep the symbols easy to distinguish, we let $t$ represent $y_i^{\tau}$ and $x$ represent $y_j^{\tau}$; then we have:
$$ \begin{aligned}
\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{y_i^{\tau}-f}^{y_i^{\tau}+f}p\!\left(y_j^{\tau}\right)\left|y_i^{\tau}-y_j^{\tau}\right|dy_j^{\tau}\right\}p\!\left(y_i^{\tau}\right)dy_i^{\tau}
&=\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{t-f}^{t+f}p(x)\left|t-x\right|dx\right\}p(t)\,dt\\
&=2\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{t}^{t+f}p(x)(x-t)\,dx\right\}p(t)\,dt\\
&=2\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{t}^{t+f}p(x)\,x\,dx\right\}p(t)\,dt-2\int_{-\infty}^{+\infty}\left\{\frac{1}{f}\int_{t}^{t+f}p(x)\,t\,dx\right\}p(t)\,dt\\
&=2\int_{-\infty}^{+\infty}\frac{\sigma_\tau}{f\sqrt{2\pi}}\left\{e^{-t^2/2\sigma_\tau^2}-e^{-(t+f)^2/2\sigma_\tau^2}\right\}p(t)\,dt+2\int_{-\infty}^{+\infty}\frac{t}{2f}\left\{\operatorname{erf}\!\left(\frac{t}{\sigma_\tau\sqrt{2}}\right)-\operatorname{erf}\!\left(\frac{t+f}{\sigma_\tau\sqrt{2}}\right)\right\}p(t)\,dt\\
&=\frac{\sigma_\tau}{f\sqrt{\pi}}-\frac{\sigma_\tau}{f\sqrt{\pi}}e^{-f^2/4\sigma_\tau^2}+\frac{1}{f\sigma_\tau\sqrt{2\pi}}\int_{-\infty}^{+\infty}t\operatorname{erf}\!\left(\frac{t}{\sigma_\tau\sqrt{2}}\right)e^{-t^2/2\sigma_\tau^2}dt-\frac{1}{f\sigma_\tau\sqrt{2\pi}}\int_{-\infty}^{+\infty}t\operatorname{erf}\!\left(\frac{t+f}{\sigma_\tau\sqrt{2}}\right)e^{-t^2/2\sigma_\tau^2}dt
\end{aligned} \tag{A4} $$
Equations (A3) and (A4) can be approximated numerically. For example, we can set the following conditions for the numerical calculation: (1) $dt \approx \Delta t = 1/5000$; (2) the integration range is $[-3, 3] = [-(N/2)\Delta t, (N/2)\Delta t]$, with N = 30,000. The values calculated from the above analytical equations are in good agreement with those obtained by running the FMSE algorithm on simulated white noise time series. The derivation of analytical FMSE values parallels that of MSE; we refer readers to reference [17] for more detailed information.
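The numerical scheme just described can be sketched as follows (Python with numpy; σ_τ = 1 and f = 0.5 are illustrative choices not fixed by the text). It sums Equations (A3) and (A4) on the stated grid and combines them into the weighted probability of Equation (A1):

```python
import math
import numpy as np

erf = np.vectorize(math.erf)

sigma_t, f = 1.0, 0.5            # illustrative sigma_tau and tolerance
dt = 1.0 / 5000                  # step size from the text
t = np.arange(-3.0, 3.0, dt)     # integration range [-3, 3], N = 30,000 points
p = np.exp(-t**2 / (2 * sigma_t**2)) / (sigma_t * math.sqrt(2 * math.pi))

# Equation (A3): unweighted probability that |yj - yi| <= f.
part1 = dt * np.sum(
    0.5 * (erf((t + f) / (sigma_t * math.sqrt(2)))
           - erf((t - f) / (sigma_t * math.sqrt(2)))) * p)

# Equation (A4): the graded (distance-weighted) discount term.
part2 = dt * np.sum(
    2 * (sigma_t / (f * math.sqrt(2 * math.pi)))
    * (np.exp(-t**2 / (2 * sigma_t**2))
       - np.exp(-(t + f)**2 / (2 * sigma_t**2))) * p
) + dt * np.sum(
    (t / f) * (erf(t / (sigma_t * math.sqrt(2)))
               - erf((t + f) / (sigma_t * math.sqrt(2)))) * p)

p_f = part1 - part2              # weighted probability of Equation (A1)
print(part1, part2, p_f, -math.log(p_f))
```

For these illustrative parameters the two parts evaluate to roughly 0.276 and 0.137, giving a weighted probability near 0.140 and an m = 1 entropy of about 1.97.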

References

  1. Pardo, J.; Zamoramartínez, F.; Botellarocamora, P. Online learning algorithm for time series forecasting suitable for low cost wireless sensor networks nodes. Sensors 2015, 15, 9277–9304. [Google Scholar] [CrossRef] [PubMed]
  2. Xue, W.; Ma, J.J.; Sheng, W.; Bi, D.W. Time series forecasting energy-efficient organization of wireless sensor networks. Sensors 2007, 7, 1766–1792. [Google Scholar]
  3. Luo, C.; Zhang, Y.; Xie, W.X. Traffic regulation based congestion control algorithm in sensor networks. J. Inf. Hiding Multimedia Signal Process. 2014, 5, 187–198. [Google Scholar]
  4. Sun, Y.; Li, M.; Xu, P. A cross-layer congestion control algorithm based on traffic prediction in wireless sensor networks. Appl. Mech. Mater. 2013, 397–400, 2641–2646. [Google Scholar] [CrossRef]
  5. Serdio, F.; Lughofer, E.; Pichler, K.; Buchegger, T.; Pichler, M.; Efendic, H. Fault detection in multi-sensor networks based on multivariate time-series models and orthogonal transformations. Inf. Fusion 2014, 20, 272–291. [Google Scholar] [CrossRef]
  6. Khaleghi, B.; Khamis, A.; Karray, F.O.; Razavi, S.N. Multisensor data fusion: A review of the state-of-the-art. Inf. Fusion 2013, 14, 28–44. [Google Scholar] [CrossRef]
  7. Chen, Y.; Liu, G.; Yang, H. Sparse particle filtering for modeling space-time dynamics in distributed sensor networks. In Proceedings of the 2014 IEEE International Conference on Automation Science and Engineering (CASE), Taipei, Taiwan, 18–22 August 2014; pp. 626–631. [Google Scholar]
  8. Gu, Y.; Mccallum, A.; Towsley, D. Detecting anomalies in network traffic using maximum entropy estimation. In Proceedings of the ACM SIGCOMM Conference on Internet Measurement (IMC), Berkeley, CA, USA, 19–21 October 2005; pp. 345–350. [Google Scholar]
  9. Riihijärvi, J.; Wellens, M.; Mahonen, P. Measuring complexity and predictability in networks with multiscale entropy analysis. In Proceedings of the IEEE International Conference on Computer Communications (INFOCOM), Rio de Janeiro, Brazil, 19–25 April 2009; pp. 1107–1115. [Google Scholar]
  10. Steiger, M.; Bernard, J.; Mittelstädt, S.; Lücke-Tieke, H.; Keim, D.; May, T. Visual Analysis of Time-Series Similarities for Anomaly Detection in Sensor Networks. Comput. Graph. Forum 2014, 33, 401–410. [Google Scholar] [CrossRef]
  11. Kasetty, S.; Stafford, C.; Walker, G.P. Real-Time Classification of Streaming Sensor Data. In Proceedings of the 20th IEEE International Conference on Tools with Artificial Intelligence, Dayton, OH, USA, 3–5 November 2008; pp. 149–156. [Google Scholar]
  12. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [PubMed]
  13. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049. [Google Scholar] [PubMed]
  14. Wei, Q.; Liu, Q.; Fan, S.Z.; Lu, C.W.; Lin, T.Y.; Abbod, M.F.; Shieh, J.S. Analysis of EEG via Multivariate Empirical Mode Decomposition for Depth of Anesthesia Based on Sample Entropy. Entropy 2013, 15, 3458–3470. [Google Scholar] [CrossRef]
  15. Lake, D.E.; Richman, J.S.; Griffin, M.P.; Moorman, J.R. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2002, 283, 789–797. [Google Scholar] [CrossRef] [PubMed]
  16. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of complex physiologic time series. Phys. Rev. Lett. 2002, 89, 068102. [Google Scholar] [CrossRef] [PubMed]
  17. Costa, M.; Goldberger, A.L.; Peng, C.K. Multiscale entropy analysis of biological signals. Phys. Rev. E 2005, 71, 021906. [Google Scholar] [CrossRef] [PubMed]
  18. Grassberger, P. Information and Complexity Measures in Dynamical Systems. In Information Dynamics; Atmanspacher, H., Scheingraber, H., Eds.; Springer: New York, NY, USA, 1991; pp. 15–33. [Google Scholar]
  19. Ge, J.Y.; Zhou, P.; Zhao, X.; Liu, H.Y. Multiscale entropy analysis of EEG signal. Comput. Eng. Appl. 2009, 45, 13–15. [Google Scholar]
  20. Zhang, L.; Huang, W.Y.; Xiong, G.L. Assessment of rolling element bearing fault severity using multi-scale entropy. J. Vib. Shock 2014, 33, 185–189. [Google Scholar]
  21. Xie, Z.M. Multiscale entropy method for analysis of complex geophysical signals. Technol. Earthq. Disaster Prev. 2009, 4, 380–385. [Google Scholar]
  22. Chen, X.M.; Wang, H.Q.; Lin, J.Y.; Feng, G.S.; Zhao, C. Network Traffic Analysis for Mobile Terminal Based Multi-scale Entropy. In Proceedings of the Asia-Pacific Services Computing Conference, Fuzhou, China, 4–6 December 2014; pp. 74–80. [Google Scholar]
  23. McBride, J.; Zhao, X.P.; Munro, N.; Jicha, G. EEG multiscale entropy dynamics in mild cognitive impairment and early Alzheimer’s disease. In Proceedings of the Biomedical Science and Engineering Center Conference (BSEC), Oak Ridge, TN, USA, 6–8 May 2014; pp. 1–4. [Google Scholar]
  24. Zhang, X.; Chen, X.; Barkhaus, P.E. Multiscale entropy analysis of different spontaneous motor unit discharge patterns. IEEE J. Biomed. Health Inf. 2013, 17, 470–476. [Google Scholar] [CrossRef] [PubMed]
  25. Martina, E.; Rodriguez, E.; Perez, R.E.; Ramirez, J.A. Multiscale entropy analysis of crude oil price dynamics. Energy Econ. 2011, 33, 936–947. [Google Scholar] [CrossRef]
  26. Valenza, G.; Nardelli, M.; Bertschy, G.; Lanata, A.; Scilingo, E.P. Mood states modulate complexity in heartbeat dynamics: A multiscale entropy analysis. EPL 2014, 107, 109–166. [Google Scholar] [CrossRef]
  27. Tarquis, A.M.; Bird, N.R.A.; Whitmore, A.P.; Cartagena, M.C.; Pachepsky, Y. Multiscale entropy-based analysis of soil transect data. Vadose Zone J. 2008, 7, 563–569. [Google Scholar] [CrossRef]
  28. Xia, J.; Shang, P.; Wang, J.; Shi, W. Classifying of financial time series based on multiscale entropy and multiscale time irreversibility. Phys. A Stat. Mech. Appl. 2014, 400, 151–158. [Google Scholar] [CrossRef]
  29. Chou, C.M. Applying Multiscale Entropy to the Complexity Analysis of Rainfall-Runoff Relationships. Entropy 2012, 14, 945–957. [Google Scholar] [CrossRef]
  30. Zheng, J.D.; Cheng, J.S.; Hu, S.Y. Rotor Fault Diagnosis Based on Multiscale Entropy. J. Vibr. Meas. Diagnosis 2013, 33, 294–297. [Google Scholar]
  31. Liu, Q.; Wei, Q.; Fan, S.Z.; Lu, C.W.; Lin, T.Y.; Abbod, M.F.; Shieh, J.S. Adaptive Computation of Multiscale Entropy and its Application in EEG Signals for Monitoring Depth of Anesthesia During Surgery. Entropy 2012, 14, 978–992. [Google Scholar] [CrossRef]
  32. Wei, Q.; Liu, D.H.; Wang, K.H.; Liu, Q.; Abbod, M.F.; Jiang, B.C.; Chen, K.P.; Wu, C.; Shieh, J.S. Multivariate Multiscale Entropy Applied to Center of Pressure Signals Analysis: An Effect of Vibration Stimulation of Shoes. Entropy 2012, 14, 2157–2172. [Google Scholar] [CrossRef]
  33. Wu, S.D.; Wu, C.W.; Lee, K.Y.; Lin, S.G. Modified multiscale entropy for short-term time series analysis. Physica A 2013, 392, 5865–5873. [Google Scholar] [CrossRef]
  34. Pan, J.; Hu, H.; Liu, X.; Hu, Y. Multiscale Entropy Analysis on Human Operating Behavior. Entropy 2016, 18, 3. [Google Scholar] [CrossRef]
  35. Wu, S.D.; Wu, C.W.; Lin, S.G.; Wang, C.C.; Lee, K.Y. Time Series Analysis Using Composite Multiscale Entropy. Entropy 2013, 15, 1069–1084. [Google Scholar] [CrossRef]
  36. Wu, S.D.; Wu, C.W.; Lin, S.G.; Lee, K.Y.; Peng, C.K. Analysis of complex time series using refined composite multiscale entropy. Phys. Lett. A 2014, 378, 1369–1374. [Google Scholar] [CrossRef]
  37. Niu, H.L.; Wang, J. Quantifying complexity of financial short-term time series by composite multiscale entropy measure. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 375–382. [Google Scholar] [CrossRef]
  38. Niu, H.L.; Wang, J. Entropy and Recurrence Measures of a Financial Dynamic System by an Interacting Voter System. Entropy 2015, 17, 2590–2605. [Google Scholar] [CrossRef]
  39. Borowiec, M.; Rysak, A.; Betts, D.-N.; Bowen, C.R.; Kim, H.A.; Litak, G. Complex response of a bistable laminated plate: Multiscale entropy analysis. Eur. Phys. J. Plus 2014, 129, 1–7. [Google Scholar] [CrossRef]
  40. Humeau-Heurtier, A.; Wu, C.W.; Wu, S.D. Refined Composite Multiscale Permutation Entropy to Overcome Multiscale Permutation Entropy Length Dependence. IEEE Signal Process. Lett. 2015, 22, 2364–2367. [Google Scholar] [CrossRef]
  41. Humeau-Heurtier, A. The Multiscale Entropy Algorithm and its Variants: A Review. Entropy 2015, 17, 3110–3123. [Google Scholar] [CrossRef]
  42. Case Western Reserve University Bearing Data Center Website. Available online: http://www.oalib.com/references/13155135/ (accessed on 16 September 2014).
Figure 1. A time series for illustrating the calculation of flexible multiscale entropy.
Figure 2. MSE values for white noise signals with different lengths.
Figure 3. CMSE values for white noise signals with different lengths.
Figure 4. FMSE values for white noise signals with different lengths.
Figure 5. MSE values for 1/f noise signals with different lengths.
Figure 6. CMSE values for 1/f noise signals with different lengths.
Figure 7. FMSE values for 1/f noise signals with different lengths.
Figure 8. CVs of white noise with length 1000.
Figure 9. CVs of white noise with length 10,000.
Figure 10. CVs of 1/f noise with length 1000.
Figure 11. CVs of 1/f noise with length 10,000.
Figure 12. The means of MSE, CMSE, FMSE values on bearing vibration data (1730 rpm, 7 mils). (a) Normal state; (b) Outer race fault (3 o’clock position); (c) Outer race fault (6 o’clock position); (d) Outer race fault (12 o’clock position); (e) Ball fault; (f) Inner race fault.
Figure 13. The CVs of MSE, CMSE, FMSE values on bearing vibration data (1730 rpm, 7 mils). (a) Normal state; (b) Outer race fault (3 o’clock position); (c) Outer race fault (6 o’clock position); (d) Outer race fault (12 o’clock position); (e) Ball fault; (f) Inner race fault.
Table 1. CVs with different entropy and scale factor.

| Data Length | Noise | Entropy | Scale 1 | Scale 8 | Scale 16 | Scale 24 | Scale 32 | Scale 40 | Decrease in CV |
|---|---|---|---|---|---|---|---|---|---|
| 1000 | white noise | MSE | 0.023 | 0.074 | 0.136 | 0.171 | 0.206 | 0.283 | 55.90% |
| | | CMSE | 0.023 | 0.039 | 0.074 | 0.103 | 0.148 | 0.203 | 29.28% |
| | | FMSE | 0.02 | 0.03 | 0.055 | 0.073 | 0.102 | 0.132 | |
| | 1/f noise | MSE | 0.042 | 0.119 | 0.261 | 0.52 | 0.848 | 1.165 | 65.72% |
| | | CMSE | 0.042 | 0.071 | 0.095 | 0.195 | 0.381 | 0.49 | 19.57% |
| | | FMSE | 0.035 | 0.06 | 0.102 | 0.143 | 0.285 | 0.415 | |
| 2000 | white noise | MSE | 0.012 | 0.04 | 0.082 | 0.116 | 0.142 | 0.154 | 51.92% |
| | | CMSE | 0.012 | 0.03 | 0.051 | 0.074 | 0.096 | 0.116 | 29.01% |
| | | FMSE | 0.011 | 0.024 | 0.037 | 0.052 | 0.066 | 0.076 | |
| | 1/f noise | MSE | 0.038 | 0.073 | 0.116 | 0.162 | 0.289 | 0.34 | 62.38% |
| | | CMSE | 0.038 | 0.052 | 0.065 | 0.081 | 0.095 | 0.089 | 9.42% |
| | | FMSE | 0.032 | 0.043 | 0.054 | 0.067 | 0.093 | 0.105 | |
| 4000 | white noise | MSE | 0.007 | 0.027 | 0.054 | 0.064 | 0.101 | 0.109 | 48.29% |
| | | CMSE | 0.007 | 0.02 | 0.036 | 0.053 | 0.072 | 0.084 | 29.89% |
| | | FMSE | 0.006 | 0.016 | 0.027 | 0.037 | 0.049 | 0.055 | |
| | 1/f noise | MSE | 0.033 | 0.049 | 0.069 | 0.074 | 0.126 | 0.163 | 52.78% |
| | | CMSE | 0.033 | 0.035 | 0.042 | 0.052 | 0.056 | 0.066 | 16.79% |
| | | FMSE | 0.027 | 0.028 | 0.035 | 0.043 | 0.046 | 0.055 | |
| 10,000 | white noise | MSE | 0.003 | 0.017 | 0.029 | 0.04 | 0.055 | 0.063 | 48.37% |
| | | CMSE | 0.003 | 0.012 | 0.02 | 0.03 | 0.039 | 0.047 | 29.28% |
| | | FMSE | 0.003 | 0.01 | 0.015 | 0.021 | 0.027 | 0.03 | |
| | 1/f noise | MSE | 0.02 | 0.023 | 0.033 | 0.047 | 0.051 | 0.07 | 43.74% |
| | | CMSE | 0.02 | 0.02 | 0.022 | 0.027 | 0.035 | 0.039 | 17.6% |
| | | FMSE | 0.016 | 0.016 | 0.018 | 0.022 | 0.029 | 0.032 | |
Table 2. The CVs with different entropy and scale factor of vibration signal.

| Fault Class | Entropy | Scale 1 | Scale 8 | Scale 16 | Scale 24 | Scale 32 | Scale 40 | Decrease in CV |
|---|---|---|---|---|---|---|---|---|
| N | MSE | 0.018 | 0.064 | 0.113 | 0.154 | 0.171 | 0.209 | 65.27% |
| | CMSE | 0.018 | 0.03 | 0.051 | 0.062 | 0.067 | 0.081 | 18.45% |
| | FMSE | 0.014 | 0.025 | 0.043 | 0.051 | 0.055 | 0.063 | |
| O3 | MSE | 0.03 | 0.066 | 0.092 | 0.11 | 0.143 | 0.147 | 59.99% |
| | CMSE | 0.03 | 0.036 | 0.057 | 0.061 | 0.09 | 0.069 | 24.87% |
| | FMSE | 0.022 | 0.029 | 0.045 | 0.047 | 0.062 | 0.047 | |
| O6 | MSE | 0.032 | 0.06 | 0.095 | 0.102 | 0.164 | 0.173 | 57.43% |
| | CMSE | 0.032 | 0.042 | 0.058 | 0.069 | 0.08 | 0.085 | 26.32% |
| | FMSE | 0.023 | 0.034 | 0.045 | 0.052 | 0.054 | 0.055 | |
| O12 | MSE | 0.075 | 0.069 | 0.107 | 0.124 | 0.151 | 0.172 | 47.7% |
| | CMSE | 0.075 | 0.051 | 0.07 | 0.076 | 0.101 | 0.106 | 23.1% |
| | FMSE | 0.058 | 0.041 | 0.056 | 0.06 | 0.074 | 0.075 | |
| B | MSE | 0.021 | 0.074 | 0.118 | 0.126 | 0.124 | 0.138 | 62.71% |
| | CMSE | 0.021 | 0.041 | 0.055 | 0.05 | 0.059 | 0.057 | 20.63% |
| | FMSE | 0.016 | 0.033 | 0.045 | 0.04 | 0.045 | 0.042 | |
| I | MSE | 0.027 | 0.073 | 0.103 | 0.112 | 0.124 | 0.131 | 68.87% |
| | CMSE | 0.027 | 0.032 | 0.042 | 0.044 | 0.043 | 0.05 | 20.92% |
| | FMSE | 0.019 | 0.028 | 0.035 | 0.034 | 0.032 | 0.036 | |

Share and Cite

MDPI and ACS Style

Zhou, R.; Yang, C.; Wan, J.; Zhang, W.; Guan, B.; Xiong, N. Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks. Sensors 2017, 17, 787. https://doi.org/10.3390/s17040787

