Entropy 2017, 19(11), 623; doi:10.3390/e19110623

Article
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Duo Hao 1, Qiuming Li 2 and Chengwei Li 1,*
1 School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China
2 School of Astronautics, Harbin Institute of Technology, Harbin 150001, China
* Correspondence: Tel.: +86-451-8641-5178
Received: 11 September 2017 / Accepted: 16 November 2017 / Published: 18 November 2017

Abstract: Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference between the probability distributions of two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the medium filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
Keywords:
digital image stabilization; variational mode decomposition; relative entropy; jitter motion; intentional motion

1. Introduction

Digital cameras are frequently used to record video information. However, cameras mounted on vehicles often suffer from image shake caused by the vehicles’ motion [1,2]. In particular, serious image shake occurs on complex terrain or under strenuous motion, blurring the video sequences captured by the cameras. Image shake not only reduces the accuracy of observation but also increases users’ eye strain. To solve this problem, image stabilization has been widely studied in recent years [3,4,5].
Recent image stabilization systems can be generally classified into four categories: (1) optical image stabilization systems, which use an optical mechanism to stabilize video sequences with high accuracy and speed [6,7]; (2) electronic image stabilization systems, which use accelerometers or motion gyroscopes to detect camera motion and then compensate for the jitter motion [8]; (3) orthogonal-transfer charge-coupled device (CCD) stabilization systems, which use CCDs to measure image displacement and shift the deviation according to the motion of bright stars [9]; and (4) digital image stabilization (DIS), which estimates the global motion vector (GMV) and removes unintentional motion components from the GMV using image processing algorithms to generate stable video sequences [10,11,12]. DIS methods outperform the other image stabilization methods because they are more flexible and hardware-independent.
Motion separation is the most critical step in DIS. In signal processing terms, separating jitter motion from the GMV can be considered a noise removal problem: intentional motions are the useful signal, whereas jitter motions are the noise. Therefore, various traditional filtering methods can be used to remove jitter motions. The MF has a simple mathematical model and is a widely used scheme [13,14]. In this method, the intentional motion vector is smoothed by averaging GMVs within a window; however, MF performance depends highly on the window size. Another traditional method is the KF, which estimates intentional motions using a dynamic motion model [15,16,17]. The KF uses the current observation and the previously estimated state to generate the intentional motion. The KF is easy to design but is unsuitable for nonlinear conditions [18]. The wavelet decomposition (WD) method was proposed to handle nonlinear conditions [19]; however, it must determine a proper wavelet basis function in advance, which becomes very difficult in complex conditions. Recently, many empirical mode decomposition (EMD)-based DIS algorithms have been proposed [20,21]. These techniques can adaptively separate jitter and intentional motions from the GMV. However, EMD-based methods have defects, such as the lack of a precise mathematical model, sensitivity to noise and sampling, and mode mixing, which may result in inaccurate separation [22,23].
In the current study, a hybrid DIS method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). First, the GMV of the video sequence is estimated using the scale-invariant feature transform (SIFT) feature matching algorithm. Then, the GMV is decomposed into several band-limited modes via VMD. Intentional motions possess low frequency and high amplitude because they are much slower than the frame rate, whereas jitter motions exhibit the opposite nature [13,20]; thus, jitter and intentional motions usually exhibit different statistical properties. Therefore, the RE value between two jitter motion modes is low, whereas the RE value between an intentional and a jitter motion mode is high. Based on this fact, the jitter motion modes can be determined. The summation of the jitter motion modes constitutes the jitter motion vector, while the subtraction of the resulting sum from the GMV represents the intentional motion vector. Several algorithms are then compared, and the experimental results show that the proposed method performs better than the other algorithms.
The main contributions of this work are as follows: (1) A VMD-based motion separation method is proposed. VMD divides the GMV into several narrow-banded modes, each with a different center frequency, and the decomposed modes can reproduce the original GMV. (2) An RE-based method is proposed to identify the relevant modes. The proposed method utilizes statistical information to represent the internal relationship between different modes. Thus, compared with other existing criteria (the Hausdorff distance [24], power of amplitude [20], and correlation coefficients [25]), the proposed method better differentiates the intentional and jitter motions.
The rest of this paper is organized as follows: Section 2 introduces the related work, including the mathematical model of jitter motions, the VMD theory, and the RE theory. Section 3 illustrates the proposed DIS framework. Section 4 provides the experimental results of the proposed method compared with other methods. Finally, conclusions are drawn in Section 5.

2. Related Work

2.1. Mathematical Model of Jitter Motion

In a vehicle-mounted camera system, irregular pavement, the engine, the transmission system, and tire vibration all cause random jitters of the camera holder, which make the video sequences unstable. Among these factors, irregular pavement is the most serious. The jitter level of the camera pan has a strong relationship with road roughness (RR) [26]. The statistical characteristics of RR can be described by the power spectral density of pavement displacement:
$$G_d(n) = G_d(n_0) \left( \frac{n}{n_0} \right)^{-W}, \tag{1}$$
where $n$ denotes the spatial frequency; $n_0$ is the reference spatial frequency, which can be set as $0.1\ \mathrm{m}^{-1}$; $G_d(n_0)$ is the coefficient of RR; and $W$ is the frequency index, which is set as 2.
Aside from RR, vehicle speed affects the frequency of jitter motions [27], as expressed as follows:
$$f = u \times n, \tag{2}$$
$$J(f) = \frac{1}{u} G_d(n), \tag{3}$$
where $J(f)$ is the temporal spectral density of RR, which reflects the frequency of jitter motions, and $u$ represents the vehicle speed. Equation (3) indicates that the frequency of jitter motions depends only on vehicle speed and RR. In general, the frame rate of the video is considerably higher than the frequency of the motion vector. Thus, RR and vehicle speed can be assumed invariant within a short time, and the frequency of jitter motions is then band-limited with high probability within a short time.
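The relations above can be checked numerically. The sketch below assumes a hypothetical roughness coefficient $G_d(n_0) = 64 \times 10^{-6}\ \mathrm{m}^3$ (an ISO 8608 class-B value chosen for illustration, not one given in this paper) and simply evaluates the spatial PSD and its speed-scaled temporal counterpart:

```python
# Hypothetical roughness coefficient Gd_n0: 64e-6 m^3 is an assumed ISO 8608
# class-B value, not a value from this paper.
def road_psd(n, Gd_n0=64e-6, n0=0.1, W=2.0):
    """Spatial PSD of pavement displacement, Eq. (1): G_d(n) = G_d(n0)*(n/n0)**(-W)."""
    return Gd_n0 * (n / n0) ** (-W)

def jitter_psd(f, u, **kwargs):
    """Temporal jitter PSD, Eqs. (2)-(3): with n = f/u, J(f) = G_d(n) / u."""
    return road_psd(f / u, **kwargs) / u
```

With $W = 2$, substituting $n = f/u$ shows that $J(f)$ grows linearly with vehicle speed at a fixed temporal frequency, consistent with rougher rides at higher speeds.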

2.2. VMD Theory

VMD differs from traditional recursive decomposition models: it concurrently searches for the modes and their center frequencies. Through VMD, the signal is decomposed into several band-limited modes $u_k\ (k = 1, 2, \ldots, K)$, where $K$ is the number of modes. Each mode converges around its center frequency $\omega_k\ (k = 1, 2, \ldots, K)$. The corresponding variational problem is constructed as shown in Equation (4) [28]:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\}, \quad \mathrm{s.t.} \quad \sum_{k} u_k = f, \tag{4}$$
where $f$ is the input signal, $\delta(t)$ is the Dirac distribution, $t$ is the time index, and $*$ denotes convolution.
To solve Equation (4), a quadratic penalty term α and Lagrangian multiplier λ are used to transform the constrained variational problem into the following unconstrained variational problem:
$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_k u_k(t) \right\|_2^2 + \left\langle \lambda(t),\ f(t) - \sum_k u_k(t) \right\rangle, \tag{5}$$
Then, using the alternating direction method of multipliers (updating $u_k^{n+1}$, $\omega_k^{n+1}$, and $\lambda^{n+1}$ alternately), the solution of the optimization problem can be obtained by searching for the saddle point of Equation (5) [29]. VMD is implemented as follows:
(1)
Initialize the modes $u_k$, the center pulsations $\omega_k$, the Lagrangian multiplier $\lambda$, and the maximum number of iterations $N$ (5000 in this paper). The cycle index is set to $n = 0$.
(2)
The cycle is started, n = n + 1 .
(3)
The first inner loop is executed, and $u_k$ is updated according to the following function:
$$u_k^{n+1} = \arg\min_{u_k} L\left( \{u_{i<k}^{n+1}\}, \{u_{i \geq k}^{n}\}, \{\omega_i^{n}\}, \lambda^{n} \right). \tag{6}$$
(4)
The second inner loop is executed, and ω k is updated according to the following function:
$$\omega_k^{n+1} = \arg\min_{\omega_k} L\left( \{u_i^{n+1}\}, \{\omega_{i<k}^{n+1}\}, \{\omega_{i \geq k}^{n}\}, \lambda^{n} \right). \tag{7}$$
(5)
λ is updated according to the following:
$$\lambda^{n+1} = \lambda^{n} + \tau \left( f - \sum_k u_k^{n+1} \right). \tag{8}$$
(6)
Steps (2)–(5) are repeated until convergence, as follows:
$$\sum_k \frac{\left\| u_k^{n+1} - u_k^{n} \right\|_2^2}{\left\| u_k^{n} \right\|_2^2} < \varepsilon, \tag{9}$$
where $\tau$ is an update parameter and $\varepsilon$ is a small tolerance (0.00001 in this paper). The updates of $\hat{u}_k$ and $\omega_k$ can be solved in the spectral domain, as follows:
$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}(\omega)/2}{1 + 2\alpha (\omega - \omega_k)^2}, \tag{10}$$
$$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k(\omega) \right|^2 d\omega}{\int_0^{\infty} \left| \hat{u}_k(\omega) \right|^2 d\omega}. \tag{11}$$
Then, the obtained frequency-domain modes $\hat{u}_k(\omega)$ are transformed into the time domain via the inverse Fourier transform. According to Dragomiretskiy’s theory, two parameters influence the result: the penalty parameter $\alpha$ and the mode number $K$ [28]. First, Dragomiretskiy suggested that when the principal frequencies of the sub-components are not known a priori, a low $\alpha$ is preferred because $\omega_k$ then gains the mobility to move to the appropriate modes [28]. In the proposed method, a low $\alpha$ (100) is used because no prior frequencies of the sub-components are given. Second, if $K$ is too small, one mode may be shared by neighboring modes (underbinning), and the intentional and jitter motions may then fall into the same mode, which impedes good results; if $K$ is too large, several additional modes consisting of low-structure texture appear (overbinning), which does not significantly improve performance but increases computation. In our simulations, $K = 5$ meets the requirement of most tests.
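The five-step loop above can be condensed into a short NumPy sketch. This is a simplified illustration rather than the authors' implementation: it skips the boundary mirroring of the original VMD algorithm, works directly on the positive half of the spectrum, and uses uniformly spaced initial center frequencies. The defaults follow the parameter values quoted in this section.

```python
import numpy as np

def vmd(signal, K=5, alpha=100.0, tau=0.0, eps=1e-5, max_iter=5000):
    """Simplified VMD: alternating spectral-domain Wiener-filter and
    center-frequency updates (no boundary mirroring)."""
    N = len(signal)
    freqs = np.arange(N) / N - 0.5                       # fftshifted frequency axis
    f_hat = np.fft.fftshift(np.fft.fft(signal))
    f_hat[: N // 2] = 0                                  # keep the positive-frequency half
    u_hat = np.zeros((K, N), dtype=complex)              # mode spectra
    omega = np.linspace(0.0, 0.5, K, endpoint=False)     # initial center frequencies
    lam = np.zeros(N, dtype=complex)                     # Lagrangian multiplier spectrum
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter update of mode k around its current center frequency
            u_hat[k] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            # center of gravity of the mode's positive-half power spectrum
            power = np.abs(u_hat[k, N // 2:]) ** 2
            omega[k] = (freqs[N // 2:] @ power) / power.sum()
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))    # dual ascent on the multiplier
        change = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-30)
        if change < eps:                                 # relative-change convergence test
            break
    # rebuild real-valued modes by restoring conjugate symmetry
    modes = np.zeros((K, N))
    for k in range(K):
        full = np.zeros(N, dtype=complex)
        full[N // 2:] = u_hat[k, N // 2:]
        full[1: N // 2] = np.conj(full[N // 2 + 1:][::-1])
        modes[k] = np.real(np.fft.ifft(np.fft.ifftshift(full)))
    return modes, omega
```

On a clean two-tone test signal with well-separated frequencies, the recovered center frequencies settle on the tone frequencies and the mode sum reproduces the input.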

2.3. RE Theory

In mathematical statistics, RE measures the difference between two probability distributions [30]. For discrete probability distributions P and Q , RE from Q to P is defined as follows:
$$D(P \,\|\, Q) = \sum_i P(i) \log \left( \frac{P(i)}{Q(i)} \right). \tag{12}$$
The RE between two modes reflects the difference between their probability distributions. In most cases, jitter and intentional motions exhibit different statistical properties: the jitter motion vector is wide-sense stationary or approximately Gaussian, whereas the intentional motion vector is arbitrary. Thus, the RE value between two jitter motion modes is low, whereas that between an intentional and a jitter motion mode is high.
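In practice, the distributions $P$ and $Q$ are estimated from the mode samples, for example by histogramming. A minimal sketch (the bin count and the smoothing floor are illustrative choices, not values from the paper):

```python
import numpy as np

def relative_entropy(x, y, bins=32):
    """RE (KL divergence) between histogram estimates of two modes'
    amplitude distributions; a small floor avoids log(0)."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12   # normalize, then floor to keep the log finite
    q = q / q.sum() + 1e-12
    return float(np.sum(p * np.log(p / q)))
```

Two Gaussian (jitter-like) samples yield a low RE, while a Gaussian sample against a slow sinusoidal (intentional-like) drift yields a much higher RE, matching the selection logic above.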

3. Proposed Digital Image Stabilization Framework

There are three key procedures in the proposed DIS framework, namely, motion estimation, motion separation, and intentional motion vector reconstruction. During the first step, GMV is estimated using the SIFT feature point matching algorithm. Subsequently, VMD is applied to decompose GMV into different modes. RE is used in determining the intentional and jitter motion modes to separate them. Finally, the summation of the jitter motion modes constitutes the jitter motion vector, whereas the subtraction of the resulting sum from the GMV represents the intentional motion vector. The framework of the proposed DIS method is shown in Figure 1.

3.1. Motion Estimation

Lowe proposed SIFT in 1999 [31]. The SIFT feature is robust against rotation, scaling, and illumination changes and is considered one of the best feature extraction methods. SIFT searches for extreme values in the scale space and generates 128-dimensional descriptors. Figure 2 and Figure 3 show the SIFT feature points and the feature point matching results of two test images, respectively. The SIFT feature significantly reduces the probability of mismatch. Nevertheless, false matching can still occur among candidate points, as presented in Figure 3. Matching results may also represent a local motion vector instead of the GMV when SIFT feature points lie on foreground objects. In general, random sample consensus (RANSAC) is used to solve the mismatching problem [32]. Finally, the GMV between two consecutive frames is calculated by averaging the displacements of the different feature points. The motion vector between two arbitrary frames can be obtained as follows: designate a frame as the reference frame, and calculate the motion vectors between the reference frame and the current frame.
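The SIFT/RANSAC pipeline above requires an image-feature library. As a dependency-free stand-in that recovers the same kind of inter-frame global translation, phase correlation can be sketched as follows (an illustrative alternative, not the estimator used in this paper):

```python
import numpy as np

def global_translation(ref, cur):
    """Estimate the integer global shift between two frames by phase correlation:
    the normalized cross-power spectrum peaks at the translation."""
    F_ref = np.fft.fft2(ref)
    F_cur = np.fft.fft2(cur)
    cross = np.conj(F_ref) * F_cur
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))     # near-delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                          # wrap to signed displacements
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Unlike feature matching, this recovers only a single global translation, but it is exact for circularly shifted test frames and needs no keypoint detection.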

3.2. Motion Separation

Although the GMV contains translation, rotation, and scaling motions, each motion component can be analyzed independently [20]. We take the 1D translation displacement as an example in this study.
After motion estimation, the obtained GMV sequence can be considered a time-varying variable $G$. The amplitude of $G$ can be regarded as the motion displacement of the camera. Consider the following typical GMV:
$$G(t) = I(t) + J(t), \tag{13}$$
where $G(t)$ represents the GMV, $I(t)$ is the intentional motion vector, and $J(t)$ is the jitter motion vector. To separate the jitter and intentional motion components, the GMV is decomposed via VMD. For a testing GMV (shown in Figure 4), the generated modes are shown in Figure 5, arranged from low to high frequencies.
On the basis of VMD theory, the relationship between the obtained GMV and its modes is exhibited as follows:
$$G(t) = \sum_{i \in IM} M_i + \sum_{i \in JM} M_i, \tag{14}$$
where $M_i$ denotes the $i$-th mode, and $IM$ and $JM$ are the index sets of the intentional and jitter motion modes, respectively.

3.3. Intentional Motion Vector Reconstruction

In the current study, RE is used to identify the relevance among modes. The first mode is an intentional motion mode because it features the lowest frequency and largest amplitude [21]. The REs between the first mode and the other modes are then calculated in sequence (denoted as $RE_i\ (i = 1, 2, \ldots, K)$). Because the intentional motion is much slower than the frame rate, it shows smooth transitions with high amplitude and low frequency between frames. By contrast, jitter motion is characterized by low amplitude and high frequency and can generally be considered to approximately obey a Gaussian distribution [10,15]. Therefore, the RE value stays low when both modes are intentional motion components; otherwise, it stays high. Modes that exhibit low RE values with the first mode are dominated by intentional motion, whereas the remaining modes are dominated by jitter motion.
The corresponding REs for the modes presented in Figure 5 are shown in Figure 6. $RE_1$ is the smallest, and $RE_2$ stays at a low level. However, a sudden increase is observed at $RE_3$, and the subsequent REs all stay at a high level. From the preceding analysis, the third mode and those behind it correspond to jitter motions, whereas the first and second modes comprise intentional motions.
The following procedures describe the DIS steps:
(1)
Calculate the GMV by SIFT point matching algorithm.
(2)
Decompose the GMV into modes via VMD.
(3)
Calculate the REs between the first mode and other modes.
(4)
If $RE_i$ is smaller than a threshold $T$ (usually, $T = \frac{1}{2} \max(RE_i)$ meets the demands of most situations), then the mode $M_i$ is considered an intentional motion mode.
(5)
Obtain the reconstructed intentional motion by summing the intentional motion modes as follows:
$$\tilde{I}(t) = \sum_{i \in IM} M_i = G(t) - \sum_{i \in JM} M_i. \tag{15}$$
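Steps (3)–(5) reduce to a threshold test on the RE values followed by a summation. A minimal sketch, assuming the VMD modes and their REs against the first mode have already been computed:

```python
import numpy as np

def reconstruct_intentional(modes, re_vals):
    """Steps (4)-(5): modes whose RE with the first mode is below T = max(RE)/2
    are summed into the intentional motion vector; the rest form the jitter vector."""
    re_vals = np.asarray(re_vals, dtype=float)
    T = 0.5 * re_vals.max()
    intentional_mask = re_vals < T          # low RE with the first mode -> intentional
    intentional = modes[intentional_mask].sum(axis=0)
    jitter = modes[~intentional_mask].sum(axis=0)
    return intentional, jitter
```

Because $RE_1 = 0$ (the first mode against itself), the first mode is always classified as intentional, consistent with the assumption above.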

4. Experimental Results and Discussions

4.1. Performance of the RE

To illustrate the effectiveness of RE, three different tests are performed to evaluate mode separation performance. Given a known clean signal $f_h(t)$, the signal is contaminated with different kinds of noise (Gaussian noise, office noise, and factory noise) as follows:
$$f(t) = f_h(t) + n(t), \tag{16}$$
where $n(t)$ is the noise signal with different input SNRs.
The probability density function of Gaussian noise obeys the Gaussian distribution, which includes a fixed mean and variance. Office noise consists of many signals with different frequencies and high amplitude. Factory noise is caused by mechanical shock, rub impact, and air disturbance and includes numerous intermittent and impulse noises. For signals with different noises, we compare several selection criteria, including the Hausdorff distance [24], power amplitude [20], and correlation coefficient [25]. The evaluation steps are as follows. Noises are downloaded from NoiseX-92 database. Signal length is set as 200.
(1)
Noises are added to the original clean signal $f_h(t)$, and the input SNR ranges from −8 dB to 8 dB with an interval of 2 dB.
(2)
Noisy signals are decomposed into several modes via VMD.
(3)
According to different selection criteria, the modes are classified.
(4)
The reconstructed clean signals are calculated by summing the relevant modes.
(5)
Output SNRs are calculated for different reconstructed signals:
$$SNR = 10 \times \log_{10}(P / \bar{P}), \tag{17}$$
where $P$ and $\bar{P}$ correspond to the powers of the original and reconstructed signals, respectively.
The plots of input SNR (SNRin) versus output SNR (SNRout) for the different noisy signals are shown in Figure 7, Figure 8 and Figure 9, respectively. These figures show that the SNRout of the RE selection criterion is higher than that of the other selection criteria, which indicates that the RE selection criterion outperforms the others.

4.2. Performance of the VMD-RE Method in DIS

Several simulation tests are performed to verify the effectiveness of the proposed VMD-RE method. A camera mounted on a holder mechanism is used to capture the video sequences, as shown in Figure 10. In this paper, the SNR and root mean square error (RMSE) are used as evaluation metrics [21]:
$$SNR = 10 \times \log_{10}(P / \bar{P}), \tag{18}$$
$$RMSE = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} (\bar{x}_n - x_n)^2 }, \tag{19}$$
where $P$ and $\bar{P}$ are the powers of the ground truth and the resulting intentional motion, respectively; $N$ is the number of sample points; and $x_n$ and $\bar{x}_n$ are the amplitudes of each point in the ground truth and reconstructed motion vectors, respectively.
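Both metrics follow directly from their definitions. Note that the SNR as stated compares the powers of the two signals themselves; the sketch below follows that literal definition (some DIS papers instead place the residual power in the denominator):

```python
import numpy as np

def rmse(truth, est):
    """Root mean square error between the ground truth and recovered motion."""
    truth, est = np.asarray(truth, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((est - truth) ** 2)))

def snr_db(truth, est):
    """SNR as defined in the text: 10*log10 of the power ratio of the
    ground-truth and recovered motion vectors (equal powers give 0 dB)."""
    P = np.mean(np.asarray(truth, float) ** 2)
    P_bar = np.mean(np.asarray(est, float) ** 2)
    return float(10.0 * np.log10(P / P_bar))
```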
Four typical unstable scouting video sequences are tested. For Test 1, the intentional motion is approximately linear, and the jitter motion obeys a Gaussian distribution with fixed mean and variance. For Test 2, the intentional motion contains multi-frequency components, and the jitter motion obeys a Gaussian distribution. For Test 3, the level of jitter motion varies: the variance is low in the earlier frames and increases over time. For Test 4, the amplitude of the jitter motion remains high compared with that of the intentional motion, and the level of jitter motion is time-varying. Experimental tests are performed using MATLAB® R2013a running on a PC equipped with a 2.60 GHz Intel Core i7-6700HQ CPU and 8 GB RAM. As shown in Figure 11, Figure 12, Figure 13 and Figure 14, four pairs of images are extracted from the different video sequences. The displacement between two frames is obtained using the SIFT feature point matching algorithm. The first picture in each group is the reference frame, whereas the second is the current frame; the blue lines show the image matching results. The actual GMVs, ground truth intentional motions, and retrieved intentional motions are shown in Figure 15, Figure 16, Figure 17 and Figure 18. Table 1 and Table 2 show the RMSE and SNR values obtained using six different DIS algorithms: the MF [11], KF [15], wavelet decomposition (WD) method [19], EMD-based method [20], enhanced EMD-based (E-EMD) method [21], and the proposed method.
First, Table 1 and Table 2 show that the MF generates the poorest results in Tests 1 and 2, the KF in Test 4, and the WD in Test 3; these three methods show unstable performance. MF performance depends highly on window size [11]: a larger window generates a smoother intentional motion vector, and vice versa. In this paper, the window size is set as 5, which is accurate in some conditions but not in others. The KF is not adaptive to the changing jitter levels in Tests 3 and 4, and its stabilization results are insufficiently accurate. The KF requires the observation and transition noises to obey Gaussian distributions with constant variances; however, in many cases the transition variance is time-varying, causing the KF to generate poor results in Test 4. The WD method can hardly select an appropriate wavelet basis function applicable to all conditions [19]; its performance improves if the basis function is well selected, and vice versa. These three traditional methods cannot adapt to changing conditions and cannot be used in complex vehicle-mounted DIS systems. Second, comparing the mode decomposition methods with the traditional methods shows that the former generally perform better. Nevertheless, the EMD method performs well in Tests 3 and 4 but poorly in Tests 1 and 2. This result can be attributed to the difficulty of determining the relevant modes in complex conditions, because the frequency contents of the intentional and jitter motions may overlap (mode mixing). The E-EMD method generates better results than the traditional EMD method (the mode mixing problem is alleviated by adding white noise series to the target data and averaging the corresponding intrinsic mode functions), but its performance remains inferior to that of the proposed method. By contrast, the proposed method adapts to the time-varying jitter motion and generates considerably better results than the other methods. The proposed method produces the lowest RMSE values and the highest SNR values in all tests.

5. Conclusions

This study proposed a DIS method based on VMD and RE. The GMV is estimated using the SIFT feature point matching algorithm and then decomposed via VMD. According to the RE values between modes, the relevant intentional and jitter motion modes are determined. The performance of the proposed method is compared with that of several state-of-the-art methods, and the simulation results show that the proposed method performs better than the related methods in quantitative comparisons of RMSE and SNR values.

Author Contributions

Duo Hao conceived the algorithm and wrote the manuscript. Chengwei Li, Qiuming Li and Duo Hao designed and performed the experiment. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sato, K.; Ishizuka, S.; Nikami, A.; Sato, M. Control techniques for optical image stabilizing system. IEEE Trans. Consum. Electron. 1993, 39, 461–466. [Google Scholar] [CrossRef]
  2. Egusa, Y.; Akahori, H.; Morimura, A.; Wakami, N. An application of fuzzy set theory for an electronic video camera image stabilizer. IEEE Trans. Fuzzy Syst. 1995, 3, 351–356. [Google Scholar] [CrossRef]
  3. Tsytsulin, A.K.; Fakhmi, S.S. Stabilization of images on the basis of a measurement of their displacement, using a photodetector array and two linear photodetectors in combination. J. Opt. Technol. 2012, 79, 727–732. [Google Scholar] [CrossRef]
  4. Xu, L.; Lin, X. Digital Image Stabilization Based on Circular Block Matching. IEEE Trans. Consum. Electron. 2006, 52, 566–574. [Google Scholar]
  5. Jin, J.S.; Zhu, Z.; Xu, G. A stable vision system for moving vehicles. IEEE Trans. Intell. Transp. Syst. 2000, 1, 32–39. [Google Scholar] [CrossRef]
  6. Chiu, C.W.; Chao, P.C.P.; Wu, D.Y. Optimal design of magnetically actuated optical image stabilizer mechanism for cameras in mobile phones via genetic algorithm. IEEE Trans. Magn. 2007, 6, 2582–2584. [Google Scholar] [CrossRef]
  7. Qian, Y.; Li, Y.; Shao, J.; Miao, H. Real-time Image Stabilization for Arbitrary Motion Blurred Image Based on Opto-Electronic Hybrid Joint Transform Correlator. Opt. Express 2011, 19, 10762–10768. [Google Scholar] [CrossRef] [PubMed]
  8. Kinugasa, T.; Yamamoto, N.; Komatsu, H.; Takase, S.; Imaide, T. Electronic image stabilizer for video camera use. IEEE Trans. Consum. Electron. 1990, 36, 520–525. [Google Scholar] [CrossRef]
  9. Burke, B.E.; Reich, R.K.; Savoye, E.D.; Tonry, J.L. An orthogonal-transfer CCD imager. IEEE Trans. Electron. Devices 1994, 41, 2482–2484. [Google Scholar] [CrossRef]
  10. Wang, C.T.; Kim, J.H.; Byun, K.Y.; Ko, S.J. Robust digital image stabilization using Kalman filter. IEEE Trans. Consum. Electron. 2009, 55, 6–14. [Google Scholar] [CrossRef]
  11. Zvantsev, S.P.; Merzlyutin, E.Y. Digital stabilization of images under conditions of planned movement. J. Opt. Technol. 2012, 79, 721–726. [Google Scholar] [CrossRef]
  12. Kumar, S.; Azartash, H.; Biswas, M.; Nguyen, T. Real-Time Affine Global Motion Estimation Using Phase Correlation and Its Application for Digital Image Stabilization. IEEE Trans. Image Process. 2011, 20, 3406–3418. [Google Scholar] [CrossRef] [PubMed]
  13. Ko, S.J.; Lee, S.H.; Lee, K.H. Digital image stabilizing algorithms based on bit-plane matching. IEEE Trans. Consum. Electron. 1998, 44, 617–622. [Google Scholar]
  14. Ko, S.J.; Lee, S.H.; Jeon, S.W.; Kang, E.S. Fast Digital Stabilizer Based on Gray-Coded Bit-Plane Matching. IEEE Trans. Consum. Electron. 1999, 45, 598–630. [Google Scholar]
  15. Kir, B.; Kurt, M.; Urhan, O. Local Binary Pattern Based Fast Digital Image Stabilization. IEEE Signal Process. Lett. 2015, 22, 341–345. [Google Scholar] [CrossRef]
  16. Yang, J.; Schonfeld, D.; Mohamed, M. Robust Video Stabilization Based on Particle Filter Tracking of Projected Camera Motion. IEEE Trans. Circuit Syst. Video Technol. 2009, 19, 945–954. [Google Scholar] [CrossRef]
  17. Ryu, Y.G.; Chung, M.J. Robust Online Digital Image Stabilization Based on Point-Feature Trajectory without Accumulative Global Motion Estimation. IEEE Signal Process. Lett. 2012, 19, 223–226. [Google Scholar] [CrossRef]
  18. Li, C.; Zhan, L.; Shen, L. Friction Signal Denoising Using Complete Ensemble EMD with Adaptive Noise and Mutual Information. Entropy 2015, 17, 5965–5979. [Google Scholar] [CrossRef]
  19. Xia, R.; Ke, M.; Feng, Q.; Wang, Z. Online wavelet denoising via a moving window. Acta Autom. Sin. 2007, 33, 897–901. [Google Scholar] [CrossRef]
  20. Ioannidis, K.; Andreadis, I. A Digital Image Stabilization Method Based on the Hilbert-Huang Transform. IEEE Trans. Instrum. Meas. 2012, 61, 2446–2457. [Google Scholar] [CrossRef]
  21. Hao, D.; Li, Q.; Li, C. Digital image stabilization in mountain areas using complete ensemble empirical mode decomposition with adaptive noise and structural similarity. J. Electron. Imaging 2016, 25, 33007. [Google Scholar] [CrossRef]
  22. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.-C.; Tung, C.C.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  23. Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal. 2009. [Google Scholar] [CrossRef]
  24. Komaty, A.; Boudraa, A.; Dare, D. EMD-based filtering using the Hausdorff distance. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Ho Chi Minh City, Vietnam, 12–15 December 2013; pp. 292–297. [Google Scholar]
  25. Zhang, S.Y.; Liu, Y.Y.; Yang, G.L. EMD interval thresholding denoising based on correlation coefficient to select relevant modes. In Proceedings of the 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 4801–4806. [Google Scholar]
  26. Zhao, Q.; Li, J.; Cui, N. Time field simulation of vibration performance for military automobile under road random input. China Sciencepap. 2012, 7, 862–875. [Google Scholar]
  27. Yang, Y.; Shen, Y.L.; Cao, Y.; Li, T.S. The exploiture of road simulation shaker table and control system. J. Syst. Simul. 2004, 16, 1044–1046. [Google Scholar]
  28. Dragomiretskiy, K.; Zosso, D. Variational Mode Decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  29. Rockafellar, T. A Dual Approach to Solving Nonlinear Programming Problems by Unconstrained Optimization. Math. Program. 1973, 5, 354–373. [Google Scholar] [CrossRef]
  30. Kullback, S.; Leibler, R.A. On Information and Sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
  31. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  32. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. In Readings in Computer Vision: Issues, Problems, Principles, and Paradigms; Fischler, M.A., Ed.; Morgan Kaufmann: San Francisco, CA, USA, 1987; pp. 726–740. [Google Scholar]
Figure 1. Proposed DIS framework based on VMD and RE.
Figure 2. SIFT feature points of testing images.
Figure 3. Matching results of SIFT feature points.
Figure 4. Testing GMV.
Figure 5. Modes decomposed via VMD.
Figure 6. REs between the first mode and other modes.
Figure 7. SNRin vs. SNRout of signal contaminated by Gaussian noise.
Figure 8. SNRin vs. SNRout of signal contaminated by office noise.
Figure 9. SNRin vs. SNRout of signal contaminated by factory noise.
Figure 10. Holder system.
Figure 11. SIFT feature point matching result for Test 1.
Figure 12. SIFT feature point matching result for Test 2.
Figure 13. SIFT feature point matching result for Test 3.
Figure 14. SIFT feature point matching result for Test 4.
Figure 15. Resulting intentional motion compared with the ground truth in Test 1 video sequence.
Figure 16. Resulting intentional motion compared with the ground truth in Test 2 video sequence.
Figure 17. Resulting intentional motion compared with the ground truth in Test 3 video sequence.
Figure 18. Resulting intentional motion compared with the ground truth in Test 4 video sequence.
Table 1. RMSE values of the DIS algorithms.

Method   MF      KF      WD      EMD     E-EMD   VMD
Test 1   0.0188  0.0161  0.0110  0.0151  0.0120  0.0071
Test 2   0.0384  0.0371  0.2125  0.0794  0.0337  0.0319
Test 3   0.0927  0.1131  0.1646  0.0838  0.0547  0.0426
Test 4   0.1038  0.1351  0.0815  0.0692  0.0670  0.0610
Table 2. SNR (dB) of the DIS algorithms.

Method   MF       KF       WD       EMD      E-EMD    VMD
Test 1   17.9043  19.5021  22.5678  19.8240  21.8030  26.3579
Test 2   27.4209  27.9321  12.5721  21.1245  28.5721  29.0528
Test 3   22.0902  20.3511  17.1060  22.9722  26.6741  28.8509
Test 4   10.0575  8.0307   12.1508  13.5821  13.8527  14.6741