Article

A New Time–Frequency Feature Extraction Method for Action Detection on Artificial Knee by Fractional Fourier Transform

1 Beijing Key Laboratory of High Dynamic Navigation Technology, Beijing Information Science and Technology University, Beijing 100101, China
2 School of Automation, Beijing Institute of Technology, Beijing 100084, China
* Author to whom correspondence should be addressed.
Micromachines 2019, 10(5), 333; https://doi.org/10.3390/mi10050333
Submission received: 5 April 2019 / Revised: 30 April 2019 / Accepted: 9 May 2019 / Published: 20 May 2019
(This article belongs to the Special Issue MEMS for Aerospace Applications)

Abstract

With the aim of designing an action detection method for an artificial knee, a new time–frequency feature extraction method is proposed. Inertial data are collected periodically by the microelectromechanical systems (MEMS) inertial measurement unit (IMU) on the prosthesis, and features are extracted from the inertial data after fractional Fourier transform (FRFT). A feature vector composed of eight features is then constructed. The transformation results of these features after FRFT at different orders are analyzed, and the dimensions of the feature vector are reduced. The classification effects of different features and different orders are compared, and the order and features of each sub-classifier are designed accordingly. Finally, experiments with a prototype show that the proposed method reduces the hardware computation requirements while achieving good classification performance. The accuracies of the four sub-classifiers are 95.05%, 95.38%, 91.43%, and 89.39%; the precisions are 78.43%, 98.36%, 98.36%, and 93.41%; and the recalls are 100%, 93.26%, 86.96%, and 86.68%, respectively.

1. Introduction

The knee joint is an important load-bearing joint of the lower limbs. For a knee injury that seriously impedes lower-limb movement, the best treatment is replacement with an artificial joint. A large number of patients require amputation, and a prosthesis is an important guarantee of the physical and physiological recovery of amputees. Within the lower-limb prosthesis, the artificial knee joint is the core component: it enables supported standing and flexible walking, so that patients show a good gait [1].
Before the 1990s, prosthetic movement could not keep pace with the amputee's walking speed or adapt to road conditions, and it lacked stability [2]. With the continuous development of bionic intelligent knee prosthesis technology, many intelligent knee prostheses now exist that achieve dynamic adjustment, real-time gait control, and other functions. The intelligent lower limb usually refers to the unpowered intelligent lower limb: the flexion and extension of the prosthetic knee joint are driven by the leg stump, and the intelligent control system only adjusts the magnitude of the damping torque at the knee [3]. For a knee joint of this kind, which controls gait by adjusting the damping torque, the accuracy of the damping adjustment directly affects the comfort, convenience and safety of the joint. Generally, the damping adjustment is based on recognition of the current action state. Among existing action detection methods for intelligent prostheses, those that detect the motion frequency, the position of the cylinder piston, or the pressure at the bottom of the foot have long lag times and low accuracy. Methods that extract acceleration or angular-velocity features in the time domain have few features from which to choose, so they are susceptible to interference and have low robustness. An accurate, fast, low-cost and stable action detection method is therefore needed.
Action detection typically proceeds through data collection, data preprocessing, feature extraction, establishment of an action detection model, and use of that model for detection [4]. Sensors used to collect data fall into two major categories: wearable and nonwearable. Nonwearable approaches include motion detection based on radar [5] and machine vision [6]. Wearable approaches include motion detection based on accelerometers [7], accelerometers and gyroscopes [8], accelerometers combined with other sensors [9], and myoelectricity [10]. Nonwearable sensors are not suitable for human motion recognition on a prosthesis because of their severe scene constraints and high cost. Among the wearable methods, myoelectricity is inconvenient for daily wear because electrodes must be installed at multiple places on the wearer's legs. A microelectromechanical systems (MEMS) inertial measurement unit (IMU) integrating accelerometers, gyroscopes and a barometer can be embedded in the control circuit and requires no additional operation, so it is more suitable for acquiring human action data in commercial products.
For feature extraction, the preprocessed data are usually divided into a series of windows by a fixed-length, first-in-first-out sliding-window mechanism, and feature values are extracted from each window. Aziz et al., for example, calculated the mean and variance of the data in each window to construct 18-dimensional features [11]; Pierleoni et al. found that the RMS (root mean square) of acceleration was no less than 2.5 g when an impact occurs during motion [12]; Cheng et al. judged the motion state according to whether the acceleration amplitude exceeded a given threshold [13].
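As a concrete illustration of this windowing scheme, the following minimal Python sketch splits an inertial stream into fixed-length, first-in-first-out windows and computes per-window statistics. The window length, step, and the 2.5 g rule from [12] are used here purely as illustrative assumptions, not as the parameters of any cited method.

```python
import numpy as np

def sliding_windows(signal, length=128, step=64):
    """Split a 1-D stream into fixed-length overlapping windows (FIFO)."""
    return [signal[i:i + length] for i in range(0, len(signal) - length + 1, step)]

def window_features(window):
    """Per-window statistics commonly used by threshold-based detectors."""
    return {
        "mean": np.mean(window),
        "var": np.var(window),
        "rms": np.sqrt(np.mean(np.square(window))),
    }

# Example: flag windows whose acceleration RMS reaches 2.5 g (cf. [12]).
acc = np.random.randn(10_000)  # placeholder accelerometer stream, in g
impact_windows = [w for w in sliding_windows(acc)
                  if window_features(w)["rms"] >= 2.5]
```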
For the establishment of an action detection model, the commonly used approaches are the threshold method and the machine learning method. The threshold method determines the motion state by comparing the extracted features with preset thresholds; generally, it can be divided into single-state recognition and multiple-state recognition. The machine learning method regards action detection as a typical classification problem and constructs a detection model from a training set composed of various motion data. Typical machine learning algorithms such as the support vector machine (SVM) [11,14,15,16], decision tree (DT), naive Bayes [17], deep learning [18], artificial neural network (ANN) [19], and K-nearest neighbor (K-NN) [20] can be used for model construction. Particularly effective algorithms include the convolutional neural network (CNN), the one-class support vector machine (1SVM), CNN+1SVM [21], the hidden Markov model [22], and K-NN and ANN [23].
However, although the machine learning method achieves higher precision than most action detection methods, it needs many features and a complicated detection model, with slow detection speed and high computational cost. The threshold method is computationally simple and fast, which makes it well suited to long-term operation on micro-wearable devices with limited resources. However, its thresholds are usually selected from simulated motion data, and there are few features from which to choose; when the distributions of the test data and the training data differ greatly, the detection accuracy suffers, so the method lacks general applicability. A detection method is therefore needed that provides sufficient candidate features while keeping computing speed high and model complexity low. Considering that human action is a time-varying movement whose features cannot be sought in the time domain or the frequency domain alone, we propose using the fractional Fourier transform (FRFT), which provides sufficient candidate features from the fractional domain and yields features suitable for low-power wearable devices.
The fractional Fourier transform (FRFT) represents a signal in the fractional-order Fourier domain obtained by rotating the coordinate axes of the time–frequency plane counterclockwise around the origin by an arbitrary angle. It is a time–frequency analysis method and a generalization of the Fourier transform. The FRFT has many properties that the traditional Fourier transform lacks and is widely applied in scientific research and engineering [24]. In this paper, the effect of FRFT is first analyzed by comparing the transforms of the same action at different orders and of different actions at the same order. Then, eight features are selected from the commonly used features to construct a feature vector space, and the variation of each feature after FRFT at different orders is analyzed. The dimension of the feature vector is then reduced, the classification results of different features after FRFT at different orders are analyzed, and a classifier is designed accordingly. Finally, a prototype of an intelligent knee joint is built, and the method is verified by experiments.

2. Theory

2.1. Fractional Fourier Transform (FRFT)

In the field of signal processing, the traditional Fourier transform is a mature and widely used mathematical tool. The fractional-order Fourier transform (FRFT) was first proposed in purely mathematical form by V. Namias from the viewpoint of eigenvalues and eigenfunctions [25]. Researchers later arrived at the FRFT from an optical point of view, and these definitions can be proved completely equivalent [26]. The FRFT was first applied to optical signal processing because it can be implemented with simple optical devices. In recent years, several fast algorithms for the FRFT have been developed, and the FRFT has therefore received attention in many areas of signal processing.

2.1.1. Definition

Generally, the $p$-th order fractional Fourier transform of a function $f(t)$ can be written as $f_p(u)$ or $F^p f(u)$, where $F^p f(u)$ is interpreted as the operator $F^p$ acting on the function $f(t)$, with the result expressed in the domain $u$.
The definition of the fractional Fourier transform [27] is
$$f_p(u) = \int_{-\infty}^{+\infty} K_p(u,t)\, f(t)\, dt \tag{1}$$
where
$$K_p(u,t) = \begin{cases} A_\alpha \exp\!\left[ j\pi \left( u^2 \cot\alpha - 2ut \csc\alpha + t^2 \cot\alpha \right) \right], & \alpha \neq n\pi \\ \delta(u - t), & \alpha = 2n\pi \\ \delta(u + t), & \alpha = (2n+1)\pi \end{cases}$$
is the kernel of the fractional Fourier transform, $A_\alpha = \exp\!\left[ -j\pi\,\mathrm{sgn}(\sin\alpha)/4 + j\alpha/2 \right] / \left| \sin\alpha \right|^{1/2}$, $\alpha = p\pi/2$, and $n$ is an integer.
After rearrangement, it can be written as:
$$f_\alpha(u) = A_\alpha\, T_t(u) \int_{-\infty}^{+\infty} T_s(u - x)\, \left[ T_t(x)\, f(x) \right] dx \tag{2}$$
where $T_t(x) = \exp(-j\pi t x^2)$, $T_s(x) = \exp(j\pi s x^2)$, $t = \tan(\alpha/2)$, and $s = \csc\alpha$.
Note that $F^{4n}$ and $F^{4n\pm2}$ are equivalent to the identity operator $I$ and the parity operator $P$, respectively. For $p = 1$, we have $\alpha = \pi/2$, $A_\alpha = 1$, and $f_1(u) = \int_{-\infty}^{+\infty} e^{-j2\pi ut} f(t)\, dt$.
Apparently, $f_1(u)$ is the Fourier transform of $f(t)$. The 0-order transform is defined as the function itself, and the transform is periodic with period 4 in $p$ (period $2\pi$ in $\alpha$), since $\alpha = p\pi/2$ appears only as the argument of trigonometric functions.
Beyond comparing the time domain with the frequency domain, to explain why the FRFT was chosen for feature extraction, Li et al. derived the theoretical relationship between the amplitude mean $\bar{A}$ in the time domain and the amplitude $\bar{B}$ in the fractional domain through the timewidth–bandwidth product [28].
From the fractional-domain sampling theorem, the relationship between the fractional-domain bandwidth $B_u$ and the frequency-domain bandwidth $B$ is:
$$B_u = B \sin\alpha \tag{3}$$
where $\alpha$ is the rotation angle of the fractional domain.
The timewidth and the fractional-domain bandwidth of the signal are defined as follows:
$$\Delta t^2 = \int_{-\infty}^{+\infty} \left| (t - t_0)\, x(t) \right|^2 dt \tag{4}$$
$$\Delta u_a^2 = \int_{-\infty}^{+\infty} \left| (u_a - u_{a0})\, x(u_a) \right|^2 du_a \tag{5}$$
where $u_a$ is the fractional-domain frequency and $x(u_a)$ is the FRFT of $x(t)$.
From the uncertainty principle in the signal frequency domain:
$$\Delta t^2\, \Delta u^2 \geq \frac{1}{4}, \quad \text{i.e.,} \quad \Delta t\, \Delta u \geq \frac{1}{2} \tag{6}$$
where $\Delta t$ is the timewidth and $\Delta u$ is the bandwidth.
The uncertainty principle of the fractional domain follows from Equations (4)–(6) together with Equation (3): since the fractional bandwidth scales as $\Delta u_a = \Delta u \sin\alpha$, substituting into Equation (6) gives
$$\Delta t^2\, \Delta u_a^2 \geq \frac{\sin^2\alpha}{4}, \quad \text{i.e.,} \quad \Delta t\, \Delta u_a \geq \frac{\sin\alpha}{2} \tag{7}$$
By Parseval's theorem, the energy of the signal in the time domain equals that in the fractional domain:
$$E = E_\alpha \tag{8}$$
where $E$ is the total energy in the time domain and $E_\alpha$ is the total energy in the fractional domain. Furthermore:
$$E = \frac{1}{2} \bar{A}\, \Delta t^2, \qquad E_\alpha = \frac{1}{2} \bar{B}\, \Delta u_\alpha^2 \tag{9}$$
Combining Equations (7)–(9), it can be observed that the amplitude average $\bar{A}$ of the signal in the time domain and the amplitude average $\bar{B}$ in the fractional domain are related non-linearly, and the relationship depends on the rotation angle of the fractional domain [29,30].
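To make the combination step explicit, under the reading of Equation (9) above: equating the two energy expressions gives $\bar{B} = \bar{A}\,\Delta t^2 / \Delta u_\alpha^2$, and bounding $\Delta u_\alpha$ with Equation (7) yields
$$\bar{B} = \bar{A}\,\frac{\Delta t^2}{\Delta u_\alpha^2} \leq \bar{A}\,\frac{4\,\Delta t^4}{\sin^2\alpha}$$
so the ratio between $\bar{B}$ and $\bar{A}$ varies non-linearly with the rotation angle $\alpha$.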
Therefore, a sequence of values with no significant amplitude differences in the time domain can be converted into a fractional-domain sequence with significant amplitude differences by choosing an appropriate FRFT order [28].

2.1.2. Discrete FRFT

Equation (2) is the calculation method in the continuous domain. Such a continuous transform cannot be computed directly in practice, so numerical evaluation is usually carried out by sampling and interpolating the continuous signal. Based on the Shannon interpolation formula and numerical integration, the discrete form of the FRFT is given in Equation (10):
$$f_\alpha\!\left( \frac{k}{2\Delta} \right) \approx \frac{A_\alpha}{2\Delta} \exp\!\left[ -j\pi \tan(\alpha/2) \left( \frac{k}{2\Delta} \right)^2 \right] \times \sum_{l=-N}^{N-1} \exp\!\left[ j\pi \csc\alpha \left( \frac{k-l}{2\Delta} \right)^2 \right] \exp\!\left[ -j\pi \tan(\alpha/2) \left( \frac{l}{2\Delta} \right)^2 \right] f\!\left( \frac{l}{2\Delta} \right) \tag{10}$$
After rearrangement, it becomes:
$$f_\alpha(x_k) \approx \frac{A_\alpha}{2\Delta}\, E_t(x_k) \sum_{l=-N}^{N-1} \left\{ E_s(x_k - x_l) \left[ E_t(x_l)\, f(x_l) \right] \right\} \tag{11}$$
where $E_t(x) = \exp(-j\pi t x^2)$, $E_s(x) = \exp(j\pi s x^2)$, $t = \tan(\alpha/2)$, $s = \csc\alpha$, and $x_k = u_k = k/(2\Delta)$. $\Delta$ is the dimension-normalized time-domain or frequency-domain scale.
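The rearranged form in Equation (11) can be evaluated directly as a chirp multiplication, a chirp convolution, and a second chirp multiplication. Below is a minimal Python sketch of this direct O(N²) evaluation; the dimension normalization and the sample grid are assumptions consistent with Equation (11), it requires an order away from p = 0 (where the kernel degenerates to the identity), and it omits the fast O(N log N) convolution used in practical implementations.

```python
import numpy as np

def frft(f, p):
    """Direct evaluation of the discrete FRFT of Eq. (11).

    f: complex samples at x_l = l/(2*Delta), l = -N..N-1 (len(f) = 2N).
    p: transform order; alpha = p*pi/2, assumed away from 0 (mod 2).
    """
    f = np.asarray(f, dtype=complex)
    n = len(f)
    alpha = p * np.pi / 2.0
    delta = np.sqrt(n / 2.0)                  # dimension-normalization assumption
    x = (np.arange(n) - n // 2) / (2.0 * delta)
    t, s = np.tan(alpha / 2.0), 1.0 / np.sin(alpha)
    A = np.exp(-1j * np.pi * np.sign(np.sin(alpha)) / 4.0 + 1j * alpha / 2.0) \
        / np.sqrt(abs(np.sin(alpha)))
    g = np.exp(-1j * np.pi * t * x**2) * f    # inner chirp multiplication E_t
    # Chirp convolution with E_s (O(N^2) for clarity), then outer E_t multiply.
    conv = np.array([np.sum(np.exp(1j * np.pi * s * (xk - x)**2) * g) for xk in x])
    return (A / (2.0 * delta)) * np.exp(-1j * np.pi * t * x**2) * conv
```

For p = 1, the three factors collapse to the Riemann sum of $\int e^{-j2\pi ut} f(t)\,dt$, recovering the ordinary Fourier transform as stated above.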

2.1.3. Transform Effect

According to the theory above, the inertial data collected by the MEMS IMU can be processed by FRFT. In contrast to the time domain and the frequency domain, the fractional domain offers greater diversity and can provide more candidate features. Figure 1 and Figure 2 show the results of the same data after FRFT at different orders. Figure 3a shows the results of data of the same class after FRFT at the same order, and Figure 3b the results of different data after FRFT at the same order. The same data transformed at different orders show significant differences, and different data transformed at the same order also differ. Therefore, different actions can be distinguished by selecting an appropriate order and an appropriate feature.
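Using the frft sketch above, a comparison of the kind shown in Figures 1 and 2 can be reproduced by sweeping the order over (0, 1] for one segmented gait cycle; the signal below is a random placeholder, not the experimental data.

```python
# Sweep FRFT orders over one gait cycle and collect |f_alpha(u)| per order.
orders = np.linspace(0.1, 1.0, 10)      # p = 0 is skipped (identity transform)
cycle = np.random.randn(256)            # placeholder for one segmented gait cycle
amplitude_by_order = {round(p, 2): np.abs(frft(cycle, p)) for p in orders}
```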

2.2. Feature

After FRFT has been performed on the collected data, appropriate features should be extracted and classification carried out on them. The commonly used features are listed in Table 1. However, not all of them suit the data from the MEMS IMU; too many features lead to excessive computing, time and storage costs and reduce the endurance of the device, whereas too few features cannot reach the accuracy needed to classify the actions. Appropriate features must therefore be selected for action recognition.
In this paper, eight features were selected to form the feature vector: the extreme difference (range), standard deviation (std), variance (var), root mean square (rms), interquartile range (IQR), mean, mean of peaks (pksMean), and number of peaks (pksNum):
$$feature\_vector = \left[ range, std, var, rms, IQR, mean, pksMean, pksNum \right] \tag{12}$$
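A possible implementation of the feature vector of Equation (12), computed on the FRFT amplitude of one gait cycle, is sketched below. The peak definition (any local maximum found by scipy.signal.find_peaks, with no prominence constraint) is an assumption, since the paper does not specify it.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import iqr

def feature_vector(x):
    """The eight features of Eq. (12) on the FRFT amplitude of one cycle."""
    x = np.abs(np.asarray(x))
    peaks, _ = find_peaks(x)                       # indices of local maxima
    pks = x[peaks] if peaks.size else np.zeros(1)  # guard: no peaks found
    return np.array([
        np.ptp(x),                  # range (extreme difference)
        np.std(x),                  # std
        np.var(x),                  # var
        np.sqrt(np.mean(x**2)),     # rms
        iqr(x),                     # IQR
        np.mean(x),                 # mean
        np.mean(pks),               # pksMean
        float(peaks.size),          # pksNum
    ])
```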
Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 analyze each feature as the FRFT order varies from 0 to 1. Figures 4, 6, 8 and 10 show the FRFT results of the eight features for the four actions, namely walk, run, upstairs, and dwstairs (downstairs), at different orders. Figures 5, 7, 9 and 11 show the standard deviation and mean of the eight features for each action. Tables 2–5 list, for each feature of each action, the mean, the minimum standard deviation, and the order at which the minimum occurs. Except for rms, all the features change significantly, although this does not mean that rms cannot be used for classification. It can also be observed that it might be useful to classify walk and dwstairs by rms and mean, upstairs and dwstairs by range and pksNum, and walk, dwstairs and run by std and pksNum.
Different features have different classification effects for the inertial data from the MEMS IMU. Figure 12 shows the effect of distinguishing two actions by two features. To classify walk and dwstairs, the combination of pksNum and std outperforms range and pksMean, and order = 0.67 outperforms order = 0.20. To classify upstairs and dwstairs, range and pksNum outperform rms and IQR, and order = 0.64 outperforms order = 0.20. To classify walk and upstairs, rms and mean outperform std and IQR, and order = 0.71 outperforms order = 0.20. To separate run from the other actions, rms and mean outperform range and IQR, and the effect is best at order = 0.75. Figure 13 shows classification with two features extracted directly in the time domain, without FRFT; the separation is clearly poorer.

2.3. Classifier

According to the conclusions above, different features and different orders yield different classification effects, and the optimal order and feature differ from action to action. A cascade of binary sub-classifiers was therefore designed, each using its own order and feature pair. Only one action is separated at a time, and all motion gestures are recognized through multiple classifications. The structure of the classifier is shown in Figure 14.
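A minimal sketch of such a cascade is given below, relying on the frft and feature_vector sketches above. The orders and feature pairs are taken from the analysis in Section 2.2; the routing through the stages is one simplified reading of Figure 14, and the per-stage decision rule is left as a pluggable, trained 2-D threshold function, since the fitted thresholds are not listed in the paper.

```python
# Sub-classifiers with the orders and feature pairs found in Section 2.2
# (feature indices refer to Eq. (12): 0 range, 1 std, 3 rms, 5 mean, 7 pksNum).
SUB = {
    "b": ("run vs. rest",          0.75, (3, 5)),
    "c": ("walk vs. dwstairs",     0.67, (1, 7)),
    "d": ("walk vs. upstairs",     0.71, (3, 5)),
    "e": ("dwstairs vs. upstairs", 0.64, (0, 7)),
}

def classify(cycle, decide):
    """One simplified routing through the cascade of Figure 14.

    decide(name, f1, f2) -> True if the first action of the pair is chosen;
    it stands in for the trained 2-D threshold rule of each sub-classifier.
    """
    def features(name):
        _, order, (i, j) = SUB[name]
        fv = feature_vector(frft(cycle, order))
        return fv[i], fv[j]

    if decide("b", *features("b")):            # separate run first
        return "run"
    if decide("c", *features("c")):            # walk-like branch
        return "walk" if decide("d", *features("d")) else "upstairs"
    return "dwstairs" if decide("e", *features("e")) else "upstairs"
```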

3. Experiment

3.1. Experiment Design

According to the conclusions above, a verification experiment was designed. Information on the subject is given in Table 6. To protect the privacy of the amputee, no photographs are shown here. The artificial knee used in the experiment is shown in Figure 15, the MEMS IMU in Figure 16, and the main specifications of the devices in Table 7.

3.2. Results and Analysis

After the experimental data were collected, the same processing method was applied: the gait cycles were segmented, the feature vectors were extracted through FRFT, and the classifier designed above performed the classification. The results are shown in Figure 17, and the performance of each sub-classifier is detailed in Table 8. Table 9 compares the accuracy in the fractional domain with that in the time domain; the fractional domain clearly performs better.
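For reference, the per-sub-classifier figures of this kind are computed from each binary confusion matrix; the small helper below shows the standard definitions of accuracy, precision, recall and F1 (it illustrates the usual formulas, not necessarily the exact computation behind Table 8).

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard accuracy/precision/recall/F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```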

4. Conclusions

With the aim of designing an action detection method for an artificial knee, a new time–frequency feature extraction method was proposed. The method targets four common actions of the artificial knee wearer and extracts features from the inertial data measured by a MEMS IMU, using the fractional Fourier transform (FRFT) to magnify the diversity of the features. FRFT is employed to extract appropriate feature vectors, and a feature vector composed of eight features is constructed.
By analyzing the results of these features after FRFT at different orders, it was found that classifying walk and dwstairs by rms and mean, upstairs and dwstairs by range and pksNum, and walk, dwstairs and run by std and pksNum may work well.
The classification results with different features and orders were also analyzed. To classify walk and dwstairs, the combination of pksNum and std is better than range and pksMean, and order = 0.67 is better than order = 0.20. To classify upstairs and dwstairs, range and pksNum are better than rms and IQR, and order = 0.64 is better than order = 0.20. To classify walk and upstairs, rms and mean are better than std and IQR, and order = 0.71 is better than order = 0.20. To separate run from the other actions, rms and mean are better than range and IQR, and the effect is best at order = 0.75.
Finally, a verification experiment was carried out with the artificial knee prototype. The results show that the proposed method has good classification performance while reducing the hardware computation requirements. The accuracies of the four sub-classifiers are 95.05%, 95.38%, 91.43%, and 89.39%; the precisions are 78.43%, 98.36%, 98.36%, and 93.41%; and the recalls are 100%, 93.26%, 86.96%, and 86.68%, respectively.

Author Contributions

Conceptualization, Z.S. and N.L.; methodology, N.L.; software, T.W.; validation, T.W.; formal analysis, C.L.; investigation, T.W.; resources, T.W.; data curation, N.L.; writing—original draft preparation, T.W.; writing—review and editing, N.L.; visualization, N.L.; supervision, N.L.; project administration, N.L.; funding acquisition, N.L.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 61771059, 61801032), the Beijing Natural Science Foundation (3184046), and the Beijing Key Laboratory of High Dynamic Navigation Technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, N.; Diao, X. Summary of knee prosthesis. Orthop. J. China 2006, 14, 225–226. [Google Scholar]
  2. Tan, G.Z.; Xiao, H.F.; Wang, Y.C. Optimal Fuzzy PID Controller with Incomplete Derivation and Its Simulation Research on Application of Intelligent Artificial Legs. Control Theory Appl. 2002, 190, 462–466. [Google Scholar]
  3. Ma, S.; Wang, R.; Shen, Q.; Zhang, T.; Liu, Q. The Research Advance of Active Artificial Knee-joint Prosthesis. In Proceedings of the 6th Beijing International Forum on Rehabilitation, Beijing, China, 21 October 2011; pp. 259–264. [Google Scholar]
  4. Lisha, H.; Wang, S.; Chen, Y. Fall detection algorithms based on wearable device: A review. J. Zhejiang Univ. Eng. 2018, 52, 1717–1728. [Google Scholar]
  5. Chen, M. Research on the Method of Falling Detection Based on Doppler Radar; Taiyuan Technology University: Tai Yuan, China, 2018. [Google Scholar]
  6. Yuan, J. The Design and Research of Visual Fall Detection System for Elderly People; Jiangxi Science and Technology University: Gan Zhou, China, 2018. [Google Scholar]
  7. Ren, L.; Shi, W.; Yu, Z.; Cao, Y. ALARM: A novel fall detection algorithm based on personalized threshold. In Proceedings of the International Conference on E-health Networking, Application & Services, Boston, MA, USA, 14–17 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 410–415. [Google Scholar]
  8. De Cillis, F.; de Simio, F.; Guido, F.; Incalzi, A.R.; Setola, R. Fall-detection solution for mobile platforms using accelerometer and gyroscope data. In Proceedings of the International Conference on Engineering in Medicine and Biology Society, Milan, Italy, 25–29 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 3727–3730. [Google Scholar]
  9. Cheng, S.H. An intelligent fall detection system using triaxial accelerometer integrated by active RFID. In Proceedings of the International Conference on Machine Learning and Cybernetics, Lanzhou, China, 13–16 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 517–522. [Google Scholar]
  10. Ha, K.H.; Varol, H.A.; Goldfarb, M. Volitional Control of a Prosthetic Knee Using Surface Electromyography. IEEE Trans. Biomed. Eng. 2011, 58, 144–151. [Google Scholar] [CrossRef] [PubMed]
  11. Aziz, O.; Russell, C.M.; Park, E.J.; Robinovitch, S.N. The effect of window size and lead time on pre-impact fall detection accuracy using support vector machine analysis of waist mounted inertial sensor data. Conf. Proc. IEEE Eng. Med. Biol. Sci. 2014, 2014, 30–33. [Google Scholar]
  12. Pierleoni, P.; Belli, A.; Palma, L.; Pellegrini, M.; Pernini, L.; Valenti, S. A high reliability wearable device for elderly fall detection. Sens. J. 2015, 15, 4544–4553. [Google Scholar] [CrossRef]
  13. Cheng, J.; Chen, X.; Shen, M. A framework for daily activity monitoring and fall detection based on surface electromyography and accelerometer signals. IEEE J. Biomed. Health Inf. 2013, 17, 38–45. [Google Scholar] [CrossRef] [PubMed]
  14. Zhou, C.C.; Tu, C.L.; Gao, Y.; Wang, F.-X.; Gong, H.-W.; Lian, P.; He, C.; Ye, X.-S. A low-power, wireless, wrist-worn device for long time heart rate monitoring and fall detection. In Proceedings of the International Conference on Orange Technologies, Xi’an, China, 20–23 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 33–36. [Google Scholar]
  15. Park, S.Y.; Ju, H.; Park, C.G. Stance Phase Detection of Multiple Actions for Military Drill Using Foot-mounted IMU. Sensors 2016, 14, 16. [Google Scholar]
  16. Chou, Y.H.; Cheng, H.C.; Cheng, C.H.; Su, K.H.; Yang, C.Y. Dynamic time warping for IMU based activity detection. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Budapest, Hungary, 9–12 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 3107–3112. [Google Scholar]
  17. Zhang, Z.; Lo, B. A Multi-sensor Fusion Approach for Intention Detection. In Converging Clinical and Engineering Research on Neuro-Rehabilitation III: Proceedings of the 4th International Conference on Neuro-Rehabilitation, Pisa, Italy, 16–20 October 2008; Springer: Cham, Switzerland, 2008; Volume 21, pp. 454–458. [Google Scholar]
  18. Mohammadian, R.N.; van Laarhoven, T.; Furlanello, C.; Marchiori, E. Novelty Detection using Deep Normative Modeling for IMU-Based Abnormal Movement Monitoring in Parkinson’s Disease and Autism Spectrum Disorders. Sensors 2018, 18, 3533. [Google Scholar] [CrossRef] [PubMed]
  19. Nukala, B.T.; Shibuya, N.; Rodriguez, A.I.; Tsay, J.; Nguyen, T.Q.; Zupancic, S.; Lie, D.Y.C. A real-time robust fall detection system using a wireless gait analysis sensor and an Artificial Neural Network. In Proceedings of the International Conference on Healthcare Innovation, Seattle, WA, USA, 8–10 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 219–222. [Google Scholar]
  20. Jian, H.; Chen, H. A portable fall detection and alerting system based on k-NN algorithm and remote medicine. Communications 2015, 12, 23–31. [Google Scholar] [CrossRef]
  21. Lisowska, A.; Wheeler, G.; Inza, V.; Poole, I. An evaluation of supervised, novelty-based and hybrid approaches to fall detection using silmee accelerometer data. In Proceedings of the International Conference on Computer Vision Workshops, Santiago, Chile, 7–13 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 10–16. [Google Scholar]
  22. Wang, J.; Chen, R.; Sun, X.; She, M.; Kong, L. Generative models for automatic recognition of human daily activities from a single triaxial accelerometer. In Proceedings of the International Joint Conference on Neural Networks, Brisbane, Australia, 10–15 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–6. [Google Scholar]
  23. Li, C.; Lin, M.; Yang, L.T.; Ding, C. Integrating the enriched feature with machine learning algorithms for human movement and fall detection. J. Supercomput. 2014, 67, 854–865. [Google Scholar] [CrossRef]
  24. Jiang, Z. Fractional Fourier Transform. Chin. J. Quantum Electron. 1996, 4, 289–300. [Google Scholar]
  25. Tao, R.; Qi, L.; Wang, Y. Theory and Applications of the Fractional Fourier Transform; Tsinghua University Press: Beijing, China, 2004. [Google Scholar]
  26. Liang, W. Fractional Fourier Transform and Application; Chongqing University: Chong Qing, China, 2008. [Google Scholar]
  27. Namias, V. The fractional order Fourier transform and its application to quantum mechanics. J. Inst. Math. Appl. 1980, 25, 241–265. [Google Scholar] [CrossRef]
  28. Li, C.; Su, Z.; Li, Q.; Zhao, H. An Indoor Positioning Error Correction Method of Pedestrian Multi-Motions Recognized by Hybrid-Orders Fraction Domain Transformation. IEEE Access. 2019, 7, 11360–11377. [Google Scholar] [CrossRef]
  29. Zhao, J.; Tao, R.; Li, Y.L.; Wang, Y. Uncertainty principles for linear canonical transform. IEEE Trans. Signal Process. 2009, 57, 2856–2858. [Google Scholar] [CrossRef]
  30. Shinde, S.; Gadre, V.M. An uncertainty principle for real signals in the fractional Fourier transform domain. IEEE Trans. Signal Process. 2001, 49, 2545–2548. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Results of multi-order fractional Fourier transform (FRFT) of a single action (order: 0–1).
Figure 2. Results of multi-order FRFT of a single action in 3D (order: 0–1).
Figure 3. (a) The results of FRFT of walking at order 0.7. (b) The results of FRFT of different actions at order 0.7.
Figure 4. The Order–Amplitude figures of different features of 40 groups for the walk action. (a) range, (b) std, (c) var, (d) rms, (e) IQR, (f) mean, (g) pksMean and (h) pksNum.
Figure 5. Standard deviation and mean of the eight features of the walk action. (a) The standard deviation of each feature; (b) the mean of each feature.
Figure 6. The Order–Amplitude figures of different features of 40 groups for the upstairs action. (a) range, (b) std, (c) var, (d) rms, (e) IQR, (f) mean, (g) pksMean and (h) pksNum.
Figure 7. Standard deviation and mean of the eight features of the upstairs action. (a) The standard deviation of each feature; (b) the mean of each feature.
Figure 8. The Order–Amplitude figures of different features of 40 groups for the dwstairs action. (a) range, (b) std, (c) var, (d) rms, (e) IQR, (f) mean, (g) pksMean and (h) pksNum.
Figure 9. Standard deviation and mean of the eight features of the dwstairs action. (a) The standard deviation of each feature; (b) the mean of each feature.
Figure 10. The Order–Amplitude figures of different features of 40 groups for the run action. (a) range, (b) std, (c) var, (d) rms, (e) IQR, (f) mean, (g) pksMean and (h) pksNum.
Figure 11. Standard deviation and mean of the eight features of the run action. (a) The standard deviation of each feature; (b) the mean of each feature.
Figure 12. The effect of classification. (a) walk and dwstairs with features std and pksNum at order 0.67; (b) walk and dwstairs with features std and pksNum at order 0.20; (c) walk and dwstairs with features range and pksMean at order 0.67; (d) dwstairs and upstairs with features range and pksNum at order 0.64; (e) dwstairs and upstairs with features range and pksNum at order 0.20; (f) dwstairs and upstairs with features rms and IQR at order 0.64; (g) walk and upstairs with features rms and mean at order 0.71; (h) walk and upstairs with features rms and mean at order 0.20; (i) walk and upstairs with features std and IQR at order 0.71; (j) all the actions with features rms and mean at order 0.75; (k) all the actions with features range and IQR at order 0.75; (l) all the actions with features pksMean and pksNum at order 0.75.
Figure 13. The effect of classification with two features extracted directly in the time domain without FRFT. (a) walk and dwstairs with features std and pksNum; (b) dwstairs and upstairs with features range and pksNum; (c) walk and upstairs with features rms and mean; (d) all the actions with features var and IQR.
Figure 14. The structure and process of the classifier. As different orders and feature vectors are needed to classify walk, upstairs and dwstairs, walk and dwstairs are distinguished first, and upstairs is then separated from walk and from dwstairs, respectively.
Figure 15. The artificial knee. The positions of the pneumatic cylinder, the servo motor and the control circuit with the inertial measurement unit (IMU) are indicated.
Figure 16. The MEMS (microelectromechanical systems) IMU. The positions of the IMU and the barometer are indicated.
Figure 17. The results of each sub-classifier. (a) run and the other actions with features rms and pksNum; (b) walk and dwstairs with features std and pksNum; (c) walk and upstairs with features rms and mean; (d) dwstairs and upstairs with features range and pksNum.
Table 1. The commonly used features and their calculation methods.

Feature | Calculation Method
Maximum | $max(A)$
Minimum | $min(A)$
Mean | $mean(A) = \frac{1}{n}\sum_i A_i$
Extreme Difference | $range(A) = max(A) - min(A)$
Variance | $var(A) = \frac{1}{n}\sum_i (A_i - mean(A))^2$
Standard Deviation | $std(A) = \sqrt{var(A)}$
Root Mean Square | $rms(A) = \sqrt{\frac{1}{n}\sum_i A_i^2}$
Absolute Value | $abs(A_i) = |A_i|$
Signal Amplitude Area | $sma(A) = \frac{1}{t}\left( \int_0^t |A_x|\,dt + \int_0^t |A_y|\,dt + \int_0^t |A_z|\,dt \right)$
Correlation Coefficient | $cc(A) = cov(A_x, A_y) / \sqrt{var(A_x)\,var(A_y)}$
Interquartile Range | $IQR = Q_3 - Q_1$
Number of Peaks | the number of peaks in the signal (pksNum)
Mean of Peaks | $pksMean = \frac{1}{n}\sum_i pks_i$
Table 2. Mean and minimum of the standard deviation of each feature of walk.

Feature | Min STD Value | Order of Min | Mean STD
Range | 0.6996 | 0.390 | 1.1189
Standard Deviation (Std) | 0.1494 | 0.453 | 0.1854
Variance (Var) | 0.4785 | 0.453 | 0.6612
Root Mean Square (RMS) | 0.0382 | 0.635 | 0.0387
Interquartile Range (IQR) | 0.2171 | 0.310 | 0.3105
Mean | 0.2799 | 1 | 0.3076
Mean of Peaks (pksMean) | 0.6857 | 0.043 | 1.1725
Number of Peaks (pksNum) | 0.5430 | 0.992 | 2.6806
Table 3. Mean and minimum of the standard deviation of each feature of upstairs.

Feature | Min STD Value | Order of Min | Mean STD
Range | 1.9913 | 0.981 | 3.0821
Std | 0.4898 | 0.997 | 0.5659
Var | 2.4833 | 0.428 | 3.1291
RMS | 0.0527 | 0.607 | 0.0535
IQR | 0.3743 | 0.130 | 0.7365
Mean | 0.3249 | 1 | 0.7060
pksMean | 1.9208 | 0.701 | 2.8684
pksNum | 0.5633 | 1 | 2.7401
Table 4. Mean and minimum of the standard deviation of each feature of dwstairs.

Feature | Min STD Value | Order of Min | Mean STD
Range | 0.6453 | 0.400 | 1.2644
Std | 0.1589 | 0.281 | 0.1978
Var | 0.4816 | 0.308 | 0.7028
RMS | 0.0209 | 0.624 | 0.0213
IQR | 0.2369 | 0.212 | 0.3018
Mean | 0.1681 | 1 | 0.2500
pksMean | 0.6681 | 0.194 | 1.3048
pksNum | 0.6485 | 0.997 | 2.9343
Table 5. Mean and minimum of the standard deviation of each feature of run.

Feature | Min STD Value | Order of Min | Mean STD
Range | 2.3835 | 0.803 | 3.3444
Std | 0.4612 | 0.168 | 0.5488
Var | 3.6646 | 0.169 | 4.9012
RMS | 0.0783 | 0.663 | 0.0803
IQR | 0.6780 | 0.318 | 0.8944
Mean | 0.6471 | 0.020 | 0.8000
pksMean | 0.9936 | 0.031 | 1.2464
pksNum | 2.8274 | 0.991 | 4.4601
Table 6. Information on the subject.

Item | Value
Age | 24
Weight | 65 kg
Position of prosthesis | Left
Height | 174 cm
Gender | M
Table 7. Main specifications of the devices.

Parameter | Main Specification
Sustainable Working Hours | ≥24 h
Operating Voltage | 3.3 to 30 V
Sustainable Working Temperature | −40 °C to 85 °C
Power Consumption | 550 [email protected] V
Core Circuit Board Dimensions | 33 mm × 20 mm × 22 mm
Weight | <100 g
Sensing Range | 0° to 360°
Static Accuracy | ±0.5° (roll, pitch); ±1° (yaw)
Dynamic Accuracy | ±1° RMS
Resolution | 0.05°
Output Frequency | 0.01 to 100 Hz

Parameter | Accelerometer | Gyroscope
Measure range | ±10 g | ±1000°/s
Nonlinearity | <0.2% of FS | <0.1% of FS
Bias stability | ±4 mg | 9.2°/h

Parameter | Barometer
Measure range | 10 to 1200 mbar
Resolution | 10 cm
Bias stability | ±1 mbar/year
Table 8. Performance of each sub-classifier.

Metric | b | c | d | e
Accuracy | 0.9505 | 0.9538 | 0.9143 | 0.8939
Precision | 0.7843 | 0.9836 | 0.9836 | 0.9341
Recall | 1 | 0.9326 | 0.8696 | 0.8667
F1-Score | 0.3053 | 0.7018 | 0.7157 | 0.6753
Table 9. Accuracy of each sub-classifier in the fractional domain and the time domain.

Sub-Classifier | Fractional Domain | Time Domain
b | 0.9505 | 0.9011
c | 0.9538 | 0.8259
d | 0.9143 | 0.7424
e | 0.8939 | 0.7629
