Article

Recreating the Motion Trajectory of a System of Articulated Rigid Bodies on the Basis of Incomplete Measurement Information and Unsupervised Learning

by Bartłomiej Nalepa 1,*, Magdalena Pawlyta 2, Mateusz Janiak 3, Agnieszka Szczęsna 2, Aleksander Gwiazda 1,* and Konrad Wojciechowski 2,3

1 Department of Engineering Processes Automation and Integrated Manufacturing Systems, Faculty of Mechanical Engineering, Silesian University of Technology, Konarskiego 18A, 44-100 Gliwice, Poland
2 Department of Computer Graphics, Vision and Digital Systems, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
3 Polish-Japanese Academy of Information Technology, Koszykowa 86, 02-008 Warsaw, Poland
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(6), 2198; https://doi.org/10.3390/s22062198
Submission received: 11 December 2021 / Revised: 23 February 2022 / Accepted: 4 March 2022 / Published: 11 March 2022
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)

Abstract:
Recreating the movement of an object consisting of articulated rigid bodies is an issue that concerns both mechanical and biomechanical systems. In the case of biomechanical systems, motion restoration allows, among other things, introducing changes in training or rehabilitation exercises. Motion recording, for both mechanical and biomechanical systems, can be carried out with sensors that record motion parameters, with vision systems, or with hybrid solutions. This article presents a method of measuring motion parameters with IMU (Inertial Measurement Unit) sensors. The main aim of the article is to present a method of estimating data from the IMU sensors for a given time instant on the basis of data from the previous time instant. The tested system was an industrial robot, because such a system makes it possible to identify the measurement errors of the IMU sensors and to estimate those errors against reference measurements from encoders. The aim of the research is to be able to recreate the movement parameters of an object consisting of articulated rigid bodies on the basis of incomplete measurement information from sensors. The developed algorithms can be used in the diagnostics of mechanical systems as well as in sport and rehabilitation. Limiting the number of sensors would allow, for example, athletes to identify mistakes made during training on the basis of measurements from a single IMU sensor, e.g., one installed in a smartphone. In both rehabilitation and sport, minimizing the number of sensors increases the comfort of the person performing a given movement as part of the measurement.

1. Introduction

This article presents the problem of recreating the motion of a system consisting of articulated rigid bodies. The motion was reproduced on the basis of data from IMU (Inertial Measurement Unit) sensors: angular velocity and linear acceleration. The mechanical parameters measured by the IMU sensors made it possible to calculate the rotation angles of the individual members, i.e., the parameters necessary to recreate the movement of the system. Measurement errors of the IMU sensors were determined by calculating the difference between the parameter values from the IMU sensors and the measurement from the VICON marker-based vision system, which was taken as the reference measurement.
The main goal of this article is to define the minimum number of IMU sensors that would allow recreating the motion of a system consisting of articulated rigid bodies. The presented minimization algorithm is ultimately intended for biomechanical systems, in particular for the reconstruction of human movement during sports activities and in the rehabilitation process. In this article, the human subject has been replaced with an industrial robot. The purpose of changing the measured system is to be able to use a different reference measurement, which in the case of an industrial robot is provided by encoders. The encoders have a higher sampling frequency than the marker-based vision system and a lower susceptibility to external factors (e.g., the level of illumination of the scene).

State of the Art

In the problem of data acquisition from systems consisting of articulated rigid bodies, the following main sub-problems can be distinguished:
(a)
the method of dividing the system into rigid bodies and the calculation method,
(b)
selection of the measurement method,
(c)
(optional) placement of the elements of the measurement system on the object and execution of the measurement,
(d)
(the purpose of this work) minimizing the number of elements of the measurement system on the object.
The method of dividing an object into rigid bodies depends mainly on the structure of the system under consideration, in which the connections between individual rigid bodies are also taken into account and certain limitations concerning the mobility of a given joint are defined. Examples of objects divided into articulated rigid bodies are industrial robots [1,2,3,4,5] and humanoid robots [6]. Other examples are biomechanical systems, especially humans. In the case of humans, unlike robots, additional constraints (removing degrees of freedom) that modify the original system are more often imposed, which can be observed in skiing, where the ankle joint is immobilized by a ski boot [7]. Therefore, human models are prepared depending on the movements performed in a given environment. The basic activities that can be distinguished are walking [8,9], swimming [10] and skydiving [11].
The next stage, after dividing the system into rigid bodies and defining the connections between the bodies and additional constraints, is to define kinematics or dynamics equations in the form of a forward or inverse task [1,2,3,4,5]. The result of the calculations is, among others, the angles between individual members and the location of individual points in a given coordinate system. The description of the model can also be realized using the Denavit-Hartenberg notation together with the Newton-Euler equations [12,13,14,15,16,17,18,19,20] or the algebra of quaternions and dual quaternions [21,22,23,24,25].
There are two sub-problems in the problem of human movement acquisition: the first concerns the acquisition of human movement in a controlled laboratory environment using, for example, MoCap technology or pressure platforms [26,27,28,29,30,31,32,33]; the second concerns uncontrolled conditions of everyday life, e.g., with the use of IMU sensors and pressure shoe insoles [25,34,35,36,37]. The selection of the test environment does not affect the defined model of the rigid body system, while the measurement technique and the duration of the measurements change. The main goal is to determine the angles between individual members due to the importance of this parameter in diagnostics and rehabilitation. Techniques for human movement acquisition in a controlled laboratory environment are well developed, while techniques for the everyday environment are still under development and improvement. The main goal of work on measuring motion parameters in an uncontrolled environment is to minimize the number of sensors installed on the human body. A large number of sensors (often stuck to the body) may cause discomfort to the tested person, so the measurements may contain additional errors. The extreme case, and at the same time the most comfortable for a human being, is to perform the measurement with the use of only one IMU sensor, which could be located in the smartphone of the person participating in the measurements.
This study attempts to analytically determine the relationship between the number of IMU sensors placed on individual rigid bodies and the possibility of determining the configuration of the kinematic chain on the basis of equations describing the relationships between individual members. So far, scientific works [38,39,40,41] have used the Kalman Filter or its extended form to filter measurement data on the basis of a known system model. The study [42] considered the case of minimizing the number of IMU sensors on the basis of known equations of the system model. The minimization of the number of IMU sensors on the basis of the Kalman Filter was performed in [43]. That study investigated a pendulum model in which, on the basis of the Kalman Filter, the number of IMU sensors was minimized from three to one. However, in [43] the joints did not contain their own drives, and the input was applied to only one of the components; therefore, when considering a human or a robot that has a drive (muscles or a motor) in each joint, the algorithm presented in [43] is not sufficient. In this study, the ICA (Independent Component Analysis) algorithm was used, assuming that one of the IMU sensors records the resultant vectors (mixtures) of the angular velocities and linear accelerations of all the other IMU sensors.

2. Industrial Robot Model and Estimation

The FANUC ARC Mate 100iB industrial robot was used in the conducted research. IMU sensors are mounted on the robot members as follows:
  • IMU-1 → mounted on the axis of rotation of the robot base,
  • IMU-2 → mounted halfway between the two joints of the second member,
  • IMU-3 → mounted halfway between the two joints of the third member.
Figure 1 and Figure 2 show the industrial robot with the placement of IMU sensors on individual members. Figure 3 shows a partial kinematic diagram of an industrial robot (description of the coordinate systems only for the IMU-3 sensor) containing the index numbers of the individual axes of the coordinate systems and the location of the IMU sensors.

2.1. Description of the Robot’s Kinematics

The robot kinematics equations can be written in several ways. This article uses the Denavit-Hartenberg (D-H) notation, which is required to define the Newton-Euler equations. The first step is to define the D-H parameter table (Table 1).
The following symbols can be distinguished in Table 1 [44]:
  • θ_i — the rotation angle about the Z axis,
  • r_i — the offset between two consecutive coordinate systems measured along the Z axis,
  • d_i — the distance between two consecutive coordinate systems measured along the X axis,
  • α_i — the rotation angle about the X axis.
The general transformation matrix, on the basis of which Table 1 was prepared, is given by [44]:
T^{i-1}_{i} = R_x(\alpha_{i-1})\, D_x(d_{i-1})\, R_z(\theta_i)\, D_z(r_i)    (1)
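As an illustration only (not part of the original study), the transformation in Equation (1) can be assembled numerically; the sketch below uses Python with NumPy, and the function names and argument order are assumptions of this example:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans(x=0.0, z=0.0):
    # pure translation along X or Z
    T = np.eye(4)
    T[0, 3] = x
    T[2, 3] = z
    return T

def dh_transform(alpha_prev, d_prev, theta, r):
    # T^{i-1}_i = Rx(alpha_{i-1}) Dx(d_{i-1}) Rz(theta_i) Dz(r_i), as in Equation (1)
    return rot_x(alpha_prev) @ trans(x=d_prev) @ rot_z(theta) @ trans(z=r)
```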
Based on Table 1 containing the Denavit-Hartenberg notation, Newton-Euler equations can be written for angular velocities (measured by IMU sensors) [44]:
\omega^{i+1}_{i+1} = R^{i+1}_{i}\, \omega^{i}_{i} + \dot{\theta}_{i+1}\, \hat{Z}^{i+1}_{i+1}    (2)
where:
  • rotation matrix: R^{i+1}_{i} = \begin{bmatrix} \cos\theta_i & \sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i \\ -\sin\theta_i & \cos\theta_i\cos\alpha_i & \cos\theta_i\sin\alpha_i \\ 0 & -\sin\alpha_i & \cos\alpha_i \end{bmatrix},
  • angular velocity of the preceding member: \omega^{i}_{i} = [X_i \;\; Y_i \;\; Z_i]^{T},
  • angular speed of the axis drive (in the robot this is a servo motor): \dot{\theta}_{i+1}\, \hat{Z}^{i+1}_{i+1}.
The angular velocity of the first robot member is given by the following relationship (the velocity of the preceding member is equal to 0):
\omega^{1}_{1} = \dot{\theta}_{1}\, \hat{Z}^{1}_{1}    (3)
The angular velocity of the second member is given by the relationship:
\omega^{2}_{2} = \begin{bmatrix} \cos\theta_2 & \sin\theta_2 & 0 \\ -\sin\theta_2 & \cos\theta_2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} W_1 \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ W_2 \end{bmatrix}    (4)
where:
  • [W_1 \;\; 0 \;\; 0]^{T} — the value of the angular velocity of the preceding member \omega^{1}_{1}, expressed taking into account the D-H notation (Table 1),
  • W_1, W_2 — the assumed values of the own drives of the given axes (servo motors).
Equation for the third IMU sensor:
\omega^{3}_{3} = \begin{bmatrix} \cos\theta_3 & \sin\theta_3 & 0 \\ -\sin\theta_3 & \cos\theta_3 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -W_1\sin\theta_2 \\ W_1\cos\theta_2 \\ W_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ W_3 \end{bmatrix}    (5)
Equation for the last IMU sensor:
\omega^{4}_{4} = \begin{bmatrix} \cos\theta_4 & 0 & \sin\theta_4 \\ -\sin\theta_4 & 0 & \cos\theta_4 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} W_1\sin\theta_2\sin\theta_3 + W_1\cos\theta_2\cos\theta_3 \\ W_1\sin\theta_2\cos\theta_3 - W_1\cos\theta_2\cos\theta_3 \\ W_2 + W_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ W_4 \end{bmatrix}    (6)
where:
  • W_1, W_2, W_3, W_4 — the assumed values of the own drives of the given axes (servo motors).
In the above equations, it is assumed that:
(a)
\omega^{i+1}_{i+1} — the result of the measurement with the IMU sensor, i.e., a known value,
(b)
\omega^{i}_{i} — the angular velocity of the member preceding the member in question,
(c)
\dot{\theta}_{i+1} — the own rotational speed of the considered member,
(d)
\theta_i, \alpha_i — the rotation angles of individual members about the Z axis and the X axis (Table 1).
In the standard case, when IMU sensors are mounted on each member of the object, the values of \omega^{i+1}_{i+1} and \omega^{i}_{i} are known. Sensors are usually mounted at a characteristic point of each member (taking into account the geometrical characteristics of the member, or at the centre of mass), therefore \dot{\theta}_{i+1} is treated as an unknown. The angles \theta_i and \alpha_i are also unknown.
In order to achieve the goal of this article, i.e., to minimize the number of IMU sensors, it should be assumed that only the values of \omega^{i+1}_{i+1} will be known.
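To make the recursion in Equation (2) concrete, the following sketch (Python/NumPy, not taken from the paper; the function names, joint values and drive speeds are assumed for illustration) propagates the angular velocity along the chain:

```python
import numpy as np

def rot_to_next(theta, alpha):
    """Rotation R^{i+1}_i built from the D-H angles, as in the matrix under Equation (2)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct,   st * ca,  st * sa],
        [-st,  ct * ca,  ct * sa],
        [0.0,  -sa,      ca],
    ])

def propagate_omega(thetas, alphas, theta_dots):
    """Apply Equation (2) link by link: omega_{i+1} = R * omega_i + theta_dot * z_hat."""
    omega = np.zeros(3)                      # the base is assumed to be at rest
    z_hat = np.array([0.0, 0.0, 1.0])
    omegas = []
    for theta, alpha, theta_dot in zip(thetas, alphas, theta_dots):
        omega = rot_to_next(theta, alpha) @ omega + theta_dot * z_hat
        omegas.append(omega)
    return omegas                            # angular velocity of each member in its own frame

# hypothetical joint configuration and drive speeds, for illustration only
print(propagate_omega(thetas=[0.1, 0.4, -0.2],
                      alphas=[0.0, -np.pi / 2, 0.0],
                      theta_dots=[0.5, 0.3, 0.2]))
```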

2.2. Modification of the ICA Algorithm and Experiment

By carrying out the multiplication in Equation (6) and extracting the equations for the individual coordinates, the equation for the X coordinate can be written as:
\omega^{4}_{4,x} = W_1\sin\theta_2\sin\theta_3\cos\theta_4 + W_1\cos\theta_2\cos\theta_3\cos\theta_4 - W_2\sin\theta_4 - W_3\sin\theta_4    (7)
The components of Equation (7) are periodic functions. Equation (7) was extracted from Equation (6). It should also be noted that all the angles in Equations (6) and (7) are time-varying. The signal contained in Equation (7) is the resultant of the signals on the remaining members of the system. It is therefore a mixture of signals from different IMU sensors. The analysed sensor, which contains the resultant values of the angular velocities, is IMU-3. Figure 4, Figure 5, Figure 6 and Figure 7 show data from the robot encoders presenting the angles set on the individual axes [45,46].
An example of a mixture of signals is the diagram in Figure 8, which shows the signal from the X axis of the IMU-2 sensor together with the signal from the Y axis of the IMU-3 sensor. The pairing of different axes (X and Y) results from the configuration of the sensors mounted on the robot, shown in Figure 3.
The Y axis of the IMU-3 sensor records data that was generated and recorded by the X axis of the IMU-2 sensor. The signal on the IMU-3 sensor is a composite of the signal generated by the member on which IMU-3 is attached and the signals recorded from the preceding members. The Y-axis signal of the IMU-3 sensor is therefore a mixture of signals. The algorithm for solving problems related to mixing two signals and then separating them is the ICA (Independent Component Analysis) algorithm, given by the relationship [47]:
x = AS    (8)
where:
  • x —an input signal containing mixtures of signals,
  • A —mixing matrix,
  • S —source signals.
The ICA algorithm belongs to the set of unsupervised learning algorithms. The algorithm involves computing the gradient of the equation containing the update of the matrix W (the inverse of the matrix A). The update equation also includes the entropy function, given as the activation function:
U = \tanh(x)    (9)
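For reference, one common textbook form of a gradient-based ICA update with the tanh() activation of Equation (9) could look as follows; this is a generic sketch, not the exact implementation used in the study, and the learning rate, iteration count and toy signals are assumptions:

```python
import numpy as np

def ica_unmix(X, lr=1e-3, n_iter=2000):
    """Estimate the unmixing matrix W (inverse of A in Equation (8)) for observations X (n_signals x n_samples)."""
    n, m = X.shape
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X                              # current source estimates
        g = np.tanh(Y)                         # activation (entropy gradient) function, Equation (9)
        # one common natural-gradient update: dW proportional to (I - g(Y) Y^T / m) W
        W += lr * (np.eye(n) - (g @ Y.T) / m) @ W
    return W

# toy example: two mixed sinusoids standing in for two IMU channels (illustrative data only)
t = np.linspace(0.0, 10.0, 2000)
S = np.vstack([np.sin(2.0 * t), np.cos(5.0 * t)])      # source signals
A = np.array([[1.0, 0.6], [0.4, 1.0]])                 # mixing matrix
X = A @ S
W = ica_unmix(X)
S_est = W @ X                                          # recovered sources (up to scale and order)
```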
By grouping the terms contained in Equation (7):
\omega^{4}_{4,x} = W_1(\sin\theta_2\sin\theta_3\cos\theta_4 + \cos\theta_2\cos\theta_3\cos\theta_4) - W_2\sin\theta_4 - W_3\sin\theta_4    (10)
The individual components of Equation (10) can be expanded on the basis of trigonometric identities, ultimately giving:
\sin\theta_2\sin\theta_3\cos\theta_4 = \frac{\cos(\theta_2-\theta_3-\theta_4) + \cos(\theta_2-\theta_3+\theta_4) - \cos(\theta_2+\theta_3-\theta_4) - \cos(\theta_2+\theta_3+\theta_4)}{4}    (11)
\cos\theta_2\cos\theta_3\cos\theta_4 = \frac{\cos(\theta_2-\theta_3-\theta_4) + \cos(\theta_2-\theta_3+\theta_4) + \cos(\theta_2+\theta_3-\theta_4) + \cos(\theta_2+\theta_3+\theta_4)}{4}    (12)
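The product-to-sum expansions in Equations (11) and (12) can be checked symbolically; the short SymPy sketch below is added here only as a consistency check and is not part of the original derivation:

```python
import sympy as sp

t2, t3, t4 = sp.symbols('theta2 theta3 theta4')

lhs_11 = sp.sin(t2) * sp.sin(t3) * sp.cos(t4)
rhs_11 = (sp.cos(t2 - t3 - t4) + sp.cos(t2 - t3 + t4)
          - sp.cos(t2 + t3 - t4) - sp.cos(t2 + t3 + t4)) / 4

lhs_12 = sp.cos(t2) * sp.cos(t3) * sp.cos(t4)
rhs_12 = (sp.cos(t2 - t3 - t4) + sp.cos(t2 - t3 + t4)
          + sp.cos(t2 + t3 - t4) + sp.cos(t2 + t3 + t4)) / 4

print(sp.simplify(sp.expand_trig(lhs_11 - rhs_11)))  # 0, confirming Equation (11)
print(sp.simplify(sp.expand_trig(lhs_12 - rhs_12)))  # 0, confirming Equation (12)
```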
From Equations (11) and (12) it follows that each component of the angular velocity equation consists of sin() or cos() functions. When determining the form of the entropy for the ICA algorithm, it should be assumed that the trigonometric functions mentioned above are treated as functions of a single variable x, compensated by appropriate constants, in accordance with the following relationships:
\frac{\cos(\theta_2-\theta_3-\theta_4) + \cos(\theta_2-\theta_3+\theta_4) - \cos(\theta_2+\theta_3-\theta_4) - \cos(\theta_2+\theta_3+\theta_4)}{4} = \frac{\cos(x)+a+\cos(x)+b-\cos(x)-c-\cos(x)-d}{4} = \frac{a+b-c-d}{4}    (13)
\frac{\cos(\theta_2-\theta_3-\theta_4) + \cos(\theta_2-\theta_3+\theta_4) + \cos(\theta_2+\theta_3-\theta_4) + \cos(\theta_2+\theta_3+\theta_4)}{4} = \frac{\cos(x)+e+\cos(x)+f+\cos(x)+g+\cos(x)+h}{4} = \cos(x) + \frac{e+f+g+h}{4}    (14)
W_2\sin\theta_4 = W_2\sin(x) + i, \qquad W_3\sin\theta_4 = W_3\sin(x) + j    (15)
where:
  • a, b, c, d, e, f, g, h, i, j — compensation constants.
According to the ICA algorithm, the process of updating the inverse of the mixing matrix is carried out through an entropy gradient. Therefore, Equation (10), after the transformations contained in Equations (11)–(15), should be differentiated:
\frac{\partial}{\partial x}\left(\omega^{4}_{4,x}\right) = -W_1\sin(x) - (W_2 + W_3)\cos(x)    (16)
As a result of the above considerations, Equation (16) will replace the tanh() function used as standard in the ICA algorithm. This is supported by the comparison presented in Figure 9 and Figure 10 of the angular velocity of the second sensor, IMU-2, with its Fourier transform, and by the results presented in the following figures.
Based on Figure 10, it can be concluded that the signal consists of many component signals. The amplitudes are used for the analysis starting from the largest value, taking a number of them corresponding to the number of terms in the angular velocity equation.
In the generalized case, Function (9) is replaced by the relation:
U = \frac{\partial}{\partial x}\left(\omega^{i+1}_{i+1,(x/y/z)}(x)\right)    (17)
An exemplary element of this function (containing a sin() or cos() term) is given by the relationship:
A_i\,\frac{e^{ix} + e^{-ix}}{2}    (18)
where:
  • A_i — the amplitude determined on the basis of the FFT.
The form of the activation function is shown in Figure 11.
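One possible way to assemble such a model-based activation function from the dominant FFT amplitudes, in the spirit of Equations (17) and (18), is sketched below; the function names, the number of retained peaks and the synthetic test signal are assumptions of this illustration, not details reported in the paper:

```python
import numpy as np

def model_based_activation(signal, n_terms):
    """Entropy-gradient (activation) function built from the measured signal's dominant
    FFT amplitudes, in the spirit of Equations (17) and (18)."""
    spectrum = np.fft.rfft(signal)
    amplitudes = 2.0 * np.abs(spectrum) / len(signal)    # single-sided amplitude estimates
    a = np.sort(amplitudes)[-n_terms:]                   # the n_terms largest amplitudes

    def activation(x):
        # each retained term A_i cos(x) contributes -A_i sin(x) after differentiation,
        # so this model-based function replacing tanh(x) is a weighted sinusoid
        return -np.sum(a) * np.sin(x)

    return activation

# illustrative use on a synthetic angular-velocity trace (not measurement data)
t = np.linspace(0.0, 4.0, 1000)
omega = 0.8 * np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.cos(2 * np.pi * 4.0 * t)
U = model_based_activation(omega, n_terms=2)
print(U(np.array([0.0, 0.5, 1.0])))
```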
The set of signals shown in Figure 8 is a mixture of signals. The comparison of these two signals results from the fact that the IMU sensors from which the data are read belong to different rigid bodies with separate coordinate systems. By analyzing Figure 8, it can be concluded that it is possible to distinguish two signals, i.e., a signal that was generated on the second member of the system and recorded by the sensor located on the third member. One can also see a signal that comes only from the third member. The main task of this study is to select a sensor that contains the largest possible number of mixed data and to attempt to identify the individual signals.
Figure 12, Figure 13 and Figure 14 show the measurement and estimation for the first IMU sensor, and Figure 15, Figure 16 and Figure 17 for the second IMU sensor. Figure 12 and Figure 16 show the signal estimation (Z axis of IMU-1 and Z axis of IMU-3, respectively) using the ICA algorithm with the standard entropy gradient function given by relationship (9). Figure 14 and Figure 17 show the application of the modified entropy gradient function (Functions (17) and (18)). Figure 13 and Figure 15 present the analysis of the estimation error using the DTW (Dynamic Time Warping) method [48,49,50].
Based on the DTW algorithm (Figure 13 and Figure 15), the average measurement error was determined (tanh()//new function):
  • axis Z of IMU-1 → 0.046 rad/s // 0.0018 rad/s.
Based on the analysis of the above result, it can be concluded that the average error has been reduced by almost 96%.
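The DTW-based error assessment can be reproduced in outline as follows; the sketch uses a plain dynamic-programming DTW written out directly rather than a specific library, and the signals fed to it are synthetic placeholders, since the measurement data are available only on request:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW cost between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def mean_alignment_error(estimate, reference):
    """Average per-sample DTW cost, used here as the estimation error measure."""
    return dtw_distance(estimate, reference) / max(len(estimate), len(reference))

# placeholder arrays standing in for the estimated and encoder-referenced angular velocities
reference = np.sin(np.linspace(0.0, 6.0, 300))
estimate_tanh = reference + 0.05 * np.random.default_rng(0).normal(size=300)
estimate_model = reference + 0.002 * np.random.default_rng(1).normal(size=300)
print(mean_alignment_error(estimate_tanh, reference),
      mean_alignment_error(estimate_model, reference))
```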

3. Discussion

The article examines the possibility of recreating the robot's movement on the basis of incomplete measurement information. The aim of the research was to prepare an algorithm for estimating the parameters of sensors located on an object consisting of articulated rigid bodies in controlled laboratory conditions. The result of the research is an algorithm that could be used in uncontrolled working conditions.
The obtained test results were presented in relation to two entropy gradient functions. The use of the standard entropy gradient function (also called the activation function) is not efficient, because this function is designed for the standard ICA algorithm and is based on changes in the probability function. In the case under consideration, the standard entropy gradient function was replaced with the derivative of the function describing the system model. The results presented in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 confirm the correctness of the change in the entropy gradient function. The error in estimating the angle in the case of Figure 12, i.e., the system in which tanh() was used as the entropy gradient function, can be estimated, taking into account the maximum measurement error, at the level of about 20°. The use of the derivative of the system model function as the entropy gradient function reduced the measurement error to the level of about 6°. The reduction in error is significant, although it cannot be unambiguously regarded as sufficient. The system is intended for athletes (both professional and amateur) and for people undergoing rehabilitation. The effectiveness of the method and the acceptable error rate must be assessed by specialists in the above-mentioned areas, i.e., trainers and doctors.

4. Conclusions

This article examines the possibility of reducing the amount of measurement data without causing a significant increase in the measurement error. The research was carried out on an industrial robot due to the possibility of using reference measurement sources in the form of encoders. The tests were carried out with the use of IMU sensors mounted on the members of an industrial robot. The purpose of these tests was to verify an algorithm that could then be applied to biomechanical systems. Verification of an athlete's movement is important in terms of training, while in the case of rehabilitation it helps in the appropriate selection of exercises or loads.
The defined activation function allowed for a significant reduction of the estimation error, which justifies further work on the ICA algorithm together with the entropy gradient function as a tool for estimating mechanical quantities. A significantly smaller error was obtained when applying the entropy gradient function based on the object model than when using the standard entropy gradient function of the classic form of the ICA algorithm, i.e., tanh().
In the next stages, an attempt should be made to use a neural network in the process of reducing the measurement error in the ICA algorithm.

Author Contributions

Methodology, Data curation, M.P., M.J. and A.S.; formal analysis, M.P.; funding acquisition, M.J.; investigation, B.N. and A.S.; methodology, B.N., A.G. and K.W.; supervision, A.G.; validation, A.G. and K.W.; writing—original draft, B.N.; writing—review & editing, A.G. All authors have read and agreed to the published version of the manuscript.

Funding

Work presented in this study was supported by the European Union through the European Social Funds through the Centre of Modern Education as a part of a Silesian University of Technology based on research and innovation project, number of grant agreement: POWR.03.05.00-00.z098/17-00.

Data Availability Statement

Measurement data from the IMU sensors are available on request from the corresponding author due to the need to report data sharing in the interdepartmental project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saedan, M.; Ang, M.H., Jr. 3D Vision-Based Control on an Industrial Robot. In Proceedings of the IASTED International Conference on Robotic and Applications, Tampa, FL, USA, 19–22 November 2001; pp. 152–157. [Google Scholar]
  2. Sariyildiz, E.; Cakiray, E.; Temeltas, H. A comparative Study of Three Inverse Kinematic Methods of Serial Industrial Robot Manipulators in the Screw Theory Framework. Int. J. Adv. Robot. Syst. 2011, 8, 9–24. [Google Scholar] [CrossRef]
  3. Ninomiya, Y.; Arita, Y.; Tanaka, R.; Nishida, T.; Giannoccaro, N.I. Automatic Calibration of Industrial Robot and 3D Sensors using Real-Time Simulator. In Proceedings of the 2018 International Conference on Information and Communication Technology Robotics (ICT-ROBOT), Busan, Korea, 6–8 September 2018; pp. 1–4. [Google Scholar] [CrossRef]
  4. Deoria, A.; Cocuzza, S.; Comand, N.; Bottin, M.; Rossi, A. Analysis of the Compliance Properties of an Industrial Robot with the Mozzi Axis Approach. Robotics 2019, 8, 80. [Google Scholar] [CrossRef] [Green Version]
  5. Aydin, Y.; Kucuk, S. Quaternion Based Inverse Kinematics for Industrial Robot Manipulators with Euler Wrist. In Proceedings of the 2006 IEEE International Conference on Mechatronics, Budapest, Hungary, 3–5 July 2006; pp. 581–586. [Google Scholar] [CrossRef]
  6. Peng, W.Z.; Mummolo, C.; Kim, J.H. Stability criteria of balanced and stoppable unbalanced states for full-body systems with implications in robotic and human gait. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
  7. Shahabpoor, E.; Pavic, A. Estimation of vertical walking ground reaction force in real-life environments using single IMU sensor. J. Biomech. 2018, 79, 181–190. [Google Scholar] [CrossRef] [PubMed]
  8. Pamies-Vila, R.; Font-Llagunes, J.M.; Cuadrado, J.; Javier Alonso, F. Analysis of different uncertainties in the inverse dynamic analysis of human gait. Mech. Mach. Theory 2012, 58, 153–164. [Google Scholar] [CrossRef]
  9. Xiang, Y. Muscle force prediction of 2D gait using predictive dynamics optimization. In Proceedings of the ASME 2016 International Design Engineering Technical Conference and Computers and Information in Engineering Conference, Charlotte, NC, USA, 21–24 August 2016. [Google Scholar]
  10. Nakashima, M.; Satou, K.; Miura, Y. Development of Swimming Human Simulation Model Considering Rigid Body Dynamics and Unsteady Fluid Force for Whole Body. J. Fluid Sci. Technol. 2007, 2, 56–67. [Google Scholar] [CrossRef] [Green Version]
  11. Gruber, K.; Ruder, H.; Denoth, J.; Schneider, K. A comparative study of impact dynamics: Wobbling mass model versus rigid body models. J. Biomech. 1998, 31, 439–444. [Google Scholar] [CrossRef]
  12. Qiu, S.; Wang, Z.; Zhao, H.; Hu, H. Using distributed wearable sensors to measure and evaluate human lower limb motions. IEEE Trans. Instrum. Meas. 2016, 65, 939–950. [Google Scholar] [CrossRef] [Green Version]
  13. Nakhaee, K.; Farahmand, F.; Salarieh, H. Studying the effect of kinematical pattern on the mechanical performance of paraplegic gait with reciprocating orthosis. J. Eng. Med. 2012, 226, 600–611. [Google Scholar] [CrossRef]
  14. Xiang, Y.; Arora, J.S.; Chung, H.J.; Kwon, H.J.; Rahmatalla, S.; Bhatt, R.; Abdel-Malek, K. Predictive simulation of human walking transitions using an optimization formulation. Struct. Multidiscip. Optim. 2012, 45, 759–772. [Google Scholar] [CrossRef]
  15. Yanga, J.; Kima, J.H.; Abdel-Maleka, K.; Marlera, T.; Becka, S.; Koppb, G.R. A new digital human environment and assessment of vehicle interior design. Comput.-Aided Des. 2007, 39, 548–558. [Google Scholar] [CrossRef]
  16. Hayat, A.A.; Chittawadigi, R.G.; Udai, A.D.; Saha, S.K. Identification of Denavit—Hartenberg Parameters of an Industrial Robot. In Proceedings of the Conference on Advances in Robotics, AIR’13, Pune, India, 4–6 July 2013; ACM: Pune, India, 2013; pp. 1–6. [Google Scholar]
  17. Mitsi, S.; Bouzakis, K.D.; Mansour, G.; Sagris, D.; Maliaris, G. Off-line programming of an industrial robot for manufacturing. Int. J. Adv. Manuf. Technol. 2005, 26, 262–267. [Google Scholar] [CrossRef]
  18. Svaco, M.; Sekoranja, B.; Suligoj, F.; Jerbic, B. Calibration of an Industrial Robot using a Stereo Vision System. Procedia Eng. 2014, 69, 459–463. [Google Scholar] [CrossRef] [Green Version]
  19. Jang, J.; Kim, S.; Kwak, Y. Calibration of geometric and non-geometric errors of an industrial robot. Robotica 2001, 19, 311–321. [Google Scholar] [CrossRef] [Green Version]
  20. Shiakolas, P.S.; Conrad, K.L.; Yih, T.C. On the accuracy, repeatability, and degree of influence of kinematics parameters for industrial robots. Int. J. Model. Simul. 2002, 22, 245–254. [Google Scholar] [CrossRef] [Green Version]
  21. Guo, Y.; Li, Y.; Shao, Z. RPV: A spatiotemporal descriptor for rigid body motion recognition. IEEE Trans. Cybern. 2017, 48, 1513–1525. [Google Scholar] [CrossRef] [Green Version]
  22. Wang, C.; Sun, T.; Duan, L.; Liu, Q.; Lu, Z.; Li, M.; Chen, P.; Wei, C.; Hou, A.; Shen, Y.; et al. Gait motion analysis based on WB-4 sensor with quaternion algorithm. In Proceedings of the 6th Annual IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (CYBER 2016), Chengdu, China, 19–22 June 2016; pp. 279–283. [Google Scholar]
  23. Sharf, I.; Wolf, A.; Rubin, M.B. Arithmetic and geometric solution for average rigid-body rotation. Mech. Mach. Theory 2010, 45, 1239–1251. [Google Scholar] [CrossRef]
  24. Szczesna, A. Quaternion entropy for analysis of gait data. Entropy 2019, 21, 79. [Google Scholar] [CrossRef] [Green Version]
  25. Challis, J.H. Quaternions as a solution to determining the angular kinematics of human movement. BMC Biomed. Eng. 2020, 2, 5. [Google Scholar] [CrossRef] [Green Version]
  26. Ding, Y.; Galiana, I.; Siviy, C.; Panizzolo, F.A.; Walsh, C. IMU-based iterative control for hip extension assistance with a soft exosuit. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016. [Google Scholar] [CrossRef]
  27. Hamdi, M.M.; Awad, M.I.; Abdelhameed, M.M.; Tolbah, F.A. Lower limb motion tracking using IMU sensor network. In Proceedings of the 2014 7th Cairo International Biomedical Engineering Conference, Cairo, Egypt, 11–13 December 2014. [Google Scholar]
  28. Mayorca-Torres, D.; Caicedo-Eraso, J.C.; Peluffo-Ordóñez, D.H. Knee joint angle measuring portable embedded system based on Inertial Measurement Units for gait analysis. Int. J. Adv. Sci. Eng. Inf. Technol. 2020, 10, 430–437. [Google Scholar] [CrossRef]
  29. Szczesna, A.; Skurowski, P.; Lach, E.; Pruszowski, P.; Peszor, D.; Paszkuta, M.; Słupik, J.; Lebek, K.; Janiak, M.; Polanski, A.; et al. Inertial motion capture costume design study. Sensors 2017, 17, 612. [Google Scholar] [CrossRef] [Green Version]
  30. Cerveri, P.; Pedotti, A.; Ferrigno, G. Robust recovery of human motion from video using Kalman filters and virtual humans. Hum. Mov. Sci. 2003, 22, 377–404. [Google Scholar] [CrossRef]
  31. Chakraborty, S.; Mondal, D.; Nandy, A. A study on human gait kinematic validation in Multi-Kinect v2 environment. In Proceedings of the 15th IEEE India Council International Conference (INDICON), Coimbatore, India, 16–18 December 2018; pp. 1–4. [Google Scholar] [CrossRef]
  32. Ahmed, F.; Paul, P.P.; Gavrilova, M.L. Kinect-Based gait recognition using sequence of the most relevant joint relative angles. J. WSCG 2015, 23, 147–156. [Google Scholar]
  33. Sousse, R.; Verdu, J.; Jauregui, R.; Ferrer-Roca, V.; Balocco, S. Non-rigid alignment pipeline applied to human gait signals acquired with optical motion capture systems and inertial sensors. J. Biomech. 2020, 98, 109429. [Google Scholar] [CrossRef]
  34. Hirose, K.; Doki, H.; Kondo, A. Dynamic analysis and motion measurement of ski turns using inertial and force sensors. Procedia Eng. 2013, 60, 355–360. [Google Scholar] [CrossRef] [Green Version]
  35. Waegli, A.; Skaloud, J. Optimization of two GPS/MEMS-IMU integration strategies with application to sports. GPS Solut. 2009, 13, 315–326. [Google Scholar] [CrossRef] [Green Version]
  36. Tausel, L.; Cifuentes, C.A.; Rodriguez, C.; Frizera, A.; Bastos, T. Human-Walker interaction on slopes based on LRF and IMU sensors. In Proceedings of the 2014 5th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), São Paulo, Brazil, 12–15 August 2014. [Google Scholar]
  37. Bregler, C.; Malik, J.; Pullen, K. Twist based acquisition and tracking of animal and human kinematics. Int. J. Comput. Vis. 2004, 56, 179–194. [Google Scholar] [CrossRef]
  38. Yuan, Q.; Asadi, E.; Lu, Q.; Yang, G.; Chen, I. Uncertainty based IMU orientation tracking algorithm for dynamic motions. IEEE/ASME Trans. Mechatron. 2019, 24, 872–882. [Google Scholar] [CrossRef]
  39. Jeon, T.H.; Lee, J.K. IMU-based joint angle estimation under various walking and running conditions. J. Korean Soc. Precis. Eng. 2018, 35, 1199–1204. [Google Scholar] [CrossRef]
  40. Watanabe, T.; Ohashi, K. Angle measurements during 2D and 3D movements of a rigid body model of lower limb. Comparison between Integral-based and Quaternion-based Methods. In Proceedings of the International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS-2014), Angers, France, 3–6 March 2014; pp. 35–44. [Google Scholar] [CrossRef] [Green Version]
  41. Van Nguyen, L.; La, H.M. A human foot motion localization algorithm using IMU. In Proceedings of the 2016 American Control Conference (ACC), Boston Marriott Copley Place, Boston, MA, USA, 6–8 July 2016. [Google Scholar]
  42. Yuan, Q.; Chen, I.-M. Localization and velocity tracking of human via 3 IMU sensors. Sens. Actuators A Phys. 2014, 212, 25–33. [Google Scholar] [CrossRef]
  43. Nalepa, B.; Pawlyta, M.; Janiak, M.; Szczesna, A.; Wojciechowski, K.; Gwiazda, A. Research on algorithms for estimating kinematic parameters of a system of articulated rigid bodies based on vectors of accelerations and angular velocities of selected ones. IJMMT 2019, XI, 94–102. [Google Scholar]
  44. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson Education: New York, NY, USA, 2005. [Google Scholar]
  45. Swider, J.; Zbilski, A. The Modeling and Analysis of a Partial Loads in the FANUC AM100IB Robot Joints. IJMMT 2013, 2, 89–96. [Google Scholar]
  46. Cholewa, A.; Sekala, A.; Swider, J.; Zbilski, A. Forward Kinematics and Numerical Model of a FANUC AM100IB ROBOT. IJMMT 2018, 2, 37–44. [Google Scholar]
  47. Comon, P. Independent Component Analysis; Elsevier: Amsterdam, The Netherlands, 1992; pp. 29–38. [Google Scholar]
  48. Rabiner, L.; Myers, C. Connected digit recognition using level-building DTW algorithm. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 351–363. [Google Scholar]
  49. Senin, P. Dynamic Time Warping Algorithm Review; Information and Computer Science Department, University of Hawaii at Manoa Honolulu: Honolulu, HI, USA, 2008. [Google Scholar]
  50. Piyush Shanker, A.; Rajagopalan, A. Off-line signature verification using DTW. Pattern Recognit. Lett. 2007, 28, 1407–1414. [Google Scholar] [CrossRef]
Figure 1. FANUC ARC Mate 100iB industrial robot with marked IMU sensors and encoders.
Figure 2. Placement of IMU sensors on the robot.
Figure 3. Partial kinematic diagram of the industrial robot (only 4 axes, for the 3rd sensor, IMU-3) from Figure 1, containing a description of the coordinate systems only for the IMU-3 sensor.
Figure 4. The angle set on the robot axis to which the coordinate system z_1 x_1 in Figure 3 is assigned. Data from FANUC robot encoders collected using the apparatus.
Figure 5. The angle set on the robot axis to which the coordinate system z_2 x_2 in Figure 3 is assigned. Data from FANUC robot encoders collected using the apparatus.
Figure 6. The angle set on the robot axis to which the coordinate system z_3 x_3 in Figure 3 is assigned. Data from FANUC robot encoders collected using the apparatus.
Figure 7. The angle set on the robot axis to which the coordinate system z_4 x_4 in Figure 3 is assigned. Data from FANUC robot encoders collected using the apparatus.
Figure 8. Summary of the signals from the X axis of the IMU-2 sensor and the Y axis of the IMU-3 sensor.
Figure 9. The angular velocity value for the Z axis of IMU-2.
Figure 10. Fast Fourier Transform for IMU-2z.
Figure 11. The ICA algorithm activation function determined on the basis of relationship (9).
Figure 12. The result of the ICA algorithm on the basis of the activation Function (9).
Figure 13. Analysis of the error from Figure 12 with the DTW algorithm.
Figure 14. The result of the ICA algorithm on the basis of the activation Functions (17) and (18).
Figure 15. Analysis of the error from Figure 14 with the DTW algorithm.
Figure 16. The result of the ICA algorithm on the basis of the activation Function (9).
Figure 17. The result of the ICA algorithm on the basis of the activation Functions (17) and (18).
Table 1. Partial D-H notation parameter table, up to the IMU-3 sensor.

No. | α_i  | d_i | r_i | θ_i
1   | 0    | 0   | 0   | θ_1
2   | π/2  | d_1 | 0   | π/2
3   | 0    | 0   | 0   | θ_2
4   | 0    | d_2 | r_1 | π/2
5   | 0    | 0   | 0   | θ_3
6   | 0    | 0   | 0   | π/2
7   | π/2  | 0   | r_3 | θ_4
