Article
Peer-Review Record

Concurrent Validation of 3D Joint Angles during Gymnastics Techniques Using Inertial Measurement Units

Electronics 2021, 10(11), 1251; https://doi.org/10.3390/electronics10111251
by Joana Barreto 1, César Peixoto 2, Sílvia Cabral 3, Andrew Mark Williams 4, Filipe Casanova 5,6, Bruno Pedro 3 and António P. Veloso 3,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 8 April 2021 / Revised: 14 May 2021 / Accepted: 17 May 2021 / Published: 24 May 2021
(This article belongs to the Special Issue Wearable Electronics for Assessing Human Motor (dis)Abilities)

Round 1

Reviewer 1 Report

The article evaluates and analyzes the movements of different athletes during a series of gymnastic exercises. For this, IMUs are used in different positions on the body, and the results of the movements are checked and reproduced in a simple and graphic way.

I hope that in the final version, the graphs shown in Figure 1 are improved, since the quality is not very good.

Likewise, the figures included in the appendices are not collected in an adequate order, making them difficult to read. In fact they are included in table A1 and this creates confusion. The same happens with table A2.

There is no inconsistency in the results presented.

Author Response

We would like to thank you for your questions and suggestions; they were very useful in improving the first version of our manuscript. We hope that the revised version is an improvement.

Reviewer 1:

The article evaluates and analyzes the movements of different athletes during a series of gymnastic exercises. For this, IMUs are used in different positions on the body, and the results of the movements are checked and reproduced in a simple and graphic way.

 

1) I hope that in the final version, the graphs shown in Figure 1 are improved, since the quality is not very good.

A: The quality of the graphs was improved; they are now presented as Figures 3a and 3b (L361).

 

2) Likewise, the figures included in the appendices are not collected in an adequate order, making them difficult to read. In fact, they are included in table A1 and this creates confusion. The same happens with table A2.

A: The tables in the appendices were improved (it seems that they were unformatted). The figures are included in the tables because they are related to the movement descriptions in the tables. The figures are now in an adequate order (Appendices – Tables B1 and B2, L664).

Reviewer 2 Report

Dear Authors,

I think that your draft presents a very interesting approach of comparison between two movement detection technologies. I have appreciated the detailed description of the move set performed by your subjects. Nevertheless, I think that this version of your draft is not enough technology-oriented to meet the criteria of this Journal. For this reason, please consider the following indications for your redrafting activity.

1) In my opinion your draft needs a mild English form polishing activity.

2) r. 22 please clarify the meaning of “ecological validity of the task” in this context.

3) r. 31 please clarify the meaning of “SPM-1D {t}”.

4) Fig. 1 please increase the graphical resolution. Also, please add labels to declare what is presented on the x axis.

5) I suggest that you describe in greater detail the adopted IMUS system, from an electronic and sensing point of view. In my opinion, this would increase the readability of the draft and its matching with a hardware-oriented Journal.

6) I suggest that you highlight in greater detail the analysis that you performed on the gathered data, and what are the main drivers of your comparison between the two movement detection systems.

7) In r. 223-224 you state that “our results are better than some studies that included error from biomechanical models”. I suggest that you clarify how the inclusion of such errors in your analysis method could further improve your response.

Kind regards.

Author Response

Thank you for your questions and suggestions, they were very useful to improve the first version of our manuscript.

I think that your draft presents a very interesting approach of comparison between two movement detection technologies. I have appreciated the detailed description of the move set performed by your subjects. Nevertheless, I think that this version of your draft is not enough technology-oriented to meet the criteria of this Journal. For this reason, please consider the following indications for your redrafting activity.

 

1) In my opinion your draft needs a mild English form polishing activity.

A: The English was revised and corrected throughout the entire manuscript.

 

2) r. 22 please clarify the meaning of “ecological validity of the task” in this context.

A: The ecological validity of the task under study is preserved when it is analysed in its natural context of practice (e.g., training hall). The evaluation of the task in an artificial context (i.e., in a laboratory) does not completely represent the characteristics of the real context and may result in a non-representative movement pattern, since subjects adapt their movement to this new context. This is now clarified in the Introduction section, L53-55.

 

3) r. 31 please clarify the meaning of “SPM-1D {t}”.

A: “SPM-1D {t}” refers to the two-tailed paired t-test that was computed using the Statistical Parametric Mapping technique for 1-dimensional series (Pataky, 2012). This technique was implemented in our study because it calculates the statistical test on every point of the time series, using the entire data set. This procedure is better explained now and described in detail in L199-205.
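For readers who want to reproduce this step, the sketch below shows how such a point-by-point paired test can be run with the open-source spm1d Python package; the array names, dimensions and placeholder data are illustrative assumptions, not the authors' code.

```python
# Minimal, hypothetical sketch of an SPM-1D two-tailed paired t-test.
# Assumes the spm1d package (https://spm1d.org) and time-normalized curves.
import numpy as np
import spm1d

# Joint-angle curves for one joint plane, shape (n_subjects, n_time_points);
# the 10 x 101 layout is only a placeholder.
angles_imu = np.random.randn(10, 101) * 5.0
angles_os  = angles_imu + np.random.randn(10, 101) * 2.0

# Paired t-statistic computed at every point of the time series.
t  = spm1d.stats.ttest_paired(angles_imu, angles_os)
ti = t.inference(alpha=0.05, two_tailed=True, interp=True)

print(ti.h0reject)  # True if any supra-threshold cluster (significant difference) exists
```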

 

4) Fig. 1 please increase the graphical resolution. Also, please add labels to declare what is presented on the x axis.

A: The graphical resolution of Figure 1, which is now Figures 3a and 3b (L361), was improved. Labels were added to the x axis.

 

5) I suggest that you describe in greater detail the adopted IMUS system, from an electronic and sensing point of view. In my opinion, this would increase the readability of the draft and its matching with a hardware-oriented Journal.

A: In the "Materials and Methods" section, we improved the description of the IMUS, including the type of sensors, what they contain, what they measure, and the type of algorithm used (L94-99).

 

6)

  a) I suggest that you highlight in greater detail the analysis that you performed on the gathered data,
  b) and what are the main drivers of your comparison between the two movement detection systems.

A: a) Details about how we analysed the data collected were included on L191-205, so our procedure is now easier to reproduce.

  b) We compared the two systems with the objective of understanding whether the IMUS is valid for measuring the kinematics of gymnastics techniques in the context of training/practice, since such an instrument would be very useful for giving coaches and gymnasts technically precise feedback. L21-26, L71-74.

 

7) In r. 223-224 you state that “our results are better than some studies that included error from biomechanical models”. I suggest that you clarify how the inclusion of such errors in your analysis method could further improve your response.

A: Although the error from the biomechanical models is present in our study, we obtained better results than studies (Bessone et al., 2019; Zhang et al., 2013; Mavor et al., 2020) that also included this type of error. This is because we tried to reduce this type of error by building the OS biomechanical model according to the definitions of the IMUS biomechanical model, as described in the "Materials and Methods" section, L157-162.

Reviewer 3 Report

The paper "Concurrent validation of 3D joint angles for whole-body during 2 gymnastics techniques using inertial measurement units" aims to validate a commercial sensor-system for measuring 3D joint kinematics in gymnastics.

Overall the paper is OK. The abstract and introduction are well written. However, important statistics are missing and the results are not discussed with sufficient quality. Based on this study and my own experiences I would judge the sensor-system as unsuitable for measuring the 3D joint kinematics in gymnastics.

I think it is important to publish your findings. The quality of the present paper is clearly not high enough but I think that after a careful revision and completing with the missing statistics it will be a good paper, relevant to publish. Don't be disappointed if the results will turn out that the system is unsuitable in its present form. This is a very important result too and there are many ways to correct that and obtain a valid system (if you want to know more, please contact the editor after completing the revision and ask him to give you my contact details).

Here are my detailed comments and suggestions for improvement:

Line 77: height std has error in value

Line 92-93: compass check: a bit shaky → only checking qualitatively with a compass does not tell you much about magnetic disturbances. For this you’d need to do exact measurement with a 3D magnetometer for the entire measurement volumes. I suggest to remove this sentence.

Line 108: what means “except shanks”?

Methods, OS: what models did you use to compute joint angles? What marker set was it that you use? How did you deal with marker occlusions / loss?

Line 118: put the mean +- std trial duration instead of mean trial duration < 30 sec (what is a trial mean duration anyway?).

Line 118-119: what was the criterion for 3 best jumps? And why did you have to choose three and did not keep all 5?

Line 122: what means HD quality for Xsens?

Line 127: reflective marker(s) trajectories: I think marker has no s

Lines 132 – 136: partially redundant info

Line 139: these joint movements?

Lines 144ff: why do you do a time normalization? Even after reading the paper, I’m not sure that time-normalization is a good thing to do. The motions are likely to be too complex and different between each athlete and trial.

Lines 148-149: I’d add mean & std

Lines 148-150: How exactly did you pool/combine the errors of/between the different trials? Be specific so that a reader can reproduce what you did.

Lines 152 & 160: Not needed to put Python and Office Excel

Methods: add picture of athlete with markers and IMUs

Methods: Why did you choose round-off back handspring jump and not something else? And why only this movement? (maybe that’s also something for putting in the intro)

Methods: insufficient/missing statistics. How did you deal with the fact of having dependent trials (3 repetitions per athlete)

Results, intro: add how many trials you had in total (maybe also add in methods?)

Results, general: add at least one typical figure of non-averaged angle curves (IMU vs OS)

Results, general: what was a typical trial length? 1 sec, 5 sec, 1 minute?

Results, general: add a figure similar to Fig 1 but showing the error between the two systems in function of time: only with fig 1 it’s tricky to evaluate and judge whether the IMU system would be valid for single subject data (Fig 1. shows only expected errors on averaged data)

Results, general: you are also directly measuring sensor orientation with the OS system: why are there no results of this part?

Lines 221 – 222: what points you to state that your errors include both technological and biomechanical differences?

Line 222: accuracy: no, with the current analysis, you cannot conclude that the system is accurate. Accuracy is defined as the mean error between two systems. However, this is never computed. The only variable you computed is RMS which is a combination between accuracy and precision (precision meaning the std error). Please have a look at the following publications on how such errors are typically defined and reported:

  • Favre 2008: Ambulatory measurement of 3D knee joint angle
  • Fasel 2017: Validation of functional calibration and strap-down joint drift correction for computing 3D joint angles of knee, hip, and trunk in alpine skiing

Line 224-225: how did you exclude errors from biomechanical models? What are these other studies?

Line 226: I would precise and add “no statistically significant differences, according to our criteria, were found…”. If you have RMSE errors that are so high as reported by you there MUST be differences. They are only not significant since your between-trial differences were so high (at least this is what I deduce from Fig 1).

Lines 226 – 230: remove: this is just a wording out of the results.

Line 232 -233: usually, lower range of motion results in lower correlation.

Lines 239 – 243: remove: this is just a wording out of the results.

Lines 243-244: I’m confused. How come that you have the same biomechanical model for both systems? Xsens is a blackbox and you’ve no idea what internal processing and optimization happens…

Discussion, general: you often write “other studies” but don’t immediately provide the references and also never state what they report. Without this information it’s tricky to relate your result to these studies

Discussion, general: you repeat the result section too much. Try to be more concise and do not repeat so many results. You should discuss and interpret your results in this section, not restate them.

Discussion, general: it is boring to read, I skipped the second half. Try to rephrase and make it more interesting to the reader. Put results in a better perspective, summarize better. From the discussion I’ve a very hard time to judge whether your system is indeed valid for measuring gymnastics. Do not be afraid to write negative results and conclude that the system is not usable for gymnastics. Or only usable for a specific field of use / application.

Also comment on the practicability of the system: can it practically really be used or is the setup and analysis too complex? What exactly is needed to improve the system or make it fully valid? Do you think a different system than Xsens may be better suited and valid (personally, I strongly think that yes, there are better systems, but I'm probably also biased ;-) )

Further, please compare better to other studies in the sports domain. There should be sufficient publications now for other sports that will help you put the performance of your system in a better context.

Discussion: study limitations: missing.

Discussion: why did you analyse only a single movement?

Author Response

We would like to thank the reviewer for the extended and detailed review of our manuscript; we acknowledge that the manuscript has greatly improved based on your recommendations.

Reviewer 3

The paper "Concurrent validation of 3D joint angles for whole-body during 2 gymnastics techniques using inertial measurement units" aims to validate a commercial sensor-system for measuring 3D joint kinematics in gymnastics.
Overall the paper is OK. The abstract and introduction are well written. However important statistics are missing and the results are not discussed with sufficient quality. Based on this study and my own experiences I would judge the sensor-system as unsuitable for measuring the 3D joint kinematics in gymnastics.
I think it is important to publish your findings. The quality of the present paper is clearly not high enough but I think that after a careful revision and completing with the missing statistics it will be a good paper, relevant to publish. Don't be disappointed if the results will turn out that the system is unsuitable in its present form. This is a very important result too and there are many ways to correct that and obtain a valid system (if you want to know more, please contact the editor after completing the revision and ask him to give you my contact details). Here are my detailed comments and suggestions for improvement:

 

1) Line 77: height std has error in value.

A: The error was corrected to 1.57 ± 0.37 m. L84.

 

2) Line 92-93: compass check: a bit shaky → only checking qualitatively with a compass does not tell you much about magnetic disturbances. For this you’d need to do exact measurement with a 3D magnetometer for the entire measurement volumes. I suggest to remove this sentence.

A: The sentence was removed.

 

3) Line 108: what means “except shanks”?

A: We placed the rigid lightweight plates (RP) over the IMUS on the upper arms, forearms and thighs, but not on the shanks. Since the IMUS on the shanks are placed on the medial side, placing the RP over them would cause marker occlusion. The solution was to place the RP on the lateral side, as in Blair et al. (2018). We improved the sentence since it was not clear (L133-137). Additionally, the differences in the placement of the IMUS and RP are now visible in Figure 1, L140.

 

4) Methods, OS: a) what models did you use to compute joint angles? b) What marker set was it that you use? c) How did you deal with marker occlusions / loss?

A: a) It is not possible to modify the Xsens biomechanical model. For the OS, we built the biomechanical model according to the definitions of the IMUS biomechanical model, namely the segment origins, dimensions, and anatomical axis orientations. To be clearer, we added an appendix (Table A) that explains the OS biomechanical model in detail.

  b) We included a figure (Figure 1, L140) with the anatomical markers, rigid lightweight plates and inertial sensor set, so it is easier to understand and reproduce. Additionally, it is described in L119-137.
  c) Marker occlusion/loss occurred occasionally with the markers on the posterior superior iliac spines. When it happened for ≤10 frames, we interpolated the data (least-squares fit method) in Visual 3D (see the illustrative sketch below). When the marker occlusion/loss exceeded 10 frames, the trial was not considered for analysis. We did not have major problems with marker occlusion/loss because we ran several tests to decide the best camera positions in order to avoid this problem, and we used a high number of cameras.
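Purely as an illustration of the gap-filling rule in c) (the actual interpolation was done in Visual 3D, whose internal method is not reproduced here), a hedged NumPy sketch of a least-squares polynomial fill for gaps of at most 10 frames could look like this; the context window and polynomial order are assumptions.

```python
# Illustrative sketch only: least-squares polynomial gap filling for short
# marker drop-outs (<= 10 frames). The study used Visual 3D's built-in tool;
# the context window and cubic order below are assumptions.
import numpy as np

MAX_GAP = 10  # trials with longer gaps were discarded

def fill_gap(trajectory, gap_start, gap_end, context=10, order=3):
    """Fill trajectory[gap_start:gap_end] (1-D array of one marker coordinate)
    with a polynomial fitted by least squares to `context` frames on each side."""
    if gap_end - gap_start > MAX_GAP:
        raise ValueError("Gap too long; discard the trial instead.")
    lo = max(0, gap_start - context)
    hi = min(len(trajectory), gap_end + context)
    frames = np.r_[np.arange(lo, gap_start), np.arange(gap_end, hi)]
    coeffs = np.polyfit(frames, trajectory[frames], order)
    trajectory[gap_start:gap_end] = np.polyval(coeffs, np.arange(gap_start, gap_end))
    return trajectory
```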

 

5) Line 118: put the mean +- std trial duration instead of mean trial duration < 30 sec (what is a trial mean duration anyway?).

A: The trial mean duration is the mean duration of the movement performance per trial (1.53 ± 0.09 s). The information was corrected on L164.

 

6) Line 118-119: what was the criterion for 3 best jumps? And why did you have to choose three and did not keep all 5?

A: The criteria for the three best trials were: a) performing the movement technically correctly, without major execution faults; b) performing the movement inside the limited area; and c) having no marker loss or occlusion.

 

7) Line 122: what means HD quality for Xsens?

A: “HD quality” is an option available in the MVN Analyze software to reprocess the data from Xsens MVN Link after the collection/recording. Contrary to the simple option, the HD reprocessing of the data includes information from previous and later frames to better estimate and correct the position and orientation of each segment. The HD reprocessing option was recommended by the manufacturer for our data, and we thought it was important to mention it (L168).

 

8) Line 127: reflective marker(s) trajectories: I think marker has no s

A: The typographical error was corrected throughout the manuscript.

 

9) Lines 132 – 136: partially redundant info.

A: The sentence has been reformulated to include only the necessary information. L178-179.

 

10) Line 139: these joint movements?

A: The typographical error was corrected. L182.

 

11) Lines 144ff: why do you do a time normalization? Even after reading the paper, I’m not sure that time-normalization is a good thing to do. The motions are likely to be too complex and different between each athlete and trial.

A: To confirm whether the data normalization had an impact on the results, we calculated the RMSE for neck A/A for each repetition of all subjects without normalizing the data. We obtained an RMSE of 8.81° ± 2.85°, compared with the 8.18° ± 2.37° reported in Table 1 of the manuscript. We therefore believe that the two methods (normalization versus non-normalization) would not lead to significantly different results.
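To make the comparison concrete, here is a minimal sketch of time-normalizing two angle curves to a common length and computing the RMSE between them; the 101-sample length, the function names and the placeholder curves are assumptions, not details from the manuscript.

```python
# Hedged sketch: time-normalize two joint-angle curves and compute their RMSE.
# The 101-sample normalized length is an assumption for illustration.
import numpy as np

def time_normalize(curve, n_points=101):
    """Resample a 1-D joint-angle curve to n_points by linear interpolation."""
    curve = np.asarray(curve, dtype=float)
    old_t = np.linspace(0.0, 1.0, len(curve))
    new_t = np.linspace(0.0, 1.0, n_points)
    return np.interp(new_t, old_t, curve)

def rmse(a, b):
    """Root-mean-square error between two equally sampled curves (degrees)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

# Example with placeholder curves of different original lengths.
imu_curve = np.sin(np.linspace(0, np.pi, 180)) * 50.0
os_curve  = np.sin(np.linspace(0, np.pi, 230)) * 50.0 + 3.0
print(rmse(time_normalize(imu_curve), time_normalize(os_curve)))
```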

 

12) Lines 148-149: I’d add mean & std

A: We added “(CMC mean ± SD)” and “(RMSE mean ± SD)”. L192.

 

13) Lines 148-150: How exactly did you pool/combine the errors of/between the different trials? Be specific so that a reader can reproduce what you did.

A: We calculated the CMC and RMSE for each subject and for each joint plane by taking the mean value over the 3 trials. Then, we calculated the mean ± SD across all subjects for every joint plane. This is now included in L191-205.
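A hedged sketch of this pooling (per-trial RMSE, averaged over the three trials of each subject, then mean ± SD across subjects) is given below; the data layout is an assumption and the CMC computation is omitted, so this is not the authors' code.

```python
# Hedged sketch of the error pooling described above. The dictionary layout
# is an assumed data structure; CMC pooling would follow the same pattern.
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def pool_rmse(trials_by_subject):
    """trials_by_subject: {subject_id: [(imu_curve, os_curve), ...]} for one
    joint plane. Returns (mean, sd) of the per-subject mean RMSE."""
    subject_means = []
    for trials in trials_by_subject.values():
        per_trial = [rmse(imu, os) for imu, os in trials]  # RMSE of each trial
        subject_means.append(np.mean(per_trial))           # mean over the 3 trials
    return float(np.mean(subject_means)), float(np.std(subject_means, ddof=1))
```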

 

14) Lines 152 & 160: Not needed to put Python and Office Excel.

A: The references to Python and Office Excel were removed.

 

15) Methods: add picture of athlete with markers and IMUs

A: We added a picture of the athlete with the markers, clusters and IMUS (Figure 1, L140), so it is easier for the reader to understand the setup.

 

16) Methods: Why did you choose round-off back handspring jump and not something else? And why only this movement? (maybe that’s also something for putting in the intro)

A: The round-off back handspring technique was chosen (besides the reasons presented in L166-168) because it is one of the most important and mandatory techniques in gymnastics. It is learnt at an early age and performed at all levels of competition. Furthermore, it is used in various gymnastics disciplines (e.g., women's and men's artistic gymnastics, Teamgym, acrobatics and trampoline) and on various apparatus (e.g., tumbling, floor, balance beam). The importance of this technique and the reasons why it was evaluated in our study are now clarified in the Introduction section, L39-42. Due to laboratory limitations such as capture volume and ceiling height, it was not possible to evaluate a more complex sequence (e.g., with three elements or somersaults).

 

17) Methods: insufficient/missing statistics. How did you deal with the fact of having dependent trials (3 repetitions per athlete)

A: We dealt with the dependent trials by performing a two-tailed paired-sample t-test based on SPM-1D (L199-201).

 

18) Results, intro: add how many trials you had in total (maybe also add in methods?)

A: A total of 30 trials were considered for analysis. We added this information to “Materials and methods” and “Results” sections (L165 and L272, respectively).

 

19) Results, general: add at least one typical figure of non-averaged angle curves (IMU vs OS)

A: A new figure (Figure 2, L275) was added with the non-averaged curves for joint angles from IMUS and OS for one trial of one subject, for all joint planes.

 

20) Results, general: what was a typical trial length? 1 sec, 5 sec, 1 minute?

A: A typical trial length is 1.53 ± 0.09 s (the period of each trial that was considered for analysis, from the first step of the round-off to the landing of the back handspring). This information is now included in the "Materials and Methods" section, L164.

 

21) Results, general: add a figure similar to Fig 1 but showing the error between the two systems in function of time: only with fig 1 it’s tricky to evaluate and judge whether the IMU system would be valid for single subject data (Fig 1. shows only expected errors on averaged data)

A: A figure was added to the Appendices section (Figure A) showing the joint angles measured by the OS and IMUS during three round-off back handspring trials of one subject. This figure makes it possible to see the differences between the two systems across the three trials of one subject.

 

22) Results, general: you are also directly measuring sensor orientation with the OS system: why are there no results of this part?

A: Since it was not one of the objectives of the study, we did not measure it.

 

23) Lines 221 – 222: what points you to state that your errors include both technological and biomechanical differences?

A: First, technological error is present since both systems use different technologies to measure kinematics (optical tracking versus inertial motion capture). Second, our procedure of building the OS biomechanical model with the same segment definitions as the IMUS biomechanical model reduces the error associated with the different biomechanical models, but does not eliminate it completely. The existing literature (Robert-Lachaine et al., 2017; Mavor et al., 2020; Teufl et al., 2019) demonstrates that the technological error is normally low, confirming that our results are a combination of technological error and error from the different biomechanical models.

 

24) Line 222: accuracy: no, with the current analysis, you cannot conclude that the system is accurate. Accuracy is defined as the mean error between two systems. However, this is never computed. The only variable you computed is RMS which is a combination between accuracy and precision (precision meaning the std error). Please have a look at the following publications on how such errors are typically defined and reported:

- Favre 2008: Ambulatory measurement of 3D knee joint angle
- Fasel 2017: Validation of functional calibration and strap-down joint drift correction for computing 3D joint angles of knee, hip, and trunk in alpine skiing

A: We report a combination of the accuracy and precision of the IMUS (RMSE ± SD), and we corrected the entire manuscript to report whether the system was suitable/valid to analyse gymnastics techniques. The articles that you suggested were of great help, especially Fasel (2017).

 

25) Line 224-225: how did you exclude errors from biomechanical models? What are these other studies?

A: The error from the different biomechanical models is present in our study. However, this type of error was reduced by building the OS biomechanical model according to the segment definitions of the IMUS biomechanical model (L129-133). The results and discussion sections were rewritten and the references were included.

 

26) Line 226: I would precise and add “no statistically significant differences, according to our criteria, were found…”. If you have RMSE errors that are so high as reported by you there MUST be differences. They are only not significant since your between-trial differences were so high (at least this is what I deduce from Fig 1).

A: The expression was added throughout the document (L322, 329). Figure 1 (now Figures 3a and 3b, L361) was corrected. Figure 2 (L274), Appendix Figure A (L674), and the Results and Discussion sections (L271 and 402, respectively) should be helpful in understanding the data and the results found.

 

27) Lines 226 – 230: remove: this is just a wording out of the results.

A: The lines were removed.

 

28) Line 232 -233: usually, lower range of motion results in lower correlation.

A: The lines were removed.

 

29) Lines 239 – 243: remove: this is just a wording out of the results.

A: The lines were removed.

 

30) Lines 243-244: I’m confused. How come that you have the same biomechanical model for both systems? Xsens is a blackbox and you’ve no idea what internal processing and optimization happens…

A: We have the same segment definitions (i.e., segment origins, dimensions and anatomical axis orientations) for both biomechanical models. This procedure reduces the differences between the biomechanical models and, consequently, reduces this type of error. However, it does not remove it completely since, as you mentioned, we have no information about the internal processing and optimization in Xsens, and the static calibration for the OS does not include the optimization from the walking phase of the IMUS calibration.

 

31) Discussion, general: you often write “other studies” but don’t immediately provide the references and also never state what they report. Without this information it’s tricky to relate your result to these studies

A: The discussion section was rewritten and the references are now provided immediately after the corresponding statements throughout the entire manuscript.

 

32) Discussion, general: you repeat the result section too much. Try to be more concise and do not repeat so many results. You should discuss and interpret your results in this section, not restate them

A: In the "Discussion" section, the lines repeating the results were eliminated, so the results are now immediately compared with other studies. We improved the text to be more concise in comparing and interpreting/discussing the results.

 

33) Discussion, general: it is boring to read, I skipped the second half. Try to rephrase and make it more interesting to the reader. Put results in a better perspective, summarize better. From the discussion I’ve a very hard time to judge whether your system is indeed valid for measuring gymnastics. Do not be afraid to write negative results and conclude that the system is not usable for gymnastics. Or only usable for a specific field of use / application.

A: The discussion was rewritten to first emphasize the most important results, namely RMSE above 10°, CMC below 0.80 and significant differences (SPM-1D {t}), L409-465. After that, we compared and discussed the results that were considered acceptable (L466-491). In our opinion, the IMUS can be used to provide technical feedback in a training context for gymnastics, except for the thorax-thigh F/E joint angle, which has an RMSE above our criterion (10°) and above the criterion from the Teamgym Code of Points for minor deductions (15°). We also believe that the IMUS is valid and is a major advantage for use in training. Additionally, these results emphasize that more studies are necessary. L498-587.

 

34) Also comment on the practicability of the system: can it practically really be used or is the setup and analysis too complex? What exactly is needed to improve the system or make it fully valid? Do you think a different system than Xsens may be better suited and valid (personally, I strongly think that yes, there are better systems, but I'm probably also biased ;-) )

A: We consider the system practical, due to the easy and quick preparation of the subject, the calibration, and the real-time data it provides. This is a great advantage since coaches and gymnasts do not need to spend much time getting ready to record data (L594-596). To be fully valid, we believe that more studies should be done, focusing on methods to eliminate the main source of error: the two different biomechanical models. Honestly, we do not have sufficient knowledge to compare the Xsens MVN Link with other IMUS and decide whether another system would be more suitable.

 

35) Further, please compare better to other studies in the sports domain. There should be sufficient publications now for other sports that will help you put the performance of your system in a better context.

A: From a first search, we found only one recent article in the sports domain validating the Xsens MVN Link, for the tennis forehand drive (Pedro, Cabral, Veloso, 2021). We included this study in our discussion. However, no other pertinent and recent studies in the sports domain were found.

 

36) Discussion: study limitations: missing.

A: This study has limitations, such as the non-ecological environment (i.e., the laboratory) where the task was performed, which can compromise the motor patterns of the subjects. The limitations of the study are now included in the "Discussion" section, L403-405.

 

37) Discussion: why did you analyse only a single movement?

A: The movement analysed is a sequence of two elements, as illustrated in Appendices Tables B1 and B2. To our knowledge, none of the existing validation studies included two different elements/tasks with this level of complexity. Due to limitations such as laboratory length and ceiling height, it was not possible to add a third element to the sequence. Since the data collection sessions were long (participant preparation, Xsens calibration and performance of the task under study), it was not possible to evaluate other elements. However, we highlight that the round-off back handspring is performed in various gymnastics disciplines and apparatus, and at all recreational and competitive levels, which makes it one of the most fundamental and important sequences in gymnastics (L39-42).

Round 2

Reviewer 2 Report

Dear Author,

thank you for your redrafting efforts. I think that the quality of your draft has now improved.

Kind regards.

Author Response

Thank you very much for your comments, which helped to improve the quality of the manuscript.

Reviewer 3 Report

Dear authors,

Thank you for revising the paper and considering all reviewer's comments. I think that the quality has been greatly improved and that the paper is now easier to understand.

I've one last request of change: please make the axes of Fig2 bigger, the numbers cannot be read.

Then as an input for further manuscripts: please try to provide your opinion and insights in the discussion section. According to your observations and analyses, I'd love to see your opinion on why the system was bad / good for certain joint angles. Also give a more personal touch to the usability of the system and where you think its good / bad to use and what you think is missing to obtain a great system that you and the coaches could well use during training.

Author Response

Dear Reviewer, the authors would like to thank you for your comments and recommendations, which allowed us to improve our manuscript significantly.

I've one last request of change: please make the axes of Fig2 bigger, the numbers cannot be read.

A: The axes of Figure 2, as well as those of Figure A (Appendices), were enlarged, so the values are now legible.

Then as an input for further manuscripts: please try to provide your opinion and insights in the discussion section. According to your observations and analyses, I'd love to see your opinion on why the system was bad / good for certain joint angles.

A: In our opinion, the main reason for the system being good or bad for certain joint angles is the differences between the biomechanical models, which are caused by the different calibration procedures (OS = static calibration; IMUS = static and dynamic calibration). However, we obtained some good results but also some bad ones, so joint angles are affected differently. We believe joints with larger amplitudes are more affected by the different biomechanical models/calibration procedures: for example, we registered higher RMSE for joints with large amplitudes of movement (e.g., shoulder, thorax-thigh, knee and ankle F/E and thorax-thigh I/E) (L300-303).

 

Also give a more personal touch to a) the usability of the system and where you think its good / bad to use and, b) what you think is missing to obtain a great system that you and the coaches could well use during training.

A:

  a) In our opinion, and taking into consideration the authors' experience as gymnastics coaches and the results of this study, the system is good for evaluating gymnasts in a training context and for providing real-time feedback. We would not use it to compare gymnasts' performances, nor to give them technical feedback at crucial moments of the season (e.g., when you need to be very detailed about their execution quality). Additionally, the system can be used in other sports similar to gymnastics, such as figure skating, parkour, jump rope and skateboarding, among others (L313-316).
  b) From a practical point of view, we think it would be great and very useful for coaches and gymnasts if the system provided an automatic classification/technical feedback on the performance. Despite the advantages of using the system in training, the coach still needs to spend some time analysing the data to provide feedback to the gymnast. If there were an automatic classification (e.g., classifying the somersault as tucked, piked or straight according to the definitions of the Code of Points), it would save the coach time and possibly give some autonomy to more experienced gymnasts (L309-312).