**3. Methodological Optimization and Development in Motion Analysis**

To meet the growing demand for wearable motion capture and remote motion analysis in healthcare [18–21], new trends are emerging that optimize existing motion analysis models or combine them with novel statistical, machine learning, or deep learning algorithms. Li et al. [22] proposed using multivariable linear regression models and a composite index, derived from the most significant differences between patients with anterior cruciate ligament deficiency (ACLD) and healthy controls, to facilitate the clinical diagnosis of ACLD. Zhao et al. [23] proposed a new model that uses only easily available anthropometric data (i.e., leg length, body weight, and walking cadence) to estimate vertical stiffness in the hip and knee joints, providing alternative insights for gait analysis. Because the subtalar and talocrural joint motions of the human ankle are difficult to measure quantitatively in outdoor environments, Agudelo-Varela et al. [24] proposed a wearable device that applies a new statistical method of angle calculation.

Machine/deep learning algorithms have further facilitated marker-free motion capture and analysis. Using machine learning algorithms, Haufe et al. [25] found that gait events could be accurately determined from the sEMG signals of as few as two lower-limb muscles in patients with Parkinson's disease. Sikandar et al. [26] used deep learning algorithms to classify walking speeds from two-dimensional marker-free video images. Similarly, Tang et al. [27] attempted to estimate joint moments and power from video data using deep learning algorithms; however, discrepancies between the marker-free and marker-based estimates indicated that their marker-free approach could be further improved to identify joint centers and segment centers of mass more accurately. In addition to video images, Wang et al. [28] utilized motion data collected by two inertial measurement units (IMUs) to identify students' classroom behaviors using deep learning algorithms. Similarly, Xia et al. [29] combined IMUs and thin-film force sensors in hand exoskeletons designed for stroke survivors, enabling intention recognition from the collected biomechanical data using deep learning algorithms.
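To make the composite-index idea concrete, the following is a minimal, self-contained sketch of how a multivariable linear regression can fold several gait variables into a single diagnostic index. The feature choices, group means, and threshold below are hypothetical illustrations, not the coefficients or variables reported by Li et al. [22].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gait features per subject (illustration only):
# columns = peak knee flexion angle (deg), stride length (m), stance-phase ratio.
n = 40
healthy = rng.normal(loc=[60.0, 1.40, 0.60], scale=[3.0, 0.08, 0.02], size=(n, 3))
acld    = rng.normal(loc=[52.0, 1.25, 0.64], scale=[3.0, 0.08, 0.02], size=(n, 3))

X = np.vstack([healthy, acld])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = control, 1 = ACLD

# Multivariable linear regression: prepend an intercept column and solve
# ordinary least squares; the fitted linear combination of the gait
# variables then serves as a single composite index per subject.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
index = A @ coef

# Classify by thresholding the composite index at 0.5 (hypothetical cutoff).
pred = (index > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"composite-index accuracy on synthetic data: {accuracy:.2f}")
```

On well-separated synthetic groups like these, the thresholded index recovers the labels almost perfectly; in practice, the clinical value of such an index depends on which gait variables differ most significantly between the patient and control cohorts.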
