**1. Introduction**

Modifications of motor function associated with different environments or states of health are typically estimated and quantified by means of instrumental gait analysis. In this context, a particularly relevant problem is recognizing at least the two main gait phases, namely stance and swing. Single-type sensors, or combinations of multiple sensor types measuring angular velocity, attitude, force, electromyography, or video, are typically used for gait-phase quantification [1–5]. Recent technological advancements are allowing the experimental complexity of gait-analysis set-ups to be reduced, providing a less expensive, less intrusive, and more comfortable estimation of gait data. Robust artificial-intelligence techniques for managing large amounts of biological data and signals coming from smart sensors, such as inertial measurement units (IMUs), are undoubtedly among the most widely used approaches to this aim [6–14]. In particular, the estimation of temporal gait parameters could take great advantage of these new approaches. The use of IMUs frequently proves suitable for a smart assessment of walking parameters, such as gait-phase duration and the timing of heel strike (when the foot touches the ground) and toe off (when the foot toes clear the ground) [11]. Attempts based on artificial intelligence have also been applied satisfactorily to the assessment of gait parameters during walking [6,7,9,10,12–15].

Machine/deep-learning techniques are commonly employed for the classification of biological signals [13,16]. The introduction of these methodologies has also opened a novel perspective for reducing the complexity of experimental set-ups. Predicting gait events from sensors already used in smart gait protocols or in specific environments would avoid adding further sensors or systems for the direct measurement of temporal data, such as stereo-photogrammetry, foot-switch sensors, pressure mats, and IMUs [11,17–19]. This would be particularly suitable for specific fields, such as walking-aid devices (mainly exoskeletons), where sensors are already embedded in the system [10,20–23]. From this point of view, an interesting attempt was made by Liu et al. [10], who proposed a technique for recognizing gait phases using only the joint angular sensors of an exoskeleton robot, which provide position, velocity, acceleration, and further motion data; very promising results were achieved. Likewise, different studies have attempted to provide a reliable classification of gait phases and an accurate prediction of heel strike (HS) and toe off (TO) from surface electromyographic (sEMG) sensors alone [13,14,21,23–25]. Details about the methodology and outcomes of these studies are reported in Section 2 (related works).

Following the line taken by the above-mentioned studies, the present work proposes a novel approach for the binary classification of gait phases and the prediction of gait events, based only on deep-learning analysis of the sagittal knee-joint angle measured by a single electrogoniometer. Although promising classification performance was achieved by sEMG-based methods [13,14,21,23–25], no gold-standard approach is available in the literature. Thus, to evaluate the robustness of the proposed approach, a direct comparison on the same population was also performed against the sEMG-based approach of [14], which achieved the best performance in HS and TO prediction among the methods reported in the literature (see Section 2). A further aim of the study is to test whether adding electrogoniometer data could further improve the classification performance of this state-of-the-art method.
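The relationship between binary phase classification and event prediction can be illustrated with a minimal sketch: once each sample of the knee-angle signal is labeled stance or swing, HS and TO follow directly as the phase transitions. The snippet below (in Python with NumPy) is only a hypothetical illustration of this post-processing step; the function name, the synthetic phase sequence, and the labeling convention (1 = stance, 0 = swing) are assumptions, not the actual pipeline of the study.

```python
import numpy as np

def events_from_phases(phase):
    """Given a per-sample binary gait-phase array (1 = stance, 0 = swing),
    return heel-strike and toe-off sample indices, defined as the
    swing->stance and stance->swing transitions, respectively."""
    d = np.diff(phase.astype(int))
    heel_strikes = np.where(d == 1)[0] + 1   # 0 -> 1: foot touches ground
    toe_offs = np.where(d == -1)[0] + 1      # 1 -> 0: toes clear ground
    return heel_strikes, toe_offs

# Hypothetical classifier output over roughly two gait cycles
phase = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
hs, to = events_from_phases(phase)
print(hs.tolist(), to.tolist())  # [2, 7] [5, 9]
```

In this framing, event-timing error reduces to how far the predicted transition samples fall from the reference HS/TO instants, which is how the comparison against the sEMG-based baseline can be quantified.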
