**2. System Hardware and Software**

The system hardware (Figure 1a) is built around a Microsoft Kinect v2 device (Microsoft Corporation, Redmond, WA, USA), which, through its Software Development Kit (SDK) [12], provides RGB color and depth streams at 30 frames/s, with resolutions of 1920 × 1080 px and 512 × 424 px, respectively; the depth range extends from 0.5 m to 4.5 m. The device is connected, via a USB 3.0 port, to an Intel® NUC i7 mini-PC running Windows® 10 (64-bit) (Intel Corporation, Santa Clara, CA, USA) and equipped with a monitor that provides both the system management GUI and visual feedback of the user's hand and body movements (Figure 1b).

**Figure 1.** System for the analysis of lower limb and postural tasks: (**a**) RGB-Depth camera (Microsoft Kinect v2), NUC i7 Intel mini-PC, and monitor; (**b**) example of the GUI with visual feedback.

The system software consists of custom programs, written in C++, that run on the NUC and access the SDK APIs, which provide, every 1/30 s, an RGB image and the three-dimensional (3D) coordinates of the 25 joints of the skeleton model used by the SDK (Figure 2).

**Figure 2.** Positions of the joints of the skeleton model from the Microsoft Kinect SDK: (**a**) three-dimensional representation of the joints and segments of the body vertical axis (green), upper limbs (red), and lower limbs (blue); (**b**) two-dimensional re-projection of the same joints and segments onto the RGB image.

The data analysis and the supervised classifier training and testing phases are based on custom Matlab® scripts (The MathWorks, Inc., Natick, MA, USA). The software implements the main functionalities of the system: real-time interaction through a Human–Computer Interface (HCI) based on hand-joint tracking/processing and visual feedback; movement analysis and characterization of each task, by processing the 3D positions of task-specific sets of skeleton joints; and automated assessment of posture and lower limb tasks, through trained supervised classifiers. The data of each acquisition session (the video of each task performance, user inputs, body movement trajectories, and automated assessment scores) are encrypted and recorded to allow remote supervision by authorized clinicians.
