*Proceeding Paper* **Development and Testing of Motion-Detection Techniques for People with Cerebral Palsy †**

**Clara Lebrato-Vázquez \*, Alberto J. Molina-Cantero, Juan A. Castro-García, Manuel Merino-Monge and Isabel M. Gómez-González**

> Departamento de Tecnología Electrónica, E.T.S. Ingeniería Informática, Universidad de Sevilla, 41004 Sevilla, Spain; almolina@us.es (A.J.M.-C.); jacastro@us.es (J.A.C.-G.); manmermon@dte.us.es (M.M.-M.); igomez@us.es (I.M.G.-G.)

**\*** Correspondence: clebrato@us.es

† Presented at the 4th XoveTIC Conference, A Coruña, Spain, 7–8 October 2021.

**Abstract:** This paper describes several computer access methods tested by Eva, a woman with choreoathetotic cerebral palsy. This condition prevents her from controlling the peripherals and configurations that normally give access to information and communication technologies, further limiting her independence. To enable Eva to access a computer, we focused our efforts on methods that she could control by moving only her neck and head, based on three sensing technologies: Kinect, inertial measurement units (IMUs), and video. The Kinect, composed of a system of cameras and sensors, allows contactless interaction with and control of devices. An IMU is a device combining an accelerometer and a gyroscope, which measures acceleration, angular velocity, and orientation. For live image processing, a common webcam was used. During the experiment, Eva had to follow a sequence shown on the computer screen that alternated head movements with rest. These movements involved moving the head up, down, right, or left. Our results showed that the Kinect system could not be used effectively, while the image-processing algorithm obtained the best performance.

**Keywords:** cerebral palsy; choreoathetosis; accessibility; IMU; Kinect; image processing

### **1. Introduction**

Cerebral palsy is a non-degenerative, permanent neurodevelopmental disorder that affects one in 500 people [1]. It is caused by a neurological injury occurring during fetal development, childbirth, or early childhood [1], which can have varied causes [2]. It is the most frequent cause of disability in children [3], and its degree differs widely from person to person [4], depending on the intensity, location, and duration of the injury [3]. Among the different types of cerebral palsy, we focused on the choreoathetotic type, in which chorea and athetosis occur simultaneously. Chorea is characterized by involuntary, irregular, brief, repetitive, and somewhat rapid movements, while athetosis is a continuous flow of slow, twisting, and sinuous involuntary movements that alternate with parts of the body remaining rigid. When they occur together, the person exhibits a mixture of twisting movements at variable speed. Typically, subjects show slow movements of the head, neck, and extremities due to athetosis, combined with large, shaking movements of the arms and hands caused by chorea. Together, these uncontrolled movements affect all of the person's activity, abilities, and posture.

In the literature, we found several research articles devoted to the design of devices for pointer control [5,6] using techniques based on movement detection sensors or software interfaces for a traditional mouse that are capable of filtering or isolating voluntary movements from the involuntary ones [7].

**Citation:** Lebrato-Vázquez, C.; Molina-Cantero, A.J.; Castro-García, J.A.; Merino-Monge, M.; Gómez-González, I.M. Development and Testing of Motion-Detection Techniques for People with Cerebral Palsy. *Eng. Proc.* **2021**, *7*, 4. https:// doi.org/10.3390/engproc2021007004


Academic Editors: Joaquim de Moura, Marco A. González, Javier Pereira and Manuel G. Penedo

Published: 29 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work aims to help Eva access the computer autonomously. Eva has little control over her limbs due to spastic movements, which makes it difficult for her to use adapted peripherals. With great effort, she retains some control over her neck and head. We therefore exploited this ability to detect her commands through head movements, using three different sensors: Kinect, an IMU, and video.

#### **2. Materials and Methods**

Due to the limited length of this communication, we have only focused here on describing the methods and results associated with the image-processing technique. Those based on Kinect and IMU will be presented at the conference.

The image-processing method detects the head position in the image captured by the webcam using the Viola–Jones algorithm [8]. This algorithm divides the image into small patches and applies a set of cascade classifiers to each patch to determine whether it contains facial features and, eventually, a face. This allows fast face tracking with high detection rates, providing us with the face coordinates.
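As an illustration, this kind of face tracking can be sketched with OpenCV's Haar-cascade implementation of Viola–Jones, together with a simple rule mapping the detected face center to a head-movement label. The dead-zone threshold, the cascade file, and the labelling rule below are illustrative assumptions, not the parameters used in the study.

```python
def face_centre(box):
    """Centre (x, y) of an (x, y, w, h) detection box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def classify(centre, rest, dead_zone=40):
    """Label a face centre relative to the resting position.

    `dead_zone` (in pixels) absorbs small involuntary jitter; its value is
    an assumption here and would need per-user calibration in practice.
    """
    dx = centre[0] - rest[0]
    dy = centre[1] - rest[1]
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "rest"
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"   # image x grows rightwards
    return "up" if dy < 0 else "down"          # image y grows downwards

if __name__ == "__main__":
    # Live loop (requires OpenCV and a webcam); the first detection is
    # taken as the resting position, another simplifying assumption.
    import cv2
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)
    rest = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            c = face_centre(faces[0])
            rest = rest or c
            print(classify(c, rest))
        if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
            break
    cap.release()
```

Note that with a mirrored webcam image, the left/right labels would need to be swapped.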

We developed a graphical user interface (GUI) that allows us to test each of the devices under study and to guide the user throughout the experiment, providing them with stimuli or hints, in the form of arrows, for the actions they need to complete.

The experiment consists of a series of movements that the subject must reproduce from the initial position. The directions are right, left, up, and down. The subject completes 10 repetitions per movement, in random order. Each arrow is shown for 6 s, with another 6 s allocated for the user to move back to the origin. This idle time may seem relatively long, but the initial tests showed that Eva needed it.

Two women took part in the experiment: S1, without any disability, and Eva (whose characteristics were described above). Both are adults (27 and 48 years of age) with higher-education degrees. Before the experiment, they were thoroughly informed of the details and gave their consent.

#### **3. Results and Discussion**

The position (*x*, *y*) of the center of the frame returned by the Viola–Jones algorithm for each movement was stored for both subjects. Their distinguishability was then studied by extracting the average and standard error in both the *x* and *y* directions. Figure 1 shows the results obtained with this method, in which each ellipse corresponds to the area containing 98% of the positions associated with the four movements plus the resting period. The figure contains the data of one session for each subject.
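These per-movement statistics can be sketched as follows; synthetic coordinates stand in for the recorded data, and the Gaussian ellipse formula is our assumption about how 98%-coverage regions of this kind are typically computed, not a description of the authors' exact procedure.

```python
import numpy as np

def summarise(xy, coverage=0.98):
    """Mean, standard error, and coverage-ellipse axes for 2-D points.

    Under a Gaussian assumption, the ellipse semi-axes are
    sqrt(chi2 * eigenvalue) of the covariance matrix, with
    chi2 = -2 ln(1 - coverage) for 2 degrees of freedom
    (about 7.82 for 98% coverage).
    """
    xy = np.asarray(xy, dtype=float)
    mean = xy.mean(axis=0)
    sem = xy.std(axis=0, ddof=1) / np.sqrt(len(xy))   # standard error
    cov = np.cov(xy, rowvar=False)
    chi2 = -2.0 * np.log(1.0 - coverage)
    eigval, eigvec = np.linalg.eigh(cov)
    semi_axes = np.sqrt(chi2 * eigval)                # ellipse semi-axes
    return mean, sem, semi_axes, eigvec               # eigvec: axis directions
```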

**Figure 1.** Representation of the results obtained by image processing of the *x* and *y* axes of the facial center. On the (**left**), the results from S1; on the (**right**), the data corresponding to Eva.

As can be observed, for S1, the four movements plus the resting state are perfectly identifiable and separable. For Eva, those states overlap more, which makes their identification more difficult. Nevertheless, some states could be identified without any further processing (e.g., the movement to the left).

Looking at the results, the use of four movements does not seem appropriate for Eva. A better option would include only a subset of movements. For example, movement to the left was the easiest for Eva, and at least three states (left, down, and rest) could be differentiated. With these detection capabilities, Eva could access most adapted software applications that use scanning of their elements. Two gestures in the proposed technique (rest and left) would be enough to send a selection command to accept the highlighted element during scanning.
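The scanning access described above can be sketched minimally: items are highlighted in turn, and a single "select" gesture (here, the detected "left" head movement) accepts the highlighted item. The item names and the one-gesture-per-step timing are illustrative assumptions.

```python
def scan(items, gestures):
    """Step through `items`, advancing on "rest" and selecting on "left".

    `gestures` is the stream of recognised head states, one per scan step.
    Returns the selected item, or None if the stream ends unselected.
    """
    idx = 0
    for g in gestures:
        if g == "left":
            return items[idx]          # selection gesture accepts item
        idx = (idx + 1) % len(items)   # otherwise the highlight advances
    return None
```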

The Kinect detection system did not achieve the expected results. We believe that the main problem lies in the software driver that links the Kinect to the library used to synchronize the data.

The results obtained with the accelerometer for Eva were unclear. The movements she performed were not unequivocally identified, and this could not be improved by applying a Kalman filter. Although other studies have obtained good results with this filter [9], we believe that Eva's strong involuntary movements prevent this approach from working properly.
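For reference, a scalar Kalman filter of the kind applied to a noisy IMU channel can be sketched as follows; the random-walk state model and the noise parameters `q` and `r` are illustrative assumptions, not those used in the study.

```python
def kalman_1d(zs, q=1e-3, r=0.1):
    """Filter a sequence of measurements `zs` with a random-walk state model.

    `q` is the process-noise variance and `r` the measurement-noise
    variance; both would need tuning for a real IMU channel.
    """
    x, p = zs[0], 1.0                 # initial state estimate and variance
    out = [x]
    for z in zs[1:]:
        p += q                        # predict: state variance grows by q
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out
```

Large involuntary movements violate the slowly-varying-state assumption of this model, which is consistent with the filter failing to help here.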

The processing of images obtained by the webcam seems to be the most feasible method, since some head gestures could be identified without any additional processing.

#### **4. Conclusions**

Among the systems tested, the one based on image processing gave the best results, since it had a higher success rate in identifying some head movements. In addition, this system has several advantages over IMU-type sensors or any approach that requires placing a device on the user [10]. Another clear advantage is that most people are familiar with webcams, which makes it easier for the caregiver to connect the system properly.

**Author Contributions:** All authors have equally contributed to this paper. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Spanish Ministry of Science and Innovation, State Plan 2017–2020: Challenges—R&D&I Projects with grant codes PID2019-104323RB-C32.

**Institutional Review Board Statement:** The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Junta de Andalucía (protocol code C.P. TAIS-C.I. 1130-N-17, 2018).

**Informed Consent Statement:** All participants agreed to take part in the experiments.

**Conflicts of Interest:** The authors declare no conflict of interest.

