**1. Introduction**

About one billion people live with some kind of disability, corresponding to around 15% of the world's population [1]. The prevalence of disability is increasing, partly due to population ageing and the rise in chronic health conditions. Lower-income countries have a higher prevalence than higher-income countries, and poorer people have fewer resources with which to access treatment.

Half of people with disabilities cannot afford necessary medical care, compared to one third of non-disabled people. People with disabilities are more than twice as likely to find health care providers' skills inadequate, up to four times as likely to report being treated badly, and nearly three times as likely to be denied medical care [1].

Children with disabilities or disorders are less likely to attend school and receive an adequate education. The probability of finding a job is also lower for a disabled person. Global employment data show that the employment rate for people with disabilities is 53% for men and 20% for women, compared with 65% and 30%, respectively, for people without disabilities [1]. People with disabilities are therefore more vulnerable to poverty. They have worse living conditions due to the additional costs of their special needs (specialized medical care, assistive devices, or people to support them). As a result, people with disabilities are generally poorer than people without disabilities who have similar incomes.

**Citation:** Vicente-Samper, J.M.; Avila-Navarro, E.; Esteve, V.; Sabater-Navarro, J.M. Intelligent Monitoring Platform to Evaluate the Overall State of People with Neurological Disorders. *Appl. Sci.* **2021**, *11*, 2789. https://doi.org/10.3390/app11062789

Academic Editor: Juan Antonio Corrales Ramon

Received: 24 February 2021; Accepted: 17 March 2021; Published: 20 March 2021

Neurological disorders are diseases of the central and peripheral nervous systems, i.e., the brain, the spinal cord, and the nerves found throughout the human body. Millions of people worldwide suffer from neurological disorders. For example, more than 6 million people die each year from stroke, more than 50 million people around the world have epilepsy, and 7.7 million new cases of dementia are diagnosed each year, with Alzheimer's disease being its most common cause [2]. The specific causes of neurological problems vary, but they may include genetic disorders, congenital anomalies, infections, lifestyle or environmental health problems, brain injuries, spinal cord injuries, or nerve injuries [3].

Neurological disabilities, in turn, include a wide range of disorders, such as epilepsy, learning disabilities, neuromuscular disorders, autism spectrum disorder, attention deficit disorder, brain tumors, and cerebral palsy, among many others. Some neurological pathologies are congenital and appear before birth; others may be caused by tumors, degeneration, trauma, infection, or structural defects. Regardless of the cause, all neurological disabilities result from damage to the nervous system [3]. This makes evident the need for technological solutions to help people affected by these pathologies.

Assistive technologies (AT) are devices or systems that help a person with a disability or disorder to perform activities of daily living. AT can improve functional independence and thus facilitate daily tasks through aids that help a person to travel, communicate with others, learn, work, and participate in social and recreational activities [4]. AT devices range from simple, low-technology designs, such as a crutch, to complex systems such as devices that speak for the user, automatic door-opening systems, or brain-wave recognition units for interface management [5].

Some of these assistive devices attempt to manage the problems associated with neurological disorders and to ease the daily life both of the people who suffer from them and of the family members or carers who support them. An example is the Embrace device from Empatica [6], a wrist-worn wearable that monitors the user to detect possible convulsive seizures and alert caregivers. Another monitoring device is the PdMonitor® from PD Neurotechnology [7], a set of wearable monitoring devices for people with Parkinson's disease that tracks, records, and processes a variety of symptoms often present in this disease. A third example is the Monarch eTNS® System from NeuroSigma [8], the first device approved by the U.S. Food and Drug Administration (FDA) for the treatment of attention deficit hyperactivity disorder (ADHD) in children. The device sends a low-level electrical pulse through a wire to a small patch adhered to the patient's forehead. The therapeutic pulses stimulate branches of the trigeminal nerve, which activates neural pathways to other parts of the brain thought to be involved in ADHD. Many research works on new assistive devices can also be found. For example, Cesareo et al. (2020) present a system for monitoring the breathing rate of people with muscular dystrophy [9]. The system consists of a set of inertial measurement units integrated in wearable devices to monitor the long-term breathing pattern. Another work in progress is Floodlight Open, developed by Hoffmann-La Roche [10], a study that aims to monitor multiple sclerosis (MS) symptoms using a smartphone, through simple tasks specifically designed to assess the effects of MS.

There are other assistive devices that could be described as intelligent platforms. These modular systems, in addition to monitoring the user and providing information about the user's condition, incorporate some form of intelligence, such as decision algorithms or machine learning methods that improve on more traditional systems. For example, an intelligent tool for assisting people with Alzheimer's disease is presented in [11]. The system helps to monitor the user's health, control medication, or locate the user when they become disoriented, among other things. It is composed of multiple devices that monitor the user, record their position, and keep track of medication and objects that may be important. Through a mobile application, the user and caregivers have access to this information and receive alerts. Another idea is presented in [12], which proposes using wearable monitoring devices together with computational intelligence to diagnose and monitor people with Parkinson's disease. The assessment of Parkinson's disease motor disabilities is based on neurological examination during the patient's visits to the clinic and on home diaries kept by the patient or the caregiver. However, these short examinations may miss important information, and ambulatory monitoring systems can improve the evaluation. Applying machine learning algorithms to these platforms yields intelligent systems for assistance. For example, Casalino et al. (2018) [13] present a system for real-time monitoring of cardiovascular problems using video images and fuzzy inference rules. The proposed system consists of a transparent mirror with a camera that detects the user's face. The frames are processed using photoplethysmography to estimate different physiological parameters of the user, which are then used to predict the risk of cardiovascular disease through fuzzy inference rules. Another system that combines monitoring devices and machine learning algorithms is [14]. The authors present a gait-assistance system using a neural network. The system is composed of devices that monitor the user's movement during gait and stimulate the muscle nerves electrically through electrodes. After a data collection phase, a model based on recurrent neural networks is trained; this model predicts the user's movement during gait and controls the stimulation signal.

The purpose of this work is to present a full platform for the development of custom predictive models that help people with neurological disorders. Figure 1 sketches the concept of this work. The main challenges that this work faces are the acquisition of signals from the user and the environment, the signal processing, the generation of a dataset with feature engineering, and the training and optimization of a predictive model. The aim is to use the generated model to help users manage their pathologies, so that the model becomes an AT in their daily life. Unlike other works, where the system is focused on a specific pathology or the algorithms are optimized for a specific application, the proposed system is intended to serve multiple pathologies and applications. The platform is presented in a generic way, where each stage can be adapted to obtain a different predictive model, personalized for the user and the desired application. To illustrate the versatility of the platform and to show the working methodology, a system validation experiment is performed.

**Figure 1.** General concept of the proposed platform.

This paper is organized as follows. The Materials and Methods section describes the different parts that make up the platform. First, the acquisition system, composed of four modular electronic devices, is presented. Sections 2.2 and 2.3 then describe the characteristics of the database and the feature engineering carried out before training the model, respectively. Section 2.4 shows the steps to follow to generate predictive models using machine learning (ML) algorithms. Section 2.5 presents a use case of the platform, intended to generate a model to predict a person's concentration at the workstation. Section 3 shows the results of the use case, with data extracted from the generated model and its training. Finally, Sections 4 and 5 discuss the obtained results and outline the conclusions of the paper.

#### **2. Materials and Methods**

This section presents a platform for the generation of predictive models for people with neurological disorders (Figure 1). These models, customized for each user, are intended to provide information about the user's pathology and to help them manage it in a more controlled way. The generated information can also be used as input or feedback for other assistive technologies. First, the different parts that make up the platform are described; each stage can be adapted to the particular conditions of the user and to the final objective of the application to be developed. To conclude the section, an example of use of the platform is described to show the workflow of the system.

### *2.1. Data Acquisition System*

The first stage of the platform is the data acquisition system, which is responsible for obtaining information from the user and from the surrounding environment. It has been developed with a modular architecture that allows the use of the devices to be adjusted to the particular characteristics of each user and of the application being carried out. For example, the sensors can be restructured if one of the devices is no longer required; the sound sensor could be integrated into the smartphone in situations where the video device is not used, or vice versa. The system consists of four electronic devices: a smartphone, a wrist wearable device, an environmental monitoring device, and a video sensor device.

#### 2.1.1. Personal and Environmental Devices

The environmental and personal monitoring devices are responsible for measuring the environmental conditions and the user's physiological variables, respectively. The development of these devices, which are integrated into the platform presented in this work, along with a study of the different parameters measured by this data acquisition system, is described in [15].

The environmental monitoring device provides information about the luminosity, ambient temperature, relative humidity, and atmospheric pressure of the environment where the user is. It is a small electronic device that can be carried as a key ring.

The personal monitoring device is the one described in [15], which integrates measurement of the user's heart rate and body temperature. To complement the device, a new sensor has been integrated: an Inertial Measurement Unit (IMU), which provides information about the motor activity performed by the user. The details of this sensor and its integration into the system are described in Section 2.1.2. This personal monitoring equipment consists of a small wearable wrist device. Its design has been slightly updated to integrate the new sensor, and its dimensions have been reduced with respect to the previous version, despite integrating a larger-capacity battery that provides an autonomy of more than 20 h of use. In addition, it has been manufactured with soft and comfortable materials that provide ergonomics and facilitate its placement on users with special difficulties. Figure 2 shows a picture of the new design, where the device, completely covered with EVA foam, can be seen. This also allows the device to weigh only 5.90 g, including the foam cover.

**Figure 2.** New design of the personal monitoring device.

The monitoring devices are managed from a smartphone via an Android application. The application, also described in [15], allows information to be displayed to the user or to the caregiver; the interface and the displayed information can be modified depending on the user and the corresponding application. Furthermore, as described in [15], the smartphone also acts as a sound sensor, capturing the entire human audible spectrum (between 20 Hz and 22 kHz).

#### 2.1.2. Motor Activity Sensor

The motor activity (MA) of the user is a relevant parameter that provides useful information to the platform. For example, it makes it possible to evaluate health and wellness in users with neurodegenerative disorders that affect motor functionality [16]. This parameter quantifies arm movements and provides data about the user's displacement. In addition, together with other parameters, it makes it possible to estimate the user's state of mind or to measure their stress levels [17].

The sensor used for this measurement is an IMU placed in the personal monitoring device, which integrates a 3-axis gyroscope, a 3-axis accelerometer, and a 3-axis magnetometer. As an example, Figure 3 shows a fragment of the signals obtained by the MA sensor on the *X*-axis during a test session.

**Figure 3.** Example fragment of *X*-axis Gyroscope and Accelerometer signals recorded from the MA sensor during a test session.

The data obtained by the sensor at each sample are organized into an array and stored in the corresponding collection within the database. The structure of a sample document for storing these samples is shown below. The *date* and *dateString* parameters are the timestamp of the measurement, in milliseconds and as a character string, respectively; they are included in every document in all data collections. The array with the measurement data is stored in the *motor* parameter. The *ObjectId* identifier is set automatically by the database when the document is uploaded.

```
{
  "_id": ObjectId,
  "date": NumberLong,
  "dateString": "EEE MMM d HH:mm:ss z yyyy",
  "motor": [gyrX, gyrY, gyrZ, accX, accY, accZ, magX, magY, magZ]
}
```
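As an illustration of how such a document could be assembled before upload, the sketch below builds the same structure in Python. The helper name and the use of a plain dict in place of a BSON document are assumptions for illustration; the platform's actual client code is not shown in the paper.

```python
import time

def make_motor_document(gyr, acc, mag):
    """Assemble a motor-activity sample in the document layout above.

    gyr, acc, mag are (x, y, z) tuples from the IMU; the 9-element
    array follows the order of the "motor" field. The "_id" field is
    omitted because the database assigns the ObjectId on upload.
    """
    now_ms = int(time.time() * 1000)  # timestamp in milliseconds
    return {
        "date": now_ms,
        "dateString": time.strftime("%a %b %d %H:%M:%S %Z %Y"),
        "motor": list(gyr) + list(acc) + list(mag),
    }

doc = make_motor_document((0.1, 0.0, -0.2), (0.01, 9.81, 0.03), (25.0, -4.0, 40.0))
```

The 9-element array keeps each sample compact, so one document per sample can be pushed to the collection without any per-axis fields.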
#### 2.1.3. Video Device

The last device that makes up the data acquisition stage is the video device. Its purpose is to obtain, from the user's environment, visual information relevant to the applications.

Some neurological disorders can cause difficulties in social interaction and therefore affect the mental wellbeing of the user [18]. Thus, one of the parameters measured by the video device is the number of people in the user's environment at any given time. In this way, the platform has information which, together with other parameters, makes it possible to assess, for example, how social interaction affects the user or whether an excessive presence of people around the user influences their behavior [19].

On the other hand, regardless of the number of people around the user, some people with neurological disorders may feel overwhelmed if there is excessive activity around them, such as constant moving around or movements close to the user. These actions, added to other stimuli such as ambient noise, may intensify this feeling [20]. Therefore, another parameter provided by the video device to the platform is the optical flow. Optical flow is the apparent motion pattern of objects in the image between two consecutive frames, caused by the displacement of the object or of the camera. It is a two-dimensional vector field where each vector represents the displacement of a point from its position in the first frame to its position in the second frame [21]. Consequently, if the camera is kept in a fixed position, the displacement that occurs in the image between frames is due only to the movement of the objects; in other words, it provides an estimate of how much movement occurs in the user's environment. The video device consists of an Nvidia Jetson Nano compute card [22] and an Insta360 Air 360-degree camera [23]. The Jetson Nano has 128 *CUDA®* (Compute Unified Device Architecture) cores that facilitate the execution of applications where computer vision and machine learning algorithms are used. The 360-degree camera provides the most complete possible picture of the user's environment. The system is mounted in a 118 × 96 × 60 mm aluminum casing for portability between rooms. In addition, an improved cooling kit has been added to keep the temperature stable during use. An image of the video device is shown in Figure 4a.

**Figure 4.** (**a**) Picture of the video device. (**b**) Screenshot of a people detection frame during a test session. (**c**) Example fragment of the optical flow signal recorded during a test session.

The first parameter introduced is the quantification of people in the user's environment. To do this, it is necessary to detect the people who are in the room at any given moment. This detection task is performed using a convolutional neural network (CNN); the algorithm used is YOLOv5 [24,25], with a pretrained model optimized for object detection, which offers high inference speed and a contained model size. Figure 4b shows a screenshot of a frame in which the detection of people during a test session can be observed: the 360-degree camera captures the entire environment and the algorithm detects where the people are. The structure of a sample document for storing people detection data is shown below. The *people* parameter stores the measured number of people.

```
{
  "_id": ObjectId,
  "date": NumberLong,
  "dateString": "EEE MMM d HH:mm:ss z yyyy",
  "people": NumberShort
}
```
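Once the detector has run on a frame, obtaining the value of the *people* field reduces to filtering the detections by class and confidence. The sketch below assumes detections are available as `(class_name, confidence, bbox)` tuples; the real YOLOv5 output is a tensor of boxes with class indices, so this is a simplified post-processing stand-in, not the platform's actual code.

```python
def count_people(detections, conf_threshold=0.5):
    """Count detections labeled 'person' above a confidence threshold.

    `detections` is assumed to be a list of (class_name, confidence,
    bbox) tuples produced by an upstream object detector.
    """
    return sum(
        1 for cls, conf, _ in detections
        if cls == "person" and conf >= conf_threshold
    )

frame_detections = [
    ("person", 0.91, (50, 60, 120, 300)),
    ("chair", 0.80, (200, 150, 260, 280)),   # not a person, ignored
    ("person", 0.42, (400, 70, 460, 290)),   # below threshold, ignored
]
count_people(frame_detections)  # → 1
```

The confidence threshold trades false positives against missed detections; the value used by the platform is not specified in the paper.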
On the other hand, there are different ways to measure optical flow. In this case, dense optical flow has been chosen, in which the displacement of all the points of the frame is calculated. To measure the optical flow of all the points in the image and quantify the movement occurring in the user's environment, the method of Gunnar Farnebäck is used [26]. This algorithm is based on polynomial expansion and estimates the motion between two frames. Finally, the displacement modulus over both axes (X and Y) of the frame is stored. Figure 4c shows a fragment of the optical flow signal recorded during a test session. It can be observed that, after a period of time, the signal level is lower; this is because the people in the session moved to a place farther away from the camera, which results in a smaller change between frames and therefore a lower signal magnitude. The structure of a sample document for storing optical flow data is shown below. The *optical* parameter stores the estimated optical flow.

```
{
  "_id": ObjectId,
  "date": NumberLong,
  "dateString": "EEE MMM d HH:mm:ss z yyyy",
  "optical": NumberLong
}
```
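The key idea of the *optical* field is reducing inter-frame motion to a single scalar. The sketch below illustrates that reduction with a crude frame-differencing proxy; it is NOT the Farnebäck algorithm used by the platform (available in OpenCV as `calcOpticalFlowFarneback`), and the frame representation as lists of pixel rows is an assumption for illustration.

```python
def motion_magnitude(prev_frame, next_frame):
    """Crude scalar motion estimate between two grayscale frames.

    A stand-in for dense optical flow: instead of per-point
    displacement vectors, it averages the absolute pixel change,
    which likewise shrinks when activity moves away from the camera.
    Frames are lists of pixel rows of equal size.
    """
    total = 0
    for row_a, row_b in zip(prev_frame, next_frame):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
    n_pixels = len(prev_frame) * len(prev_frame[0])
    return total / n_pixels  # mean absolute pixel change

still = [[10, 10], [10, 10]]
moved = [[10, 50], [90, 10]]
motion_magnitude(still, still)  # identical frames → 0.0
motion_magnitude(still, moved)  # (0 + 40 + 80 + 0) / 4 → 30.0
```

Unlike this proxy, the Farnebäck method yields a full vector field, whose displacement modulus is what the platform stores per sample.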
### *2.2. Database*

The next stage of the platform is in charge of storing the information obtained by the data acquisition system. The information must be organized in an efficient and secure way; the database must therefore have characteristics that allow the platform to work properly and with minimal execution times.
The database used in the platform is a non-relational MongoDB database [27]. The information obtained by the data acquisition system is stored in documents within data collections, with a collection for each measured parameter. This allows new parameters to be integrated into the platform without the need to modify the storage architecture. An example of the collection storage structure is shown in Figure 5.
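The one-collection-per-parameter layout can be sketched with an in-memory stand-in, where a dict of lists replaces the MongoDB collections (the helper name and collection names are illustrative): adding a new parameter simply creates a new collection, with no change to the existing storage architecture.

```python
from collections import defaultdict

# In-memory stand-in for the MongoDB database: one "collection"
# (a list of documents) per measured parameter.
database = defaultdict(list)

def store_sample(parameter, document):
    """Append a sample document to the collection for its parameter."""
    database[parameter].append(document)

store_sample("motor", {"date": 1616227200000, "motor": [0.1] * 9})
store_sample("people", {"date": 1616227201000, "people": 2})
# A newly added parameter needs no schema change, mirroring the
# schema-less collections described above.
store_sample("optical", {"date": 1616227202000, "optical": 30})
```

In the real platform, `store_sample` would correspond to an insert into the parameter's MongoDB collection; the document-per-sample layout is what makes the storage extensible.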

**Figure 5.** Example of database collection in the platform.
