Article

Towards Multimodal Equipment to Help in the Diagnosis of COVID-19 Using Machine Learning Algorithms

1. Biomedical Engineering Research Group—GIIB, Universidad Politécnica Salesiana (UPS), Cuenca 010105, Ecuador
2. Department of Electrical Engineering, Universidade Federal do Espírito Santo (UFES), Vitoria 29075-910, Brazil
3. Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
* Author to whom correspondence should be addressed.
Sensors 2022, 22(12), 4341; https://doi.org/10.3390/s22124341
Submission received: 11 May 2022 / Revised: 1 June 2022 / Accepted: 2 June 2022 / Published: 8 June 2022
(This article belongs to the Special Issue Sensing for Biomedical Applications)

Abstract

COVID-19 occurs due to infection through respiratory droplets containing the SARS-CoV-2 virus, which are released when someone sneezes, coughs, or talks. The gold-standard exam to detect the virus is Real-Time Polymerase Chain Reaction (RT-PCR); however, this is an expensive test that may require up to 3 days after infection for a reliable result, and if there is high demand, the labs can be overwhelmed, causing significant delays in providing results. Biomedical data (oxygen saturation level—SpO2, body temperature, heart rate, and cough) are acquired from individuals and used to help infer COVID-19 infection using machine learning algorithms. The goal of this study is to introduce the Integrated Portable Medical Assistant (IPMA), a multimodal piece of equipment that can collect biomedical data, such as oxygen saturation level, body temperature, heart rate, and cough sound, and help infer the diagnosis of COVID-19 through machine learning algorithms. The IPMA can store the biomedical data for continuous studies and can be used to infer other respiratory diseases. Quadratic kernel-free non-linear Support Vector Machine (QSVM) and Decision Tree (DT) models were applied to three datasets with data of cough, speech, body temperature, heart rate, and SpO2, obtaining an Accuracy rate (ACC) of up to approximately 88.0% with an Area Under the Curve (AUC) of 0.85, as well as an ACC of up to 99% with an AUC of 0.94, for COVID-19 infection inference. When applied to the data acquired with the IPMA, these algorithms achieved 100% accuracy. Regarding ease of use, 36 volunteers reported that the IPMA has high usability, according to two evaluation metrics: the System Usability Scale (SUS) and the Post Study System Usability Questionnaire (PSSUQ), with scores of 85.5 and 1.41, respectively. In light of the worldwide need for smart equipment to help fight the COVID-19 pandemic, this new equipment may help with the screening of COVID-19 through data collected from biomedical signals and cough sounds, along with machine learning algorithms.

1. Introduction

COVID-19 is a well-known contagious infectious disease caused by the new Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) [1]. Since its first detection in 2019, new variants of SARS-CoV-2 have emerged [2], such as the United Kingdom (UK) variant (B.1.1.7), the Brazilian variants (P.1, P.2, and N.9), the South African variant (B.1.325) [3,4], omicron (B.1.1.529), first detected in Africa, IHU (B.1.640.2), detected in France, the hybrid variant deltacron (AY.4/BA.1), a combination of the delta and omicron variants, and the newer recombination of the omicron sublineages BA.1 and BA.2 (named XE), first detected in the UK in April 2022. Current official data from 2 June 2022, provided by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU), show that COVID-19 has affected more than 531 million people worldwide, killing almost 6.3 million people, including more than 666,800 Brazilians, 41,100 Canadians, and 35,600 Ecuadorians [5].
It is a fact that massive vaccination (reaching more than 90% in some countries) has prevented or attenuated the effects of this infection, strongly decreasing the number of deaths [6]. However, the COVID-19 pandemic is not over yet, as evidenced by recent lockdowns in Shanghai and Beijing. In addition, worldwide vaccination coverage averages only 60% (some countries have vaccinated less than 10% of their populations) [7], and more than 3000 people are still dying daily [5].
Depending on the SARS-CoV-2 variant, the symptoms of COVID-19 can include fever or chills, cough, shortness of breath or difficulty breathing, headache, muscle or body aches, dizziness or fatigue, sore throat, congestion or runny nose, new loss of smell or taste, nausea, vomiting, diarrhoea, abdominal pain or anorexia, confusion or impaired consciousness, and rash, among others [8]. Currently, according to the Centers for Disease Control and Prevention (CDC), persons infected with the omicron variant, which represents 99.8% of infections worldwide, can present with symptoms similar to those of previous variants.
These infected persons may be asymptomatic or symptomatic, the latter varying among mild, severe, and critical cases. There are risk factors that increase the chance of developing the severe and critical forms of the disease, such as advanced age, smoking, and comorbidities (diabetes, hypertension, cardiovascular disease, obesity, chronic lung disease, and kidney disease) [8]. Reverse-Transcription Polymerase Chain Reaction (RT-PCR) is the gold standard to detect SARS-CoV-2 infection [9]; however, its high cost limits access in countries such as Ecuador and Brazil, where this exam costs between USD 45 and 65 (and more than USD 100 in Canada and other countries). In addition, RT-PCR is only reliable when the sample is obtained some days after infection (up to three), and if there is high demand, test results can be delayed by several days.
Due to the high transmission rate of the omicron variant (much higher than that of previous ones), specific measures are still needed to reduce the spread of the pandemic, such as alternative diagnostic methods for detecting asymptomatic and symptomatic individuals using Artificial Intelligence (AI) [10]. It is worth mentioning that about 40 to 45% of individuals with COVID-19 are asymptomatic [11], which greatly complicates efforts to prevent the virus's spread, as such individuals keep transmitting the virus without realizing it.
Thus, a very strong effort has also been made by researchers and industries worldwide to develop low-cost wearable devices and user-friendly mobile applications to detect the symptoms of COVID-19 using information from biomedical signals and markers, such as cough, heart rate variability, blood pressure, body temperature, and oxygen saturation level [10,12,13]. However, these biomedical data are not decisive for confirming COVID-19 infection, but they open avenues for use as a screening tool in telemedicine or remote monitoring. For instance, although the sound of a forced cough can support a COVID-19 diagnosis, as claimed in [13,14], another study [15] estimated that only 59% of people infected by COVID-19 have a dry cough. On the other hand, heart rate variability is another factor that can indicate possible infection by the virus, even in asymptomatic people [16,17].
Body temperature is measured to check for fever, another symptom, which affects 99% of symptomatic individuals with COVID-19, although it does not occur in asymptomatic ones [15]. Another symptom of COVID-19 is a decrease in the blood oxygen saturation level (abbreviated as SpO2—peripheral capillary oxygen saturation); when it falls below 95%, serious health issues may arise, hence the need to monitor it regularly. However, other respiratory diseases, such as the common cold and flu, also reduce the SpO2 level to within the 90–95% range without causing major health concerns.
The use of Artificial Intelligence (AI) based on biomedical data for the diagnosis of respiratory diseases is quite recent. For instance, ref. [18] presented a systematic review of works that address the diagnosis of pneumonia through several biomedical signals (including the most common ones: body temperature, abnormal breathing, and cough) using different AI techniques, such as Logistic Regression (LR), Deep Learning (DL), Least Absolute Shrinkage and Selection Operator (LASSO), Random Forest, Classification and Regression Trees (CART), Support Vector Machine (SVM), fuzzy logic, and k-Nearest Neighbour (K-NN), among others. The study found that AI could help reduce the misdiagnosis of COVID-19, since there is significant overlap between the symptoms of COVID-19 and those of other respiratory diseases.
Recent research has used samples of sounds from individuals to infer infection by COVID-19 [19,20,21,22,23]. For instance, in [22], the public crowdsourced Coswara dataset, consisting of coughing, breathing, sustained vowel phonation, and counting from one to twenty, recorded on a smartphone, was used for this purpose. In another study, data from the INTERSPEECH 2021 Computational Paralinguistics (ComParE) challenge were used to infer COVID-19, as a binary classification, through coughing sounds and speech, using two subsets of the Cambridge COVID-19 Sound database [22]. The first subset is the COVID-19 Cough Sub-Challenge (CCS), which consists of cough sounds from 725 audio recordings, and the second subset is the COVID-19 Speech Sub-Challenge (CSS), with only speech sounds from 893 audio recordings. In another study [21], an analysis of a crowdsourced dataset of respiratory sounds was performed to classify healthy and COVID-19 sounds using 477 handcrafted features, such as Mel Frequency Cepstral Coefficients (MFCCs), zero-crossing rate, and spectral centroid, among others. In [23], an audio texture analysis was performed on three different signal modalities of COVID-19 sounds (cough, breath, and speech), using Local Binary Patterns (LBPs) and Haralick's method as the feature extraction methods. Unlike these sound-based studies, another study [24] used biomedical data (body temperature, heart rate, and SpO2), collected from 1085 quarantined healthy and unhealthy individuals through a wearable device, to infer COVID-19 infection.
In contrast with the aforementioned studies, the all-in-one Integrated Portable Medical Assistant (IPMA) equipment introduced here uses all these measurements taken together from the individual, instead of isolated measurements, to improve the accuracy of diagnosing a possible SARS-CoV-2 infection. Thus, this work introduces the IPMA, a piece of non-invasive, real-time, any-time equipment for large-scale COVID-19 screening that can be used for the daily screening of large populations, such as students at school, employees at work, and the general public in areas such as parks and public transit. The IPMA uses four biomarkers (cough, heart rate variability, oxygen saturation level, and body temperature) to infer SARS-CoV-2 infection through machine learning algorithms.
It is worth mentioning that some wearable devices (such as smartwatches and fitness bands), as well as mobile apps, have been launched to measure cough, heart rate, body temperature, and oxygen saturation levels. Yet, some of these apps and devices lack clinical validity, as their measurements are inaccurate and show no significant correlation with those of certified clinical devices [25]. In fact, these devices are usually sold as approximate-information products that show discrete measurements, indicating that their purpose is to serve as preventive alert devices, to “increase users’ awareness of their general well-being” [26], having no clinical validation or medical certification, nor even indicating the possibility of COVID-19 infection (as they have no embedded algorithms to do so). In addition, such wearable devices present inconsistent data acquisition quality, and there is no standardization for data collection or sensor placement [27]. Unlike these wearable devices and apps, the IPMA houses medically certified devices and takes their measurements sequentially, storing them in a database. In addition, as previously mentioned, although some recent studies have separately used human sounds [19,20,28,29,30] and biomedical data [21] to infer infection by COVID-19, in our study, we used all these signals together as inputs to machine learning algorithms, in order to achieve a higher accuracy rate. Furthermore, we verified the usability of our equipment with volunteers, both for the device itself, using the System Usability Scale (SUS), and for the user interface, using the Post Study System Usability Questionnaire (PSSUQ).

Goals

There were three main goals in this study. First was the development of multimodal equipment to automatically acquire data from medically certified devices, without opening them, thus keeping their medical certification, using linear actuators to turn them on, as well as cameras and an algorithm based on a VGG16 pre-trained network to read the devices’ displays. Second was the development of machine learning algorithms to help infer COVID-19 infection. Third was the creation of a database with collected biomedical measurements, which can be further used to infer other respiratory diseases.

2. Materials and Methods

2.1. Hardware

The IPMA used in this study is shown in Figure 1, in two different versions (see the details of these two versions in Table 1 and Figure 2). Inside the IPMA, there is an oximeter, an automatic blood pressure device, a thermometer, and a microphone. These sensors are responsible for acquiring the oxygen saturation level (SpO2), Heart Rate (HR), Blood Pressure (BP), Body Temperature (T), and cough from the volunteer. Data from the blood pressure device were not used in this study. After every use, the IPMA was completely sterilized for 2.5 min using UV-C lamps (Figure 3 shows version B during sterilization; version A must be placed inside a box with UV-C radiation to be sterilized). It is worth mentioning that the volunteer has no contact with the UV-C light, which can be harmful to the eyes and skin. In version B, the structure is made of transparent polycarbonate, which blocks the UV-C spectrum. For safety, the UV-C lamps are turned on only when the IPMA's door is closed: a limit switch and a control relay cut the power immediately if the door is opened, turning off the UV-C lamps (the interlock logic is sketched below).
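For illustration only, the door interlock logic could look like the following sketch. The relay cut-off itself is a hardware feature of the IPMA; the pin numbers, switch polarity, and use of the RPi.GPIO library are our assumptions, not details taken from the IPMA design.

```python
# Hypothetical sketch of the UV-C door interlock on a Raspberry Pi.
# Pin numbers and switch polarity are assumptions for illustration.
import time
import RPi.GPIO as GPIO

DOOR_SWITCH_PIN = 17      # input wired to the door limit switch (hypothetical)
UVC_RELAY_PIN = 27        # output driving the UV-C lamp relay (hypothetical)
STERILIZE_SECONDS = 150   # 2.5 min, as used in the study

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(UVC_RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def door_closed() -> bool:
    # Assumed wiring: the switch pulls the pin low when the door is closed.
    return GPIO.input(DOOR_SWITCH_PIN) == GPIO.LOW

def run_sterilization():
    deadline = time.time() + STERILIZE_SECONDS
    while time.time() < deadline:
        if not door_closed():
            GPIO.output(UVC_RELAY_PIN, GPIO.LOW)   # cut the lamps immediately
            raise RuntimeError("Door opened: UV-C cycle aborted")
        GPIO.output(UVC_RELAY_PIN, GPIO.HIGH)
        time.sleep(0.05)                           # poll the switch at 20 Hz
    GPIO.output(UVC_RELAY_PIN, GPIO.LOW)           # cycle finished
```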

2.2. Software

A Graphical User Interface (GUI), which includes a mobile application, was developed to operate the IPMA. The mobile application can store data from the device and send the data to the physician. Furthermore, a Flask server running on an embedded computer serves a website that can be accessed through any browser; on that website, there is a GUI where the user can interact with the IPMA. A minimal sketch of such a server is shown below.
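This sketch only illustrates how an embedded Flask server of this kind is typically structured; the route names, template, and payload fields are hypothetical, not taken from the IPMA code base.

```python
# Minimal Flask server sketch for the embedded computer (illustrative only).
from flask import Flask, jsonify, render_template

app = Flask(__name__)
latest_measurement = {}  # filled in by the acquisition pipeline (placeholder)

@app.route("/")
def gui():
    # Serves the GUI page that guides the volunteer through the protocol.
    return render_template("index.html")

@app.route("/api/measurements")
def measurements():
    # The mobile application could poll this endpoint to fetch stored data.
    return jsonify(latest_measurement)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so any browser on the local network can reach the GUI.
    app.run(host="0.0.0.0", port=8080)
```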

3. Machine Learning Algorithms

Machine learning algorithms were used here for two purposes: (1) to recognize the characters displayed on the screen using Optical Character Recognition (OCR), from images captured by the cameras; (2) to help the screening of COVID-19 by performing data analysis and classification on the biomedical data captured by the IPMA.
The information about the biomedical data is acquired by opticam SQ11 mini-cameras focused on the devices' displays, so that the devices do not need to be opened to obtain that information, thus preserving their medical certification. Once images are acquired, VGG16 is used to bring each image into a standard position for character recognition. The images must be correctly positioned so that the OCR algorithm can find key pixels in the image, locate the display segments, and determine whether each segment is on or off.

3.1. Optical Character Recognition

3.1.1. Image Preprocessing

Before applying the OCR algorithm, we performed some image preprocessing steps to segment and standardize the position of the 7-segment display in each image. As the thermometer is firmly fixed in the IPMA hardware structure, its image is simply rotated and cropped according to the region of interest in the preprocessing stage. However, the oximeter needs more complex preprocessing, as it is attached to a moving structure that adjusts itself to the arm length, and the opening angle of the device also changes the display position relative to the camera, requiring an adjustment algorithm to handle these changes. For this task, we performed transfer learning using a pre-trained network called VGG16. VGG stands for Visual Geometry Group, and VGG16 is a standard deep Convolutional Neural Network (CNN) architecture consisting of 16 weight layers (13 convolutional and 3 fully connected). It was proposed in [22] to classify images into 1000 different categories. The model with the original weights was provided by the Keras API [23].
We adapted VGG16 to find specific points in the oximeter image and used these points to adjust the image for the OCR algorithm. These points are shown in green in Figure 4. To adapt the model, we created a dataset composed of 22 images taken from the oximeter, with the positions of the marks defined manually. Fifteen of them are images of the oximeter displaying SpO2 values, and seven show the display turned off or an incomplete measurement, as shown in Figure 4.
The input and output layers of VGG16 were modified to our needs. The input layer was swapped for one with the shape of our images (288 × 352 × 3—height × width × colour channels), and the original output was replaced by a dense layer of 1024 neurons with the Rectified Linear Unit (ReLU) activation function, connected to a final layer of 8 neurons: one pair of (row, column) coordinates for each of the 4 desired points (see the sketch below).
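The following Keras sketch shows one way to build the adapted network as described; variable names are ours, and the flattening layer between the VGG16 backbone and the dense head is an assumption where the text does not specify it.

```python
# Sketch of the adapted VGG16 keypoint regressor, using the Keras API [23].
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet",          # original weights, kept frozen during training
    include_top=False,           # drop the original 1000-class head
    input_shape=(288, 352, 3),   # height x width x colour channels of our images
)
base.trainable = False           # the original VGG16 weights are not modified

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(8),    # (row, column) pair for each of the 4 key points
])
```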
In the training procedure, the original weights of VGG16 were not modified; only the weights of the newly added layers were trained. Of the 22 images, 16 were used for training and 6 for validation, with the images in each group selected at random. To avoid overfitting and to increase robustness to noise and motion, multiple random image transformations, such as rotation, Gaussian noise addition, contrast change, and shear, were applied at random during training (Figure 5). These transformations were performed at each epoch by selecting random samples and applying the image processing techniques to them, which allows generating multiple training samples from a single image (a possible implementation is sketched below). A normalization procedure was applied to the image pixel intensities, dividing them by 255 (the maximum intensity of 8 bit colour images) so that the maximum value is 1.
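A possible on-the-fly augmentation step matching the transformations named above is sketched here; the parameter ranges are assumptions, since the paper does not report them, and in the real pipeline the target keypoint coordinates must be transformed consistently with the image.

```python
# Illustrative augmentation sketch (rotation, shear, contrast, Gaussian noise).
import numpy as np
from skimage.transform import AffineTransform, rotate, warp

rng = np.random.default_rng()

def augment(image):
    """image: float array in [0, 1]; returns a randomly transformed copy."""
    out = rotate(image, angle=rng.uniform(-10, 10), mode="edge")      # rotation
    out = warp(out, AffineTransform(shear=rng.uniform(-0.1, 0.1)).inverse,
               mode="edge")                                           # shear
    out = np.clip(out * rng.uniform(0.8, 1.2), 0.0, 1.0)              # contrast
    out = np.clip(out + rng.normal(0.0, 0.02, out.shape), 0.0, 1.0)   # noise
    return out
```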
The Mean-Squared Error (MSE) loss was used in the training process, with the Adam optimizer. To keep the best weights and avoid overfitting, a checkpoint function was created, which saves the model with the lowest validation MSE. The model was trained for 150 epochs with 32 randomly generated images per epoch. The code was developed and trained on the free version of the Google Colab virtual machine (the training was performed with a GPU), using Python and TensorFlow [24].
TensorFlow Lite was used to convert the model into a 32 bit version, allowing it to run embedded on the IPMA's Raspberry Pi board (the training and conversion steps are sketched below). The converted model was then used to find the key points in the images. A perspective transformation was performed to align and segment the SpO2 reading, and finally a vertical flip was applied. The full image-alignment procedure applied to the oximeter is presented in Figure 6.
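Continuing the model sketch above, the training and conversion steps could look as follows; `train_batches`, `x_val`, and `y_val` are placeholder names for the augmentation generator and the six validation images, not identifiers from the original code.

```python
# Training (MSE loss, Adam, best-validation checkpoint) and TFLite conversion.
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

# Keep only the weights with the lowest validation MSE, as described above.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_keypoints.h5", monitor="val_loss", save_best_only=True, mode="min")

model.fit(train_batches,                     # 32 generated images per epoch
          validation_data=(x_val, y_val),
          epochs=150,
          callbacks=[checkpoint])

# Convert for embedded inference on the Raspberry Pi board.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("keypoints.tflite", "wb") as f:
    f.write(converter.convert())
```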

3.1.2. Character Recognition

Once the images are aligned using VGG16, the OCR algorithm binarises them, and a template matching algorithm locates the segments of the 7-segment digits in the image. This is performed in two steps: first finding the position of the full number, then template matching each digit. The algorithm then checks whether each segment is activated and compares the pattern with a 7-segment display table, thus determining which digit, from 0 to 9, is being displayed; a sketch of this final lookup step is shown below. This algorithm was demonstrated to be 100% accurate. Once the digits are recognized, the number shown on the display is obtained from the position of each digit. Figure 7 shows the diagram of the OCR recognition system.
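The final lookup step can be illustrated as follows; the segment ordering convention is ours, not taken from the IPMA code.

```python
# 7-segment decoding: map each digit's on/off segment pattern to a value.
# Segment order (assumed): top, top-left, top-right, middle,
#                          bottom-left, bottom-right, bottom.
SEGMENT_TABLE = {
    (1, 1, 1, 0, 1, 1, 1): 0,
    (0, 0, 1, 0, 0, 1, 0): 1,
    (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3,
    (0, 1, 1, 1, 0, 1, 0): 4,
    (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6,
    (1, 0, 1, 0, 0, 1, 0): 7,
    (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

def decode_digits(segment_states):
    """segment_states: one 7-tuple of 0/1 per digit, left to right.
    Returns the integer shown on the display."""
    value = 0
    for states in segment_states:
        value = value * 10 + SEGMENT_TABLE[tuple(states)]
    return value

# Example: the patterns for '9' and '8' decode to the SpO2 value 98.
assert decode_digits([(1, 1, 1, 1, 0, 1, 1), (1, 1, 1, 1, 1, 1, 1)]) == 98
```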

3.2. COVID-19 Inference

In our research, we used cough and audio sounds (from the CCS and CSS of the Cambridge COVID-19 Sound database [22]), in addition to the data of body temperature, heart rate, and SpO2 from [17], to train our machine learning algorithms (Quadratic kernel-free non-linear Support Vector Machine (QSVM) and Decision Tree (DT) with Gini’s diversity index to split the data), and then, we applied them to the data collected by the IPMA. Figure 8 shows the block diagram for these algorithms.
Prior to any processing, we re-sampled the cough and speech data to 16 kHz. For each audio sample, the Mel spectrogram was extracted with a 25 ms window frame and an overlap of 50%. These spectrograms are computed by extracting the coefficients of the component frequencies with the Short-Time Fourier Transform (STFT), as done in [31]. Next, the Mel spectrograms are converted to grey-scale, and 512 Local Ternary Pattern (LTP) features are extracted; LTP is an extension of LBPs that uses a constant threshold to quantize pixels into three values, i.e., −1, 0, and 1 [32,33]. For the heart rate, temperature, and SpO2 signals, the feature vector is composed of their raw values. The three feature vectors obtained are used to train two different classifier models: for cough and speech, a QSVM is used, whereas for heart rate, body temperature, and SpO2, a DT is used. For each dataset, approximately 70% of the data were used for training and 30% for testing. Furthermore, the full training set was analysed with k-fold cross-validation, with k = 10. The test set and the IPMA data were evaluated with each classifier model, and a score vector of the individual being infected or non-infected was obtained for the IPMA set (P1, P2, and P3). This score is the distance from the sample to the decision boundary: a positive score for a class indicates that the sample is predicted to be in that class, and a negative score indicates otherwise. Finally, for the IPMA, the sample scores were summed, and if the highest value was the COVID-19 score, the sample was labelled as “infected”. A sketch of the feature extraction step is given below.
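The audio front end could be implemented along the following lines; the LTP threshold value and the grey-scale normalization are assumptions, as the paper does not report them, and librosa/numpy stand in for whatever tooling was actually used.

```python
# Sketch: 16 kHz resampling, Mel spectrogram (25 ms window, 50% overlap),
# grey-scale conversion, and 512 LTP features (upper + lower pattern histograms).
import librosa
import numpy as np

def mel_grey_image(path):
    y, sr = librosa.load(path, sr=16000)        # resample to 16 kHz
    n_fft = int(0.025 * sr)                     # 25 ms window = 400 samples
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=n_fft // 2)   # 50% overlap
    db = librosa.power_to_db(mel)
    # Normalize to an 8 bit grey-scale image (normalization is an assumption).
    return np.uint8(255 * (db - db.min()) / (db.max() - db.min() + 1e-9))

def ltp_features(img, t=5):
    """512 LTP features: 256-bin histograms of the upper and lower patterns.
    The threshold t is a placeholder value."""
    c = img[1:-1, 1:-1].astype(int)             # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (dr, dc) in enumerate(offsets):
        n = img[1 + dr:img.shape[0] - 1 + dr,
                1 + dc:img.shape[1] - 1 + dc].astype(int)
        upper += (n >= c + t).astype(int) << k  # neighbour clearly brighter (+1)
        lower += (n <= c - t).astype(int) << k  # neighbour clearly darker (-1)
    h_up, _ = np.histogram(upper, bins=256, range=(0, 256))
    h_lo, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([h_up, h_lo])         # 512-dimensional feature vector
```

The resulting feature vectors then feed the QSVM (cough and speech) and DT (heart rate, body temperature, and SpO2) classifiers described above.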
The data collected by the IPMA from 36 volunteers (from Ecuador and Brazil, shown in Appendix A) were used to evaluate the classifier models previously trained with the CCS, CSS, and the database from [17]. The cough signals were preprocessed, and feature extraction was performed over these three datasets and the IPMA database. Table 2 shows the Accuracy (ACC) and Area Under the Curve (AUC) obtained for the CCS, CSS, and [17] datasets, using QSVM for cough and speech and DT for the dataset with body temperature, heart rate, and SpO2. For the IPMA's data, an ACC of 100% was achieved.

4. Evaluation Metrics Applied to the IPMA

The methodology used to evaluate the IPMA is based on its ease of use and on whether the GUI is engaging for the volunteer. To evaluate the IPMA and the GUI, two scales were used here: the System Usability Scale (SUS) and the Post Study System Usability Questionnaire (PSSUQ).
The SUS is a methodology to evaluate the usability, effectiveness, and efficiency of a system; it was developed by Brooke in 1986 as a 10-question survey [34]. In the SUS, the volunteer gives each question a score from 1 to 5: 1 means the volunteer totally disagrees with the statement, whereas 5 means they totally agree. The SUS statements are listed below [35]:
  • I think that I would like to use this system frequently.
  • I found the system unnecessarily complex.
  • I thought the system was easy to use.
  • I think that I would need the support of a technical person to be able to use this system.
  • I found the various functions in this system were well integrated.
  • I thought there was too much inconsistency in this system.
  • I would imagine that most people would learn to use this system very quickly.
  • I found the system very cumbersome to use.
  • I felt very confident using the system.
  • I needed to learn a lot of things before I could get going with this system.
The statements alternate in polarity: the odd-numbered questions present a positive view of the system, and the even-numbered ones a negative view; each positively worded question has a negatively worded counterpart expressing a similar idea. The Total SUS score (T) is calculated using Equation (1), where p is the score of each question. Note that the parity of the question determines its contribution to the total score.
$$T = 2.5\left(\sum_{\mathrm{odd}}(p - 1) + \sum_{\mathrm{even}}(5 - p)\right) \qquad (1)$$
Although the SUS scale ranges from 0 to 100, it is not a percentile scale; a reference curve correlates SUS scores with percentiles, with the average (50th percentile) corresponding to a score of 68 [36]. In addition, according to [37], SUS scores above this average are associated with “Good” products. A direct implementation of Equation (1) is given below.
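For concreteness, Equation (1) can be computed directly from the ten answers; the example row is taken from Table A4 (Brazilian volunteer 4).

```python
# Direct implementation of Equation (1): scores[i] is the 1-5 answer to
# question i+1; odd-numbered questions are worded positively,
# even-numbered ones negatively.
def sus_score(scores):
    assert len(scores) == 10
    total = 0
    for i, p in enumerate(scores, start=1):
        total += (p - 1) if i % 2 == 1 else (5 - p)
    return 2.5 * total

# Brazilian volunteer 4 answered 5,1,5,1,5,1,5,1,5,1 (Table A4),
# which yields the maximum score:
assert sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]) == 100.0
```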
As previously mentioned, another important aspect of usability that should be evaluated is the user experience with the computational application, which is addressed by the PSSUQ. This is a standardized questionnaire (originally with 19 elements; the version used here has 16 items), whose items are scored from 1 (“I strongly agree”) to 7 (“I totally disagree”), with 4 being “neutral”. It is used to evaluate the user experience with computer systems and applications and was developed by IBM in 1988 within a project called System Usability Metrics (SUMS) [36]. It follows a 7-point Likert scale, and the result is the average score of the questions.
An interesting aspect of this scale is that it can be split into three subcategories to evaluate three different aspects of the user experience, which are the System Usefulness (SYSUSE), Information Quality (INFOQUAL), and Interface Quality (INTERQUAL). The SYSUSE is calculated by averaging the results from Questions 1 to 6, whereas the INFOQUAL and INTERQUAL average the results for Questions 7 to 12 and 13 to 15, respectively. Additionally, there is the overall score, which is the average of all questions, including the 16th question.
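The subscale computation just described is straightforward to express in code (questions are 1-indexed in the text; the list slicing below is 0-indexed):

```python
# PSSUQ subscales: SYSUSE = mean of Q1-6, INFOQUAL = mean of Q7-12,
# INTERQUAL = mean of Q13-15, overall = mean of all 16 questions.
from statistics import mean

def pssuq_scores(answers):
    assert len(answers) == 16
    return {
        "SYSUSE": mean(answers[0:6]),
        "INFOQUAL": mean(answers[6:12]),
        "INTERQUAL": mean(answers[12:15]),
        "PSSUQ": mean(answers),
    }
```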
In the PSSUQ, lower scores mean better evaluations, thus indicating a higher level of usability. The neutral value is 4; values closer to 1 represent better usability, and values closer to 7 represent worse usability. The statements in the PSSUQ scale are [38]:
  • Overall, I am satisfied with how easy it is to use this system.
  • It was simple to use this system.
  • I was able to complete the tasks and scenarios quickly using this system.
  • I felt comfortable using this system.
  • It was easy to learn to use this system.
  • I believe I could become productive quickly using this system.
  • The system gave error messages that clearly told me how to fix problems.
  • Whenever I made a mistake using the system, I could recover easily and quickly.
  • The information (such as online help, on-screen messages, and other documentation) provided with this system was clear.
  • It was easy to find the information I needed.
  • The information was effective in helping me complete the tasks and scenarios.
  • The organization of information on the system screens was clear.
  • The interface of this system was pleasant.
  • I liked using the interface of this system.
  • This system has all the functions and capabilities I expect it to have.
  • Overall, I am satisfied with this system.

Evaluation Protocol

According to [39], testing a large number of participants does not necessarily provide much more information than testing just a few people, since even a few participants are able to find most of the usability issues. In their model, a single user finds a given issue with a probability of about 31%, so a usability test performed by five users should be enough to detect up to roughly 80% of the potential problems of a product or website (see the worked estimate below). After those five users, the same findings continue to be observed repeatedly without discovering much new. Thus, based on their study, we selected 18 people for the tests of each IPMA (in Ecuador and in Brazil), resulting in a total of 36 individuals.
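For reference, Nielsen and Landauer [39] model the proportion of distinct usability problems found by $n$ users as

$$P(n) = 1 - (1 - L)^n$$

so with a per-user detection probability of $L = 0.31$ and $n = 5$, $P(5) = 1 - 0.69^5 \approx 0.84$, consistent with the roughly 80% figure quoted above.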
In the evaluations, the volunteer had to follow the instructions to have his/her biological data collected, and after using the equipment, he/she filled out the SUS and PSSUQ forms. Therefore, it was possible to know their opinion about the usability of the IPMA. Due to the current sanitary restrictions, the equipment was only evaluated with volunteers that did not have ongoing COVID-19 infection, which means they were either recovered or had never had the disease. The evaluation protocol consists of the following steps:
  • Volunteers were given an explanation about the whole process of the use of the equipment.
  • They filled out a questionnaire that included their birthday, gender, and health questions.
  • The system asked the volunteer to open the IPMA’s door and produce a forced cough for 10 s.
  • Afterwards, the system asked the volunteer to speak a phonetically balanced sentence.
  • Next, the volunteer was informed that the system would take the measurements. The volunteer was asked to place his/her arm properly inside the IPMA. Then, the measurements took place after pressing the start button.
  • Once the measurements were finished, the IPMA asked the volunteer to remove his/her arm from the IPMA.
  • Further, the system acknowledged the volunteer and started the UV-C disinfection process.
  • Finally, the volunteer was asked to fill out two forms (SUS and PSSUQ).

5. Results and Analysis

5.1. Evaluation Conducted in Ecuador

In these trials, it was possible to evaluate the equipment's aspects, including its mechanical features and usability, with 18 volunteers. Regarding age, 72.22% (13 individuals) were young adults (15–29 years old) and 27.78% (5 individuals) were adults (30–64 years old). The measurement results, as well as photos of the experimental setup, can be viewed in Appendix B.
In terms of the usability of the equipment, the SUS provided a good result (78.19), above the average (68). All individuals but one gave scores above the average. Regarding the interface evaluation, the scores showed that the GUI has a high level of usability, with an average of 1.6 (subscales: SYSUSE 1.5, INFOQUAL 1.8, and INTERQUAL 1.4). Appendix B shows the SUS and PSSUQ results for the Ecuadorian volunteers.
Regarding SYSUSE, the individuals' opinions showed that the app was intuitive, even on first use. On the other hand, some individuals, especially older ones, felt that needing a Gmail account was a drawback of the application. In the INFOQUAL subscale, individuals commented on the system's functionality; for instance, they suggested that the IPMA should emit warnings when the network signal is weak or the hand position is incorrect. Finally, in the INTERQUAL subscale, the interface was considered friendly by most users, although they suggested that more graphs would improve the system.

5.2. Evaluation Conducted in Brazil

The 18 Brazilian volunteers followed the GUI, which guided them in the process of filling out the form, capturing the audio recordings, and taking the measurements. Afterwards, they were asked to fill out both the SUS and PSSUQ forms to evaluate the IPMA. The detailed results from the IPMA trials are presented in Appendix A.
From the SUS perspective, the equipment scored 81.11 with the Brazilian volunteers, a value above average for usability. Five of them (27.78%) scored the equipment below average (68), with 2 of those 5 scores only slightly below (67.5 and 65). The majority of the individuals (13 of them, i.e., 72.22%) considered the equipment quite useful.
The PSSUQ scores showed that the volunteers found the GUI quite useful, with an average score of 2.4 (where the INFOQUAL had a score of 2.8, the SYSUSE had a score of 2.2, and the INTERQUAL had a score of 2.0). This shows, in general, that the Brazilian volunteers also had a good experience with the equipment.

6. Conclusions

This work introduced the all-in-one Integrated Portable Medical Assistant (IPMA), a piece of equipment that allows biomedical data acquisition from individuals, infected or not with COVID-19, through sounds of coughing and three biomedical signals (heart rate, oxygen saturation level, and body temperature) acquired from medically certified devices. The values of these biomedical data were obtained by reading the devices' displays with cameras, using a VGG16-based alignment network and an OCR stage that proved 100% accurate. All the signals collected from the individual by the IPMA feed pre-trained machine learning algorithms (which achieved approximately ACC = 88.0% with AUC = 0.85 using QSVM, and ACC = 99% with AUC = 0.94 using DT) to infer possible COVID-19 infection, with 100% accuracy on the IPMA data, thus indicating to the individual when it is time to seek medical care. It is important to note that although the data collected by the IPMA were all from individuals without ongoing COVID-19 infection, our machine learning algorithms showed good performance in inferring COVID-19 from different databases.
Regarding the evaluation by the individuals who used the IPMA, it was considered successful, since it achieved an average SUS score above 68, which means the equipment was considered "above average" (in other words, good equipment). Additionally, the PSSUQ presented low scores for both IPMA versions, which means high overall usability.
It is worth mentioning that the IPMA has great advantages, as it is non-invasive, real-time, any-time equipment for large-scale data acquisition and screening, and may be used for the daily screening of students, workers, and people in public places, such as schools, workplaces, and public transportation, to quickly flag group outbreaks. Furthermore, due to its portability, it is suitable for use in hospitals, in clinics, or at home. Additionally, the IPMA was designed to be user-friendly, with a comprehensive GUI, and safe, since it uses UV-C light for disinfection.
With the widespread use of the IPMA and the collection of data, a new database is being created, which will be quite useful for new studies on alternative parameters to infer COVID-19 infection and other respiratory diseases, mainly because, more than two years after its outbreak, COVID-19 is still strongly threatening the world with its new variants. Mobile applications (apps) were also developed here to compile the main physiological signals captured by the equipment and visualize them on mobile phones.
The IPMA was also proven to be functional, and the evaluations conducted with individuals showed that the measurements can be performed easily, while the results can be stored for further analysis and for machine learning training. As future work, we expect to use this equipment to evaluate other respiratory diseases, such as cold, flu, or pneumonia, by training the machine learning models with more data.
Regarding a comparison between the IPMA and other equipment with the same purpose, this was not possible since, as far as we know, no other equipment performs data acquisition from medically certified devices, creates a database, and applies machine learning algorithms in the way done here. Additionally, we tested our equipment with well-known scales for usability and user-interface quality, in order to validate our research.
The limitations of our work are mainly the need for an AC outlet near the equipment to power the UV-C lamps, as well as the need for batteries in each medical device, since we could not power them directly from the main IPMA battery due to the risk of losing their medical certification. Additionally, the structure became larger because of the need for linear actuators and cameras, as we could not obtain the measurement signals directly from the electronic boards of the medically certified devices without losing their medical certification.
As future works, we plan to perform more trials in different geographical areas to collect more data and verify the usability with people of different regions. Furthermore, we will add some improvements to the user interface to make it more user-friendly and to improve the visualization in the mobile application. Finally, tests will be performed at public healthcare institutions to evaluate the use of the device.

Author Contributions

I.C., G.S., C.V., L.S., Y.C., L.L. and L.R. performed the experiments, analysed the data, and drafted the manuscript. A.C.V.-P., D.D.-R., J.C.-S., F.U.-O., C.D. and E.C. contributed to designing the experimental protocol, analysed the data, and drafted the manuscript. S.K. and T.B.-F. contributed with materials and analysis tools and also contributed to the data analysis and interpretation of the findings. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES/Brazil): 012/2020 and Universidad Politécnica Salesiana (UPS) (Cuenca, Ecuador).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of UFES/Brazil (CAAE: 64800116.9.0000.5542).

Informed Consent Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of UFES/Brazil (CAAE: 64800116.9.0000.5542).

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge the financial support from UPS/Ecuador, CAPES/Brazil and ELAP/Canada (financial support and postdoc fellow scholarship, respectively), CNPq/Brazil (Ph.D. and researcher scholarships), FACITEC/Brazil (Ph.D. scholarship), and Global Affairs Canada (Canadian scholarship). The authors also acknowledge all the volunteers from Ecuador and Brazil.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Measurements Conducted with the IPMA

Figure A1 and Figure A2 show mosaics with photos of the Ecuadorian and Brazilian volunteers taking the tests.
Figure A1. Ecuadorian volunteers taking the tests with the IPMA.
Figure A2. Brazilian volunteers taking the tests with the IPMA.

Appendix B. Measurements Conducted with the IPMA

Appendix B.1. Measurements

Table A1 shows the data of oxygen saturation, heart rate, and body temperature taken with the IPMA, in addition to information on birth date, gender, and whether the volunteer was infected with COVID-19. In this table, SpO2 is the oxygen saturation level (in percentage), HR is the Heart Rate (in bpm—beats per minute), Temp is the body temperature (in degrees Celsius), and SBP and DBP are, respectively, the Systolic and Diastolic Blood Pressure, measured in millimetres of mercury (mmHg). In the column Gender, M stands for Male and F for Female. In the column COVID, R stands for Recovered and N for Negative; RT-PCR refers to the Real-Time Polymerase Chain Reaction, QT to Quick Test, NS means No Symptoms, ME means Medical Evaluation, and PE means Personal Evaluation. The columns Onset and Finish refer to the dates on which the symptoms began and ended, respectively.
Table A1. Measurements and volunteers’ information taken during the IPMA’s evaluation by Ecuadorian volunteers.
| ID | Gender | Birth Date | COVID | Onset | Finish | SpO2 | HR | Temp | SBP | DBP |
|----|--------|------------|-------|-------|--------|------|----|------|-----|-----|
| 1 | M | 06/29/97 | R (RT-PCR) | 08/18/21 | 08/21/21 | 98 | 93 | 36.4 | 45 | 81 |
| 2 | M | 07/19/99 | R (RT-PCR) | 08/09/21 | 08/23/21 | 94 | 76 | 36.2 | 63 | 110 |
| 3 | M | 11/26/71 | N (RT-PCR) | - | - | 93 | 68 | 36.1 | 70 | 89 |
| 4 | F | 31/1/1999 | N (NS) | - | - | 81 | 83 | 35.2 | 71 | 111 |
| 5 | M | 04/21/97 | R (RT-PCR) | 06/15/21 | 07/15/21 | 93 | 73 | 35.4 | 49 | 107 |
| 6 | M | 01/18/95 | R (RT-PCR) | 05/03/21 | 05/10/21 | 98 | 79 | 36.0 | 70 | 102 |
| 7 | M | 07/08/97 | R (RT-PCR) | 06/08/21 | 06/22/21 | 92 | 68 | 35.9 | 86 | 117 |
| 8 | F | 12/19/98 | R (RT-PCR) | 01/10/22 | 01/31/22 | 92 | 73 | 36.2 | 65 | 111 |
| 9 | F | 11/02/99 | N (NS) | - | - | 89 | 63 | 36.0 | 60 | 91 |
| 10 | M | 06/18/98 | N (NS) | - | - | 96 | 63 | 36.0 | 66 | 98 |
| 11 | M | 01/01/74 | N (NS) | - | - | 84 | 77 | 36.2 | 52 | 113 |
| 12 | M | 10/14/97 | N (NS) | - | - | 96 | 74 | 36.2 | 69 | 104 |
| 13 | F | 01/07/84 | N (NS) | - | - | 96 | 70 | 36.2 | 71 | 102 |
| 14 | M | 02/07/74 | N (NS) | 01/03/22 | 01/12/22 | 95 | 64 | 36.1 | 79 | 119 |
| 15 | M | 07/13/78 | N (NS) | - | - | 99 | 75 | 36.1 | 83 | 117 |
| 16 | F | 01/06/03 | R (RT-PCR) | - | - | 94 | 86 | 35.9 | 46 | 85 |
| 17 | M | 07/06/02 | N (NS) | - | - | 98 | 88 | 36.1 | 43 | 108 |
| 18 | M | 12/16/03 | N (NS) | - | - | 91 | 76 | 36.3 | 60 | 114 |
| Avg | - | - | - | - | - | 93.28 | 74.94 | 36.03 | 63.78 | 104.39 |
Table A2. Measurements and volunteers’ information taken during the IPMA’s evaluation by Brazilian volunteers.
| ID | Gender | Birth Date | COVID | Onset | Finish | SpO2 | HR | Temp | SBP | DBP |
|----|--------|------------|-------|-------|--------|------|----|------|-----|-----|
| 1 | F | 01/07/1984 | N (QT) | - | - | 98 | 72 | 36.8 | 98 | 66 |
| 2 | F | 04/09/1993 | R (PE) | 27/12/2020 | 04/01/2021 | 97 | 98 | 36.3 | 116 | 79 |
| 3 | M | 05/29/1989 | R (PE) | 27/12/2020 | 04/01/2021 | 98 | 73 | 36.5 | 120 | 71 |
| 4 | F | 07/19/1985 | N (NS) | - | - | 97 | 76 | 36.9 | 132 | 81 |
| 5 | F | 08/09/1981 | N (NS) | - | - | 98 | 67 | 36.5 | 131 | 81 |
| 6 | M | 06/04/1973 | N (RT-PCR) | - | - | 97 | 68 | 36.5 | 106 | 71 |
| 7 | F | 09/29/1994 | N (NS) | - | - | 98 | 78 | 36.3 | 119 | 69 |
| 8 | F | 05/14/1998 | N (NS) | - | - | 98 | 82 | 36.7 | 112 | 74 |
| 9 | F | 12/01/1994 | N (ME) | - | - | 99 | 94 | 36.4 | 118 | 63 |
| 10 | F | 05/23/1974 | N (RT-PCR) | - | - | 98 | 74 | 36.6 | 115 | 105 |
| 11 | F | 01/26/1971 | N (ME) | - | - | 96 | 91 | 36.2 | 114 | 52 |
| 12 | F | 11/11/1991 | N (ME) | - | - | 98 | 89 | 36.3 | 110 | 58 |
| 13 | F | 01/14/1989 | N (NS) | - | - | 99 | 101 | 36.3 | 143 | 95 |
| 14 | M | 12/15/1989 | N (RT-PCR) | - | - | 96 | 63 | 36.3 | 96 | 49 |
| 15 | M | 10/04/1991 | N (NS) | - | - | 95 | 86 | 36.1 | 117 | 70 |
| 16 | F | 07/09/1989 | N (NS) | - | - | 97 | 104 | 36.3 | 135 | 97 |
| 17 | M | 09/12/1981 | N (ME) | - | - | 97 | 71 | 36.4 | 137 | 80 |
| 18 | M | 06/29/1965 | N (RT-PCR) | - | - | 95 | 69 | 36.4 | 125 | 83 |
| Avg | - | - | - | - | - | 97.28 | 80.89 | 36.43 | 119.11 | 74.67 |

Appendix B.2. SUS

Table A3 and Table A4 show the SUS scores given by the volunteers who tried the IPMA. The first column identifies the volunteers, from 1 to 18; the columns numbered 1 to 10 correspond to the SUS statements, and the column "SUS" is the overall score calculated for each volunteer. The last row is the average.
Table A3. SUS results for Ecuadorian volunteers.
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | SUS |
|----|---|---|---|---|---|---|---|---|---|----|-----|
| 1 | 3 | 2 | 2 | 4 | 2 | 4 | 1 | 4 | 3 | 3 | 65.00 |
| 2 | 3 | 5 | 1 | 4 | 1 | 4 | 1 | 5 | 1 | 5 | 90.00 |
| 3 | 2 | 2 | 1 | 5 | 2 | 5 | 2 | 5 | 1 | 1 | 75.00 |
| 4 | 2 | 4 | 1 | 2 | 1 | 4 | 1 | 5 | 2 | 5 | 82.50 |
| 5 | 3 | 3 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 80.00 |
| 6 | 5 | 3 | 1 | 2 | 3 | 2 | 3 | 2 | 1 | 3 | 47.50 |
| 7 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 4 | 97.50 |
| 8 | 1 | 4 | 1 | 5 | 1 | 3 | 1 | 5 | 1 | 1 | 82.50 |
| 9 | 1 | 5 | 1 | 4 | 1 | 4 | 1 | 5 | 1 | 4 | 92.50 |
| 10 | 2 | 4 | 1 | 1 | 2 | 4 | 1 | 5 | 1 | 1 | 70.00 |
| 11 | 1 | 4 | 1 | 5 | 1 | 4 | 1 | 5 | 1 | 5 | 95.00 |
| 12 | 5 | 5 | 1 | 1 | 1 | 3 | 1 | 5 | 4 | 3 | 62.50 |
| 13 | 5 | 1 | 1 | 5 | 1 | 4 | 1 | 5 | 1 | 5 | 77.50 |
| 14 | 2 | 5 | 1 | 1 | 1 | 4 | 1 | 5 | 1 | 5 | 85.00 |
| 15 | 1 | 4 | 1 | 3 | 1 | 4 | 1 | 5 | 1 | 4 | 87.50 |
| 16 | 3 | 4 | 1 | 2 | 3 | 4 | 2 | 5 | 1 | 3 | 70.00 |
| 17 | 3 | 5 | 1 | 5 | 2 | 4 | 2 | 5 | 1 | 5 | 87.50 |
| 18 | 2 | 4 | 2 | 2 | 4 | 2 | 2 | 5 | 1 | 2 | 60.00 |
| Avg | 2.50 | 3.83 | 1.11 | 3.39 | 1.61 | 3.83 | 1.33 | 4.78 | 1.56 | 3.56 | 78.19 |
Table A4. SUS results for Brazilian volunteers.
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | SUS |
|----|---|---|---|---|---|---|---|---|---|----|-----|
| 1 | 4 | 2 | 5 | 3 | 4 | 1 | 5 | 1 | 4 | 1 | 85.00 |
| 2 | 4 | 2 | 4 | 4 | 4 | 1 | 3 | 1 | 4 | 4 | 67.50 |
| 3 | 5 | 2 | 5 | 4 | 5 | 1 | 5 | 1 | 5 | 2 | 87.50 |
| 4 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 100.00 |
| 5 | 5 | 1 | 5 | 2 | 4 | 2 | 5 | 1 | 5 | 1 | 92.50 |
| 6 | 5 | 1 | 5 | 1 | 4 | 1 | 5 | 1 | 4 | 1 | 95.00 |
| 7 | 5 | 1 | 5 | 5 | 5 | 1 | 4 | 1 | 5 | 1 | 87.50 |
| 8 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 100.00 |
| 9 | 5 | 1 | 5 | 4 | 3 | 3 | 4 | 3 | 5 | 1 | 75.00 |
| 10 | 3 | 1 | 5 | 5 | 3 | 2 | 5 | 2 | 3 | 3 | 65.00 |
| 11 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 100.00 |
| 12 | 5 | 2 | 3 | 1 | 4 | 2 | 2 | 2 | 3 | 1 | 72.50 |
| 13 | 3 | 2 | 4 | 2 | 5 | 1 | 5 | 1 | 4 | 1 | 85.00 |
| 14 | 3 | 2 | 4 | 4 | 3 | 4 | 4 | 4 | 3 | 1 | 55.00 |
| 15 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 100.00 |
| 16 | 3 | 3 | 4 | 5 | 3 | 1 | 2 | 3 | 3 | 2 | 52.50 |
| 17 | 2 | 2 | 3 | 4 | 2 | 4 | 4 | 5 | 2 | 2 | 40.00 |
| 18 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 100.00 |
| Avg | 4.30 | 1.50 | 4.60 | 2.70 | 4.10 | 1.60 | 4.30 | 1.70 | 4.20 | 1.40 | 81.11 |

Appendix B.3. PSSUQ

The tables with the PSSUQ scores that the volunteers gave at the end of the experiment are given below. Table A5 shows the Ecuadorian results and Table A6 the Brazilian results. The first column identifies the volunteers (1 to 18 for the Ecuadorian trials in Table A5, and 19 to 36 for the Brazilian trials in Table A6); the columns numbered 1 to 16 correspond to the PSSUQ statements, followed by the subscale scores: S stands for SYSUSE, F for INFOQUAL, Q for INTERQUAL, and P for the overall PSSUQ score. The last row is the averages.
Table A5. PSSUQ results for the Ecuadorian volunteers.
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | S | F | Q | P |
|----|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|---|---|---|---|
| 1 | 2 | 1 | 1 | 1 | 2 | 3 | 2 | 2 | 4 | 3 | 2 | 3 | 1 | 2 | 2 | 1 | 1.7 | 2.8 | 1.7 | 2.0 |
| 2 | 1 | 1 | 2 | 3 | 2 | 2 | 1 | 1 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1.8 | 1.4 | 1.0 | 1.5 |
| 3 | 1 | 1 | 2 | 2 | 2 | 1 | 1 | 2 | 3 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1.5 | 1.8 | 1.0 | 1.4 |
| 4 | 1 | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1.3 | 1.6 | 1.0 | 1.4 |
| 5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.0 | 1.0 | 1.0 |
| 6 | 4 | 2 | 3 | 4 | 3 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 3 | 4 | 5 | 5 | 2.8 | 3.2 | 4.0 | 3.3 |
| 7 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | 1 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.2 | 2.0 | 1.0 | 1.4 |
| 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.4 | 1.0 | 1.1 |
| 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 2.0 | 1.0 | 1.3 |
| 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.0 | 1.0 | 1.0 |
| 11 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1.2 | 1.2 | 1.3 | 1.3 |
| 12 | 5 | 2 | 7 | 3 | 3 | 2 | 1 | 3 | 3 | 3 | 1 | 1 | 3 | 3 | 1 | 2 | 3.7 | 2.2 | 2.3 | 2.7 |
| 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 6 | 4 | 1 | 1 | 1 | 4 | 1 | 1 | 1.0 | 2.6 | 2.0 | 1.7 |
| 14 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1.2 | 1.0 | 1.0 | 1.1 |
| 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 | 1.0 | 1.4 | 1.0 | 1.1 |
| 16 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 4 | 1 | 1 | 1 | 2 | 2 | 2 | 1 | 2 | 1.2 | 1.8 | 1.7 | 1.6 |
| 17 | 1 | 2 | 2 | 3 | 1 | 1 | 1 | 4 | 2 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1.7 | 2.0 | 1.0 | 1.6 |
| 18 | 1 | 1 | 1 | 2 | 2 | 3 | 3 | 2 | 2 | 1 | 1 | 3 | 2 | 1 | 1 | 1 | 1.7 | 1.8 | 1.3 | 1.7 |
| Avg | 1.4 | 1.3 | 1.7 | 1.9 | 1.7 | 1.7 | 1.6 | 2.1 | 3.1 | 2.1 | 1.8 | 2.1 | 2.0 | 2.2 | 2.1 | 2.3 | 1.5 | 1.8 | 1.4 | 1.6 |
Table A6. PSSUQ results for the Brazilian volunteers.
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | S | F | Q | P |
|----|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|---|---|---|---|
| 19 | 2 | 1 | 3 | 3 | 3 | 2 | 3 | 3 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 2.3 | 2.5 | 1.0 | 2.2 |
| 20 | 3 | 3 | 3 | 4 | 2 | 2 | 2 | 2 | 7 | 6 | 2 | 2 | 2 | 3 | 2 | 1 | 2.8 | 3.5 | 2.3 | 2.9 |
| 21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.5 | 1.0 | 1.2 |
| 22 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | 1 | 1 | 1.2 | 1.0 | 1.7 | 1.2 |
| 23 | 1 | 1 | 2 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1.2 | 2.0 | 1.7 | 1.6 |
| 24 | 2 | 2 | 2 | 1 | 2 | 4 | 1 | 1 | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 3 | 2.2 | 2.2 | 2.0 | 2.2 |
| 25 | 1 | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 2 | 1 | 1 | 2 | 1 | 1 | 1.2 | 2.2 | 1.3 | 1.6 |
| 26 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.0 | 1.0 | 1.0 |
| 27 | 3 | 1 | 2 | 1 | 1 | 1 | 4 | 4 | 7 | 6 | 1 | 1 | 1 | 1 | 1 | 1 | 1.5 | 3.8 | 1.0 | 2.3 |
| 28 | 2 | 1 | 2 | 2 | 4 | 2 | 1 | 2 | 6 | 4 | 3 | 2 | 2 | 3 | 2 | 4 | 2.2 | 3.0 | 2.3 | 2.6 |
| 29 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 1.0 | 1.0 | 1.0 |
| 30 | 6 | 3 | 3 | 3 | 5 | 6 | 4 | 6 | 7 | 7 | 3 | 4 | 3 | 2 | 3 | 7 | 4.3 | 5.2 | 2.7 | 4.5 |
| 31 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 4 | 4 | 4 | 1 | 1 | 1 | 2 | 1 | 1.0 | 2.5 | 1.3 | 1.6 |
| 32 | 3 | 4 | 3 | 4 | 4 | 4 | 4 | 3 | 5 | 4 | 5 | 5 | 5 | 4 | 4 | 4 | 3.7 | 4.3 | 4.3 | 4.1 |
| 33 | 2 | 2 | 4 | 4 | 4 | 2 | 2 | 4 | 4 | 4 | 4 | 4 | 2 | 2 | 2 | 2 | 3.0 | 3.7 | 2.0 | 3.0 |
| 34 | 4 | 4 | 4 | 5 | 6 | 4 | 4 | 4 | 6 | 6 | 3 | 3 | 2 | 4 | 3 | 3 | 4.5 | 4.3 | 3.0 | 4.1 |
| 35 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.0 | 2.0 | 1.0 | 1.4 |
| 36 | 4 | 5 | 5 | 4 | 4 | 5 | 5 | 4 | 5 | 4 | 5 | 4 | 5 | 5 | 5 | 4 | 4.5 | 4.5 | 5.0 | 4.6 |
| Avg | 2.2 | 2.0 | 2.2 | 2.2 | 2.4 | 2.2 | 2.2 | 2.3 | 4.3 | 3.3 | 2.4 | 2.1 | 1.9 | 2.1 | 1.9 | 2.2 | 2.2 | 2.8 | 2.0 | 2.4 |

Appendix C. Health-Related Questions (IPMA Form)

The health-related questions from the survey in the trials with the IPMA are:
  • Did you have any of these symptoms last week? Please check those you had. Multiple choices are available from a set containing fever, fatigue, sore throat, respiratory difficulty, persistent pain, chest pressure, diarrhoea, cough, other, and none of the above.
  • Do you think you currently have COVID-19? Single choice from a set containing Yes, No, or Recovered.
  • How do you know whether you currently have COVID-19 or not? Please specify if you took a test. Single choice from a set containing “I do not think I have COVID-19 now (no test was made)”, “RT-PCR test”, “Quick test”, “Medical evaluation”, “Personal evaluation”, and “Other”.
  • If you checked “Other”, please specify how you know whether or not you have COVID-19. The answer here is a text describing the symptoms the user had last week.
  • Symptom onset and finish dates. The answers here are the dates of the beginning and end of the symptoms, for those who had the disease.

References

  1. Wang, C.; Liu, Z.; Chen, Z.; Huang, X.; Xu, M.; He, T.; Zhang, Z. The establishment of reference sequence for SARS-CoV-2 and variation analysis. J. Med. Virol. 2020, 92, 667–674.
  2. Corredor-Vargas, M.; Torezani, R.; Paneto, G.; Bastos-Filho, T. Importance of Sequencing the SARS-CoV-2 Genome Using the Nanopore Technique to Understand Its Origin, Evolution and Development of Possible Cures. In XXVII Brazilian Congress on Biomedical Engineering; IFMBE Proceedings Book Series; Springer: Cham, Switzerland, 2022; Volume 83, pp. 1341–1344.
  3. Sabino, E.C.; Buss, L.F.; Carvalho, M.P.; Prete, C.A.; Crispim, M.A.; Fraiji, N.A.; Pereira, R.H.; Parag, K.V.; da Silva Peixoto, P.; Kraemer, M.U.; et al. Resurgence of COVID-19 in Manaus, Brazil, despite high seroprevalence. Lancet 2021, 397, 452–455.
  4. Resende, P.C.; Gräf, T.; Paixão, A.C.; Appolinario, L.; Lopes, R.S.; Mendonça, A.C.; da Rocha, A.S.; Motta, F.C.; Neto, L.G.; Khouri, R.; et al. A potential SARS-CoV-2 variant of interest (VOI) harboring mutation E484K in the Spike protein was identified within lineage B.1.1.33 circulating in Brazil. Viruses 2021, 13, 724.
  5. Johns Hopkins University. Available online: https://coronavirus.jhu.edu/map.html (accessed on 1 June 2022).
  6. Torezani, R.; Corredor-Vargas, M.; Ardisson, J.; Pirovani, M.; Santos, P.; Paneto, G.; Bastos-Filho, T. Molecular dynamics of the COVID-19 pandemic in Espirito Santo (Brazil) and border States. Rev. Inst. Med. Trop. São Paulo 2022, 64, e24.
  7. ReliefWeb. Two Years into COVID-19 Pandemic, Less Than 10% of People Living in Crisis Are Vaccinated; Just $96 million—Less Than 1% of the Health Budgets of the US, UK, EU and Germany—Is Needed to Vaccinate People in IRC’s Operating Areas. 2022. Available online: https://reliefweb.int/report/world/two-years-covid-19-pandemic-less-10-people-living-crisis-are-vaccinated-just-96-million (accessed on 19 April 2021).
  8. DynaMed. Available online: https://www.dynamed.com/condition/covid-19-novel-coronavirus (accessed on 19 April 2021).
  9. Oliveira, B.A.; Oliveira, L.C.; Sabino, E.C.; Okay, T.S. SARS-CoV-2 and the COVID-19 disease: A mini review on diagnostic methods. Rev. Inst. Med. Trop. São Paulo 2020, 62, e44.
  10. Coelho, Y.; Lampier, L.; Valadão, C.; Caldeira, E.; Delisle-Rodríguez, D.; Villa-Parra, A.C.; Cobos-Maldonado, C.; Calle-Siguencia, J.; Urgiles-Ortiz, F.; Bastos-Filho, T. Towards the use of artificial intelligence techniques in biomedical data from an integrated portable medical assistant to infer asymptomatic cases of COVID-19. In Proceedings of the Information Technology & Systems, Virtually, 27–28 March 2021; Advances in Intelligent Systems and Computing; pp. 24–34.
  11. Oran, D.P.; Topol, E.J. Prevalence of asymptomatic SARS-CoV-2 infection: A narrative review. Ann. Intern. Med. 2020, 173, 362–367.
  12. Llanos, K.; Landi, C.; Yupa, F.; Vasquez, P.; Criollo, I.; Calle-Siguencia, J.; Urgilés-Ortiz, F.; Villa-Parra, A.C. Prototype of a Device for the Automatic Measurement of Physiological Signals to Assist the Diagnosis and Monitoring of patients with COVID-19. Ingenius. Rev. Cienc. Y Tecnol. 2022, 1, 49–58.
  13. Topol, E. Is My Cough COVID-19? 2020. Available online: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)32589-7/fulltext (accessed on 19 April 2021).
  14. Laguarta, J.; Hueto, F.; Subirana, B. COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings. IEEE Open J. Eng. Med. Biol. 2020, 1, 275–281.
  15. Symptoms of Coronavirus. Available online: https://www.webmd.com/lung/covid-19-symptoms#1 (accessed on 19 April 2021).
  16. Mishra, T.; Wang, M.; Metwally, A.A.; Bogu, G.K.; Brooks, A.W.; Bahmani, A.; Alavi, A.; Celli, A.; Higgs, E.; Dagan-Rosenfeld, O.; et al. Pre-symptomatic detection of COVID-19 from smartwatch data. Nat. Biomed. Eng. 2020, 4, 1208–1220.
  17. Shaik, H. Assessing the Infection Status of COVID-19 Patients Using a Wearable Prototype. OpenAire. 2021. Available online: https://zenodo.org/record/4766192#.Ypis5exBxPY (accessed on 19 April 2021).
  18. Stokes, K.; Castaldo, R.; Federici, C.; Pagliara, S.; Maccaro, A.; Cappuccio, F.; Fico, G.; Salvatore, M.; Franzese, M.; Pecchia, L. The use of artificial intelligence systems in diagnosis of pneumonia via signs and symptoms: A systematic review. Biomed. Signal Process. Control 2022, 72, 103325.
  19. Sharma, N.; Krishnan, P.; Kumar, R.; Ramoji, S.; Chetupalli, S.R.; Nirmala, R.; Ghosh, P.K.; Ganapathy, S. Coswara—A Database of Breathing, Cough, and Voice Sounds for COVID-19 Diagnosis. In Proceedings of the Interspeech 2020, Virtual, 25–29 October 2020; ISCA: Singapore, 2020; pp. 4811–4815.
  20. Sharma, G.; Umapathy, K.; Krishnan, S. Audio texture analysis of COVID-19 cough, breath, and speech sounds. Biomed. Signal Process. Control 2022, 76, 103703.
  21. Hussain, S.A.; Al Bassam, N.; Zayegh, A.; Al Ghawi, S. Prediction and Evaluation of healthy and unhealthy status of COVID-19 patients using wearable device prototype data. MethodsX 2022, 9, 101618.
  22. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
  23. Chollet, F. Keras. 2015. Available online: https://keras.io (accessed on 19 April 2021).
  24. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. Available online: tensorflow.org (accessed on 19 April 2021).
  25. Nesselroad, J.M.; Flacco, V.A.; Phillips, D.M.; Kruse, J. Accuracy of automated finger blood pressure devices. Fam. Med. 1996, 28, 189–192.
  26. Kopp, J.A. A Selfie Video Can Measure Your Blood Pressure. 2021. Available online: https://www.phillyvoice.com/blood-pressure-measure-smartphone-selfies-videos (accessed on 19 April 2021).
  27. Jordao, A.; Nazare, A.C., Jr.; Sena, J.; Schwartz, W.R. Human activity recognition based on wearable sensor data: A standardization of the state-of-the-art. arXiv 2019, arXiv:1806.05226.
  28. Orlandic, L.; Teijeiro, T.; Atienza, D. The COUGHVID crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Sci. Data 2021, 8, 156.
  29. Han, J.; Xia, T.; Spathis, D.; Bondareva, E.; Brown, C.; Chauhan, J.; Dang, T.; Grammenos, A.; Hasthanasombat, A.; Floto, A.; et al. Sounds of COVID-19: Exploring realistic performance of audio-based digital testing. NPJ Digit. Med. 2022, 5, 16.
  30. Brown, C.; Chauhan, J.; Grammenos, A.; Han, J.; Hasthanasombat, A.; Spathis, D.; Xia, T.; Cicuta, P.; Mascolo, C. Exploring Automatic Diagnosis of COVID-19 from Crowdsourced Respiratory Sound Data. In Proceedings of the KDD ’20—26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual, 6–10 July 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 3474–3484.
  31. Nanni, L.; Maguolo, G.; Brahnam, S.; Paci, M. An ensemble of convolutional neural networks for audio classification. Appl. Sci. 2021, 11, 5796.
  32. Sharma, G.; Umapathy, K.; Krishnan, S. Trends in audio signal feature extraction methods. Appl. Acoust. 2020, 158, 107020.
  33. Turan, C.; Lam, K.M. Histogram-based local descriptors for facial expression recognition (FER): A comprehensive study. J. Vis. Commun. Image Represent. 2018, 55, 331–341.
  34. Brooke, J. SUS: A Retrospective. J. Usability Stud. 2013, 8, 29–40.
  35. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1995, 189, 3.
  36. Lewis, J.R. The System Usability Scale: Past, Present, and Future. Int. J. Hum.–Comput. Interact. 2018, 34, 577–590.
  37. Sauro, J. 5 Ways to Interpret a SUS Score. MeasuringU. 2018. Available online: https://measuringu.com/interpret-sus-score/ (accessed on 19 April 2021).
  38. Lewis, J.R. IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. Int. J. Hum.-Comput. Interact. 1995, 7, 57–78.
  39. Nielsen, J.; Landauer, T.K. A mathematical model of the finding of usability problems. In Proceedings of the Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, 24–29 April 1993.
Figure 1. Two versions of the IPMA used in this study. (a) Version A. (b) Version B.
Figure 2. Details of the experiments with the IPMA. In (a), an individual inserting his/her arm; in (b), the detail of the structure where the hand is placed to measure body temperature and oxygen saturation; in (c), a photo of the oximeter display. The upper number (right in the image) is the oxygen saturation, and the bottom number (left in the image) is the heart rate. There is a silicone cover on the power button to avoid the linear actuator damaging the oximeter.
Figure 3. Version B with UV-C applied.
Figure 4. Possible states of the oximeter display. (a) Complete reading procedure and points (in green) needed for the oximeter image alignment; (b) oximeter turned off; (c) incomplete reading procedure.
Figure 5. Samples generated using image transformations.
Figure 6. Processing steps for the oximeter display images. (a) The oximeter image taken by the camera; (b) the ANN output points marked in red; (c) the resulting warping procedure given the VGG16 points; (d) flipping the image to generate the final image.
Figure 7. Diagram of the OCR recognition algorithm using the aligned image (the oximeter display in this example) to search for key pixels applying template matching and return the displayed value in text format.
Figure 8. Block diagram of the machine learning algorithms used in this study. (a) Cough signal; (b) speech signal; (c) heart rate, body temperature, and SpO2.
Table 1. Features of each version of the IPMA.

| | Version A | Version B |
|---|---|---|
| Size | 22 × 28.5 × 49.6 cm³ | 30 × 70 × 70 cm³ |
| Oximeter | Shenzen IMDK (C101A3) | Hunan Accurate Bio-Medical (FS10K) |
| Thermometer | Easy East (model IR200) | Bioland (E122) |
| Microphone | Knup (KP-911) | Knup (KP-911) |
| Disinfection | External UV-C | Embedded UV-C |
Table 2. Results from the CCS, CSS, and [17] datasets.

| | CCS—Using QSVM | CSS—Using QSVM | [17]—Using DT |
|---|---|---|---|
| ACC (%) | 87.98 | 70.32 | 98.62 |
| AUC | 0.85 | 0.66 | 0.94 |