Article

Biometric Image-Analysis Techniques for Monitoring Chronic Neck Pain

by Wagner de Aguiar 1,*,†, José Celso Freire Junior 2, Guillaume Thomann 3,† and Gilberto Cuarelli 1

1 Instituto Federal de São Paulo, Engenharia Mecânica Automação, São Paulo 01109-010, SP, Brazil
2 Engineering and Sciences Faculty, São Paulo State University, Guaratinguetá Campus, Guaratinguetá 12516-410, SP, Brazil
3 Institute of Engineering and Management, Grenoble INP, G-SCOP Lab, University Grenoble Alpes, 38031 Grenoble, France
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(15), 6429; https://doi.org/10.3390/app14156429
Submission received: 23 May 2024 / Revised: 3 July 2024 / Accepted: 8 July 2024 / Published: 24 July 2024

Abstract
“Mechanical neck pain” is a generic term for neck pain in people with neck injuries, neck dysfunction, or shoulder and neck pain. Several factors must be considered during the physical-therapy evaluation of cervical disorders, including changes in the visual system and in postural and proprioceptive balance. Currently, the Cervicocephalic Relocation Test (CRT) is used by physiotherapists to detect changes in cervical proprioception. This procedure requires precise equipment, a customized installation in a dedicated area and, above all, a significant amount of time after the test for the doctor to make the diagnosis. An innovative system composed of Google’s MediaPipe library combined with a personal laptop and camera is proposed and evaluated. The system architecture was developed, and a user interface was designed so that healthcare practitioners can use the system more easily, quickly, and effectively. The tool is presented in this paper, tested in a use case, and the results are reported. The final user report, containing a visualization of the CRT results ready for analysis by the physical therapist, can be exported from the developed tool.

1. Introduction

In the context of cervical proprioceptive dysfunction, lower back pain and neck pain are among the most common health-related reasons for performing medical exams.
As Zennaro et al. [1] and Moley [2] noted, injuries and wear and tear can often occur because the neck is flexible and supports the head. Rix and Bagust [3] emphasized that greater attention has been given to this dysfunction, which is known as cervical proprioceptive syndrome. According to Pinsault et al. [4], the Cervicocephalic Relocation Test (CRT) [5] is used as a tool to detect related pathologies in cervical proprioception. It consists of positioning the head neutrally, blindfolding the patient, and then performing a rotation of the head to the left (and then to the right) a set number of times, with a subsequent return to the starting position. This procedure allows the evaluator to recover data relating to the initial and final positions of the movements, providing a measure of cervical proprioceptive acuity.
This evaluation of patients’ cervical proprioception has been proposed in several research studies. In [6], researchers used the target head-repositioning technique with elderly individuals with chronic neck pain (CNP). The objectives were to (a) compare cervical proprioception and functional balance between patients with CNP and asymptomatic individuals and (b) investigate the relationship between cervical proprioception and functional balance ability in individuals with CNP. In this study, a cervical range of motion (CROM) instrument was utilized to evaluate cervical proprioception in the flexion, extension, and right and left rotation directions, as follows:
- Step 1: the examiner guides the participant’s neck from the starting position to the target position;
- Step 2: the patient’s neck returns to the starting position after the patient has memorized the target position;
- Step 3: the participant actively repositions the neck to the target position.
CROM is a valid and reliable instrument for measuring cervical range of motion and proprioception in individuals with and without neck symptoms [7,8]. The unit is helmet-shaped and includes three inclinometers to measure the range of motion in the sagittal and transverse planes.
Other studies measured the joint repositioning error [9] and cervical range of motion (ROM) [10]. In the first paper, the purpose was to compare the effects of specific neck-muscle training and general neck-shoulder exercises on neck proprioception, pain, and disability in patients. Twenty-five patients with chronic non-specific neck pain were recruited into a preliminary single-blinded randomized clinical trial. Joint-repositioning error for neck rotation was measured based on the methods first described by Revel et al. [5]. Participants were invited to sit on a chair located 90 cm from a white wall, with a laser-beam pointer fastened to their heads by an elastic strap. They were asked to keep their heads in a relaxed, neutral position while looking straight ahead at the wall (Figure 1). The laser light on the wall was marked as the reference point. Then, patients were asked to turn their heads to the right and to the left to the end of the available range, performing each movement once, and to return to the original position with eyes open, attempting to align the laser light with the reference point. Next, their eyes were covered with a blindfold and they were asked to repeat the procedure with their eyes closed. While returning their heads to the original position, participants were asked to inform the examiner when they reached the original position. The new laser point on the wall was marked as the target point. Joint position error was calculated in degrees as the arctangent of the distance between the target and reference points in cm divided by 90 cm.
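To make this computation concrete, the following is a minimal sketch of the published arctangent formula, written by us in JavaScript (the language later used for the proposed tool), not code from the cited study:

```javascript
// Joint position error (in degrees) for the laser-pointer CRT setup:
// the arctangent of the target-to-reference distance on the wall
// divided by the 90 cm distance between the subject and the wall.
function jointPositionErrorDegrees(offsetCm, wallDistanceCm = 90) {
  return Math.atan(offsetCm / wallDistanceCm) * (180 / Math.PI);
}

// Example: a 5 cm offset at 90 cm corresponds to roughly 3.18 degrees.
console.log(jointPositionErrorDegrees(5).toFixed(2)); // "3.18"
```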
In the second paper, the authors described using an iPhone 7 (Apple Inc., Cupertino, CA, USA) fixed on the patient’s head for a test of cervical range of motion in lateral rotation [11]. A smartphone inclinometer application was used to quantify cervical spine mobility. Twenty-three individuals were asked to perform maximal (end-range) neck movements (frontal flexion-extension and left-right side flexion in the sitting position, and left-right rotation in the prone position). This study confirmed that the tested smartphone can provide a valid and reliable measure of active cervical range of motion (ACROM) in the frontal and sagittal planes; furthermore, it also demonstrated its usefulness for rotational-movement analysis using the inclinometric application. This research illustrates the tendency to use new technology in the medical field for diagnosis.
The CRT is essential for determining the condition of people with cervical proprioceptive dysfunction. Although the CRT is a worldwide reference test, it requires setting up and preparing the experiment with the patient, a large dedicated space, a helmet and laser technology, and, above all, significant time after the test for the specialist to generate results. This research presents a low-cost, easy-to-use tool that can become a new support for specialists who have to evaluate dysfunction of cervical proprioception via the CRT.
The technology presented in this document was developed by considering proposals such as those by Dror et al. [12], Shao et al. [13], and Li et al. [14], which discuss the importance of “the ability to involve the physical body in an interaction with technology”. Following this idea, these authors proposed a noninvasive system based on Microsoft’s Kinect (Microsoft, Redmond, WA, USA) depth sensor and a new tracking algorithm based on motion capture (MoCap), along with several applications to measure and record the position and orientation of the human body in motion.
Other authors, such as Castro et al. [15] and McDuff [16], have described other applications of MoCap technology, such as detecting heart rate, detecting an individual’s balance, identifying seizures, and analyzing disorders resulting from neurodegenerative diseases. In particular, this work was inspired by the proposals of Hanke et al. [17] and Brook et al. [18], who discussed the possibility of objectively assessing the gaits of patients with neurodegenerative diseases in controlled environments to quantify these anomalies using Kinect.
Proposals of low-cost solutions for treatment by specialists in monitoring cervical proprioceptive acuity involve the use of motion sensors in combination with new techniques, such as artificial intelligence [19,20]. Researchers have developed solutions to carry out low-cost, efficient, and accurate analyses that aid in the treatment of various dysfunctions. Applying these methods can further improve the ability to analyze complex multimodal imaging data and increase the efficiency of these diagnoses [11,21].
The analysis of the studies mentioned above shows that there are opportunities to use low-cost motion sensors for measuring and evaluating patients’ performances for specific evaluations that otherwise require expensive facilities. These new solutions can provide reliable support for experts. For other patient evaluations that are based on qualitative observation or that require complex installations, some of the testing equipment developed using new technologies is not only cheap, but also easy to install and handle. Most of the time, the new testing approaches also make it easier to quantify assessments and process data after the test.
For the CRT, the equipment used is not very expensive: a helmet, a target on the wall, a laser, a pen for marking points, and a dedicated room. The main issue arises during post-processing of the collected information. The innovative tool allows the practitioner to immediately obtain the data needed to make a diagnosis. Thus, this article presents the materials and methods used to develop such an innovation dedicated to the CRT. The architecture of the proposal and the calculation model are presented as well. The formalization of the results aligns with physicians’ needs for diagnosing cervical proprioceptive syndrome.

2. Materials and Methods

This section introduces the main ideas and concepts used to develop the tool proposed for analyzing cervical proprioceptive dysfunction.

2.1. Medical Protocol

As mentioned in the introduction, a clinical technique known as the CRT is used to assess sensory-motor control disorders in people with mechanical neck pain or impaired cervical kinesthesia. The patient undergoing the test must wear a cycling helmet with a laser pointer attached to the top. The patient is fitted with a mask to block their vision and has to stare at a target 90 cm in front of them. The target is made of graph paper (40 × 40 cm), with the horizontal (x) and vertical (y) axes drawn so that the origin of the axes divides the paper into four equal quadrants. After positioning the laser beam on the target, the patient is asked to memorize this position, perform repeated rotations to the left, and return to the starting position as accurately as possible. After 2 min of rest, the procedure is carried out with rotations to the right. At each stopping point, the beam position on the target is marked with a pen and labeled with the current repetition number and the direction of rotation.
This procedure can be time-consuming and prone to errors. To achieve better results with greater efficiency, this paper proposes modifying the equipment by integrating a laptop and camera to make the CRT analysis easier to carry out while maintaining the operating procedure, which has been validated by the specialist community at the international level. In this way, the procedure can be performed with simple equipment, facilitating its use by various specialists and potentially speeding up data analysis in real time. Because it runs with the support of a computer, the application provides instant analysis of the collected and stored data.

2.2. Motion Sensors

Stereoscopic image-detection technologies use sensors with at least two cameras to calculate depth. Systems based on these sensors can visualize, understand, interact with, and learn from their environment in 3D.
The tool presented in this article evolved from an initial version that used a Kinect sensor and required various studies and analyses. Devices similar to the Kinect, such as the Microsoft Azure Kinect (Microsoft, Redmond, WA, USA), Intel RealSense (Intel, Santa Clara, CA, USA), and ASUS Xtion (Asus, Taipei, Taiwan), which appeared after the Kinect, enable detailed and accurate data to be retrieved. These devices, which are relatively similar in their technologies, are easy to install, enable data to be analyzed and interpreted quickly, and offer support for different operating systems and programming languages.
Tests were carried out with these sensors as well as with a laptop camera. Analyses of the data from the various cameras and consultations with the experts carrying out the CRTs led to the choice of the laptop camera for the development of our innovative solution.

2.3. Computer Vision and MediaPipe

Computer vision is one of the most innovative and fascinating areas of modernization, as it allows us, for example, to create applications with the ability to perceive the world around us. These applications involve using sensors and developing algorithms and machine-learning models to perceive an environment and capture it. As [13,22] stated, computer vision captures, processes, and interprets light rays from the outside world.
Image-analysis applications have shown great potential in the medical field, as they help significantly in the diagnosis of various diseases. Their use can further enhance the ability to analyze complex multimodal image data and improve the efficiency of diagnosis [21]. The possibilities are even greater when one can perform “dynamic analysis” of images involving gestural navigation or the ability to identify and track the human body. These methods involve carrying out basic machine-learning (ML) tasks, such as manual tracking, which consumes considerable time and resources.
In 2015, Google developers created a tool called TensorFlow Lite [22], an open-source library for ML applications on mobile, embedded, and IoT devices and computers. It facilitates computer vision, allowing developers to run their trained models on devices. This tool facilitated the development of MediaPipe by Google in 2019.
MediaPipe is an open-source, high-precision framework that offers access to various machine-learning models, allowing data capture with hardware acceleration and model optimization [23,24]. It enables the creation of pipelines with ML applied to multiple modalities (audio, video). It has an extensive collection of models for detection and tracking of the human body, wherein the points are normalized in three dimensions. With this approach, inference can be performed on arbitrary data, including perception with modular graphical components, model inference, media-processing algorithms, and data transformation [25]. It can be integrated into any tool using programs in different languages, such as Python, Java, and C/C++. In August 2020, Google provided open access to the code, which attracted the attention of developers of devices that use depth cameras in their applications.
The framework integrates a Face Mesh solution, which performs facial-landmark detection. It detects an individual’s face in the input image in real time and estimates 468 3D facial landmarks (Figure 2). Each landmark comprises x, y, and z values: x and y are normalized to [0.0, 1.0] by the image width and height, while z represents depth, and the closer the object is to the camera, the smaller the value of z.
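As an illustration of this coordinate convention (our own sketch, not code from the MediaPipe distribution), a normalized landmark can be converted to pixel coordinates as follows:

```javascript
// Each Face Mesh landmark carries normalized x/y in [0.0, 1.0] plus a depth z.
// Scaling by the frame dimensions recovers pixel coordinates.
function toPixels(landmark, frameWidth, frameHeight) {
  return {
    x: landmark.x * frameWidth,   // 0.0 = left edge, 1.0 = right edge
    y: landmark.y * frameHeight,  // 0.0 = top edge, 1.0 = bottom edge
    z: landmark.z,                // smaller z means closer to the camera
  };
}
```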
As reported by [26,27], MediaPipe has been applied as a sensor for face detection on mobile devices. Advances in camera technologies have enabled greater integration into wearable platforms, demonstrating great potential for exploration. The tool proposed in this article is one possible use of MediaPipe combined with a laptop camera. The laptop camera has an HD resolution of 1280 × 720 and can capture video at 30 fps. MediaPipe processes input at a maximum resolution of 640 × 480 at 30 fps, so testing with the laptop camera was reasonable, and the camera proved suitable for the system under development.

2.4. Data Analysis

The measurement process carried out with the proposed tool follows the principles of the CRT. An individual wearing a helmet with a laser attached is positioned in front of a laptop with a sheet of graph paper positioned above the laptop screen, as shown in Figure 3. To meet the system’s objectives, experiments and tests were carried out to determine the best distance between the camera and the individual, and a distance of 80 cm was chosen.
As shown in Figure 2, MediaPipe Face Mesh detects an individual’s face in an input image and estimates 468 3D facial reference points. In the application developed, only point 94 (Figure 4), which corresponds to the center point of the nose (Nose Bottom) and is mentioned in [25], is used.
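A minimal sketch of how this landmark can be read with the MediaPipe Face Mesh JavaScript solution is shown below; the option values are illustrative defaults, not necessarily the exact configuration used in the tool:

```javascript
import { FaceMesh } from '@mediapipe/face_mesh';

const faceMesh = new FaceMesh({
  // Fetch the model assets from the public MediaPipe CDN.
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`,
});
faceMesh.setOptions({
  maxNumFaces: 1,               // the CRT involves a single patient
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});

faceMesh.onResults((results) => {
  if (!results.multiFaceLandmarks?.length) return;
  // Landmark 94 ("Nose Bottom"): normalized x, y in [0.0, 1.0] plus depth z.
  const nose = results.multiFaceLandmarks[0][94];
  console.log(nose.x, nose.y, nose.z);
});
```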

3. The Developed Tool

This section presents the tool that was developed. It shows the process it uses, one of its screens, and the procedure for collecting data.

3.1. System Architecture

Figure 5 shows the working principle of the tool developed with a standard RGB HD notebook camera and the program designed in JavaScript. Initially, an image is captured by the laptop camera; subsequently, using the MediaPipe application, the face is identified and the points are captured. The software then processes these data and indicates the distance between the first and second readings. These obtained values are visualized and presented by the software in the form of data and graphs immediately after the position of the nose has been captured. They can also be stored in a file to be analyzed later.
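The capture loop in Figure 5 could be realized with the companion @mediapipe/camera_utils helper, feeding webcam frames into the faceMesh instance sketched in Section 2.4; this is a simplified illustration under those assumptions, not the authors’ exact implementation:

```javascript
import { Camera } from '@mediapipe/camera_utils';

const videoElement = document.querySelector('video');
const readings = []; // successive captures of the nose position (point 94)

// Push every webcam frame through Face Mesh; onResults fires once per frame.
const camera = new Camera(videoElement, {
  onFrame: async () => { await faceMesh.send({ image: videoElement }); },
  width: 1280,
  height: 720,
});
camera.start();

// Called when a reading is triggered from the panel: store the latest
// nose position so the distance to the previous reading can be computed.
function captureReading(nose) {
  readings.push({ x: nose.x, y: nose.y });
}
```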

3.2. Current Version of the Tool

The system home screen shows the options “Home”, “Search-FaceMesh”, and “FaceMesh”. The “Home” option always returns to the home screen. The “FaceMesh” option gives access to the screen shown in Figure 6. Only the “Get Camera” option (shown on the left) is displayed initially. After the camera has been selected, the Panel screen (shown on the right) appears. From there, one can define the direction of rotation (the “Selected Test” option) and the number of readings (the “Average Points” option) that must be performed. One can access the data-storage and reading-configuration options on the same screen. Once the direction of rotation and the number of readings have been defined, the “Start” button must be activated for the readings to be taken.
To implement the test, an individual is asked to wear a helmet (Figure 3) and to position the light beam at a point on the sheet of graph paper; this position is subsequently stored. Then, the individual is asked to move their head to position the light beam approximately 5 cm away from the starting point (to the left or right—as chosen), and the new point is captured. The process is repeated until all the measurements have been taken.
A distance of 5 cm was defined based on references [3,5], which noted that an angle greater than 4 degrees (obtained with a 5 cm displacement) indicates that the individual may have problems with cervicocephalic kinesthetic sensitivity. Although the cited studies place the subject 90 cm from the sheet of graph paper, after tests with MediaPipe, a distance of 80 cm from the face to the camera was chosen, preserving the 4-degree angular-displacement criterion.
To better understand the calculations made by the tool, Figure 7 presents starting point A and ending point B on the sheet of graph paper. These points are used in the calculations performed by the tool. The vector AB represents the distance covered by the laser point on the target during one movement of the patient (to the left or to the right). From the analysis of the model, Equation (1) is introduced, as follows:
b² = (xB − xA)² + (yB − yA)²    (1)
where b is the distance (cm), (xA, yA) are the coordinates of starting point A, and (xB, yB) are the coordinates of end point B.
The length b indicates the distance between the readings taken when the subject’s head moves, capturing the initial and the final point after the movement. As the points can be captured at different positions, for each of the measurements taken, the distance between the first and second points is calculated (xA and yA indicate the reading of the first point, and xB and yB indicate the reading of the second point) to enable analyses to be performed with the CRT.
With the value of the hypotenuse h, computed from the camera distance c = 80 cm and the displacement b as h = √(c² + b²), one can calculate the sine of the angle of displacement, sin(α) = b/h, and then the angle itself (Figure 8).
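Putting Equation (1) and the angle step together gives the sketch below. The factor converting the normalized distance b to centimeters is an assumption on our part (the paper does not state it explicitly), and c = 80 cm is the chosen face-to-camera distance:

```javascript
const C_CM = 80;        // face-to-camera distance chosen for the setup
const SCALE_CM = 1000;  // hypothetical calibration factor: normalized units -> cm

// Distance and angle between two captured nose positions A and B.
function crtMeasurement(pointA, pointB) {
  const bNorm = Math.hypot(pointB.x - pointA.x, pointB.y - pointA.y); // Equation (1)
  const bCm = bNorm * SCALE_CM;
  const hCm = Math.hypot(C_CM, bCm);                        // h = sqrt(c^2 + b^2)
  const alphaDeg = Math.asin(bCm / hCm) * (180 / Math.PI);  // sin(alpha) = b / h
  return { bCm, hCm, alphaDeg };
}

// Movement 1 of Table 1: b ≈ 4.50 cm, h ≈ 80.13 cm, alpha ≈ 3.2 degrees.
console.log(crtMeasurement({ x: 0.5149, y: 0.6739 }, { x: 0.5104, y: 0.6739 }));
```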
After the readings have been finished, the “Report” button can be used to generate a report on the screen with graphs and tables showing information vital to the specialist. The software performs the calculations and displays the values on a screen. The collected data are presented with an indication, for each test, of the distance between the movements and the read points, the variation on the x and y axes, the value of the sine, and the angle of these points in relation to the nose.

4. Results

Table 1 shows the values for each movement performed. The values refer to point 94 identified by MediaPipe (Figure 4) and correspond to the position of the subject’s nose in relation to the laptop camera.
Figure 9 and Figure 10 indicate the variation in the readings on the x and y axes for each reading obtained according to the movement of the face.
Using these graphs, it is easy to show the specialist the deviation in the patient’s head repositioning. Based on these data, the specialist can immediately propose a diagnosis of cervical proprioceptive dysfunction. In this case, one can observe a fairly consistent shift of the patient’s head to the left (a decrease in the x value in Figure 9) and a deviation of the patient’s head upward (an increase in the y value in Figure 10).
Figure 11 shows the final results presented to the specialist. The image is a digital representation of the graph paper indicating the result of the CRT applied to the patient; it is the equivalent of the squared sheet of paper used in a standard CRT. On the target, a curve composed of 10 points can be observed. These points are the ones represented in Table 1, with coordinates (xA, yA)1 to (xA, yA)10.
This representation provides access to measurements of the displacements that occur during 20 head movements. To allow easier understanding and an absolute measurement of deviation, the first point of the test was defined at coordinates (xA, yA) = (0, 0); all other coordinate values are positioned relative to it.
From this stage of the program, it is possible to store the obtained points in a file using the “Export” button. If the specialist has identified a problem during data collection, it is possible to restart the process using the “Reset” button. If the reading information is satisfactory, a spreadsheet (.csv file) can be generated with the data obtained.
To generate the file name, information about the individual (name, age, and sex) to whom the test was applied is requested. These data and a code for the meaning of the reading performed make up the file name in the form name-age-sex-codxx.csv. The user can also enter more information that is necessary to identify the reading and the individual. With the stored data, it is possible to search the file names and review the measurements taken when necessary for a new analysis or comparison.
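In a browser, the export step can be implemented with a Blob download; the sketch below shows one way to produce the name-age-sex-codxx.csv file described above (the column layout is illustrative, since the exact exported fields are defined by the tool):

```javascript
// Build the CSV text and trigger a download named name-age-sex-codXX.csv.
function exportCsv(subject, rows) {
  const header = 'movement,xA,yA,xB,yB,b_cm,alpha_deg';
  const lines = rows.map((r, i) =>
    [i + 1, r.a.x, r.a.y, r.b.x, r.b.y, r.bCm.toFixed(4), r.alphaDeg.toFixed(4)].join(','));
  const blob = new Blob([[header, ...lines].join('\n')], { type: 'text/csv' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${subject.name}-${subject.age}-${subject.sex}-cod${subject.code}.csv`;
  link.click();
  URL.revokeObjectURL(link.href);
}
```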
It is essential to evaluate the measurement quality of our system compared with the traditional CRT method. Figure 12 shows the results of an experiment wherein ten measurements, using a 5 cm reference, were taken simultaneously with the traditional CRT method and with our system. The proposed solution systematically yielded a distance of less than 5 cm, and a mean difference of 10% between the two methods was calculated.

5. Discussion

These initial results are very encouraging, but there is still room for improvement.
Comparative tests between the different types of camera could be carried out in greater depth to assess the advantages and disadvantages of each. As the specialists are very keen on simplicity and ease of understanding with regard to the equipment made available for their patient assessments, the use of a laptop camera was highly favored. Nevertheless, it would be interesting to analyze how different laptop camera qualities impact the accuracy of the system.
Extensive tests were carried out to verify the impact of the color of the backdrop behind the patient, and no influence of color was detected. No tests have yet been carried out to verify the impact of brightness or of patients with peculiar facial features; such tests can be done in future studies.
According to the data obtained, it was verified that the method can identify the position of the face between the initial and final movements in the same way as when a CRT is applied to a patient in a controlled environment.
The manner in which these data should be analyzed will be defined by the expert. However, as shown, the tool facilitates both the application of the CRT and the formalization of the captured data, and it then stores the data in a simple location on the computer, facilitating the work of physical therapists, which was one of the project’s goals. For the CRT procedure, the installation of the laptop is very easy, and the measurements can be carried out in the practitioner’s office. The practitioner will need only one measurement tool to position the patient at the proper distance from the camera; better still, a tool for measuring the distance between the camera and the patient could be configured in our system, giving the practitioner and the patient a very easy-to-understand visual indicator. With their eyes closed or while wearing a blindfold, the patient, sitting on a chair, will be able to make head movements while listening to instructions given by the healthcare professional or issued automatically by our system.
Point 94 of the MediaPipe Face Mesh reference points is used for this research. In future tests using the present system, the researchers plan to use at least three points to evaluate the quality of the proposal.

6. Conclusions

This article presents a tool to help physical therapists work with people who experience chronic neck pain due to problems with cervical proprioceptive dysfunction and describes how the tool was designed and developed. This paper describes how the face positioning is captured and which calculations are implemented to interpret the data captured by the tool. The research and development carried out and briefly presented here were fundamental to developing the studies that determined the sensor and framework best suited to the tool.
The tools currently used for the CRT are not very expensive. The proposed system improves on the current one not only in terms of simplicity of implementation (no helmet, no more targets on the wall, no laser, no more marking with a pen, no need for a dedicated room), but also, and above all, in the post-processing of the collected information. The innovative tool allows the practitioner to immediately obtain the data needed to make a diagnosis.
It is also essential to continue research using Google MediaPipe, for example, as an analysis tool for measuring human joint angles. Research in this direction could facilitate the creation of new tools for analyzing parts of the human body.
In addition to using the tool in CRTs, another possible application is aiding people who need physical therapy, for example, after a hand or arm surgery. A new version of the tool could help determine joint angles and thus be used to identify the presence or absence of dysfunctions and quantify joint-angle limitations.
In [28], the aim of the study was to investigate the effect of supra-threshold electrotherapy on pain level, subjective feeling of disability, and spinal mobility in patients with chronic spinal pain. In that study, 11 men and 24 women with a mean age of 49 years were randomly divided into three groups. Cervical and lumbar range of motion (ROM), as well as disability in daily life, were investigated for all the patients before and after the electrotherapy sessions. Only questionnaires were used for evaluation after the supra-threshold electrotherapy sessions. With such an intuitive and effective tool to evaluate cervical and lumbar range of motion, researchers would be able to add quantitative data to the study.

Author Contributions

Conceptualization, W.d.A.; Methodology, J.C.F.J., G.T. and G.C.; Software, W.d.A.; Validation, J.C.F.J., G.T. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zennaro, S.; Munaro, M.; Milani, S.; Zanuttigh, P.; Bernardi, A.; Ghidoni, S.; Menegatti, E. Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar] [CrossRef]
  2. Moley, J.P. Dor no Pescoço. Manual MSD Versão para Profissionais de Saúde. 2022. Available online: https://www.msdmanuals.com/pt/profissionalrev (accessed on 2 July 2024). (In Portuguese).
  3. Rix, G.D.; Bagust, J. Cervicocephalic kinesthetic sensibility in patients with chronic, nontraumatic cervical spine pain. Arch. Phys. Med. Rehabil. 2001, 82, 911. [Google Scholar] [CrossRef] [PubMed]
  4. Pinsault, N.; Vuillerme, N.; Pavan, P. Cervicocephalic Relocation Test to the Neutral Head Position: Assessment in Bilateral Labyrinthine-Defective and Chronic, Nontraumatic Neck Pain Patients. Arch. Phys. Med. Rehabil. 2008, 89, 2375–2378. [Google Scholar] [CrossRef] [PubMed]
  5. Revel, M.; Andre-Deshays, C.; Minguet, M. Cervicocephalic kinesthetic sensibility in patients with cervical pain. Arch. Phys. Med. Rehabil. 1991, 72, 288–291. [Google Scholar] [PubMed]
  6. Raizah, A.; Reddy, R.S.; Alshahrani, M.S.; Gautam, A.P.; Alkhamis, B.A.; Kakaraparthi, V.N.; Ahmad, I.; Kandakurti, P.K.; ALMohiza, M.A. A Cross-Sectional Study on Mediating Effect of Chronic Pain on the Relationship between Cervical Proprioception and Functional Balance in Elderly Individuals with Chronic Neck Pain: Mediation Analysis Study. J. Clin. Med. 2023, 12, 3140. [Google Scholar] [CrossRef] [PubMed]
  7. Won, Y.; Latip, H.; Aziz, M. The reliability and validity on measuring tool of cervical range of motion: A review. Sport. Med. Inj. Care 2019, 1, 001. [Google Scholar] [CrossRef] [PubMed]
  8. Oliveira-Souza, A.I.S.; Carvalho, G.F.; Florêncio, L.L.; Fernández-de-Las-Peñas, C.; Dach, F.; Bevilaqua-Grossi, D. Intrarater and interrater reliability of the flexion rotation test and cervical range of motion in people with migraine. J. Manip. Physiol. Ther. 2020, 43, 874–881. [Google Scholar] [CrossRef] [PubMed]
  9. Rahnama, L.; Saberi, M.; Kashfi, P.; Rahnama, M.; Karimi, N.; Geil, M.D. Effects of Two Exercise Programs on Neck Proprioception in Patients with Chronic Neck Pain: A Preliminary Randomized Clinical Trial. Med. Sci. 2023, 11, 56. [Google Scholar] [CrossRef] [PubMed]
  10. Grondin, F.; Freppel, S.; Jull, G.; Gérard, T.; Caderby, T.; Peyrot, N. Fat Infiltration of Multifidus Muscle Is Correlated with Neck Disability in Patients with Non-Specific Chronic Neck Pain. J. Clin. Med. 2022, 11, 5522. [Google Scholar] [CrossRef] [PubMed]
  11. Guidetti, L.; Placentino, U.; Baldari, C. Reliability and Criterion Validity of the Smartphone Inclinometer Application to Quantify Cervical Spine Mobility. Clin. Spine Surg. 2017, 30, E1359–E1366. [Google Scholar] [CrossRef] [PubMed]
  12. Dror, B.; Yanai, E.; Frid, A.; Peleg, N.; Goldenthal, N.; Schlesinger, I.; Hel-Or, H.; Raz, S. Automatic Assessment of Parkinson’s Disease From Natural Hands Movements Using 3D Depth Sensor, Convention of Electrical and Electronics Engineers in Israel. In Proceedings of the IEEE International Conference on Computer Vision, Columbus, OH, USA, 23–28 June 2014. [Google Scholar] [CrossRef]
  13. Shao, D.; Liu, C.; Tsow, F. Noncontact Physiological Measurement Using a Camera: A Technical Review and Future Directions. ACS Sens. 2021, 6, 321–334. [Google Scholar] [CrossRef] [PubMed]
  14. Li, Q.; Wang, Y.; Sharf, A.; Cao, Y.; Tu, C.; Chen, B.; Yu, S. Classification of gait anomalies from kinect. Vis. Comput. 2018, 34, 229–241. [Google Scholar] [CrossRef]
  15. Castro, M.; Xavier, J.; Rosa, P.; de Oliveira, J. Interação por Rastreamento de Mão em Ambiente de Realidade Virtual. In Anais Estendidos do XXII Simpósio de Realidade Virtual e Aumentada; Sociedade Brasileira de Computação: Porto Alegre, RS, Brazil, 2020; pp. 44–48. [Google Scholar]
  16. McDuff, D. Camera Measurement of Physiological Vital Signs. ACM Comput. Surv. 2023, 55, 1–40. [Google Scholar] [CrossRef]
  17. Hanke, S.; Sandner, E.; Stainer-Hochgatterer, A.; Tsiourti, C.; Braun, A. The technical specification and architecture of a virtual support partner. In Proceedings of the Workshop and Poster Papers of the European Conference on Ambient Intelligence 2015 (AmI-15), Athens, Greece, 11–13 November 2015; Koutkias, V., Ed.; CEUR Workshop Proceedings, Vol. 1528; CEUR-WS.org: Bologna, Italy, 2015. Available online: http://ceur-ws.org/Vol-1528/paper4.pdf (accessed on 17 July 2024).
  18. Brook, G.; Gillian, B.; Dan, J.; Dadirayi, M.; Patrick, O.; Lynn, R. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson’s disease. Gait Posture 2014, 39, 1062–1068. [Google Scholar] [CrossRef]
  19. Regazzoni, D.; de Vecchi, G.; Rizzi, C. RGB cams vs RGB-D sensors: Low cost motion capture technologies performances and limitations. J. Manuf. Syst. 2014, 33, 719–728. [Google Scholar] [CrossRef]
  20. Ekambaram, D.; Ponnusamy, V. Real-time AI-assisted visual exercise pose correctness during rehabilitation training for musculoskeletal disorder. J. Real-Time Image Proc. 2024, 21, 2. [Google Scholar] [CrossRef]
  21. Brito, E.N.D.B.; Figueiredo, B.Q.; Souto, D.N.; Nogueira, J.F.; Melo, A.L.S.C.; Silva, I.T.; Oliveira, I.P.; Almeida, M.G. Artificial Intelligence in the diagnosis of neurodegenerative diseases: A systematic literature review. In Research, Society and Development; CDRR Editors: Vargem Grande Paulista, Brazil, 2021; Volume 10, ISSN 2525-3409. [Google Scholar] [CrossRef]
  22. MEDIAPIPE, Customizable, Cross-Platform ML Solutions. 13 November 2023. Available online: https://github.com/google/mediapipe (accessed on 2 July 2024).
  23. Lugaresi, C.; Tang, J.; Nash, H.; Mcclanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.; Yong, M.G.; Lee, J.; et al. MediaPipe: A Framework for Building Perception Pipelines, Google Research. arXiv 2019, arXiv:1906.08172v1. [Google Scholar] [CrossRef]
  24. Boesch, G. TensorFlow Lite—Real-Time Computer Vision on Edge Devices (2024). Available online: https://viso.ai/edge-ai/tensorflow-lite/ (accessed on 2 July 2024).
  25. Boesch, G. MediaPipe: Google’s Open Source Framework for ML Solutions (2024 Guide). Available online: https://viso.ai/computer-vision/mediapipe/ (accessed on 2 July 2024).
  26. Halder, A.; Tayade, A. Real-time vernacular sign language recognition using mediapipe and machine learning. Int. J. Res. Publ. Rev. 2021, 2, 9–17. [Google Scholar]
  27. Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-device real-time body pose tracking. arXiv 2020, arXiv:2006.10204. [Google Scholar]
  28. Naka, A.; Kotz, C.; Gutmann, E.; Pramhas, S.; Schukro, R.P.J.; Ristl, R.; Schuhfried, O.; Crevenna, R.; Sator, S. Effect of Regular Electrotherapy on Spinal Flexibility and Pain Sensitivity in Patients with Chronic Non-Specific Neck Pain and Low Back Pain: A Randomized Controlled Double-Blinded Pilot Trial. Medicina 2023, 59, 823. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Neck proprioception (repositioning error test) [5].
Figure 2. MediaPipe Face Mesh face reference points [23]. The red circle indicates the points representing the nose.
Figure 3. Individual with helmet positioned in front of the notebook [taken by the author].
Figure 4. Face reference point 94 [23].
Figure 5. System architecture.
Figure 6. Data-capture screen [taken by the author].
Figure 7. b distance calculation.
Figure 8. Angle-calculation procedure.
Figure 9. Positional variations in the x direction.
Figure 10. Positional variations in the y direction.
Figure 11. Target graph.
Figure 12. Error-measurement calculation: CRT method vs. proposed system.
Table 1. Data collected from the CRT test for 10 movements of the patient’s head.

| Movement | Point | 94X | 94Y | b (normalized) | c (cm) | b (cm) | h (cm) | Angle α (degrees) |
|---|---|---|---|---|---|---|---|---|
| 1 | (xA, yA)1 | 0.5149 | 0.6739 | 0.00450018 | 80 | 4.50017777 | 80.1265 | 3.22641877 |
|   | (xB, yB)1 | 0.5104 | 0.6739 | | | | | |
| 2 | (xA, yA)2 | 0.5144 | 0.6742 | 0.00440454 | 80 | 4.40454311 | 80.1212 | 3.15771289 |
|   | (xB, yB)2 | 0.5100 | 0.6740 | | | | | |
| 3 | (xA, yA)3 | 0.5147 | 0.6738 | 0.00450587 | 80 | 4.50587394 | 80.1268 | 3.2305113 |
|   | (xB, yB)3 | 0.5102 | 0.6736 | | | | | |
| 4 | (xA, yA)4 | 0.5139 | 0.6745 | 0.004219 | 80 | 4.21900462 | 80.1112 | 3.02444391 |
|   | (xB, yB)4 | 0.5097 | 0.6749 | | | | | |
| 5 | (xA, yA)5 | 0.5128 | 0.6742 | 0.00460109 | 80 | 4.60108683 | 80.1322 | 3.29892391 |
|   | (xB, yB)5 | 0.5082 | 0.6743 | | | | | |
| 6 | (xA, yA)6 | 0.5123 | 0.6743 | 0.00444072 | 80 | 4.44072066 | 80.1232 | 3.1837025 |
|   | (xB, yB)6 | 0.5079 | 0.6749 | | | | | |
| 7 | (xA, yA)7 | 0.5130 | 0.6746 | 0.00503587 | 80 | 5.03587132 | 80.1583 | 3.61144855 |
|   | (xB, yB)7 | 0.5080 | 0.6752 | | | | | |
| 8 | (xA, yA)8 | 0.5113 | 0.6740 | 0.00465601 | 80 | 4.65600687 | 80.1354 | 3.33838946 |
|   | (xB, yB)8 | 0.5067 | 0.6747 | | | | | |
| 9 | (xA, yA)9 | 0.5102 | 0.6740 | 0.0043195 | 80 | 4.31950229 | 80.1165 | 3.09662544 |
|   | (xB, yB)9 | 0.5059 | 0.6744 | | | | | |
| 10 | (xA, yA)10 | 0.5098 | 0.6740 | 0.00433133 | 80 | 4.33132774 | 80.1172 | 3.10511961 |
|    | (xB, yB)10 | 0.5055 | 0.6745 | | | | | |
