Article

Attachable Inertial Device with Machine Learning toward Head Posture Monitoring in Attention Assessment

Ying Peng, Chao He and Hongcheng Xu
1 Normal College of Liupanshui, Liupanshui 553000, China
2 School of Mechano-Electronic Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Micromachines 2022, 13(12), 2212; https://doi.org/10.3390/mi13122212
Submission received: 15 November 2022 / Revised: 7 December 2022 / Accepted: 10 December 2022 / Published: 14 December 2022

Abstract
Monitoring head posture is crucial for interactive learning, because it provides feedback on learners’ attention, especially amid the rapid expansion of digital teaching during the COVID-19 pandemic. However, conventional computer vision-based monitoring still struggles with multi-degree-of-freedom estimation of head posture, owing to low-angle annotation and limited training accuracy. Here, we report a fully integrated attachable inertial device (AID) that comfortably monitors in situ head posture at the neck and provides a machine learning-based assessment of attention. The device consists of a stretchable inertial sensing unit, a fully integrated circuit-based system, and mechanically compliant encapsulation. Due to its mechanical flexibility, the device can be seamlessly attached to the epidermis of the neck without frequent user interaction, and wirelessly supports six-axis inertial measurement, thereby providing multidimensional tracking of individual posture. These head postures (40 types) are then divided into 10 rotation actions that correspond to situations common in daily teaching activities. Benefiting from a 2D convolutional neural network (CNN)-based machine learning model, the classification and prediction of head postures can be used to analyze and infer attention behavior. The results show that the proposed 2D CNN-based method effectively distinguishes head motion postures, with a high accuracy of 98.00%, and three actual postures were successfully verified and evaluated in a predefined attention model. Inertial monitoring and attention evaluation based on attachable devices and machine learning have potential for learning feedback and planning for learners.

1. Introduction

Online learning became an alternative to conventional teaching during the COVID-19 pandemic, and head posture is a key index that is closely correlated with learners’ attention [1,2,3,4,5,6,7]. Continuous monitoring of head motions can therefore provide direct recognition of, and feedback on, learning attention, helping learners maintain efficient and comfortable experiences during long, monotonous instruction. Currently, three primary approaches have been used to track head posture. The first applies deep learning-based facial feature recognition to infer head posture for attention assessment. However, changes in both the learner and the learning scene limit the feature extraction of face images, which establishes only an indirect relation between images and attention [8,9]. An alternative route is to use an eye-tracking instrument to collect the learner’s eye-movement data for long-term analysis of head posture; such methods generally require sophisticated, expensive instruments and can even disturb the subject’s learning state [10,11]. In addition, the learner’s head posture can be obtained from continuous image streams. However, image quality is sensitive to the angle and position of the camera installation, resulting in low-angle annotation and limited training accuracy [12,13,14,15,16]. Continuous and robust monitoring of head posture is therefore urgently needed to obtain a reliable analysis of attention.
To date, flexible electronics have provided an inspiring option for continuous motion monitoring [17,18,19,20]. In particular, for monitoring head posture, a portable inertial device can support noninvasive, long-term, wireless in-situ measurement of weak movements and rotations, building an intimate relationship between posture features and inertial motion. For instance, Liu et al. developed real-time, online inertial tools and devices, including a wearable Human Activity Recognition (HAR) system [21,22], a HAR research pipeline [23], and ASK2.0 [24], for continuous smart monitoring and recognition of various human behaviors. Meanwhile, convolutional neural network (CNN)-based methods have successfully been used to recognize and classify various human signs [25,26,27]. Combining the inertial monitoring of wearable devices with CNN-based feature clustering can therefore provide a convenient and reliable judgment for attention assessment [28,29,30,31]. Compared with estimation from head image streams, this method is not limited by variations in the posture sample, ambient light, or feature extraction; the posture feature is obtained solely from inertial movement and rotation. For example, Mao et al. used 3-axis angular velocity to measure head posture and established a seven-state evaluation of the learner’s attention; because the measured result is low-dimensional and sparse, the designed model is difficult to generalize [32]. Brandt et al. used head movements to drive a wearable camera, establishing a direct relation between the image stream and head movements; however, estimating head posture from visual images limits feature extraction, consequently leading to high analysis error [33]. In addition, Hua [13], Zhang [34], Liu [35,36], and Xu et al. [37] proposed computer vision-based methods for head posture monitoring; their test samples relied solely on manual collection in the lab, which may be too sparse to meet practical needs.
To overcome these obstacles, we developed a fully integrated attachable device that supports in-situ six-axis inertial measurement of head postures at the neck. The attachable inertial device (AID) integrates an MCU, Bluetooth, and Li+ battery units with a gyroscope and an acceleration sensor in a flexible patch that can track the rotation and axial movement of head postures. Due to its mechanical flexibility, the device can be seamlessly attached to the epidermis of the neck without frequent user interaction, and wirelessly supports six-axis inertial measurement, thereby providing multidimensional tracking of individual posture. These head postures (40 types) were divided into 10 actions corresponding to situations common in daily teaching activities. A 2D CNN-based machine learning method classified and predicted the different head postures and was able to analyze and infer the corresponding attention behaviors. The results showed that the 2D CNN-based model could effectively and accurately distinguish learners’ head motion postures, with a high accuracy of 98.00%, and three actual postures were successfully verified and evaluated in a predefined attention model. We expect that this design and validation of a fully integrated attachable device with machine learning paves the way for powerful attention assessment in learning feedback and planning.

2. Method

2.1. Design of the Circuit Schematic Diagram

The inertial unit (MPU-6050, InvenSense) was connected to an 8-bit MCU (ATmega328P, ATMEL) via the I2C communication protocol. The Bluetooth module (CC2640, RF-STAR, China) communicated with the processing center via a serial protocol. The crystal oscillator in the circuit ran at 8 MHz to decrease power consumption, and the entire circuit was powered by a Li+ polymer battery with a voltage of 3.2~3.7 V (Dajia Manyi Technology Co., Ltd., Shenzhen, China).
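Since the Bluetooth module streams the six-axis samples over a serial link, a small host-side reader suffices for data collection. The sketch below is a minimal example under assumed framing: the port name, baud rate, and the 0xAA 0x55 header with six little-endian int16 channels are illustrative assumptions, not the device’s documented protocol.

```python
# Host-side reader for the AID's Bluetooth serial stream (pyserial).
# Port, baud rate, and frame layout are illustrative assumptions.
import struct
import serial

PORT = "/dev/rfcomm0"  # hypothetical Bluetooth serial port
BAUD = 115200          # hypothetical baud rate

def read_frames(ser):
    """Yield (ax, ay, az, gx, gy, gz) tuples of raw int16 samples."""
    while True:
        # Resynchronize on an assumed two-byte 0xAA 0x55 header.
        if ser.read(1) != b"\xAA" or ser.read(1) != b"\x55":
            continue
        payload = ser.read(12)         # 6 channels x 2 bytes each
        if len(payload) == 12:
            yield struct.unpack("<6h", payload)

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    for frame in read_frames(ser):
        print(frame)                   # one six-axis inertial sample
```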

2.2. Fabrication of the Attachable Inertial Device

An initial planar circuit board based on polyimide was fabricated by Shenzhen Muwei Electronic Co., Ltd. (Shenzhen, China). The outline of the stretchable circuit was engraved with a 355 nm UV laser (50 kHz pulse frequency, 300 mm s−1 cutting speed, 5 cutting passes; YLCF65UV, Yuanlu Photoelectric Technology, Wuhan, China). All electronic components and the battery were then soldered in a soldering box at 200 °C for 10 min. Next, the predesigned mold was loaded into a CNC engraving machine (3040, Mikie CNC) at a cutting speed of 22,000 mm min−1 to cut an Al plate with a thickness of 5 mm. Colored Eco-flex elastomer (Smooth-On, Macungie, PA, USA) was poured into the mold to form the bottom packaging with a thickness of 0.8 mm and cured at ambient temperature for 30 min. The soldered circuit was then aligned in the mold, and the remaining elastomer was poured in to cover the circuit; the assembly was placed in a vacuum for 20 min to eliminate air. Finally, a glass plate was pressed onto the uncured Eco-flex to form the resulting device, which was cured at ambient temperature for 30 min.

2.3. Tested Process of Head Postures

The inertial head-posture results were based on continuous motion as participants (2 adult males and 1 adult female; age: 22~30 years, height: 160~190 cm, weight: 45~100 kg) performed specific movements: rightward rotation (θYaw = 10~85°), bow (θPitch = 10~80°), and roll (θRoll = 10~50°). There were 40 movement types in total, corresponding to situations common in daily teaching activities. Each movement at every angle was repeated 1000 times, with an angle interval of 5°. In addition, to improve the robustness of the training model, some special motions were added to the test process, such as simultaneous leftward or frontal deflection, to simulate head motions that frequently occur in daily activities; these movements accounted for around 20.00% of the data. The participants signed an informed consent form, provided as a Consent Form file in the Supporting Information.
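The count of 40 follows directly from the three angle ranges at a 5° interval: 16 yaw, 15 pitch, and 9 roll angles. A short sketch makes the enumeration explicit (the tuple labels are purely illustrative):

```python
# Enumerate the 40 posture types implied by the protocol above:
# yaw 10-85 deg, pitch 10-80 deg, roll 10-50 deg, each at a 5 deg interval.
yaw = list(range(10, 86, 5))     # 16 rightward-rotation angles
pitch = list(range(10, 81, 5))   # 15 bow angles
roll = list(range(10, 51, 5))    # 9 roll angles

postures = ([("yaw", a) for a in yaw]
            + [("pitch", a) for a in pitch]
            + [("roll", a) for a in roll])
assert len(postures) == 40       # 16 + 15 + 9 = 40 movement types
```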

3. Results and Discussion

3.1. Design of the Inertial Device

Long-term measurement of head posture at the human neck requires a device that is comfortable, skin-attachable, wireless, and noninvasive. Figure 1a depicts the electrical and structural layout of the fully integrated device for monitoring head posture. It primarily consists of an integrated MCU, Bluetooth, a Li+ polymer battery, and an inertial unit linked by stretchable interconnects (Figure S1), all encapsulated in an ultra-elastic shell (modulus ≈ 60 kPa). The ultra-elastic encapsulation matches the stiffness of the skin epidermis for wearer comfort. Figure 1b shows an optical image of the fabricated device mounted on a subject’s neck; the inset shows its small size next to a coin. Benefiting from its integration and flexibility, the device can capture continuous neck motions without frequent user interaction. Based on the six-axis inertial data from head postures, the CNN method can be used to train and predict head movement states, thus providing verification in attention assessment (Figure 1c). The flowchart of the complete system, from data acquisition and processing, through posture training and prediction, to real-posture verification, is shown in Figure S2. As shown in Figure 1d, the inertial unit is mainly composed of a gyroscope and an acceleration sensor (Figure S3) connected to processing, controlling, and transmitting units. Figure 1e shows that, in finite element analysis (FEA), the stretchable device withstands 20% uniaxial tensile strain and 45° bending; these conditions are representative of epidermal deformations during physical wearing. As also demonstrated in Figure 1f, the fabricated device can bear severe stretching (20%), twisting (90°), and bending (45°), showing its potential for intimate skin attachability.
Figure 2a shows the hybrid fabrication process of the fully integrated AID. It primarily consists of three key steps: (1) UV-laser engraving of the stretchable outline of the bare circuit; (2) low-temperature reflow soldering; and (3) full encapsulation of the bare circuit in ultra-elastic silicone. Details of the fabrication are given in the Methods section. To observe the influence of encapsulation on signal quality, Eco-flex layers of different thicknesses were used to encapsulate the bare circuit. Figure 2b shows that no observable attenuation of the signal intensity occurred at the inertial units, whereas the Bluetooth signal amplitude decreased with increasing thickness. This decrease can be attributed to attenuation of the electromagnetic wave across a thicker obstruction, but the signal intensity remained sufficient for stable wireless transmission at 2.4 GHz. To further promote attachability close to the skin’s elasticity, the ultra-elastic silicone was compared with other general elastomers (Figure 2c). The result shows that the colored silicone exhibits superior elasticity (~60 kPa), close to that of the skin epidermis (~20 kPa), making the device soft and attachable for long-term wear on skin.

3.2. Inertial Measurements of Head Posture

Figure 3a illustrates neck-posture images tested with the AID and the corresponding results. The neck rotation angle is almost proportional to the amplitude of the inertial motion around the corresponding rotation axis. Moreover, each rotation consists of several inconsistent peaks and different waves, whose frequency spectrum can distinguish every weak vibration. Figure 3b exhibits similar proportional relations between the rotation angle and the inertial response. The head-posture feature can therefore be clearly obtained from the output amplitude of inertial movements and rotations along the yaw and pitch directions. Figure 3c shows the wearable device’s lowest resolution in measurements of acceleration and angular velocity, with the pitch angle of the tested posture at 5°; the corresponding resolutions were 0.25 g and 0.4 rad/s, respectively, showing its superior sensitivity in monitoring inertial motion. Figure 3d,e show the frequency responses of the amplitude and power spectrum of the tested feature obtained via Fourier transform (FT), respectively. The motion response is concentrated around 0.87 Hz, consistent with the physical head movement; other frequency peaks may be attributed to weak vibrations of the head coupling with the wearable device. In addition, for the special posture in which the subject performs a rightward deflection of θYaw = 80° (the coordinate directions are defined in Figure 1b), the inertial results around the corresponding rotation axes are shown in Figure 3f via short-time FT, further demonstrating the long-term measurement stability of the wearable device; the frequency feature is almost identical to the FT result above. Owing to the skeletal complexity of the neck, each movement produces abundant offsets and vibrations, so rotation and movement are never confined to a single direction. As verified in Figure 3g, a head motion is not an individual rotation but a multidimensional motion in space, from which it can be inferred that computer vision-based correlation models suffer obvious sample estimation errors and insufficient training accuracy.
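The FT and short-time FT steps can be reproduced with NumPy and SciPy. The sketch below is a minimal illustration under an assumed 100 Hz sampling rate, with a synthetic 0.87 Hz trace standing in for the recorded yaw-rate signal:

```python
# Frequency analysis sketch: FFT amplitude spectrum plus a short-time FT
# (spectrogram). The sampling rate and the signal are stand-in assumptions;
# substitute the angular-velocity trace recorded by the device.
import numpy as np
from scipy import signal

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
gyro_yaw = np.sin(2 * np.pi * 0.87 * t)      # synthetic 0.87 Hz trace

# Amplitude spectrum: the dominant peak should sit near 0.87 Hz.
spectrum = np.abs(np.fft.rfft(gyro_yaw)) / len(gyro_yaw)
freqs = np.fft.rfftfreq(len(gyro_yaw), 1 / fs)
print("dominant frequency: %.2f Hz" % freqs[np.argmax(spectrum[1:]) + 1])

# Short-time FT, as used to check long-term measurement stability.
f, ts, Sxx = signal.spectrogram(gyro_yaw, fs=fs, nperseg=256)
```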

3.3. Machine Learning in Head Posture Estimation

According to the above tests, each individual head posture yields six-dimensional inertial data. To extract these 1D features, the convolutional neural network randomly reduces the dimension of the six-axis data to simulate two-dimensional feature extraction, without other presetting, achieving end-to-end training and thereby ensuring accuracy and credibility. The multiparameter 2D CNN-based machine learning model is divided into multiple independent blocks, as shown in Figure 4a, corresponding to the pseudocode template in Table S1. All posture features of different positions and deflections are first normalized and read; the feature length and dimension are 1000 and 6, respectively. The data matrix is then reduced in dimension through convolution, nonlinear activation, and max pooling, with the triplet function (Ltr) constraining the features; a dropout function prevents overfitting, and batch normalization normalizes the features. The processed features are classified into 10 classes through the cross-entropy function (Lce). Finally, the features of the different actions are predicted at the fully connected layer with one-hot encoding. Each block undergoes multiple convolutions and pooling operations, lowering the convergence time during training.
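As one plausible reading of the block structure in Figure 4a and the pseudocode of Table S1, the following PyTorch sketch stacks convolution, nonlinear activation, batch normalization, max pooling, and dropout in each independent block, with a fully connected head over the 10 classes; the channel counts, kernel sizes, and dropout rate are assumptions rather than the authors’ exact hyperparameters.

```python
# Sketch of an independent-block 2D CNN over (1000 x 6) inertial windows.
# Hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class PostureBlock(nn.Module):
    """Convolution -> ReLU -> batch norm -> max pooling -> dropout."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=(5, 1), padding=(2, 0)),
            nn.ReLU(),
            nn.BatchNorm2d(c_out),
            nn.MaxPool2d(kernel_size=(2, 1)),  # halve the time axis only
            nn.Dropout(0.3),
        )

    def forward(self, x):
        return self.net(x)

class PostureCNN(nn.Module):
    """Input: (batch, 1, 1000, 6) six-axis windows; output: 10 classes."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.blocks = nn.Sequential(
            PostureBlock(1, 16), PostureBlock(16, 32), PostureBlock(32, 64)
        )
        self.head = nn.Linear(64 * 125 * 6, n_classes)  # 1000 -> 125 after pools

    def forward(self, x):
        z = self.blocks(x)                  # embedding for the triplet loss (Ltr)
        return self.head(z.flatten(1)), z   # logits for cross-entropy (Lce)

model = PostureCNN()
logits, embed = model(torch.randn(4, 1, 1000, 6))
ce = nn.CrossEntropyLoss()       # classification objective, Lce
tri = nn.TripletMarginLoss()     # feature-clustering objective, Ltr
```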
The t-Distributed Stochastic Neighbor Embedding (t-SNE) method clustered all features for visualization, and the confusion matrix plotted the prediction rate, as shown in Figure 4b, from the initial to the 40th iteration. Feature overlap was clearly high early in training, resulting in low prediction accuracy. With further iterations, the features became clustered, reaching an accuracy of 86.80% by the 20th iteration. This is because each independent block consists of multiple convolution and pooling operations that repeatedly extract and classify inner features. Finally, all features became fully separated by the 40th iteration, with a predicted accuracy of 94.50%, indicating the fast convergence of the independent-block design for feature extraction and classification.
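The clustering and confusion-matrix views of Figure 4b can be reproduced with standard tooling; the sketch below is illustrative, with random placeholder embeddings and labels standing in for the trained model’s features.

```python
# Visualization sketch for Figure 4b-style views: t-SNE clustering of the
# learned embeddings plus a normalized confusion matrix.
# The arrays below are random placeholders, not the paper's data.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import confusion_matrix

feats = np.random.randn(200, 64).astype(np.float32)  # placeholder embeddings
y_true = np.random.randint(0, 10, size=200)          # placeholder labels
y_pred = y_true.copy()                               # placeholder predictions

xy = TSNE(n_components=2, perplexity=30).fit_transform(feats)  # 2D cluster map
cm = confusion_matrix(y_true, y_pred, normalize="true")        # prediction rate
```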

3.4. Applications for Attention Assessment

As shown in Figure 5a, the predicted features matched the training with an accuracy of 98.00%. However, some rolling actions retained low convergence accuracy, which may be attributed to (1) few iterations during training, (2) insufficient samples of head postures, or (3) nonstandard rolling postures performed by the subject. Moreover, roll changes are not directly related to head attention, so their effect on the training model can be ignored. The training accuracy exceeded 90.00% after 10 iterations, and the test accuracy exceeded 80.00% after 20 iterations, as shown in Figure 5b, highly consistent with the t-SNE result and further exhibiting the model’s fast convergence. Furthermore, both normalized losses of the objective functions, Ltr and Lce, fell below 0.1 within 20 iterations (Figure 5c), showing the model’s clustering ability. The trained model was also tested on head postures of three subjects who did not participate in the above training, as shown in Figure 5d. When the three subjects performed the special postures of rightward rotation (θY = 40°), slight bow (θP = 10°), and frontal rolling (θR = 30°), the corresponding inertial results were input to the independent-block model driven by the 2D CNN-based machine learning method, and the test accuracy was used to infer the corresponding head posture. As shown in Figure 5e, the test accuracies of the three rotations are highly consistent with those of the training, at a rate over 80.00%; moreover, each tested posture corresponded to the trained accuracy of the posture angle above. These results show that the independent-block machine learning model can accurately distinguish the learner’s head posture. Furthermore, head postures were mapped to normal and alert states during learning, as shown in Figure 5f: only with a yaw angle over 25° and a pitch angle over 15° is head attention expressed as an “Alert!” state, and all other postures are expressed as normal. The three tested postures therefore corresponded to learning attention states of inattentive, attentive, and irrelevant, respectively. In addition, compared with several other human activity recognition methods [16,21,25,38,39,40,41,42], our proposed model exhibits superior abilities, as shown in Table S2, including wearability, wireless data acquisition, and smart recognition. Such wireless attachable patch-based machine learning tracking could pave the way for wearable monitoring and evaluation.
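The predefined rule can be encoded in a few lines. Note one reading issue: the text above says yaw over 25° “and” pitch over 15°, while the worked example (a 40° yaw alone judged inattentive) suggests either threshold alone triggers the alert; the sketch below assumes the latter, which is our labeled assumption rather than the paper’s stated rule.

```python
# Minimal encoding of the predefined attention rule (Figure 5f).
# ASSUMPTION: the two angle thresholds act independently (either one
# triggers the alert), per the worked example; roll is ignored, as the
# text notes it does not bear on attention.
def attention_state(yaw_deg: float, pitch_deg: float) -> str:
    """Map a head posture to the predefined attention state."""
    if yaw_deg > 25 or pitch_deg > 15:
        return "Alert!"
    return "Normal"

assert attention_state(40, 0) == "Alert!"   # rightward rotation: inattentive
assert attention_state(0, 10) == "Normal"   # slight bow: attentive
```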

4. Conclusions

In this study, we developed an attachable device, consisting of a stretchable inertial sensing unit and a fully integrated circuit-based system, that monitors in-situ head postures and provides a machine learning-based assessment of attention. Due to its mechanical flexibility, the device can be seamlessly attached to the epidermis of the neck without frequent user interaction. The inertial unit is composed of a gyroscope and an acceleration sensor with six-axis measurement ability, supporting continuous multidimensional monitoring of each individual head posture. These head postures (40 types) are divided into 10 rotation actions corresponding to situations common in daily teaching activities, and are further classified and predicted with a two-dimensional convolutional neural network-based machine learning model. The trained and predicted results effectively distinguish head motion postures with a high accuracy of 98.00%, and actual postures were successfully verified and evaluated in a predefined attention model. The inertial monitoring and attention evaluation based on attachable devices and machine learning have the potential to provide learning feedback and planning for learners.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mi13122212/s1.

Author Contributions

Conceptualization, Y.P. and H.X.; methodology, H.X.; software, Y.P. and H.X.; validation, Y.P., C.H. and H.X.; writing—review and editing, Y.P., C.H. and H.X.; supervision, H.X.; project administration, C.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Reform Project of Teaching Content and Curriculum System in Guizhou Universities, grant number 2020188.

Acknowledgments

The authors would like to acknowledge the financial support of the Reform Project of Teaching Content and Curriculum System in Guizhou Universities (2020188).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, X.; Li, Y. Interpretation of 2020 Educause Horizon Report™ (Teaching and Learning Edition) and Its Enlightenments: Challenges and Transformation of Higher Education under the Epidemic Situation. J. Distance Educ. 2020, 38, 3–16.
2. Bahasoan, A.N.; Ayuandiani, W.; Mukhram, M.; Rahmat, A. Effectiveness of Online Learning in Pandemic COVID-19. Int. J. Sci. Technol. Manag. 2020, 1, 100–106.
3. Syaharuddin, S.; Mutiani, M.; Handy, M.R.N.; Abbas, E.W.; Jumriani, J. Building Students’ Learning Experience in Online Learning During Pandemic. AL-ISHLAH J. Pendidik. 2021, 13, 979–987.
4. Agarwal, S.; Kaushik, J.S. Student’s Perception of Online Learning during COVID Pandemic. Indian J. Pediatr. 2020, 87, 554.
5. Rasmitadila, R.; Aliyyah, R.R.; Rachmadtullah, R.; Samsudin, A.; Syaodih, E.; Nurtanto, M.; Tambunan, A.R.S. The Perceptions of Primary School Teachers of Online Learning during the COVID-19 Pandemic Period: A Case Study in Indonesia. J. Ethn. Cult. Stud. 2020, 7, 90–109.
6. Yan, Y.; Cai, F.; Feng, C.C.; Chen, Y. University students’ perspectives on emergency online GIS learning amid the COVID-19 pandemic. Trans. GIS 2022, 26, 2651–2668.
7. Baber, H. Determinants of Students’ Perceived Learning Outcome and Satisfaction in Online Learning during the Pandemic of COVID-19. J. Educ. e-Learn. Res. 2020, 7, 285–292.
8. Sun, S.-H.; Zhang, Y.-C.; Wang, C.; Zhang, H.-Y. Evaluation of Students’ Classroom Behavioral State Based on Deep Learning. Comput. Syst. Appl. 2022, 31, 307–314.
9. Zuo, G.; Han, D.; Su, X.; Wang, H.; Wu, X. Research on classroom behavior analysis and evaluation system based on deep learning face recognition technology. Intell. Comput. Appl. 2019, 9, 135–141.
10. Stanley, D. Measuring Attention Using Microsoft Kinect. Master’s Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2013.
11. Bearden, T.S.; Cassisi, J.E.; White, J.N. Electrophysiological Correlates of Vigilance During a Continuous Performance Test in Healthy Adults. Appl. Psychophysiol. Biofeedback 2004, 29, 175–188.
12. Kao, T.C.; Sun, T.Y. Head pose recognition in advanced Driver Assistance System. In Proceedings of the 2017 IEEE 6th Global Conference on Consumer Electronics (GCCE), Nagoya, Japan, 24–27 October 2017.
13. Tordoff, B.; Mayol, W.; Murray, D.; de Campos, T. Head Pose Estimation for Wearable Robot Control. In Proceedings of the British Machine Vision Conference, Cardiff, UK, 2–5 September 2002.
14. Bharatharaj, J.; Huang, L.; Mohan, R.E.; Pathmakumar, T.; Krägeloh, C.; Al-Jumaily, A. Head Pose Detection for a Wearable Parrot-Inspired Robot Based on Deep Learning. Appl. Sci. 2018, 8, 1081.
15. Diaz-Chito, K.; Hernández-Sabaté, A.; López, A.M. A reduced feature set for driver head pose estimation. Appl. Soft Comput. 2016, 45, 98–107.
16. Alioua, N.; Amine, A.; Rogozan, A.; Bensrhair, A.; Rziza, M. Driver head pose estimation using efficient descriptor fusion. EURASIP J. Image Video Process. 2016, 2016, 2.
17. Lu, H. Study on Wearable Vision and Its Application in Visual Assistant of Mobile Intelligent Surveillance. Master’s Thesis, Chongqing University, Chongqing, China, 2011.
18. Tang, Y. Research on Attention Analysis Method Based on Brain-Computer Interface. Master’s Thesis, South China University of Technology, Guangzhou, China, 2021.
19. Jin, J.; Gao, B.; Yang, S.; Zhao, B.; Luo, L.; Woo, W.L. Attention-Block Deep Learning Based Features Fusion in Wearable Social Sensor for Mental Wellbeing Evaluations. IEEE Access 2020, 8, 89258–89268.
20. Pandian, G.S.B.; Jain, A.; Raza, Q.; Sahu, K.K. Digital health interventions (DHI) for the treatment of attention deficit hyperactivity disorder in children—a comparative review of literature among various treatment and DHI. Psychiatry Res. 2021, 297, 113742.
21. Liu, H. Biosignal Processing and Activity Modeling for Multimodal Human Activity Recognition. Doctoral Dissertation, Universität Bremen, Bremen, Germany, 2021.
22. Schultz, T.; Liu, H. A Wearable Real-time Human Activity Recognition System using Biosensors Integrated into a Knee Bandage. In Proceedings of the 12th International Joint Conference on Biomedical Engineering Systems and Technologies, Prague, Czech Republic, 22–24 February 2019; pp. 47–55.
23. Liu, H.; Hartmann, Y.; Schultz, T. A Practical Wearable Sensor-based Human Activity Recognition Research Pipeline. In Proceedings of the International Conference on Health Informatics, Odisha, India, 7–9 December 2022.
24. Hartmann, Y.; Liu, H.; Schultz, T. Interactive and Interpretable Online Human Activity Recognition. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 21–25 March 2022; pp. 109–111.
25. Barbhuiya, A.A.; Karsh, R.K.; Jain, R. Gesture recognition from RGB images using convolutional neural network-attention based system. Concurr. Comput. Pract. Exp. 2022, 34, e7230.
26. Barbhuiya, A.A.; Karsh, R.K.; Jain, R. A convolutional neural network and classical moments-based feature fusion model for gesture recognition. Multimedia Syst. 2022, 28, 1779–1792.
27. Barbhuiya, A.A.; Karsh, R.K.; Jain, R. CNN based feature extraction and classification for sign language. Multimed. Tools Appl. 2020, 80, 3051–3069.
28. Tango, F.; Calefato, C.; Minin, L.; Canovi, L. Moving attention from the road: A new methodology for the driver distraction evaluation using machine learning approaches. In Proceedings of the 2nd Conference on Human System Interactions, Catania, Italy, 21–23 May 2009.
29. Alam, M.S.; Jalil, S.; Upreti, K. Analyzing recognition of EEG based human attention and emotion using Machine learning. Mater. Today Proc. 2021, 56, 3349–3354.
30. Chung, W.H.; Gu, Y.H.; Yoo, S.J. District heater load forecasting based on machine learning and parallel CNN-LSTM attention. Energy 2022, 246, 123350.
31. Zhong, P.; Li, Z.; Chen, Q.; Hou, B. Attention-Enhanced Gradual Machine Learning for Entity Resolution. IEEE Intell. Syst. 2021, 36, 71–79.
32. Wubuliaisan, W.; Yin, Z.; An, J. Development of attention measurement and feedback tool based on head posture. In Proceedings of the Society for Information Technology & Teacher Education International Conference 2021, Online, 29 March 2021; pp. 746–753.
33. Schneider, E.; Villgrattner, T.; Vockeroth, J.; Bartl, K.; Kohlbecher, S.; Bardins, S.; Ulbrich, H.; Brandt, T. EyeSeeCam: An Eye Movement-Driven Head Camera for the Examination of Natural Visual Exploration. Ann. N. Y. Acad. Sci. 2009, 1164, 461–467.
34. Guo, Y. A Study on Students’ Classroom Attention Evaluation Based on Deep Learning. Master’s Thesis, Shanxi Normal University, Xi’an, China, 2020.
35. Wang, X. Research on Head Pose Estimation Method for Learning Behavior Analysis in Smart Classroom. Master’s Thesis, Central China Normal University, Wuhan, China, 2021.
36. Nie, H. Research and Application of Learning Attention Detection Method Combining Head Pose and Gaze Estimation; Central China Normal University: Wuhan, China, 2020.
37. Teng, X. Classroom Attention Analysis System based on Head Pose Estimation. Master’s Thesis, Wuhan University of Science and Technology, Wuhan, China, 2020.
38. LaValle, S.M.; Yershova, A.; Katsev, M.; Antonov, M. Head tracking for the Oculus Rift. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 187–194.
39. Guo, Y.; Zhang, J.; Lian, W. A Head-posture Based Learning Attention Assessment Algorithm. Sci. Technol. Eng. 2020, 20, 5688–5695.
40. Padeleris, P.; Zabulis, X. Head pose estimation on depth data based on particle swarm optimization. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 42–49.
41. Fanelli, G.; Weise, T.; Gall, J. Real time head pose estimation from consumer depth cameras. In Joint Pattern Recognition Symposium; Springer: Berlin, Germany, 2011; pp. 101–110.
42. Meyer, G.P.; Gupta, S.; Frosio, I.; Reddy, D.; Kautz, J. Robust model-based 3D head pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3649–3657.
Figure 1. Design and mechanical deforming ability of the AID. (a) Mechanical structural and electrical schematic diagram of the AID. (b) Optical image of the AID attached to the epidermis of the neck, and a corresponding frontal photo compared with a coin. (c) Attention assessment process via machine learning based on the monitoring of head postures. (d) Electrical block diagram of inertial signal processing and transmitting. (e) FEA strain diagrams of the AID under 20% stretching and 45° bending deformations. (f) Images of the AID in elastic compliance under no load, stretching, twisting, and bending. Scale bar, 1 cm.
Figure 2. Fabrication of the AID. (a) Laser engraving of the stretchable bare circuit and the encapsulation. (b) Variations in the intensity of the inertial and Bluetooth output signals with the thickness of the packaging silicone gel. (c) Tested Young’s moduli of different elastomers and the AID encapsulated by the colored silicone.
Figure 3. Inertial measurement results of head postures. (a) Continuous inertial angular velocity of the corresponding posture around the main axis. (b) Relative amplitude of head movements at different rotation angles. (c) The lowest resolution in the acceleration and angular velocity measurements when the pitch angle of the tested posture is 5°. (d,e) Frequency response of the amplitude and power spectrum of the head posture when θYaw = 80°, respectively. (f) Short-time FT result of the head posture around the main axis when θYaw = 80°. (g) Long-term inertial measurement results of the head posture when θYaw = 80°.
Figure 4. Machine learning for head posture recognition. (a) 2D CNN-based machine learning method. (b) Visualization cluster of the training via t-SNE (top) and the corresponding confusion matrix from the initial to the 40th epoch (bottom).
Figure 5. Applications of learning attention through head posture prediction. (a) Confusion matrix of real and predicted results under different head postures. (b) Training and test accuracy during deep learning. (c) Normalized loss results of the Lce and Ltr functions used for feature extraction and classification. (d) Attention assessment block diagram for three new postures. (e) Attention-tested accuracy results of three different postures. (f) Presupposed attention assessment based on pose angles.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
