*4.2. Limitations of the Proposed Method*

However, the proposed method has several limitations, particularly when students' classroom behaviors change little over time (e.g., writing notes). In such cases, the proposed method cannot efficiently extract the segments of students' motions: because the sensor data exhibit no significant changes during the motion, the warped paths of the DTW algorithm between adjacent windows lie close together, and the segments cannot be extracted successfully. As a result, the proposed VB-DTW algorithm is inefficient for the long-term recognition of predominantly near-static data. In future work, we will explore more efficient ways of achieving precise and valid segment extraction.
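The failure mode described above can be illustrated with a minimal sketch. Assuming a standard dynamic-programming DTW (the `dtw_distance` helper below is illustrative and is not the paper's VB-DTW), the DTW cost between adjacent windows of a near-static signal (sensor noise only) collapses toward zero, whereas adjacent windows of a dynamic motion remain well separated — which is why boundary extraction degrades on near-static data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Accumulate the cheapest warping step (match, insertion, deletion).
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
dynamic = np.sin(t) + 0.05 * rng.standard_normal(50)  # pronounced motion
static = 0.05 * rng.standard_normal(50)               # near-static, noise only

# DTW cost between adjacent half-windows of each signal:
d_dyn = dtw_distance(dynamic[:25], dynamic[25:])
d_sta = dtw_distance(static[:25], static[25:])
print(d_dyn, d_sta)  # adjacent near-static windows are far closer under DTW
```

With the near-static signal, the adjacent-window DTW cost is dominated by noise alone, so no threshold can reliably mark a segment boundary.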

#### **5. Conclusions**

The purpose of this paper is to support auxiliary education by intelligently perceiving student behavior in classroom scenarios through the integration of sensor equipment with AI technology. In this article, an improved algorithm named VB-DTW is proposed for separating valid sensor signals based on the DTW algorithm, and its effectiveness is validated using the Jaccard index. The algorithm can accurately discern between static and dynamic data. In addition, four classical deep learning network structures are compared in terms of classroom behavior classification accuracy. The 1DCNN achieves the highest accuracy, particularly when accelerometer and gyroscope data are aggregated, where the recognition accuracy reaches 100%. In future studies, we anticipate classifying more classroom activities on hardware in real time and achieving multi-modal identification by fusing sensor and visual data.
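For concreteness, the Jaccard index used to validate segmentation can be sketched as follows. This is a minimal illustration, assuming segments are expressed as half-open sample-index ranges; the `(start, end)` values below are hypothetical, not taken from the paper's data:

```python
def jaccard_index(seg_pred, seg_true):
    """Jaccard index between two segments given as (start, end) sample ranges."""
    a = set(range(*seg_pred))  # samples covered by the extracted segment
    b = set(range(*seg_true))  # samples covered by the ground-truth segment
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# An extracted segment overlapping a ground-truth annotation by 90 of 110 samples:
print(jaccard_index((100, 200), (110, 210)))  # → 0.8181...
```

A value of 1.0 indicates the extracted segment coincides exactly with the annotation; values near 0 indicate a failed extraction.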

**Author Contributions:** Conceptualization, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; methodology, H.W., C.G. and H.F.; software, H.W. and C.G.; validation, H.W. and C.G.; formal analysis, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; investigation, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; resources, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; data curation, H.W. and C.G.; writing—original draft preparation, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; writing—review and editing, H.W., C.G., H.F., C.Z.-H.M., Q.W., Z.H. and M.L.; visualization, H.W. and C.G.; supervision, H.F., C.Z.-H.M. and Q.W.; project administration, H.F.; funding acquisition, H.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported by a Dean's Research Fund (2021/22 DRF/SRAS-1/9th), The Education University of Hong Kong. This research was also supported by the Wuxi Taihu Lake Talent Plan Support for Leading Talents in the Medical and Health Profession, China.

**Institutional Review Board Statement:** The ethical review board at the Education University of Hong Kong approved this study (Protocol code: 2021-2022-0417).

**Informed Consent Statement:** Written informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

**Acknowledgments:** The authors would like to acknowledge The Education University of Hong Kong for the support in the provision of experimental sites.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
