PDCG-Enhanced CNN for Pattern Recognition in Time Series Data
Abstract
1. Introduction
2. Background
3. Materials and Methods
3.1. Typical Patterns
3.2. Data Collection
3.3. Labeling Process
3.4. Pattern-Driven Case Generator (PDCG)
3.5. Framework
4. Experiments
4.1. Dataset
4.2. Test Scenarios
5. Results
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Simulation cases generated by the PDCG, broken down by pattern (four typical types and the "Other" type). For each simulation-case size, the first row gives the number of cases per pattern and the second row the corresponding mean (standard deviation) values ³.

| Simulation Cases | W | M | nAn | vVv | Other |
|---|---|---|---|---|---|
| 200 ¹ | 38 ² | 37 | 36 | 39 | 50 |
| | 47.47 (5.2) ³ | 48.11 (4.96) | 70.5 (8.17) | 72.31 (8.59) | 50.72 (5.96) |
| 400 | 95 | 83 | 81 | 79 | 62 |
| | 48.67 (5.01) | 48.58 (5.8) | 71.11 (8.23) | 71.24 (8.23) | 50.26 (6.01) |
| 600 | 130 | 118 | 127 | 117 | 108 |
| | 48.74 (5.72) | 47.49 (5.86) | 71.39 (8.17) | 71.28 (8.56) | 49.5 (6.29) |
| 800 | 156 | 162 | 162 | 154 | 166 |
| | 47.77 (5.82) | 47.88 (5.99) | 72.56 (8.45) | 71.57 (7.86) | 50.19 (5.56) |
| 1000 | 203 | 188 | 180 | 211 | 218 |
| | 48.24 (5.64) | 48.04 (5.69) | 72.27 (8.14) | 71.09 (8.64) | 48.79 (5.53) |
| 2000 | 375 | 417 | 376 | 403 | 429 |
| | 48.28 (5.74) | 47.7 (5.6) | 72.51 (8.57) | 72.04 (8.7) | 49.56 (5.8) |
| 4000 | 802 | 777 | 787 | 843 | 791 |
| | 48.1 (5.77) | 48.08 (5.76) | 71.97 (8.55) | 72.46 (8.6) | 49.64 (5.77) |
| 6000 | 1174 | 1224 | 1121 | 1250 | 1231 |
| | 48.09 (5.61) | 48.1 (5.58) | 71.89 (8.48) | 71.6 (8.39) | 49.49 (5.88) |
| 8000 | 1651 | 1582 | 1595 | 1621 | 1551 |
| | 48.09 (5.66) | 47.98 (5.61) | 72.12 (8.45) | 71.81 (8.51) | 49.37 (5.89) |
| 10,000 | 1992 | 1975 | 1960 | 2052 | 2021 |
| | 48.08 (5.61) | 48.31 (5.71) | 72.02 (8.43) | 71.76 (8.32) | 49.63 (5.68) |
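For readers reproducing this kind of summary, the sketch below shows one way the per-pattern counts and mean (standard deviation) entries could be tabulated from a set of generated cases. The list-of-(label, series) structure is a hypothetical representation, and the assumption that the mean (std.) rows summarize series length follows the "Mean of N" convention of the real-case table below rather than an explicit statement here.

```python
# Hedged sketch: tabulate per-pattern case counts and mean/std of series length.
# The `cases` structure (list of (label, series) pairs) is assumed for illustration.
import numpy as np
from collections import defaultdict

def summarize_cases(cases):
    """Return {label: (count, mean_length, std_length)} for generated cases."""
    lengths = defaultdict(list)
    for label, series in cases:
        lengths[label].append(len(series))
    return {label: (len(v), float(np.mean(v)), float(np.std(v)))
            for label, v in lengths.items()}

# Toy usage with two hypothetical generated cases.
cases = [("W", np.zeros(48)), ("nAn", np.zeros(72))]
print(summarize_cases(cases))  # e.g. {'W': (1, 48.0, 0.0), 'nAn': (1, 72.0, 0.0)}
```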
| Real Cases with Diff. Patterns | W | M | nAn | vVv | Other |
|---|---|---|---|---|---|
| Data source: Phone sensors (SensorData) | | | | | |
| Size of real cases | 100 | 100 | 100 | 100 | 100 |
| Types of data involved | Linear acceleration data (x, y, z axes) from smartphone sensors | | | | |
| Size of time series (noted as N ¹) | Minimum: 35; maximum: 65 | | | | |
| Mean of N | 51.45 | 55.90 | 57.20 | 50.50 | 45.45 |
| Std. of N | 16.3 | 15.1 | 16.1 | 12.2 | 19.4 |
| Data source: Financial market (FinData) | | | | | |
| Size of real cases | 100 | 100 | 100 | 100 | 100 |
| Number of commodities with prices | 44 | 40 | 39 | 37 | 35 |
| Size of time series (noted as N ¹) | Minimum: 20; maximum: 130 | | | | |
| Mean of N | 71.4 | 70.8 | 80.6 | 81.2 | 75.1 |
| Std. of N | 28.7 | 26.8 | 30.3 | 29.6 | 27.5 |
| Test Scenario ID | Method | Patterns to Be Distinguished | Number of Simulation Cases for Training the CNN | Number of Real Cases for Overall Evaluation |
|---|---|---|---|---|
| 1 | Fréchet, DTW | [W, M] | N/A | 200 |
| 2 | Fréchet, DTW | [W, M, nAn] | N/A | 300 |
| 3 | Fréchet, DTW | [W, M, nAn, vVv] | N/A | 400 |
| 4 | Fréchet, DTW | [W, M, nAn, vVv, Other] | N/A | 500 |
| 5 | CNN | [W, M, nAn, vVv, Other] | 200, 400, …, 8000, 10,000, respectively | 500 |
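As a point of reference for Test Scenarios 1 to 4, the sketch below shows one way a DTW-based matcher can assign a pattern label by comparing a series against per-pattern templates. The template construction and the nearest-template decision rule are illustrative assumptions rather than the exact procedure used here; a Fréchet-based variant would only swap out the distance function.

```python
# Minimal sketch of distance-based pattern matching with plain DTW and a
# nearest-template rule. Templates and the 1-nearest-template decision are
# illustrative assumptions, not the paper's exact procedure.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic-programming DTW on 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_by_dtw(series: np.ndarray, templates: dict) -> str:
    """Assign the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(series, templates[label]))

# Toy usage with hypothetical templates for two of the pattern classes.
templates = {
    "W": np.array([1, 0, 1, 0, 1], dtype=float),
    "M": np.array([0, 1, 0, 1, 0], dtype=float),
}
print(classify_by_dtw(np.array([0.9, 0.1, 1.0, 0.2, 0.8]), templates))  # -> "W"
```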
| Layer (Type) | Output Shape | Num. of Parameters |
|---|---|---|
| conv2d (Conv2D) | (None, 62, 62, 64) | 640 |
| max_pooling2d (MaxPooling2D) | (None, 31, 31, 64) | 0 |
| conv2d_1 (Conv2D) | (None, 29, 29, 64) | 36,928 |
| max_pooling2d_1 (MaxPooling2D) | (None, 14, 14, 64) | 0 |
| conv2d_2 (Conv2D) | (None, 12, 12, 64) | 36,928 |
| max_pooling2d_2 (MaxPooling2D) | (None, 6, 6, 64) | 0 |
| conv2d_3 (Conv2D) | (None, 4, 4, 64) | 36,928 |
| flatten (Flatten) | (None, 1024) | 0 |
| dense (Dense) | (None, 64) | 65,600 |
| dense_1 (Dense) | (None, 5) | 325 |
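The layer shapes and parameter counts above can be reproduced with a few lines of Keras. In the sketch below, the 64 × 64 × 1 input and 3 × 3 kernels follow from the shape and parameter arithmetic of the table, while the ReLU/softmax activations and the compile settings are assumptions added only for completeness.

```python
# Keras sketch matching the layer table above. Activations and training setup
# are assumptions; layer types, output shapes, and parameter counts come from
# the table (e.g. 640 = 3*3*1*64 + 64, 1024 = 4*4*64, 65,600 = 1024*64 + 64).
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(64, 64, 1), num_classes=5):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(64, (3, 3), activation="relu"),    # -> (62, 62, 64), 640 params
        layers.MaxPooling2D((2, 2)),                     # -> (31, 31, 64)
        layers.Conv2D(64, (3, 3), activation="relu"),    # -> (29, 29, 64), 36,928 params
        layers.MaxPooling2D((2, 2)),                     # -> (14, 14, 64)
        layers.Conv2D(64, (3, 3), activation="relu"),    # -> (12, 12, 64), 36,928 params
        layers.MaxPooling2D((2, 2)),                     # -> (6, 6, 64)
        layers.Conv2D(64, (3, 3), activation="relu"),    # -> (4, 4, 64), 36,928 params
        layers.Flatten(),                                # -> (1024,)
        layers.Dense(64, activation="relu"),             # 65,600 params
        layers.Dense(num_classes, activation="softmax"), # 325 params
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()  # should reproduce the layer table above
```

Calling `model.summary()` on this reconstruction yields the same output shapes and parameter counts as the table, which serves as a quick consistency check.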
| | Precision ¹ | Recall ² | f1-Score ³ | Support ⁴ |
|---|---|---|---|---|
| Test Scenario 1: 200 real cases for evaluation, [W, M] | | | | |
| Fréchet | | | | |
| W | 0.80 / 0.75 ⁵ | 0.96 / 0.88 | 0.87 / 0.81 | 100 |
| M | 0.95 / 0.85 | 0.76 / 0.70 | 0.84 / 0.77 | 100 |
| accuracy | | | 0.86 / 0.79 | 200 |
| macro avg | 0.88 / 0.79 | 0.88 / 0.79 | 0.88 / 0.79 | 200 |
| weighted avg | 0.88 / 0.79 | 0.88 / 0.79 | 0.88 / 0.79 | 200 |
| DTW | | | | |
| W | 1.00 / 0.98 | 0.94 / 0.98 | 0.97 / 0.98 | 100 |
| M | 0.94 / 0.98 | 1.00 / 0.98 | 0.97 / 0.98 | 100 |
| accuracy | | | 0.97 / 0.98 | 200 |
| macro avg | 0.97 / 0.98 | 0.97 / 0.98 | 0.97 / 0.98 | 200 |
| weighted avg | 0.97 / 0.98 | 0.97 / 0.98 | 0.97 / 0.98 | 200 |
| Test Scenario 5: 10,000 simulation cases for CNN training, 500 real cases for evaluation, [W, M, nAn, vVv, Other] | | | | |
| CNN | | | | |
| W | 0.98 / 0.79 | 0.99 / 0.93 | 0.99 / 0.86 | 100 |
| M | 0.99 / 0.96 | 0.99 / 0.95 | 0.99 / 0.95 | 100 |
| nAn | 0.99 / 0.96 | 1.00 / 0.71 | 1.00 / 0.82 | 100 |
| vVv | 0.96 / 1.00 | 1.00 / 0.74 | 0.98 / 0.85 | 100 |
| Other | 1.00 / 0.70 | 0.94 / 0.95 | 0.97 / 0.81 | 100 |
| accuracy | | | 0.98 / 0.86 | 500 |
| macro avg | 0.98 / 0.88 | 0.98 / 0.86 | 0.98 / 0.86 | 500 |
| weighted avg | 0.98 / 0.88 | 0.98 / 0.86 | 0.98 / 0.86 | 500 |
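The per-class precision, recall, f1-score, and support layout above follows the standard scikit-learn classification report. A minimal sketch of generating one such block is shown below; the labels and predictions are toy placeholders rather than the evaluation data reported in the table.

```python
# Sketch: produce a per-class precision/recall/f1/support report in the same
# layout as the table above. y_true and y_pred are toy placeholders only.
from sklearn.metrics import classification_report

y_true = ["W", "M", "W", "M", "nAn", "vVv", "Other", "nAn"]
y_pred = ["W", "M", "M", "M", "nAn", "vVv", "Other", "nAn"]

print(classification_report(
    y_true, y_pred,
    labels=["W", "M", "nAn", "vVv", "Other"],  # fixed class order, as in the table
    digits=2,
))
```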
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).