Parallel Prediction Method of Knowledge Proficiency Based on Bloom’s Cognitive Theory
Abstract
1. Introduction
- (1) Considering educational psychological theories, BloomCDM models the three hierarchical levels of “knowing”, “understanding”, and “application”, together with students’ proficiency features. BloomCDM employs parallel processing to model the changes in students’ knowledge proficiency under the different cognitive levels examined by questions. It projects the question feature matrix into the cognitive space of each level, represented by level-specific feature parameters, and associates the student proficiency parameters with the question feature parameters according to the assumed relationships.
- (2) To address the challenge of extracting hierarchical structure information from sparse data, a method for discovering higher-order knowledge groups is designed for the specific application scenario described in this paper. This method uncovers the structure of higher-order knowledge groups from the original matrix of answered questions and extracts valuable structural information, providing strong support for knowledge proficiency prediction based on Bloom’s Cognitive Theory.
- (3) Experimental results on two real-world online education datasets demonstrate that BloomCDM effectively models students’ learning and forgetting behaviors, continuously tracks their knowledge levels in real time, and outperforms existing models in predictive performance.
2. Related Work
3. Problem Description and Mathematical Modeling of BloomCDM
Probabilistic Matrix Factorization
4. Knowledge Proficiency Prediction Model Based on Bloom’s Cognitive Theory
4.1. Framework of BloomCDM
4.2. “Knowing” Model
4.3. “Understanding” Model
- Each element within a knowledge group belongs to one and only one knowledge concept.
- There is at least one element within a group.
- There is no overlap between groups.
- (1) Considering the hierarchical nature of “knowing” and “understanding”, a prior is placed on the comprehension matrix.
- (2) A prior is placed on the knowledge proficiency matrix.
- (3) Each non-missing entry of the “understanding”-level score matrix is modeled conditionally on the student and question features.
- (4) An indicator function marks observed entries: it equals 1 when the corresponding entry is non-missing and 0 otherwise (a PMF-style sketch of these assumptions follows this list).
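For concreteness, a minimal PMF-style sketch of assumptions (1)–(4) is shown below; the symbols U_i (student proficiency vector), V_j (comprehension feature vector), R_ij (“understanding”-level score), the variances, and the indicator I_ij are illustrative assumptions and need not match the paper’s exact parameterization.

\[
\mathbf{U}_i \sim \mathcal{N}\!\left(\mathbf{0},\,\sigma_U^{2}\mathbf{I}\right),\qquad
\mathbf{V}_j \sim \mathcal{N}\!\left(\mathbf{0},\,\sigma_V^{2}\mathbf{I}\right),\qquad
p\!\left(\mathbf{R}\mid\mathbf{U},\mathbf{V},\sigma^{2}\right)
=\prod_{i,j}\Big[\mathcal{N}\!\left(R_{ij}\mid\mathbf{U}_i^{\top}\mathbf{V}_j,\,\sigma^{2}\right)\Big]^{I_{ij}},
\]
where \(I_{ij}=1\) if entry \((i,j)\) is observed and \(I_{ij}=0\) otherwise.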
4.4. “Application” Model
5. Model Learning and Prediction
Algorithm 1: The Learning Algorithm of BloomCDM
Input: the observation matrices, the subgroup and parent-group structures, the standard deviations, the learning rate, the number of iterations, and the number of hierarchical levels = 3.
Output: the student feature matrix, the hierarchical feature matrices, and the predicted orthogonal matrix.
1–16. Iteratively update the student feature matrix and the hierarchical feature matrices for the given number of iterations at the given learning rate.
17. // Output feature matrix and prediction matrix
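To make the shape of such a learning procedure concrete, the sketch below shows a minimal stochastic-gradient loop for a single-level PMF-style objective. It is an illustration under assumed names (learn_pmf_sgd, sigma_u, sigma_v) and a single factorization, not BloomCDM’s hierarchical update equations.

```python
import numpy as np

def learn_pmf_sgd(R, mask, k=8, lr=0.01, n_iter=100, sigma_u=1.0, sigma_v=1.0, seed=0):
    """Illustrative SGD loop for a PMF-style objective (not BloomCDM's exact hierarchical updates)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))   # student feature matrix
    V = 0.1 * rng.standard_normal((m, k))   # question feature matrix
    observed = np.argwhere(mask == 1)       # indices of non-missing responses

    for _ in range(n_iter):
        rng.shuffle(observed)               # visit observations in random order
        for i, j in observed:
            ui, vj = U[i].copy(), V[j].copy()
            err = R[i, j] - ui @ vj
            # Gradient step on the regularized squared error of one observation.
            U[i] += lr * (err * vj - ui / sigma_u**2)
            V[j] += lr * (err * ui - vj / sigma_v**2)

    return U, V, U @ V.T                    # feature matrices and predicted score matrix


# Tiny usage example on synthetic responses.
if __name__ == "__main__":
    R = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    mask = np.ones_like(R)
    U, V, R_hat = learn_pmf_sgd(R, mask, k=2, n_iter=200)
    print(np.round(R_hat, 2))
```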
6. High-Order Knowledge Group Structure Detection
Algorithm 2: Higher-Order Knowledge Group Structure Detection
Input: the understanding result matrix, the perplexity, the learning rate, the number of iterations, the neighborhood radius, and minPts; the label set is initialized to “undefined”.
Output: the number of high-order knowledge groups and the label set.
1. D ← t-SNE(understanding result matrix) // Use t-SNE for dimensionality reduction to obtain the low-dimensional data points.
2. for every data point p in D do
3.  if label(p) ≠ undefined then continue // Select an unprocessed point.
4.  N ← RangeQuery(D, p, radius) // Find points that are density-reachable from point p and add them to the neighborhood N.
5.  if |N| < minPts then // If the number of neighbors is less than minPts, point p is temporarily marked as noise.
6.   label(p) ← noise; continue
7.  c ← next group label
8.  label(p) ← c
9.  S ← N \ {p} // Seed set used to expand the current group.
10. for every data point q in S do
11.  if label(q) = noise then label(q) ← c
12.  if label(q) ≠ undefined then continue
13.  N′ ← RangeQuery(D, q, radius)
14.  label(q) ← c
15.  if |N′| < minPts then continue
16.  S ← S ∪ N′
17. end for
18. end for
19. return label, unique(label) // The number of high-order knowledge groups is the number of unique labels.
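The same pipeline can be realized with standard libraries, as sketched below. The function name detect_knowledge_groups and the default parameter values are assumptions for illustration; scikit-learn’s DBSCAN stands in for the hand-written expansion loop of Algorithm 2.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.manifold import TSNE

def detect_knowledge_groups(understanding_matrix, perplexity=30.0,
                            learning_rate=200.0, radius=0.5, min_pts=5):
    """Illustrative t-SNE + DBSCAN pipeline; names and defaults are assumptions."""
    # Step 1: project the understanding result matrix to low-dimensional points with t-SNE.
    points = TSNE(n_components=2, perplexity=perplexity,
                  learning_rate=learning_rate, init="pca",
                  random_state=0).fit_transform(np.asarray(understanding_matrix))
    # Steps 2-18: density-based clustering (DBSCAN) with the given radius and minPts.
    labels = DBSCAN(eps=radius, min_samples=min_pts).fit_predict(points)
    # Step 19: the number of high-order knowledge groups is the number of unique
    # non-noise labels (DBSCAN marks noise points with -1).
    n_groups = len(set(labels) - {-1})
    return n_groups, labels


# Usage on a random matrix (rows = knowledge concepts, columns = latent features).
if __name__ == "__main__":
    M = np.random.default_rng(0).random((120, 16))
    n_groups, labels = detect_knowledge_groups(M, perplexity=15.0, radius=3.0, min_pts=4)
    print(n_groups, labels[:10])
```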
7. Experiment
7.1. Dataset
- (1) Deduplication: Both datasets have a temporal sequence, meaning that for the same question a student may have multiple response records, so the log data form a sequence. This experiment retained only the first response record, treating the first response as the true reflection of the student’s cognition, to ensure the uniqueness of each student’s response.
- (2) Dealing with the long-tail distribution: Both datasets exhibit a long-tail distribution, with some students having very few log entries, indicating low activity in answering questions; this could affect the diagnostic results. In this experiment, students with fewer than 15 log entries and questions with fewer than 15 log entries were filtered out, ensuring that each student and each question has sufficient data for diagnosing the student’s cognition.
- (3) Standardizing response states: Since the HDU dataset has four types of response states (“Compilation Error”, “Timeout”, “Wrong Answer”, “Accepted”), this experiment excluded the first two (“Compilation Error” and “Timeout”) and considered only “Wrong Answer” and “Accepted”. A minimal preprocessing sketch of these three steps follows this list.
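The sketch below shows one way these three preprocessing steps could be expressed with pandas; the column names (user_id, problem_id, timestamp, state) are assumed for illustration and are not the actual schema of the ASSIST or HDU logs, and the order of the steps is one possible choice.

```python
import pandas as pd

def preprocess_logs(logs: pd.DataFrame, min_logs: int = 15) -> pd.DataFrame:
    """Illustrative preprocessing; the column names are assumed, not the datasets' real schema."""
    # (3) Standardize response states: keep only "Wrong Answer" and "Accepted" (HDU-style logs).
    logs = logs[logs["state"].isin(["Wrong Answer", "Accepted"])]
    # (1) Deduplication: keep each student's first response to each question.
    logs = (logs.sort_values("timestamp")
                .drop_duplicates(subset=["user_id", "problem_id"], keep="first"))
    # (2) Long-tail filtering: drop students and questions with fewer than min_logs records.
    logs = logs[logs.groupby("user_id")["problem_id"].transform("size") >= min_logs]
    logs = logs[logs.groupby("problem_id")["user_id"].transform("size") >= min_logs]
    return logs
```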
7.2. Experiment Setting
7.2.1. BloomCDM Configuration
7.2.2. Baseline Methods
- (1) IRT: Item Response Theory (IRT) is a classical cognitive diagnostic model in educational statistics, represented by Equation (21). It models the probability of a student’s response using a one-dimensional latent ability variable and an item feature latent variable.
- (2) MIRT: Multidimensional Item Response Theory (MIRT) is the multidimensional version of the IRT model, shown in Equation (22). It considers the multidimensionality of ability and models the probability of a student’s response under a monotonicity assumption and an ability-independence assumption (standard logistic forms of IRT and MIRT are sketched after this list).
- (3) PMF: Probabilistic Matrix Factorization (PMF) is a widely used algorithm in recommendation systems. It factorizes the response logs into latent feature matrices for students and items.
- (4) QPMF: QPMF [36] is a variant of PMF that introduces the Q-matrix to enhance the interpretability of PMF. Its embedding method uses a Q-matrix-based partial order to emphasize the contribution of the knowledge concepts assessed by items.
- (5) BPR: Bayesian Personalized Ranking (BPR) is a classic algorithm in recommendation systems. BPR combines a likelihood function constructed from partial-order relationships with a prior probability to perform Bayesian analysis of student response logs.
- (6) BloomCDM-RC: BloomCDM-RC is a simplified version of BloomCDM. It considers only the “Remember” and “Comprehension” levels and does not take the “Application” level into account.
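For reference, standard logistic forms of IRT and compensatory MIRT are shown below; these are textbook formulations and may differ in detail from the paper’s Equations (21) and (22).

\[
P\!\left(r_{ij}=1\mid\theta_i\right)=\frac{1}{1+\exp\!\big(-a_j(\theta_i-b_j)\big)},\qquad
P\!\left(r_{ij}=1\mid\boldsymbol{\theta}_i\right)=\frac{1}{1+\exp\!\big(-(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i+d_j)\big)},
\]
where \(\theta_i\) (or \(\boldsymbol{\theta}_i\)) is the student’s latent ability, \(a_j\) (or \(\mathbf{a}_j\)) the item discrimination, and \(b_j\) (or \(d_j\)) the item difficulty or intercept.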
7.3. Results
7.3.1. Analysis of Student Performance Prediction Results
7.3.2. Knowledge Proficiency Diagnosis Task
7.3.3. Visualizing Knowledge Proficiency
8. Conclusions and Future Work
- (1) “Knowing” modeling: Based on the theoretical definition of knowledge, we make assumptions and use a partial order to learn the knowledge features of items from student response data.
- (2) “Understanding” modeling: Following the theoretical definition of comprehension, this level focuses on items assessing the same knowledge concept. It is assumed that when items on the same knowledge concept are answered, a deeper understanding of that concept can be measured. The model constructs a comprehension calculation model specific to each knowledge concept in the form of knowledge groups; this captures features shared among items on the same knowledge and learns the comprehension features of items.
- (3) “Application” modeling: Based on the theoretical definition of application, this level focuses on items assessing similar knowledge concepts. It is assumed that when similar knowledge concepts are assessed, it is easier to discover the inherent connections between knowledge concepts. The model constructs a cross-knowledge-concept model in the form of high-order knowledge groups and learns the application features of items.
- (4) To address the challenge of obtaining hierarchical structure information from sparse data, a high-order knowledge group discovery method is designed for this specific application scenario. It can discover high-order knowledge group structures and mine structural information, providing robust support for proficiency assessment based on Bloom’s cognitive levels.
- (1) While the effectiveness of BloomCDM, designed using probabilistic graphical models, is evident, neural networks have demonstrated strong performance in handling nonlinear problems and feature embeddings, achieving accuracies that probabilistic models may not easily attain. It would be interesting to investigate whether interpretable deep learning models can be constructed; such models could potentially provide better predictions of student performance, building on the knowledge proficiency obtained here.
- (2) It is intriguing that the traditional cognitive diagnostic model MIRT performs well in terms of the AUC metric, even outperforming recommendation models such as BPR and PMF. This phenomenon warrants further investigation. A promising direction for future models could be to augment the MIRT model with knowledge concepts to compute more precise estimates of knowledge proficiency.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Li, T.; Chen, L.; Jensen, C.S.; Pedersen, T.B.; Gao, Y.; Hu, J. Evolutionary Clustering of Moving Objects. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 2399–2411. [Google Scholar]
- Li, T.; Huang, R.; Chen, L.; Jensen, C.S.; Pedersen, T.B. Compression of Uncertain Trajectories in Road Networks. Proc. VLDB Endow. 2020, 13, 1050–1063. [Google Scholar] [CrossRef]
- Li, T.; Chen, L.; Jensen, C.S.; Pedersen, T.B. TRACE: Real-time Compression of Streaming Trajectories in Road Networks. Proc. VLDB Endow. 2021, 14, 1175–1187. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, H.; Liang, X.; Huang, B. Event-Triggered Based Distributed Cooperative Energy Management for Multienergy Systems. IEEE Trans. Ind. Inform. 2019, 15, 2008–2022. [Google Scholar] [CrossRef]
- Li, Y.; Gao, D.W.; Gao, W.; Zhang, H.; Zhou, J. Double-Mode Energy Management for Multi-Energy System via Distributed Dynamic Event-Triggered Newton-Raphson Algorithm. IEEE Trans. Smart Grid 2020, 11, 5339–5356. [Google Scholar] [CrossRef]
- Cui, Z.; Sun, X.; Pan, L.; Liu, S.; Xu, G. Event-based incremental recommendation via factors mixed Hawkes process. Inf. Sci. 2023, 639, 119007. [Google Scholar] [CrossRef]
- Wei, W.; Fan, X.; Li, J.; Cao, L. Model the complex dependence structures of financial variables by using canonical vine. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, Maui, HI, USA, 29 October–2 November 2012; pp. 1382–1391. [Google Scholar]
- Fan, X.; Xu, R.Y.D.; Cao, L.; Song, Y. Learning Nonparametric Relational Models by Conjugately Incorporating Node Information in a Network. IEEE Trans. Cybern. 2017, 47, 589–599. [Google Scholar] [CrossRef] [PubMed]
- Fan, X.; Li, B.; Luo, L.; Sisson, S.A. Bayesian Nonparametric Space Partitions: A Survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, Virtual Event, 19–26 August 2021; pp. 4408–4415. [Google Scholar]
- Fan, X.; Li, B.; Sisson, S.A.; Li, C.; Chen, L. Scalable Deep Generative Relational Models with High-Order Node Dependence. arXiv 2019, arXiv:1911.01535. [Google Scholar]
- Guo, J.; Du, L.; Liu, H.; Zhou, M.; He, X.; Han, S. GPT4Graph: Can Large Language Models Understand Graph Structured Data? An Empirical Evaluation and Benchmarking. arXiv 2023, arXiv:2305.15066. [Google Scholar]
- Du, L.; Chen, X.; Gao, F.; Fu, Q.; Xie, K.; Han, S.; Zhang, D. Understanding and Improvement of Adversarial Training for Network Embedding from an Optimization Perspective. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, Virtual Event, 21–25 February 2022; pp. 230–240. [Google Scholar]
- Bi, W.; Du, L.; Fu, Q.; Wang, Y.; Han, S.; Zhang, D. MM-GNN: Mix-Moment Graph Neural Network towards Modeling Neighborhood Feature Distribution. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, Singapore, 27 February–3 March 2023; pp. 132–140. [Google Scholar]
- Cao, L.; Chen, H.; Fan, X.; Gama, J.; Ong, Y.-S.; Kumar, V. Bayesian Federated Learning: A Survey. arXiv 2023, arXiv:2304.13267. [Google Scholar]
- Chen, H.; Liu, H.; Cao, L.; Zhang, T. Bayesian Personalized Federated Learning with Shared and Personalized Uncertainty Representations. arXiv 2023, arXiv:2309.15499. [Google Scholar]
- Shengquan, Y.; Xiaofeng, W. The Transformation Research of Education Supply in the Era of “Internet +”. Open Educ. Res. 2017, 1, 29–36. [Google Scholar]
- Liu, H.; Zhang, T.; Li, F.; Gu, Y.; Yu, G. Tracking Knowledge Structures and Proficiencies of Students with Learning Transfer. IEEE Access 2021, 9, 55413–55421. [Google Scholar] [CrossRef]
- Gao, W.; Wang, H.; Liu, Q.; Wang, F.; Lin, X.; Yue, L.; Zhang, Z.; Lv, R.; Wang, S. Leveraging Transferable Knowledge Concept Graph Embedding for Cold-Start Cognitive Diagnosis. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Taipei, Taiwan, 23–27 July 2023; pp. 983–992. [Google Scholar]
- Zhou, Q.; Mou, C.; Yang, D. Research Progress on Educational Data Mining: A Survey. J. Softw. 2015, 11, 3026–3042. [Google Scholar]
- de La Torre, J. DINA model and parameter estimation: A didactic. J. Educ. Behav. Stat. 2009, 1, 115–130. [Google Scholar] [CrossRef]
- Cen, H.; Koedinger, K.; Junker, B. Learning Factors Analysis—A General Method for Cognitive Model Evaluation and Improvement. Lect. Notes Comput. Sci. 2006, 4053, 164–175. [Google Scholar]
- Jiang, P.; Wang, X.; Sun, B. Preference Cognitive Diagnosis for Predicting Examinee Performance. In Proceedings of the 2020 IEEE 2nd International Conference on Computer Science and Educational Informatization (CSEI), Xinxiang, China, 12–14 June 2020; pp. 63–69. [Google Scholar]
- Liu, H.-y.; Zhang, T.-c.; Wu, P.-w.; Yu, G. A review of knowledge tracking. J. East China Norm. Univ. (Nat. Sci.) 2019, 05, 1–15. [Google Scholar]
- Cheng, S.; Liu, Q.; Chen, E.; Huang, Z.; Huang, Z.; Chen, Y.; Ma, H.; Hu, G. DIRT: Deep Learning Enhanced Item Response Theory for Cognitive Diagnosis. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 2397–2400. [Google Scholar]
- Guo, X.; Huang, Z.; Gao, J.; Shang, M.; Shu, M.; Sun, J. Enhancing Knowledge Tracing via Adversarial Training. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, 20–24 October 2021; pp. 367–375. [Google Scholar]
- Wei, L.; Li, B.; Li, Y.; Zhu, Y. Time Interval Aware Self-Attention approach for Knowledge Tracing. Comput. Electr. Eng. 2022, 102, 108179. [Google Scholar] [CrossRef]
- Xu, L.; Wang, G.; Guo, L.; Wang, X. Long- and Short-term Attention Network for Knowledge Tracing. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–9. [Google Scholar]
- Ma, Y.; Han, P.; Qiao, H.; Cui, C.; Yin, Y.; Yu, Y. SPAKT: A Self-Supervised Pre-TrAining Method for Knowledge Tracing. IEEE Access 2022, 10, 72145–72154. [Google Scholar] [CrossRef]
- Sun, X.; Zhao, X.; Li, B.; Ma, Y.; Sutcliffe, R.; Feng, J. Dynamic Key-Value Memory Networks with Rich Features for Knowledge Tracing. IEEE Trans. Cybern. 2022, 8, 8239–8245. [Google Scholar] [CrossRef]
- Zhang, N.; Li, L. Knowledge Tracing with Exercise-Enhanced Key-Value Memory Networks. In Proceedings of the Knowledge Science, Engineering and Management: 14th International Conference, KSEM 2021, Tokyo, Japan, 14–16 August 2021; pp. 566–577. [Google Scholar]
- Mao, S.; Zhan, J.; Li, J.; Jiang, Y. Knowledge Structure-Aware Graph-Attention Networks for Knowledge Tracing. In Proceedings of the International Conference on Knowledge Science, Engineering and Management, Singapore, 6–8 August 2022; Volume 1, pp. 309–321. [Google Scholar]
- Gan, W.; Sun, Y.; Sun, Y. Knowledge structure enhanced graph representation learning model for attentive knowledge tracing. Int. J. Intell. Syst. 2022, 3, 2012–2045. [Google Scholar] [CrossRef]
- Wu, Z.; Huang, L.; Huang, Q.; Huang, C.; Tang, Y. SGKT: Session graph-based knowledge tracing for student performance prediction. Expert Syst. Appl. 2022, 206, 117681. [Google Scholar] [CrossRef]
- Liu, H.; Zhang, T.; Li, F.; Yu, M.; Yu, G. A Probabilistic Generative Model for Tracking Multi-Knowledge Concept Mastery Probability. arXiv 2023, arXiv:2302.08673. [Google Scholar]
- Yu, M.; Li, F.; Liu, H.; Zhang, T.; Yu, G. ContextKT: A Context-Based Method for Knowledge Tracing. Appl. Sci. 2022, 12, 8822. [Google Scholar] [CrossRef]
- Chen, Y.; Liu, Q.; Huang, Z.; Wu, L.; Chen, E.; Wu, R.; Su, Y.; Hu, G. Tracking knowledge proficiency of students with educational priors. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 989–998. [Google Scholar]
- Piech, C.; Spencer, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.; Sohl-Dickstein, J. Deep knowledge tracing. arXiv 2015, arXiv:1506.05908. [Google Scholar]
- Bloom, B.S.; Krathwohl, D.R. Taxonomy of Educational Objectives; Cognitive Domain; Longmans, Green: London, UK, 1956; Volume 1, p. 20. [Google Scholar]
- Anderson, L.W.; Krathwohl, D.R. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives: Complete Edition; Addison Wesley Longman, Inc.: Reading, MA, USA, 2001. [Google Scholar]
- Harvey, R.J.; Hammer, A.L. Item response theory. Couns. Psychol. 1999, 3, 353–383. [Google Scholar] [CrossRef]
- Ackerman, T.A. Multidimensional Item Response Theory Models; Wiley StatsRef: Statistics Reference Online; Wiley: Hoboken, NJ, USA, 2014. [Google Scholar]
- Sheng, Y.; Wikle, C.K. Bayesian multidimensional IRT models with a hierarchical structure. Educ. Psychol. Meas. 2008, 3, 413–430. [Google Scholar] [CrossRef]
- Rost, J.R. An integration of two approaches to item analysis. Appl. Psychol. Meas. 1990, 3, 271–282. [Google Scholar] [CrossRef]
- Ma, W.; de la Torre, J. A sequential cognitive diagnosis model for polytomous responses. Br. J. Math. Stat. Psychol. 2016, 3, 253–275. [Google Scholar] [CrossRef]
- Chen, J.; de la Torre, J. A general cognitive diagnosis model for expert-defined polytomous attributes. Appl. Psychol. Meas. 2013, 6, 419–437. [Google Scholar] [CrossRef]
- Xiong, L.; Chen, X.; Huang, T.K.; Schneider, J.; Carbonell, J.G. Temporal collaborative filtering with bayesian probabilistic tensor factorization. In Proceedings of the 2010 SIAM International Conference on Data Mining, Columbus, OH, USA, 29 April–1 May 2010; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2010; pp. 211–222. [Google Scholar]
- Nedungadi, P.; Smruthy, T.K. Personalized multi-relational matrix factorization model for predicting student performance. In Intelligent Systems Technologies and Applications; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 163–172. [Google Scholar]
- Huang, Z.; Liu, Q.; Chen, Y. Learning or forgetting? A dynamic approach for tracking the knowledge proficiency of students. ACM Trans. Inf. Syst. 2020, 2, 1–33. [Google Scholar] [CrossRef]
- Wang, F.; Liu, Q.; Chen, E.; Huang, Z.; Chen, Y.; Yin, Y.; Huang, Z.; Wang, S. Neural cognitive diagnosis for intelligent education systems. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 4, pp. 6153–6161. [Google Scholar]
- Mnih, A.; Salakhutdinov, R.R. Probabilistic matrix factorization. Adv. Neural Inf. Process. Syst. 2007, 20, 1257–1264. [Google Scholar]
- Moon, T.K. The expectation-maximization algorithm. IEEE Signal Process. Mag. 1996, 6, 47–60. [Google Scholar] [CrossRef]
- Carlin, B.P.; Chib, S. Bayesian model choice via Markov chain Monte Carlo methods. J. R. Stat. Soc. 1995, 3, 473–484. [Google Scholar] [CrossRef]
- White, H. Maximum likelihood estimation of misspecified models. Econom. J. Econom. Soc. 1982, 1–25. [Google Scholar] [CrossRef]
- Rendle, S.; Freudenthaler, C.; Gantner, Z.; Schmidt-Thieme, L. BPR: Bayesian personalized ranking from implicit feedback. arXiv 2012, arXiv:1205.2618. [Google Scholar]
- Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436. [Google Scholar]
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; Volume 34, pp. 226–231. [Google Scholar]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 11, 2579–2605. [Google Scholar]
- Huang, Z.; Liu, Q.; Chen, E.; Zhao, H.; Gao, M.; Wei, S.; Su, Y.; Hu, G. Question Difficulty Prediction for READING Problems in Standard Tests. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 1. [Google Scholar]
| Symbol | Meaning |
|---|---|
| | Number of students |
| | Number of questions |
| | Number of knowledge concepts |
| | Number of high-level knowledge concepts |
| | Result of a student on a question |
| | Knowledge proficiency vector of a student |
| | Vector indicating whether a question assesses “knowing” |
| | Vector indicating whether a question assesses “understanding” |
| | Vector indicating whether a question assesses “application” |
| | Partial order set for each student |
| | Matrix of comprehension results |
| | Matrix of application results |
| | Subgroup |
| | Parent group |
| | Set of questions |
| Model | ASSIST RMSE | ASSIST MAE | ASSIST ACC | ASSIST AUC | HDU RMSE | HDU MAE | HDU ACC | HDU AUC |
|---|---|---|---|---|---|---|---|---|
| IRT | 0.463 | 0.398 | 0.648 | 0.648 | 0.533 | 0.398 | 0.676 | 0.625 |
| MIRT | 0.450 | 0.386 | 0.750 | 0.678 | 0.471 | 0.386 | 0.736 | 0.750 |
| PMF | 0.460 | 0.394 | 0.657 | 0.657 | 0.479 | 0.394 | 0.724 | 0.657 |
| QPMF | 0.451 | 0.388 | 0.674 | 0.683 | 0.460 | 0.397 | 0.744 | 0.687 |
| BPR | 0.449 | 0.386 | 0.678 | 0.750 | 0.449 | 0.366 | 0.722 | 0.678 |
| BloomCDM-RC | 0.422 | 0.370 | 0.785 | 0.785 | 0.412 | 0.370 | 0.754 | 0.785 |
| BloomCDM | 0.421 | 0.364 | 0.836 | 0.886 | 0.407 | 0.364 | 0.766 | 0.836 |