18 November 2021
Entropy | Best Paper Award for the 1st International Conference on Novelties in Intelligent Digital Systems (NIDS2021)—Winner Announced

We are pleased to announce that the Best Paper Award, sponsored by Entropy (ISSN 1099-4300; website: https://www.mdpi.com/journal/entropy), for the 1st International Conference on Novelties in Intelligent Digital Systems (NIDS2021) was granted to the work by Anton Anikin, Oleg Sychev, and Mikhail Denisov (Volgograd State Technical University, Volgograd, Russia). Congratulations!

Title: “Ontology Reasoning for Explanatory Feedback Generation to Teach how Algorithms Work”

Summary: Intelligent tutoring systems are becoming increasingly common in assisting students, but they are often aimed at isolated subject-domain tasks without creating a scaffolding system from lower- to higher-level cognitive skills, and low-level skills are often neglected. We designed and developed an intelligent tutoring system, CompPrehension, aimed at the comprehension level of Bloom's taxonomy. The system features a plug-in-based architecture that allows adding new subject domains and learning strategies. It uses formal models and software reasoners to solve the problems, judge the answers, and generate explanatory feedback for the broken domain rules, along with follow-up questions to stimulate the students' thinking. We developed two subject-domain models: an Expressions domain for teaching the order of expression evaluation and a Control Flow Statements domain for code-tracing tasks. Developing algorithms using control structures and understanding their building blocks are essential skills in mastering programming, while ontologies and software reasoning offer a promising method for developing intelligent tutoring systems in well-defined domains (such as programming languages and algorithms) and can also be used for many kinds of teaching tasks. In this work, we used a formal model consisting of production rules for the Apache Jena reasoner as a basis for developing a constraint-based tutor for the introductory programming domain. The tutor can determine fault reasons for any incorrect answer that a student can enter. The problem the student should solve is building an execution trace for the given algorithm. The problem is a closed-ended question that requires arranging given actions in the (unique) correct order; some actions can be used several times, while others can be omitted. Using formal reasoning to check domain constraints allowed us to provide explanatory feedback for all kinds of errors in subject-domain tasks that students can make.
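The summary above describes production rules for the Apache Jena reasoner that flag broken domain rules and map them to explanatory feedback. The sketch below is only a rough illustration of that general style of rule-based checking, not the authors' actual model: the namespace, the property names (precedes, isLoopBodyOf, isConditionOf, violates), and the single rule are hypothetical, and the example assumes the jena-core library on the classpath.

import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class TraceRuleDemo {
    public static void main(String[] args) {
        String ns = "http://example.org/trace#";   // hypothetical namespace

        // Base model: the student placed the loop body before the loop's condition check.
        Model data = ModelFactory.createDefaultModel();
        Resource bodyStmt = data.createResource(ns + "bodyStmt");
        Resource condCheck = data.createResource(ns + "condCheck");
        Resource loop = data.createResource(ns + "whileLoop");
        data.add(bodyStmt, data.createProperty(ns, "precedes"), condCheck);
        data.add(bodyStmt, data.createProperty(ns, "isLoopBodyOf"), loop);
        data.add(condCheck, data.createProperty(ns, "isConditionOf"), loop);

        // One illustrative production rule: the body of a while loop must not
        // precede the first evaluation of its condition.
        String rules =
            "[bodyBeforeCondition: " +
            " (?b <" + ns + "precedes> ?c) " +
            " (?b <" + ns + "isLoopBodyOf> ?l) " +
            " (?c <" + ns + "isConditionOf> ?l) " +
            " -> (?b <" + ns + "violates> <" + ns + "LoopConditionFirst>) ]";

        Reasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rules));
        InfModel inf = ModelFactory.createInfModel(reasoner, data);

        // Each inferred "violates" triple identifies a broken domain rule and can be
        // mapped to an explanatory feedback message for the student.
        Property violates = inf.getProperty(ns + "violates");
        StmtIterator it = inf.listStatements(null, violates, (RDFNode) null);
        while (it.hasNext()) {
            Statement s = it.next();
            System.out.println("Error: " + s.getSubject().getLocalName()
                + " breaks rule " + s.getObject().asResource().getLocalName()
                + " -- the loop condition must be evaluated before the loop body runs.");
        }
    }
}

In this sketch, the student's answer is encoded as RDF triples, domain constraints are forward-chaining rules, and every rule that fires marks the offending trace element, which is the kind of information an explanatory feedback message can be built from.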

The chief novelty of our research is that the developed models are capable of automatic problem classification, determining the knowledge required to solve each problem and, thus, the pedagogical conditions for using it, without human participation. More than 100 first-year undergraduate Computer Science students took part in evaluating the system. The results in both subject domains show medium but statistically significant learning gains after using the system for a few days; students with weaker prior knowledge gained more. In the Control Flow Statements domain, the number of completed questions correlates positively with the post-test grades and learning gains. A student survey showed a slightly positive perception of the system.
