Article

A Feedback System Supporting Students Approaching a High-Level Programming Course

Jong-Yih Kuo, Hui-Chi Lin, Ping-Feng Wang and Zhen-Gang Nie
1 Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 106344, Taiwan
2 Institute for Information Industry, Taipei 10622, Taiwan
3 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(14), 7064; https://doi.org/10.3390/app12147064
Submission received: 18 June 2022 / Revised: 8 July 2022 / Accepted: 8 July 2022 / Published: 13 July 2022
(This article belongs to the Special Issue Applied and Innovative Computational Intelligence Systems)

Abstract

This study analyzes the mistakes students are prone to make in programming and uses the GDB and Valgrind tools to implement dynamic analysis techniques applied to programs written by students. In the analysis process, spectrum-based fault localization is used to strengthen the dynamic analysis and locate errors more accurately. The analyzed results are sorted and corresponding feedback is given to students so that they better understand the content of their errors when revising the program, while the types of errors made are classified and counted. In this way, the study identifies the mistakes students make most frequently and the topics in which particular mistakes are most likely to occur. The developed system was evaluated in experiments with students from a programming course who were divided into two groups, an experimental group and a control group. Both groups used a system for uploading and submitting assignments; the experimental group additionally used the code analysis and feedback improvement system. Students in the control group used the assignment uploading and submission system only for basic assignment uploading, verification, and comparison against test data. For the experimental group, after a program was submitted, its declarative statements were disassembled and dynamically sliced, and the data were sent to the GNU Debugger (GDB) and Valgrind for spectrum-based fault localization, the classification and recording of error types, and the interpretation of the error line numbers, error types, and related variables. The feedback and a generated report were sent back to the student interface to provide effective and useful guidance for revising the homework, and the types and numbers of errors made in that week's homework were recorded in the database, together with the answers the students provided to the questions. Analyzing the pass rates of the students in the experimental and control groups for each homework test clarified the differences in the learning success of the two groups each week. The weekly pass rates and the numbers of detected errors in the experimental group, compared with the control group, were plotted in a distribution map to examine whether there was a positive correlation between the detected information, the feedback given to students, the test pass rates, and other related data. The system statistically evaluated the feedback and the degree of improvement of the homework programs; specially designed questionnaires were then distributed to all students to directly obtain and quantify their feedback and the perceived benefits of the system, thereby verifying the system's effectiveness and practicality.

1. Introduction

The ability to write programs has recently gained importance. Nearly 30 countries have formulated policies in this regard, hoping to strengthen the information capabilities of their citizens from an early age and improve the overall competitiveness of the country. In 2017, Japan proposed the “Future Investment Strategy 2017” and planned to incorporate programming education into the curricula of the compulsory education stages of primary and secondary schools from 2020, as well as to further improve digital teaching materials and the evaluation system [1]. The UK incorporated programming into its national curriculum in 2014: children start to learn the basics of programming at the age of five, and by the age of eleven they must be able to use two programming languages. In addition, universities require at least half of undeclared students to study programming within the five years before graduation so that they are prepared to learn about and apply artificial intelligence.
In a society heavily reliant on information and electronic devices, helping the public understand the principles of program operation can keep people from feeling helpless when faced with information technology (IT). Not everyone needs to know how to write programs, but with basic computational concepts it becomes possible to think of more diverse ways of using IT and the Internet. Correct programming education also strengthens students' logical thinking, enhancing their creativity and preparing them to solve many related problems in the future.
Learning programming is crucial, yet even in the IT departments of universities only a few people can write programs well, and only rarely can a student write a system that is actually usable. One of the problems is that students have too little practical experience: writing good, stable, and reliable programs requires time-consuming trial and error and the frustration that comes with failure. A good programming support platform is therefore important to help students overcome these obstacles. Patil et al. [2], Paiva et al. [3], and Jung et al. [4] reported that, to reduce development time or submit an assignment within a given period, students and programmers widely use social media, especially Google, to find recommendation systems that can suggest a program according to their requirements. The efficient, accurate, and fast development of code or an assignment depends on the accuracy of the recommendation given by such a system; wrong recommendations lead to inaccurate software or assignment development, as well as a waste of the student's or programmer's time. Other approaches have students assess their own solutions against reference implementations seeded with mutations, to see how many of the mutations the students' tests can identify; this naturally encourages students to check their solutions more carefully rather than develop an over-reliance on autograders [5]. Patil et al. [2] and Lucas et al. [6] proposed both multiple-choice feedback in an online quiz system and the automated assessment of student programming tasks. Using these systems, students can submit their programming assignments numerous times before a deadline and obtain feedback for further improving their code or fixing mistakes, promoting just-in-time learning (Joan et al. [7]; Yana et al. [8]; Sychev et al. [9]). These systems also allow instructors to save substantial time otherwise spent manually grading programming assignments and to focus on the pedagogical design of courses.
Although previous research has contributed to the development of automatic feedback, few studies have empirically explored the design of formative feedback for programming assignments. When the only feedback is whether an answer is right or wrong, students come under pressure from insufficient skills, poor time management, low interest, and other factors, and these pressures are important reasons why students find it difficult to learn programming well. This study analyzes the errors that students are prone to make when writing programs in programming courses and uses the GDB and Valgrind tools to implement dynamic analysis techniques applied to student-written programs. Providing good feedback and corresponding exercises can significantly improve students' success rate by giving them the chance to correct their own programming mistakes, compared with providing only correct-or-wrong feedback.
In this study, we propose a code debugging and feedback system with three objectives. (1) The system provides teachers with programming assignments that follow the teaching progress and grades them automatically, allowing teachers to give more students practical programming experience in one course and reducing the pressure of producing and grading a large number of programming tests. (2) Students use the system to upload programming assignments and quickly receive reports of errors in their programs. According to the logic, grammar, structure, and algorithm used by students, as well as their programming abilities, various types of useful feedback and revision directions are prompted automatically, which can greatly strengthen students' programming abilities, learning motivation, and overall learning effect; in this way, the goal of half of all college students learning programming can be approached. (3) Lastly, the system generates different types of programming questions according to the progress of the class, which provides teachers with quick and accurate programming exercises and allows progressive learning according to the difficulty of the exercises.
In this paper, Section 2 presents the concepts and technologies used in this study, Section 3 and Section 4 present an analysis-based approach for fault detection and feedback, and Section 5 presents the architecture of the system, experimental process, and results. Then, the conclusions of this study are presented in Section 6.

2. Research Background

2.1. Control Flow Graph

The control flow graph [10,11] is a directed graph mainly used for static analysis; it displays all branches taken during the execution of a program and contains if-then-else, while, and do-while controls (Figure 1). It easily encapsulates the information of each basic block and can locate the unreachable code in a program.

2.2. Dynamic Dependence Graph

The dynamic dependence graph converts the results of dynamic slicing into a graph [4]. Its advantage is that it makes it easy to find for or while statements in a dynamic slice that iterate several times, and it clearly indicates the statements affected by each iteration. Its primary limitation is that the graph becomes complicated if the dynamic slice contains too many statements; a reduced dynamic dependence graph can be used to simplify it [12,13].

2.3. Spectrum-Based Fault Localization

Spectrum-based fault localization uses test cases and their corresponding code coverage to evaluate each program element and determine how likely it is to contain an error. The similarity coefficient it builds on was first applied in botanical studies by Jaccard [14] in 1901; since then, many researchers have used it in other fields (including software debugging) and further improved it. In 2017, Tang et al. [15] surveyed the spectrum-based fault localization techniques currently in use, and in 2018, Troya et al. [16] selected the 4 methods with the best results from 18 existing spectrum-based fault localization methods; in ranking order, these were the methods of Kulczynski [17], Wong et al. [18], Ochiai [19], and Janssen et al. [20]. Fault localization techniques on real versus artificial faults were compared for the first time by Pearson et al. [21], who found that techniques that localized artificial faults best did not perform well on real faults. Xie et al. [22] proposed a framework for evaluating the effectiveness of spectrum-based risk evaluation formulas, based on the idea that a formula's effectiveness is determined by the number of statements whose risk values are higher than that of the faulty statement. Xie et al. [23] demonstrated that search-based software engineering could automatically design such a formula by recasting the problem as one of genetic programming.
The technique instruments the faulty program to obtain a program spectrum: it records which statements are executed by each test, notes which statements are relatively active when tests fail, and applies a formula to compute an error-probability (suspiciousness) score for each statement or program block. The higher the score, the higher the probability that the statement contains the error. The formula is usually built from the following counts:
  • Nf(e): the number of failed tests that execute line e of the program;
  • Nf(ē): the number of failed tests that do not execute line e;
  • Ns(e): the number of passed tests that execute line e;
  • Ns(ē): the number of passed tests that do not execute line e.
Using these factors, the formula is as follows:
Jaccard's formula = Nf(e) / (Nf(e) + Nf(ē) + Ns(e)).
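As an illustration of how these counts combine (a minimal sketch, not the implementation used in this study), the following C program computes the Jaccard suspiciousness score of each line from a small, hypothetical coverage matrix; the array contents and names are invented for the example:

    #include <stdio.h>

    #define NUM_LINES 5
    #define NUM_TESTS 4

    /* covered[t][e] = 1 if test t executed line e; failed[t] = 1 if test t failed. */
    static const int covered[NUM_TESTS][NUM_LINES] = {
        {1, 1, 1, 0, 1},
        {1, 1, 0, 1, 1},
        {1, 0, 1, 1, 0},
        {1, 1, 1, 1, 1},
    };
    static const int failed[NUM_TESTS] = {1, 0, 0, 1};

    /* Jaccard suspiciousness: Nf(e) / (Nf(e) + Nf(e_bar) + Ns(e)). */
    static double jaccard(int nf_e, int nf_ebar, int ns_e)
    {
        int denom = nf_e + nf_ebar + ns_e;
        return denom == 0 ? 0.0 : (double)nf_e / denom;
    }

    int main(void)
    {
        for (int e = 0; e < NUM_LINES; e++) {
            int nf_e = 0, nf_ebar = 0, ns_e = 0;
            for (int t = 0; t < NUM_TESTS; t++) {
                if (failed[t] && covered[t][e])   nf_e++;    /* failing tests that hit line e  */
                if (failed[t] && !covered[t][e])  nf_ebar++; /* failing tests that miss line e */
                if (!failed[t] && covered[t][e])  ns_e++;    /* passing tests that hit line e  */
            }
            printf("line %d: suspiciousness = %.3f\n", e + 1, jaccard(nf_e, nf_ebar, ns_e));
        }
        return 0;
    }

Lines executed mostly by failing tests receive the highest scores and are therefore ranked first as likely fault locations.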

2.4. Program Slicing and Decomposition

Program slicing comprises two techniques: static slicing and dynamic slicing [24,25,26]. Static slicing, proposed by Mark Weiser, selects a specific variable V and extracts the program statements that influence it, on the premise that the slice preserves the program's behavior with respect to that variable; it is often used in maintenance and verification testing.
Taking Figure 2 as an example, the results of the static slicing of X, Z, and TOTAL are shown in Figure 3.
Agrawal et al. presented the idea of dynamic slicing [27], which selects and slices the statements that affect or change the content of variable V during a particular execution, with the aim of performing slicing analysis more efficiently than static slicing.
In Figure 4, the dynamic slicing path when variable N = 2 is {1, 2, 3, 4, 5¹, 6¹, 7¹, 8¹, 9¹, 6², 7², 8², 9², 6³, 10}, where the superscript denotes the loop iteration in which the line is executed. Program decomposition [28] is used because program analysis alone has limited effectiveness, and the algorithms used by developers usually follow recognizable patterns; the program is therefore statically decomposed into multiple tokens and, in an orderly fashion, into its various components, so that the designed module can disassemble, analyze, and check the relationships between the semantic fragments.
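To make the iteration notation concrete, the following small C program (an illustrative sketch, not the program of Figure 4) can be dynamically sliced with respect to the variable sum at its output statement. For the input n = 2, the dynamic slice contains statements 1, 2, 3 (once per evaluation of the loop condition), and 4 (once per iteration), together with statement 6, while statement 5 is excluded because prod never influences sum:

    #include <stdio.h>

    int main(void)
    {
        int n, i, sum = 0, prod = 1;   /* statement 1: declarations and initial values    */
        scanf("%d", &n);               /* statement 2: read n                             */
        for (i = 1; i <= n; i++) {     /* statement 3: loop header                        */
            sum  = sum + i;            /* statement 4: the only statement that writes sum */
            prod = prod * i;           /* statement 5: writes prod, never affects sum     */
        }
        printf("%d\n", sum);           /* statement 6: slicing criterion (value of sum)   */
        return 0;
    }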

2.5. Software Testing Technique

Software testing techniques exercise test cases or the software interface from various aspects to check the correctness of specific functions of the software. They are generally divided into three approaches: white box, gray box, and black box testing [29].
White box testing performs corresponding tests on the structure of the code and its logic. Since the internal code is in a visible state, it can improve the test efficiency and coverage and ensure the quality of the algorithm by deleting unnecessary code blocks. The disadvantage is that it consumes more human resources and time.
Black box testing performs functional tests corresponding to the requirements. It only needs to execute software, input or perform specific actions according to the requirements, and confirm that the output or feedback meets the user’s expectations. The advantage is that it requires fewer human resources and less time, while the disadvantage is that it cannot be implemented or tested in detail, unlike white box testing.
Gray box testing is a combination of the above two techniques. Certain parts of the code are visible, and the tester conducts black box testing on the interface provided by the developer in a way similar to that of the user.

2.6. Questionnaire Survey

Questionnaire surveys are mainly used to collect statistical information on a single issue or field. Information is collected and statistically analyzed either by targeting specific groups of respondents or by large-scale random sampling without targeting specific groups, and either online or physical questionnaires can be used. A questionnaire survey is an efficient way to obtain useful information. Questions are broadly categorized into four types: open-ended, closed-ended, sequencing, and matching questions [30].

3. Code Debugging Method Design

3.1. System Architecture Diagram

The proposed system is composed of three major subsystems (Figure 5). The dynamic analysis subsystem dynamically slices the code and localizes faults in it; based on program dependency graph theory, it analyzes the dependency relationships in the code and passes the dependency information to the fault classification subsystem. The fault classification subsystem classifies the faults in the dependency information, compares them against the implemented error category rules, and processes the comparison result to generate a feedback file. The thematic question-setting subsystem marks the sorted questions with topic labels, such as arrays or string processing; the system performs statistical analyses on the questions answered by the students and merges them with the main question set.

3.2. Pre-Processing of Code Debugging

The debugging method in this study uses dynamic analysis and fault localization, and the fault localization method is mainly based on code coverage: the system feeds in a series of test inputs according to the test cases, which cover different statements. The system must filter out two kinds of uploaded code. The first is a program that fails to compile; because such a program cannot be executed, it cannot be debugged with dynamic analysis and fault localization. The second is a program that does not pass the sample test. In the program uploading system, two sample test inputs and outputs are provided for each question type for the uploader to refer to, and a program must pass the sample test before it can be debugged.

3.3. Program Dynamic Analysis

This research study uses the GNU Debugger (GDB) and Valgrind tools to conduct further analyses of, and generate feedback on, the C programs written by students in the classroom, and to dynamically analyze student-made programs for the types of errors that students are prone to make. GDB can track the execution process of a program and can print the value and address of any variable. Valgrind mainly debugs the state of memory allocation in the program. In addition, we establish rules for five common mistakes and further explore whether the automatic feedback system can help students pay more attention to the parts that are easily overlooked before writing the program.
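As a concrete example of the kind of memory problem Valgrind's memcheck can surface (an illustrative fragment, not taken from a student submission), the following program writes one byte past the end of a heap block, which memcheck reports as an invalid write:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *name = malloc(5);   /* room for 5 bytes                                   */
        strcpy(name, "hello");    /* copies 6 bytes (including '\0'): one byte too many */
        free(name);
        return 0;
    }

When the program is compiled with debugging information (gcc -g) and run under valgrind ./a.out, the report points at the offending line and at the allocation site of the overrun block.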
The following are the types of mistakes students tend to make (a short illustrative fragment follows the list):
  • An action that attempts to compare or assign a value to an uninitialized variable;
  • The wrong syntax is used for a specific function in the library;
  • The program outputs a segmentation fault error;
  • The array index value exceeds its bounds;
  • The program execution time exceeds expectations or does not break out of the loop.
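The short fragment below (a hypothetical example, not an actual student program) illustrates the first and fourth categories in this list: the variable sign is assigned on only one branch, so it may be read before it is initialized, and the loop bound writes one element past the end of the array:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int sign;                      /* mistake type 1: declared but never initialized  */
        int scores[5];
        char word[8];

        if (scanf("%7s", word) != 1)
            return 1;
        if (strcmp(word, "minus") == 0)
            sign = -1;                 /* sign is assigned only on this branch...         */

        for (int i = 0; i <= 5; i++)   /* mistake type 4: i == 5 is past the last index   */
            scores[i] = sign * i;      /* ...so it may be read here before being set      */

        printf("%d\n", scores[0]);
        return 0;
    }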
Specifically, we address five common student errors in the following ways:
  • The system marks the uninitialized variable flag as a result of the program analysis and prints the error type, the number of error lines, error content, and the variable whose value has been subjected to an access attempt or has been accessed during actual execution;
  • Even if students are successful with easier, sample test data, when dealing with more rigorous, real test data, errors in the details of a student’s program can be detected. The feedback message for students includes a reminder of the number of lines and the correct use of the strcmp function;
  • During execution, if there is an attempt to access a variable (e.g., variable c) that has not been assigned a memory address, the system prints the error type, line number, and error content of the segmentation fault;
  • When executing, it might happen that students do not consider the stop condition of the loop on the periphery of the array, so there is no way to meet the input of the operation test data. As a consequence, the index value of the array trying to access the loop gradually increases and cannot be stopped, resulting in an error exceeding the array boundary;
  • When dealing with more complex test data, students might enter a loop with imperfect conditional judgment, leading to program execution that takes too long and consumes system resources; student-written programs can easily cause this.
These rules can be analyzed and expanded according to different courses, different students’ assignments, and different levels of students. Therefore, this paper proposes a rule-based system that can employ preliminary intelligence.

3.4. The Method of Analyzing Mistakes in the Program Process

This research study uses sockets to link the debugging system with the program uploading system. The following are the implementation methods for handling the five mistake types: (1) Use of an uninitialized variable: the system dynamically analyzes the program and records the contents of all variables; when a variable is used without having been initialized, it is marked as a mistake, and dynamic slicing of the statements is used to trace backward along the variable's use path. (2) Array index out of bounds: the system dynamically analyzes the program and records the initial size of every array; when an array access exceeds that initial range, it is marked as a mistake. (3) Incorrect use of the library: the system statically analyzes the program; when the user calls the strcmp function with a condition expressed as == 1 or == −1, it is marked as a mistake. (4) Segmentation fault: the system dynamically analyzes the program and records memory address requests; when an attempt is made to access a memory location that is not allowed to be accessed, it is marked as a mistake. (5) Irrationally long execution time: the execution interval is considered too long when the analysis process exceeds 15 s; the system then checks whether the last execution state is an infinite loop, whether the program is waiting for input data, or whether the format conversion of the input data is wrong, and if one of these three conditions holds, it is marked as a mistake.

4. Feedback Message Design and Debugging Method

This study uses dynamic slicing and fault location methods. After the student uploads the program using the homework uploading system, the output is compared with the designed test data, and it confirms whether the program can be sent to the program analysis and feedback improvement system. If it does not meet the standard condition, the results of the student’s program are instantly displayed on the student’s user interface. If the analysis standard is met, the student’s program is sent into this system for dynamic slicing. The system records all the statements in the program, disassembles all the variables, and analyzes the variables one by one. The data dependency among variables is recorded, integrated into variable data access objects, and stored in series by a dynamic array; then, the array is transferred to the fault localization module.
The GDB in the module first performs a path analysis on the contents of the array and tries to compare whether there are faults that meet the filtered condition. If faults are detected, the fault information is stored in the fault information data access object and sent to the fault categorization module. If no faults are found, the dynamic array is sent to Valgrind for analysis to see if any abnormal use of the memory or abnormal conditions occur during the execution of the program. If so, the fault information is stored in the fault information data access object and sent to the fault categorization module. If the fault is still not found, the analysis is ended. When the fault information is passed to the fault identification and categorization module, the fault information and known fault types are distinguished and classified. After categorization is completed, the system retrieves the specific variable in the variable data access object corresponding to the fault, and the fault flag is sent to the result analysis and report production module. The module then stores the fault variable and its cause in the database, stores the fault category in the database together with the student ID, and classifies the completed fault. The feedback report message is simplified for a better reading experience, and the numbers of program lines, fault variables, and easy-to-read fault feedback reports are returned to the student user interface.
The faults that easily occur in programming learning are here categorized into five types: the program execution time exceeds expectations or does not break out of the loop; the wrong syntax is used for a specific function in the library; the array index value exceeds its boundary; the program generates a segmentation fault; an attempt is made to compare or assign a value to an uninitialized variable. The detected faults are classified, and a fault report is generated and sent back to the user interface of the student, as shown in Figure 6.
The system shows information about the problem; the erroneous statement is visualized for the student, and the interface requires students to pass all unit tests before they can submit their answer.

5. The Solution Implementation Method

5.1. System Description

In programming courses, students often do not understand why their programs do not pass a test. Although each student has a different programming style, the mistakes that students make in programming are fundamentally the same, such as the array exceeding the index or the use of uninitialized variables. Therefore, in this research study, we use two mistake locating methods to classify mistakes in students’ programs and mark the corresponding program problems. Students can be reminded in advance when making the same mistakes in the future.
The proposed system analyzes the C programming language. Therefore, to avoid adverse student attacks on the system, this system uses gVisor, open-source software by Google, as the system's sandbox. gVisor implements more than 200 Linux system calls in user space to improve the system's safety.

5.2. System Process

Figure 7 shows the system flow chart of the program debugging and feedback system. After the students upload their program, the system compares the program with the test data and sends the result to the debug filter. The system uses two debugging methods. Then, the system stores the program’s debugging results in the database, classifies the mistake results, marks the corresponding topics according to the classification, and sends the results to the front-end system.
The error localization program in this study mainly performs spectrum-based fault localization and dynamic slicing to identify the five common errors, as explained below:
  • Dynamic analysis subsystem: The dynamic analysis subsystem corresponds to the error location module. The error location module packages the variable relationship in each line of statements into Variable DAO and concatenates objects with ArrayList to form a set of dynamic dependency path structures:
    1-1 Gcov. In the error location module, the Gcov tool is used to analyze the code and obtain the code coverage; spectrum-based fault localization then uses this coverage to compute the similarity coefficients, which are stored in an array;
    1-2 Valgrind. In the error location module, the Valgrind tool is used to analyze the code, obtain reports of memory leaks and illegal uses of memory, and analyze the report content to locate the illegally used memory;
    1-3 GDB. In the error location module, the GDB tool is used to analyze the code, track the execution path of the program step by step, and store the dependencies of each line of statements in an ArrayList;
    1-4 Error location module. The error location module packages the similarity coefficients, the Valgrind report, and the dependencies into an ArrayList<VariableDAO> and sends the object to the error identification module.
  • Error classification subsystem: The error classification subsystem corresponds to the error identification module, the error classification and labeling module, and the analysis result module. The error identification module identifies whether there are common errors in the dynamic dependency path structure and retrieves the wrong ones. The object is sent to the error classification and tag module, which error-classifies and tags the question and code, converts the classified Variable DAO to ErrorReportDAO, and sends the object to the analysis result module. Corresponding feedback is given according to the type of classified data:
    2-1 Error identification module. The error identification module identifies errors according to the data transmitted by the error location module, compares the dynamic analysis structure against the five potential error rules, and sends the comparison results to the error classification and marking module;
    2-2 Error classification and marking module. The error classification and marking module classifies faults according to the comparison results; different error rules are mapped to different error categories, the VariableDAO identified as erroneous is converted into an ErrorReportDAO, and the object is finally sent to the analysis result module;
    2-3 Analysis result module. The analysis result module translates the error report into a format and content that are easy for users to understand and, finally, sends the result to the uploading system through the socket.
  • Thematic question-setting subsystem: The thematic question-setting subsystem corresponds to the marking function in the error classification and marking module. It assigns different topic labels to different program categories, questions are generated according to the theme, and the subsystem's labels are then used to compute different statistical reports:
    3-1 Error classification and marking module. The thematic question-setting subsystem is mainly realized by the marking function of the error classification and marking module, which marks different programs according to the question type assigned by the teacher, so that the homework uploading system can compute statistics for different question types and error categories.

5.3. Program Experiment and Results

This research study took programs uploaded by students of a programming course over the past two years as debugging examples, modified them according to the debugging feedback, and re-uploaded the modified programs.
1. Use of an uninitialized variable
The use of an uninitialized variable means that the student used a variable without initializing it when declaring the variable. The experimental process is reported below. The subject of the experiment is shown in Figure 8 as an example.
After uploading the sample program in response to this question in the experiment, the test result is presented as shown in Figure 9.
According to Figure 9, test No. 129 was not passed. The feedback report is shown in Figure 10.
From feedback report (1) in Figure 10, in test No. 129 an uninitialized variable sign was used in lines 35 and 51 of the code, and the error content was observed in line 19. The variable sign had been declared in line 19, but the assignment in line 21 had not been executed, so lines 35 and 51 used the value from the declaration in line 19. Figure 11 below is an example of such a program (program (1)).
In the example program in Figure 11, as the program did not meet the condition in line 20, line 21 was never executed, and the content of line 19 was used in lines 35, 42, and 51. In this experiment, after initializing the variable sign to 0 and re-uploading the program to the system, the program successfully passed all tests, as shown in Figure 12.
2. Array index out of bounds
When an array of fixed size is declared, an unknown space is accessed if subsequent use exceeds the size of the array. The subject of the experiment is shown in Figure 13 as an example.
The test result after uploading the sample program in response to this question is shown in Figure 14.
The feedback report in Figure 15 shows that all four test failures had the same cause. Figure 15 only shows the feedback content of test No. 11.
The feedback report shows that in test No. 11 the array with the index range 0~60 was exceeded in lines 35, 54, 56, and 75 of the code, while line 93 exceeded the 0~121 range. Figure 16 and Figure 17 below show the example code.
In label (c) of Figure 16, the array was declared with MAX in lines 28 and 51 of the code, where MAX is the macro in the third line of the code in label (a). Its value was 61, but the index value started from 61 when the array was used in lines 35 and 75. In this experiment, the conditional expressions in lines 34 and 74 of the code were modified to i < MAX − 1. Similarly, line 82 of the code in label (a) of Figure 17 declared a 61*2 array, but line 93 started with the index value of 122; the initialization in line 92 of the code was changed to i = 2*MAX − 1.
In label (c) of Figure 16, arrays a and b in lines 54 and 56 of the code are parameters. Line 100 of the code in Figure 17b declared arrays a and b, and lines 106, 111, 117, and 123 passed a and b as parameters. Obviously, the same problem as above was also found in lines 54 and 56. The initializations in lines 53 and 55 of the code in label (c) of Figure 16 were changed to lenA = MAX − 1 and lenB = MAX − 1. All error lines in the error feedback report in Figure 15 were corrected, and the program was uploaded again, as shown in Figure 18.
3. Incorrect use of the library
In different environments or versions, when using certain libraries, their usage and return values can be different. If students do not understand the usage of the library in detail before using it, it can lead to unexpected results. This experiment took the strcmp() function in string.h as an example. The subject of the experiment is shown as an example in Figure 19.
After uploading the sample program in response to the question in this experiment, the system reported that line 118 of the code was incorrect and prompted the correct way to use strcmp, as shown in Figure 20.
As shown in Figure 21, the strcmp function was used in line 118 with the condition expression == 1. The return value was 1 when executed in the Windows environment, but in the Ubuntu environment the return value was the difference between the ASCII codes of the first differing characters of the two strings; for details, refer to the previous publication [15]. In this experiment, after the use of strcmp was corrected and the sample program was re-uploaded, the program passed the tests, as shown in Figure 22. A brief sketch of the underlying issue follows.
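The following minimal C fragment (not the student's code) shows why this rule exists: strcmp only guarantees the sign of its return value, so comparing the result with 1 or −1 may work in one environment and silently fail in another, whereas comparing against 0 is always correct:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *a = "apple", *b = "apply";

        /* Non-portable: strcmp guarantees only the SIGN of the result, not the value 1. */
        if (strcmp(a, b) == 1)
            printf("a comes after b (this branch may never be taken on some platforms)\n");

        /* Portable: compare the result against 0. */
        if (strcmp(a, b) > 0)
            printf("a comes after b\n");
        else if (strcmp(a, b) < 0)
            printf("a comes before b\n");
        else
            printf("a equals b\n");

        return 0;
    }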
4. Segmentation Fault
This fault occurs when the pointer points to the unallocated memory space and reads it or writes on it. The subject of the experiment is shown in Figure 23 as an example.
After uploading the sample program in response to this question in the experiment, the test result is shown in Figure 24.
Figure 24 shows that all tests failed. The feedback report is shown in Figure 25.
According to the feedback report, there was a segmentation fault in variable r in line 111 of the code. Figure 26 below shows the relevant parts of the code.
As shown in label (b) of Figure 26, the pch pointer array in line 261 of the code pointed to the ch two-dimensional array, and the getpass function was called with pch as a parameter in line 270. In label (c) of Figure 26, a segmentation fault occurred in line 111, indicating that this line accessed unallocated space. According to the previous information, pointer array r pointed to an array of size 10*10, and checking the control and data flow of the code showed that line 110 affected line 111: the conditional expression 0 ≥ x − 1 in line 110 was wrong and caused r to be indexed with x < 0. Line 110 of the code was modified to if (9 ≥ y + 1 && 0 ≤ x − 1), the sample program was re-uploaded, and, as shown in Figure 27, it passed all tests.
5. Irrationally long execution time of the program
This means that the program does not end within a defined period. Among at least three situations where the program does not end, the first one is that the program has an infinite loop, which means that the program cannot exit during the loop; the second is that the program stays in the state of waiting for an input; the last situation is that the input format is incorrectly converted. The above conditions mean that the user has not yet understood the question requirements in detail. The subject of the experiment is shown in Figure 28 as an example.
The test result after uploading the sample program in response to this question in the experiment is shown in Figure 29.
Test 96, shown in Figure 29, failed. The feedback report is shown in Figure 30.
The feedback for the test shown in Figure 30 indicated that the program was wrong in the loop starting at line 25 and showed the last executed program code, as in the example in Figure 31.
Comparing the sample program with the error feedback report shows that the program continuously executed lines 25 to 39, which contain many conditional expressions. According to these conditional expressions and the requirements of the test question in Figure 28, the final requirement of the question in Figure 28 was not implemented in the sample program. Line 32 handled the cases in which (1) the total number of points was less than the player's or (2) the total number of points was no more than 8 points, but the essential part of the test question was not processed. Our experiment implemented this requirement as shown in Figure 32.
This experiment defined an isEnd variable to meet the needs of the problem and initialized it to false. In line 33 of the code in Figure 32, an if statement was added to determine whether player B wanted another card; if isEnd was false, player B could ask for another card, and if it was true, line 38 of the code was executed, indicating that no further cards would be requested. The modified sample program was re-uploaded and passed all tests, as shown in Figure 33.

6. Experiment

6.1. System Information

The system in this study used GDB and Valgrind to assist in analyzing the errors generated by the programs, and further analyses and feedback were conducted on the C programs written by the students. The students in the experimental group used the homework uploading system together with the improvement system (the system developed in this study) to obtain debugging feedback, while the students in the control group used only the homework uploading system for general assignment uploading. Students in the experimental group could see the feedback given by the improvement system inside the homework uploading system.

6.2. Results and Comparison of Statistical Analyses

In the system interface, we recorded a log each time code was executed and the result of each unit test run by a participant. Students were randomly assigned to the experimental group and the control group. The results obtained after the students uploaded their assignments were used for the final analyses and statistics; the possible results were passing the test, failing the test, and a minus sign (indicating no upload), as shown in Figure 34.
Students who did not submit an answer before the homework deadline, or whose submissions were removed by the system because of plagiarism, were excluded from the statistical analysis of this experiment because they did not meet the inclusion conditions.
This study used Excel to perform the data-related statistics and analyses. Each week's homework introduced a C-language keyword and its application, with teaching and practice carried out stepwise, so the difficulty increased gradually. Figure 35 shows a statistical analysis of the number of questions answered by students in a single week. Right and wrong counts were used to measure the correctness of the answers given by the students who attempted each question during the week. Weights were applied so that the statistics lean more heavily on weeks in which a normal number of students finished the assignments:
Pass rate A = R / (R + W),   (1)
where A is the accuracy (pass rate), R is the number of passed submissions (right answers) for each question, and W is the number of failed submissions (wrong answers) for each question.
Pass rate growth = (pass rate of the experimental group (%) − pass rate of the control group (%)) / 100%.   (2)
By counting the sum of the number of students passing or failing the tests each week, excluding the students who plagiarized their work and those who did not submit the assignments, the correct-answer rate of the test was calculated on a weekly basis as shown in Figure 35.
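As an arithmetic illustration of Equations (1) and (2) (a minimal sketch using invented weekly counts, not the study's data), the following C program computes each group's pass rate for one question and the resulting pass-rate growth:

    #include <stdio.h>

    /* Equation (1): pass rate A = R / (R + W) for one question. */
    static double pass_rate(int right, int wrong)
    {
        int total = right + wrong;
        return total == 0 ? 0.0 : (double)right / total;
    }

    int main(void)
    {
        /* Hypothetical weekly counts (passes, fails) for one question in each group. */
        int exp_pass = 42, exp_fail = 18;   /* experimental group */
        int ctl_pass = 35, ctl_fail = 25;   /* control group      */

        double exp_rate = pass_rate(exp_pass, exp_fail) * 100.0;   /* in percent */
        double ctl_rate = pass_rate(ctl_pass, ctl_fail) * 100.0;

        /* Equation (2): pass-rate growth of the experimental group over the control group. */
        double growth = (exp_rate - ctl_rate) / 100.0;

        printf("experimental: %.1f%%  control: %.1f%%  growth: %.2f\n",
               exp_rate, ctl_rate, growth);
        return 0;
    }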
Through the feedback error messages generated by the system after an assignment was uploaded, we evaluated whether students could effectively understand the code correction prompts, make corrections or deletions to fix the errors, and then re-upload and submit the assignment. The average total pass rate of each week's assignments was calculated from the total numbers of tests passed and failed for each question during the week and the weight of each question. Fault detection recorded the number of classifiable errors measured in the code of the students in the experimental group; we then observed the relationship between the pass rate and the fault rate of each week's assignments (fault detection data were collected for the experimental group only), as shown in Figure 36.
This study also recorded the total pass rate, the total number of questions, the weekly pass rate, the average number of students who answered each question, and other related data to facilitate the grasp of various variables in the statistics and reduce the deviation value after estimations, as shown in Figure 37.
As shown in Figure 38, the pass rates (including weights) of each week of the experimental group and the control group were used to compare and prepare a line chart to understand the difference in the pass rates of the two groups when progressing from simple to more complex course contents. The learning effectiveness of the students in the experimental group was found to be stable, and the pass rate was better than that of students in the control group most weeks.
As shown in Figure 39, the total pass rates (including weights) of the experimental group and the control group for each week were used to compare and record the changes in long-term cumulative question pass rates. We did not compare the difficulty of each week’s questions but analyzed them statistically based on the cumulative method of each week’s questions to understand the impact of the students’ weekly answers on the overall pass rate. Figure 39 shows the difference in the learning effectiveness of the two groups of students each week.
As shown in Figure 40, the number of fault feedbacks obtained from the experimental group data and the pass-rate growth were used to calculate the ratio of the feedback given to the experimental group relative to the control group, and a scatter chart was prepared to reveal the relationship between the two; the trend shows a positive correlation. According to Figure 40, the number of students in the experimental group who triggered fault detections and received feedback during a week was directly proportional to the growth of that week's pass rate of the experimental group over the control group, indicating that the number of faults detected and the feedback provided to students affected the students' assignment performance, and that this impact was positive, i.e., learning efficiency improved (see Equation (2)).

6.3. Comparison with Other Studies

Table 1 presents a comparison of the experimental results of this study with those obtained by previous, related studies. The program feedback system proposed in this study covers all five compared feedback features and thus goes beyond the previously proposed methods: in addition to basic pass/fail results, code coverage, and real-time feedback prompts, it adds program analysis and error localization. Before analysis, the system checks each student's program: the declarative statements are recorded in a file, and the specific test data that cause the error are entered again and tested. The program is first tested with the originally set test data; programs that fail to compile and programs that fail the sample tests provided to the students are excluded. After the coverage of the tests and code has been obtained, the error is located, and GDB and Valgrind are used to perform the dynamic analyses. Once the error category is detected, it is labeled and stored in the database, and simultaneously the corresponding text message is sent back to the homework submission system and displayed in the student's error inspection report interface.

6.4. Questionnaire Survey Statistics

According to Figure 41, half of the students who filled out the questionnaire received non-output fault feedback. Those who did not receive such feedback could only trigger the system feedback when their program did not meet the input conditions, which led to an extended execution time. According to the satisfaction statistics, the overall satisfaction of the students who completed the questionnaire with the improvement in the system's homework correction speed was 1.03, which indicates partial satisfaction.

7. Conclusions

This study proposes the use of the GDB and Valgrind tools to implement dynamic slicing by integrating them with spectral error localization technology. The similarity coefficient of the error localization analysis of the program was used to improve the performance of dynamic slicing, and the data flow and control flow were analyzed after slicing to obtain the results. Then, a dynamic dependency graph of the analyzed results was prepared. The system then analyzed whether there were common errors in the program according to the dynamic dependency graph and sorted the feedback information based on the analysis results to provide students with the direction to follow for modifying their program to correct the errors.
Most of the automatic correction systems on the market usually only provide simple information such as test failures, test passes, and compilation errors as feedback for programs uploaded by students. This simple information is often not very helpful for students attempting to successfully debug their programs. The sample questions mentioned in this study showed the correctness and incorrectness of the uploaded programs, located the specific error condition, and provided corresponding feedback. In addition, the study utilized group experiments to give students appropriate feedback; moreover, we used various data, statistical analyses, and a questionnaire specially designed for students to understand the difference in the progress of two groups of students as well as the effectiveness and practicality of the system.
Finally, here we provide evidence that implementing the problem solutions proposed in this study can be effective in helping students who often struggle with coding. We believe that tools that help to reduce this cognitive load can aid teaching assistants in helping students to learn more effectively.

Author Contributions

Conceptualization, J.-Y.K., P.-F.W. and H.-C.L.; methodology, J.-Y.K., P.-F.W. and H.-C.L.; software, J.-Y.K. and H.-C.L.; validation, J.-Y.K. and H.-C.L.; writing—original draft preparation, J.-Y.K. and H.-C.L.; writing—review and editing, J.-Y.K., H.-C.L. and Z.-G.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by [National Taipei University of Technology- Beijing Institute of Technology Joint Research Program] grant number [NTUT-BIT-108-01].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, I.-L. Research and Analysis of International Digital Skills Training Strategies. 2017. Available online: https://ws.ndc.gov.tw/001/administrator/10/relfile/0/11615/7d9e04a0-977e-4c0a-9248-3fb953c4572b.pdf (accessed on 10 June 2022).
  2. Patil, S.; Bhosale, M.; Kamble, R. Program Recommendation System for Students or Coder through View Histories and Feedback Systems. In Proceedings of the 2020 International Conference on Smart Innovations in Design, Environment, Management, Planning, and Computing (ICSIDEMPC), Aurangabad, India, 30–31 October 2020; pp. 185–187. [Google Scholar]
  3. Paiva, J.C.; Queirós, R.; Leal, J.P.; Swacha, J.; Miernik, F. Managing Gamified Programming Courses with the FGPE Platform. Information 2022, 13, 45. [Google Scholar] [CrossRef]
  4. Jung, E.; Lim, R.; Kim, D. A Schema-Based Instructional Design Model for Self-Paced Learning Environments. Educ. Sci. 2022, 12, 271. [Google Scholar] [CrossRef]
  5. Baniassad, E.; Zamprogno, L.; Hall, B.; Holmes, R. STOP THE (AUTOGRADER) INSANITY: Regression Penalties to Deter Autograder Overreliance. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), Virtual Event, 13–20 March 2021; pp. 1062–1068. [Google Scholar]
  6. Zamprogno, L.; Holmes, R.; Baniassad, E. Nudging student learning strategies using formative feedback in automatically graded assessments. In Proceedings of the 2020 ACM SIGPLAN Symposium on SPLASH-E (SPLASH-E 2020), Virtual, 20 November 2020; pp. 1–11. [Google Scholar]
  7. Marquès, J.M.; Calvet, L.; Arguedas, M.; Daradoumis, T.; Mor, E. Using a Notification, Recommendation and Monitoring System to Improve Interaction in an Automated Assessment Tool: An Analysis of Students’ Perceptions. Int. J. Hum.–Comput. Interact. 2022, 38, 351–370. [Google Scholar] [CrossRef]
  8. Malysheva, Y.; Kelleher, C. Assisting Teaching Assistants with Automatic Code Corrections. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22), New Orleans, LA, USA, 29 April 2022–5 May 2022; pp. 1–18. [Google Scholar]
  9. Sychev, O. Write a Line: Tests with Answer Templates and String Completion Hints for Self-Learning in a CS1 Course. In Proceedings of the 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), Pittsburgh, PA, USA, 22–24 May 2022; pp. 265–276. [Google Scholar] [CrossRef]
  10. Allen, F.E.; Cocke, J. A program data flow analysis procedure. Commun. ACM 1976, 19, 137. [Google Scholar] [CrossRef]
  11. Allen, F.E. Control flow analysis. ACM SIGPLAN Not. 1970, 5, 1–19. [Google Scholar] [CrossRef]
  12. Ferrante, J.; Ottenstein, K.J.; Warren, J.D. The program dependence graph and its use in optimization. ACM Trans. Program. Lang. Syst. 1987, 9, 319–349. [Google Scholar] [CrossRef]
  13. Naish, L.; Lee, H.J.; Ramamohanarao, K. A model for spectra-based software diagnosis. ACM Trans. Softw. Eng. Methodol. 2011, 20, 1–32. [Google Scholar] [CrossRef]
  14. Jaccard, P. Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bull. Soc. Vaud. Sci. 1901, 37, 547–579. [Google Scholar]
  15. Tang, C.M.; Chan, W.K.; Yu, Y.T. Theoretical, Weak and Strong Accuracy Graphs of Spectrum-based Fault Localization Formulas. In Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference, Turin, Italy, 4–8 July 2017; pp. 78–83. [Google Scholar]
  16. Troya, J.; Segura, S.; Parejo, J.A.; Ruiz-Cortés, A. Spectrum-Based Fault Localization in Model Transformations. ACM Trans. Softw. Eng. Methodol. 2018, 27, 13.1–13.50. [Google Scholar] [CrossRef]
  17. Kulczynski, S. Die Pflanzenassoziationen der Pieninen. Bulletin International de l’Academie Polonaise des Sciences et des Lettres: Classe des Sciences Mathematiques et Naturelles, B (Sciences Naturelles) 1927, 2, 57–203. [Google Scholar]
  18. Wong, W.E.; Debroy, V.; Gao, R.; Li, Y. The DStar method for effective software fault localization. IEEE Trans. Reliab. 2013, 63, 290–308. [Google Scholar] [CrossRef]
  19. Ochiai, A. Zoogeographic studies on the soleoid fishes found in Japan and its neighboring regions. Bull. Jpn. Soc. Sci. Fish. 1957, 22, 526–530. [Google Scholar] [CrossRef]
  20. Janssen, T.; Abreu, R.; van Gemund, A.J. Zoltar: A toolset for automatic fault localization. In Proceedings of the 2009 IEEE/ACM International Conference on Automated Software Engineering, Auckland, New Zealand, 16–20 November 2009; pp. 662–664. [Google Scholar]
  21. Pearson, S.; Campos, J.; Just, R.; Fraser, G.; Abreu, R.; Ernst, M.D.; Pang, D.; Keller, B. Evaluating and Improving Fault Localization. In Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE), Buenos Aires, Argentina, 20–28 May 2017; pp. 609–620. [Google Scholar]
  22. Xie, X.; Chen, T.; Kuo, F.-C.; Xu, B. A theoretical analysis of the risk evaluation formulas for spectrum-based fault localization. ACM Trans. Softw. Eng. Methodol. 2013, 22, 1–40. [Google Scholar] [CrossRef]
  23. Xie, X.Y.; Kuo, F.C.; Chen, T.Y.; Yoo, S.; Harman, M. Provably optimal and human-competitive results in subset for spectrum based fault localization. In Proceedings of the International Symposium on Search Based Software Engineering, SSBSE 2013, St. Petersburg, Russia, 24–26 August 2013; Volume 8084. [Google Scholar]
  24. Weiser, M. Programmers use slices when debugging. Commun. ACM 1982, 25, 446–452. [Google Scholar] [CrossRef]
  25. Weiser, M. Program slicing. IEEE Trans. Softw. Eng. 1984, 4, 352–357. [Google Scholar] [CrossRef]
  26. Negi, G.; Elias, E.; Kohli, R.; Bibhu, V. Reliability analysis of test cases for program slicing. In Proceedings of the 2016 International Conference on Innovation and Challenges in Cyber Security (ICICCS-INBUSH), Greater Noida, India, 3–5 February 2016; pp. 36–40. [Google Scholar]
  27. Agrawal, H.; Horgan, J.R. Dynamic program slicing. ACM SIGPLAN Not. 1990, 25, 246–256. [Google Scholar] [CrossRef]
  28. Al-Fedaghi, S. Computer Program Decomposition and Dynamic/Behavioral Modeling. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2020, 20, 152–163. [Google Scholar]
  29. Henard, C.; Papadakis, M.; Harman, M.; Jia, Y.; Traon, Y.L. Comparing White-Box and Black-Box Test Prioritization. In Proceedings of the 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), Austin, TX, USA, 14–22 May 2016; pp. 523–534. [Google Scholar]
  30. Borodin, A.V.; Zavyalova, Y.V. An Ontology-Based Semantic Design of the Survey Questionnaires. In Proceedings of the 2016 19th Conference of Open Innovations Association (FRUCT), Jyvaskyla, Finland, 7–11 November 2016; pp. 10–15. [Google Scholar]
Figure 1. Flowchart. From left to right: if-then-else and do-while controls.
Figure 2. Static slicing sample code.
Figure 3. Result of static slicing of X, Z, and TOTAL.
Figure 4. Dynamic slicing sample code.
Figure 5. Monitoring model architecture.
Figure 6. Feedback of array index out of bound.
Figure 7. System flow chart.
Figure 8. Test question (1).
Figure 9. Test result (1).
Figure 10. Feedback report (1).
Figure 11. Example program (1).
Figure 12. Test result after modifying program (1).
Figure 13. Test question (2).
Figure 14. Test result (2).
Figure 15. Feedback report (2).
Figure 16. Example program (2), (a) Source code macro definition, (b) when the current value in the array indexed starts at 61, (c) The declaration of an array with MAX as the size, (d) When the current value in the array indexed starts at 61.
Figure 17. Example program (3), (a) Line 82 variable declares array size is 61*2, (b) Line 100 declares the arrays a and b and passes them as parameters on lines 106, 111, 117, 123.
Figure 18. Test result after modifying program (2).
Figure 19. Test question (3).
Figure 20. Test result (3).
Figure 21. Example program (4).
Figure 22. Test result after modifying program (3).
Figure 23. Test question (4).
Figure 24. Test result (4).
Figure 25. Feedback report (4).
Figure 26. Example program (5), (a) define a 10*10 Gobang disk, (b) Describe the pch pointer array points to the ch two-dimensional array, the getpass function is called with pch as the parameter, (c) describe ‘Segmentation Fault’ indicates that the line points to an unconfigured space.
Figure 27. Test result after modifying program (4).
Figure 28. Test question (5).
Figure 29. Test result (5).
Figure 30. Feedback report (5).
Figure 31. Example program (6).
Figure 32. Modified sample program.
Figure 33. Test result after modifying program (5).
Figure 34. Results of the students’ programming assignments.
Figure 35. Statistical analyses of the number of questions answered by students in a single week.
Figure 36. Correct-answer rate and average pass rate for assignments each week.
Figure 37. Total pass rate and various data analyzed in the above-mentioned assessments.
Figure 38. Line chart showing the weekly pass rate of the students.
Figure 39. Line chart presenting the weekly pass rates of the students.
Figure 40. Scatter chart to detect faults and pass-rate growth.
Figure 41. Student questionnaire survey results.
Table 1. Comparison of the findings of this study with those observed in other studies.

Code Error Feedback Method | This Paper | Lucas et al. [6] | Joan et al. [7]
Test failure vs. test pass | Yes | Yes | Yes
Code coverage | Yes | Yes | Yes
Feedback analysis interface | Yes | Yes | No
Program analysis and program error location | Yes | Yes | No
Feedback message after debug feedback function | Yes | No | No
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
