Article

A Cuboid Registers Topic, Activity and Competency Data to Exude Feedforward and Continuous Assessment of Competencies

by Francisco Mínguez-Aroca 1, Santiago Moll-López 2, Nuria Llobregat-Gómez 3 and Luis M. Sánchez-Ruiz 2,*
1 Bionline, AI Company, Calle Catedrático Agustín Escardino 9, 46980 Valencia, Spain
2 Departamento de Matemática Aplicada, Universitat Politècnica de València, 46022 Valencia, Spain
3 Departamento de Lingüística Aplicada, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(3), 415; https://doi.org/10.3390/math10030415
Submission received: 4 January 2022 / Revised: 19 January 2022 / Accepted: 25 January 2022 / Published: 28 January 2022
(This article belongs to the Special Issue Advanced Methods in Computational Mathematical Physics)

Abstract: Evaluating the competencies achieved by students within a subject and its different topics is a multivariable and complex task whose outcome should provide actual information on their evolution. A relevant feature when a continuous assessment (CA) rules this evaluation is to track the learning process so that pertinent feedforward may be harnessed to proactively promote improvement when required. As this process is performed via a number of activities, such as lectures, problem solving, and lab practice, different competencies are developed, depending on the recurrence and type of conducted activity. Measuring and registering their achievement is the leitmotif of competency-based assessment. In this paper, we assemble topic, activity and competency data into a 3D matrix array to form what we call a TAC cuboid. This cuboid showcases a detailed account of each student's evolution, aiding instructors and students, respectively, to design and follow an individualized curricular strategy within a continuous and aligned assessment methodology, which makes it easier for each student to adequately modify his/her level of development of each competency. In addition, we compare the TAC cuboids' usage in grading a mathematics subject versus a traditional CA method, as well as when a dynamical continuous assessment approach is considered to measure the achievement of mathematical competencies.

1. Introduction

Assessment is a very important part of the learning process, not only because it can measure the assimilation of some educational aspect, skill, competency, or knowledge, but also because it can be used as a compass for decision-making and fixing pathways so that learning becomes a complete and true endeavour. In the literature, assessment definitions highlight some of the aforementioned objectives. For example, in [1], test development is defined as the process of measuring some feature of an individual's knowledge, skills, abilities, interests, attitudes, or other characteristics. In [2], assessment is defined as a systematic collection of information about a student's learning process, using the time, knowledge, expertise and resources available, with the objective of informing decisions that affect student learning. Similar definitions can be found in [3,4].
Both the measurement and decision-making aspects are closely linked to the objectives of instruction [4,5], since these provide the cognitive development steps that must be achieved. In this line, Baker and Zuvela [6] used feedforward to give first-year students some exposure to their real capabilities prior to actual assessment, through online and distributed learning environments, so that students could engage meaningfully with their learning process, with assessment and enhanced student performance as the goal. Cathcart et al. [7] went further and, looking for student-centred agile teaching and learning methodologies, engaged in a framework involving feedforward closely related to concurrent evaluation with feedback. Thus, assessment was continuously used in an agile learning environment, demonstrating that evaluation benefited not just future students but also current students. To obtain meaningful feedforward for students, an important characteristic of the continuous assessment (CA) process is that it must be systematic. An approach to CA of competencies under a blended-learning methodology, where the strengths of both strategies are combined, was developed in [8] by considering a dynamical component to encourage students' improvement of previously assessed competencies; it was coined Dynamical Continuous Discrete Assessment (DCDA).
Despite all the above advances in assessment techniques, in the end, students usually receive just a single number as their final mark reflecting their level of competency. Bearing in mind the different activities that may have been developed, such as project-based learning, lab sessions, collaborative learning, and so on, this single number may look like an averaged grain of sand that does not reflect the whole learning pyramid that has been built.
Mastering (acquisition and training) a set of competencies is linked to properly conducting the activities related to some topic(s). The grade for a given activity or topic may indicate that some competencies are lacking, but the student is not usually aware of which competencies should have been developed or improved. Concurrently, data on the progress of competency achievement are not normally communicated in the academic world.
Thus, the research question addressed in this paper is how to register all the relevant data of the learning process, i.e., all the activities conducted, linked to the learning objectives and the competencies to which they are related, so that a meaningful feedforward may be provided at any moment. This paper also aims to achieve thorough monitoring to provide a clear picture of the activities and level of competencies achieved on the different topics of a subject during the learning process.
This issue recalls the rationale for the Bologna Framework to provide a mechanism to relate national frameworks to each other so as to enable international transparency. This mechanism is the Diploma Supplement, which ensures that qualifications can be easily read and compared across borders [9,10]. The Diploma Supplement provides much more information than just the name of a degree, as it provides a wide range of information, including personal achievements, course credits, grades, and what a student has learned. It contains information on the type and level of qualification awarded, the institution that issued the qualification, the content of the courses, results gained and even details of the national education system.
The procedure and computational method developed to tackle the proposed research question generates a TAC (Topics addressed through Activities and Competencies achieved) cuboid that provides a device exuding exhaustive information on the activities performed and competencies achieved related to the topics covered in a subject, similar to how the Diploma Supplement depicts, in a transparent way, a degree under the Bologna Framework.
The method has been tested in the assessment of mathematical competencies but it can be used to register data and monitor the learning process of any subject where different topics, activities and competencies are on stage.
In Section 2, we highlight the relevance of feedforward within a formative methodology and the three elements of TAC. Section 3 describes how the TAC cuboid emerges in a natural way to register TAC data, and in Section 4, we show how the cuboid has been implemented to assess topics, activities and competencies in a Mathematics subject run in a first-year engineering degree. Section 5, Section 6 and Section 7 present the results and discussion, conclusions, and possible future work, respectively.

2. Background

2.1. Feedback vs. Feedforward

Assessment is defined as formative when decisions based on it can positively influence the learning process [4,8] by providing students with some feedforward as input for their improvement and growth.
Assessment is summative if it occurs once the instruction has ended and the students perform some activity to show their competencies at some relevant moment of evaluation. In that case, students may receive only some feedback as a summary of their performance, which does not allow them to take advantage of this information as part of their learning process. Kirschner et al. [11] highlight that summative assessment ignores the structures that constitute human cognitive architecture, despite the abundant evidence from empirical studies indicating that minimally guided instruction is less efficient and less effective than instructional approaches that emphasize guidance of the student learning process.
Although some students may consider the summative assessment to be important, since it reflects the degree of fulfillment of the objectives of the instruction and it is a mark earned that remains in their curriculum, in our opinion, only formative assessment allows them to develop skills, knowledge and competencies during the learning process. That is, formative assessment gives room to provide students with some feedforward that allows them to gain control of the evolution of their competencies.
Indeed, decision-making based on the results of an assessment usually comes on the part of the instructor and the institution, though in a curriculum following the principles of andragogy [12], the students must be an active element in their education. Recent results have pursued this approach, in which feedforward prevails over feedback, with different environments [13,14,15], in some cases making it even more effective when feedforward comes out through a dialogue or verbal communication.

2.2. Topics, Activities and Competencies (TAC)

According to Tyler [16] and Nilson [5], possibly the most important feature of a curriculum is the detailed description of the learning objectives. This is most certainly the case in all subjects where the assessment of learning outcomes on the topics covered is fundamental both at the student level and the curriculum level [17,18]. This assessment seems to be significantly useful when it is part of a CA methodology [19,20].
When STEM subjects are on stage, there are other relevant features in addition to the learning objectives, e.g., mathematical language skills, understanding of that language, intrinsic competencies, etc. The Danish KOM project [21] considers several relevant points concerning the mathematical competencies to be achieved and presents a guide seeking coherence and evolution in mathematics teaching, paying attention to how to evaluate the mathematical competencies. That study outlined eight mathematical competencies and how they are displayed in the different activities that are performed while covering the different topics that compose a syllabus.
Activities performed to achieve competencies on some given topic so that the learning objectives are reached are common beyond STEM subjects and occur frequently in any learning process. Hence, even though the next section exemplifies mathematical competencies, the message conveyed may be exported to any other discipline where competencies and activities are properly detailed, both in terms of their development and assessment.

2.3. Assessment of Activities Based on Competencies

The SEFI Mathematics Working Group [22], concerned with engineering education and inspired by the KOM project, clustered mathematical competencies into two groups:
  • Competencies related to asking and answering questions:
    1. Thinking mathematically. Involves recognising mathematical concepts, understanding their scope and limitations, and extending their scope by abstraction and generalisation of results;
    2. Reasoning mathematically. Includes understanding the notion of proof, recognising the ongoing ideas in proofs and the ability to distinguish between different kinds of mathematical statements;
    3. Posing and solving mathematical problems. Comprises identifying and specifying mathematical problems and the ability to solve them;
    4. Modelling mathematically. Deals with analysing and working with existing models, and the ability to conduct active modelling, too;
  • Competencies related to managing mathematical language and tools:
    5. Representing mathematical entities. Includes understanding and using mathematical representations;
    6. Handling mathematical symbols and formalism. Includes the proper use and manipulation of symbolic statements and expressions;
    7. Communicating in, with and about mathematics. Involves understanding mathematical statements made by others and being able to express oneself mathematically to different audiences;
    8. Using aids and tools. Includes skills in using the digital aids and tools that are available, as well as knowing their capabilities and limitations.
These competencies overlap, but since they emphasise different aspects, they may be considered separately. Identifying the relevant competencies is not a closed task. For example, García et al. [23] underlined the following competencies: (a) self-learning; (b) critical thinking; (c) ICT usage; (d) problem solving; (e) technical communication; and (f) team work. Their (a) refers to engaging in independent life-long learning; (b) is related to analysing, synthesising and applying relevant information; (c) regards using modern digital technologies; (d) and (e) are related to competencies 3 and 7 above; and (f) is related to the ability to work efficiently in a multidisciplinary team.
In any case, if these or some other competencies are intrinsically relevant in the learning process, there should be a constructive alignment between the assessment of mathematical competencies and the grading of students at every moment of evaluation (MoE), where a MoE is any assessed activity, as students take goals and settings more seriously when they are part of their assessment [24,25,26].
In order to register students’ evolution and the level of achievement of their mathematical competencies, we will evaluate how each activity performance contributes to the development of the competencies related to that given activity by using a binary rubric assessment.

2.4. Rubrics Assessing Competencies

Traditionally, each student activity has been evaluated under a quantitative grade or a letter. Instead of providing a single mark, this paper aims to evaluate the impact of each activity on the overall competencies. Therefore, it will be necessary to know the relation between each activity and the competencies involved, that is, to know what part of each competency is embedded or comprised in the performance of an activity. The constructive alignment concept will be followed, which means that students build up meaning via conducting relevant learning activities [24].
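As an illustration of this activity-competency relation, the sketch below encodes each activity type as a distribution of weights over the competencies involved; the activity names and values are hypothetical, not taken from this paper, and each row sums to one in line with the 1-weighted distribution described later in Section 4.

```python
# Hypothetical activity-competency weight table (illustrative values only):
# each row distributes an activity type's impact over the SEFI competencies C1..C8.
activity_impacts = {
    "theory_question": [0.30, 0.30, 0.20, 0.20, 0.00, 0.00, 0.00, 0.00],
    "problem_solving": [0.00, 0.10, 0.30, 0.10, 0.20, 0.20, 0.00, 0.10],
    "lab_session":     [0.00, 0.00, 0.20, 0.20, 0.10, 0.10, 0.10, 0.30],
}

for activity, weights in activity_impacts.items():
    # Sanity check: every activity's impact distribution is 1-weighted.
    assert abs(sum(weights) - 1.0) < 1e-9, f"{activity} is not 1-weighted"
```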
Binary evaluation lies within the framework of rubric-based assessment. A rubric is a list of criteria for students' work that describes different levels of performance quality [27]. Rubrics provide a directed structure for observing the quality of each student performance and, moreover, may help teachers and students judge the quality and the progression of student performance.
Rubrics are categorized as analytic or holistic depending on whether all criteria involved are evaluated separately or globally. The former are suited to formative assessment, as students may know which features of their work need attention; they also suit grading that is going to be used for decision-making in the immediate future. The latter fit situations in which students do not see the grading results and the information is going to be used only for grading.
Rubrics can be used for different purposes in assessment procedures, including, inter alia, promoting learning in the classroom (instructional rubrics) [28] and fixing a set of clear expectations or criteria on what is valued in a topic or activity [29] (analytic-trait rubrics [30,31] or skill-focused rubrics [32]). According to [33], rubrics can be used to teach and evaluate within a formative student-centered assessment, as they help develop an objective judgment about the quality of the performance. Merely explaining a rubric appears to propel student activity [28], and Hafner and Hafner [34] provide evidence that rubrics are an effective tool for peer grading by students in the realm of a university biology classroom. Schafer et al. [35] state that higher scores are the consequence of a clear operational performance definition. In short, rubrics have helped teachers and students establish a basis of common understanding for rating the performance and behavior of students during activities [36,37]. An assessment evaluation rubric with a wide scope may be found in [18], which was introduced to support, carry out and promote the systematic evaluation of learning outcomes.
We cannot dismiss that there are critical points of view on the use of rubrics in the educational system in general, as in [38], or on their misuse when they are poorly designed or implemented [39]. Sharing their explicit criteria with students has also been questioned, as this can lead to instrumental learning [40,41]. However, there is accumulated empirical support for their mainly positive effects [42,43,44], and we will focus on rubrics that use generic traits based on analytic performance criteria to evaluate the achievement or development of mathematical competencies.

3. Materials and Methods

The data regarding the topics covered, activities performed and competencies analysed come from all the activities executed by 118 students following the compulsory annual subject Mathematics I in the curriculum of the BEng Degree in Aerospace Engineering at the Technical University of Valencia (UPV) during the 2020/21 academic year [45]. These students had been evaluated following a DCDA methodology [8] and had fulfilled a number of different activities during the course:
  • Assignments controlled by tests prior to relevant exams;
  • Weekly lab sessions following a flipped methodology and closely linked to the topics and learning objectives of the subject;
  • Individual lab exams, Lex1 and Lex2, at the end of each semester, assessing their use of the technological competencies achieved during the weekly sessions linked to the topics and learning objectives covered;
  • Four relevant exams described in Section 4.
In all these activities, the competencies worked out and the topics covered were clearly identified.

4. Registering TAC Data

4.1. The TAC Cuboid Hatchling

The KOM project [21], Chapter 9, declares that the core idea of a competency is an insight-based readiness to act, where "the action" can be physical, behavioural (including oral), or mental. Therefore, a valid and comprehensive assessment of the mathematical competencies of a person must start by identifying the presence and extent of these features in relation to the mathematical activities in which the respective person has been or is being involved. A mathematical activity can be, for example, solving a pure or applied mathematical problem, understanding or constructing a concrete mathematical model, reading a mathematical text with a view to understanding or acting on it, proving a mathematical theorem, studying the interrelations of a theory, writing a mathematical text for others to read, or giving a presentation. Different activities have different impacts on each of the eight competencies [25] recalled in Section 2.3. For example, a theoretical question provides a stronger input to the first four competencies, while a problem-solving activity is more linked to other competencies, such as representing mathematical entities, handling mathematical symbols and, in some cases, using calculus tools. The first task then consists of estimating the usage of each competency while some activity or different types of activities are conducted and distributing the impact weights on a binary basis (1-weighted). With this aim, a collection of different types of activities is needed, as exemplified in the eighth chapter of the KOM project, where we find a descriptive matrix gathering subjects and competencies.
Different types of activities present different impact spectra on the competencies considered. The SEFI MWG developed a study of the relationship between kinds of activities and competencies [22]. Hence the need to create a picture of the influence of the different activities on achieving the desired competencies and, consequently, to design them with a competency-oriented perspective in the curricula. Clearly, this quantification should be done by teacher communities in order to obtain a normalized assessment procedure. It is then possible to construct a master table of the impacts of activities on the competencies related to a given topic. Therefore, each class activity i (lectures, projects, etc.) has some impact $a_{ij}$ on each of the competencies $C_j$ considered. Moreover, each type of activity has some impact or weight $w_i$ on the assessed topic.
Table 1 may be considered a master table where different activities of the same type might attribute different impacts on different competencies. This master table works as a reference table, and some minor tuning might be carried out in specific activities. In this way, a type of activity may be split into several subclass activities which have to be 1-weighted in order to estimate a grade for each type of activity or else consider each subclass activity as a type of activity.
Since a given topic is covered by different types of activities, a quantitative grade G of topic T can be obtained by means of
$$G(T) = \frac{\sum_i w_i \, G(A_i)}{\sum_i w_i}$$
where $w_i$ fixes the relevance of each activity in topic T. Finally, a third dimension in the model is represented by the list of topics to be studied in a course, each of them given a specific relevance represented by a weight. The final grade $FG$ may be calculated by means of the expression
$$FG = \sum_i p_i \, G(T_i)$$
where $p_i$ represents the relevance of topic $T_i$ in the subject, with $\sum_i p_i = 100$.
Table 1 can be considered as a 2D matrix that measures the impacts of activities on competencies related to a given topic (Topic 1), as shown in Figure 1, in which the elements $a_{ij}$ have been changed to $a_{ijk}$, where $k = 1$ indicates the topic being evaluated.
When one considers adding the impacts of a different topic (Topic 2), a 3D matrix arises (Figure 2) with two levels, one per topic. This 3D matrix is called a cuboid, and it can be upgraded with as many levels as topics are evaluated.
In brief, a TAC cuboid is a 3D matrix exuding the connection between activities, topics and competencies, quantified by assigning appropriate weights to each topic-activity-competency knot, as depicted in Figure 3. Namely, element $a_{ijk}$ of the TAC cuboid represents the weight corresponding to the impact of activity $A_i$ on competency $C_j$ within topic $T_k$ of a curricular list of topics.
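To make the data structure concrete, the following minimal sketch (our own illustration, with hypothetical sizes and weights rather than the actual Table 1 values) stores a master TAC cuboid as a 3D array and computes the topic and final grades defined above.

```python
import numpy as np

# Master TAC cuboid as a 3D array: axis 0 indexes activity types A_i,
# axis 1 competencies C_j, axis 2 topics T_k, so master_tac[i, j, k] = a_ijk.
n_activities, n_competencies, n_topics = 3, 8, 2
master_tac = np.zeros((n_activities, n_competencies, n_topics))
master_tac[0, :, 0] = [0.30, 0.30, 0.20, 0.20, 0, 0, 0, 0]          # e.g., theory question, T1
master_tac[1, :, 0] = [0.00, 0.10, 0.30, 0.10, 0.20, 0.20, 0, 0.10]  # e.g., problem solving, T1

def topic_grade(activity_grades, w):
    """G(T) = sum_i w_i G(A_i) / sum_i w_i: weighted grade of one topic."""
    w = np.asarray(w, dtype=float)
    return float(np.dot(w, activity_grades) / w.sum())

def final_grade(topic_grades, p):
    """FG = sum_i p_i G(T_i), with the topic relevances p_i summing to 100."""
    assert abs(sum(p) - 100) < 1e-9
    return float(np.dot(p, topic_grades))

print(topic_grade([0.9, 0.7], w=[2, 3]))       # hypothetical two-activity topic
print(final_grade([0.8, 0.6], p=[60, 40]))     # hypothetical two-topic course
```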

4.2. Personal TAC Cuboid

A personal TAC cuboid (pTAC) is an all-zero cuboid assigned to each student at the beginning of a course, where each of its elements changes its value as it registers information on:
  • One of the different types of activities performed;
  • One of the different topics covered that constitute the learning objectives of the subject;
  • One of the competencies considered, or a combination of them if their achievement is assessed indistinguishably.
When some activity on a given topic is performed, it is assessed by means of a binary evaluation from the perspective of each competency involved, based on a true/false assessment. This binary assessment generates a computed vector that is adequately placed in the pTAC. To do so, each non-zero binary component of the assessed competencies is multiplied by the weight of the subclass activity assessed in that MoE and placed within the cuboid.
Whenever a subclass activity is performed several times, the number of occurrences within that subclass has to be saved, and the new row placed within the cuboid must be duly averaged with the values of the respective existing row by using an activity frequency factor. Considering all the activities performed by each student in a course and pushing them into the corresponding pTAC with binary assessment, a tool to implement continuous assessment based on competencies is obtained.
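A sketch of this update step follows, under assumed conventions: the exact frequency factor is not fixed in the text, so a running average over occurrences is used here purely for illustration.

```python
import numpy as np

def push_assessment(ptac, counts, activity, topic, binary_eval, subclass_weight):
    """Push one binary assessment into a personal TAC cuboid.

    binary_eval: 0/1 vector, one component per competency considered.
    Each bit is multiplied by the subclass activity weight and accumulated
    at the (activity, competency, topic) knot as a running average.
    """
    binary_eval = np.asarray(binary_eval, dtype=float)
    counts[activity, topic] += 1                # occurrences of this subclass so far
    n = counts[activity, topic]
    ptac[activity, :, topic] += (subclass_weight * binary_eval
                                 - ptac[activity, :, topic]) / n
    return ptac

ptac = np.zeros((4, 4, 4))                      # 4 activity types, 4 competencies, 4 topics
counts = np.zeros((4, 4), dtype=int)
push_assessment(ptac, counts, activity=1, topic=0,
                binary_eval=[1, 1, 0, 1], subclass_weight=0.6)
```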
As the course progresses, the students should increase their overall competencies. Thus, an activity that is properly conducted should have a positive impact on the related results, which, in turn, might redeem past deficiencies or gaps, or lead to reconsidering previous assessments of the level of competency, as discussed in [8], where a dynamical component was included as a continuous assessment approach. For this purpose, a grade impact amplifier as a continuous assessment procedure ($GIA|_{CAP}$) is introduced to recognise the improvement in the level of competency or knowledge achieved throughout the learning process. It consists of a set of weights applied to the new activity rows placed within the pTAC. This may be monitored by using adequate $GIA|_{CAP}$ stages after relevant MoE in order to establish different controlling steps along the programme. These impact amplifiers are established only in the upward sense, as a tool to motivate students to develop a continuous effort to improve their overall competency in the subject.
The keystone is to design a well-balanced TAC cuboid that enables each activity performance to be computed. It must be carefully designed, considering the linkage between each activity, competency and topic, based on historical data and tuned in a timely fashion, if appropriate, according to feedback obtained during the last course.

4.3. Targeting with TAC Cuboids

As the course advances, each pTAC is filled with the binary assessment results of the different activities conducted. At any given moment, the pTAC provides information on the activities performed and their impact on the level of competency achievement related to the topics covered. The internal structure of the cuboid, far from being a simple scalar, works like a Computed Axial Tomography showing the status quo of the student's mastery of the topics and learning objectives covered up to that moment. This complex information provides data that iScholars [46], digital natives used to an increasingly digitalised society, may find more meaningful than single grades. As a handy source on the evolution and actual state of each student's learning process, this registering methodology works as a valid tool to address continuing mathematical deficiencies with advanced diagnostic testing [47].
On the other hand, the educational objectives can be placed as reference cuboids to establish some target levels, which can be called target TACs (tTAC). These tTACs may guide individualized pathways for students to reach the learning objectives. Targeting is a way to predict future results; it embraces studies that intend to obtain some class of predictors [48], as well as those based on conducting daily activities, such as quizzes or tests, to follow the continuous progression as an estimate of the final results [49]. Finally, by aggregating every pTAC, we may depict the development of competencies within the whole student collective and their topic mastery of the learning objectives.
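A minimal sketch of this targeting idea, with hypothetical cuboids: the elementwise gap between a tTAC and a pTAC flags the topic-activity-competency knots where a student is below the target level.

```python
import numpy as np

rng = np.random.default_rng(0)
ptac = rng.uniform(0, 1, size=(4, 4, 4))   # hypothetical personal cuboid
ttac = np.full((4, 4, 4), 0.5)             # hypothetical uniform target level

def competency_gaps(ptac, ttac):
    """Return the knots (activity, competency, topic) below target, and the gaps."""
    gap = ttac - ptac
    return np.argwhere(gap > 0), gap

below, gap = competency_gaps(ptac, ttac)
for i, j, k in below[:3]:                  # report a few flagged knots
    print(f"Activity {i+1}, competency M{j+1}, topic T{k+1}: "
          f"{gap[i, j, k]:.2f} below target")
```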

5. TAC Methodology in Action

5.1. Implementing the Computational Algorithm

A comparison between the evaluation method based on TAC cuboids and the continuous assessment run with Mathematics I students in Aerospace Engineering at UPV [45] has been carried out. In this degree, a formative dynamical continuous discrete assessment (DCDA) [8] has been applied for over 11 years and has proven to encourage students to improve previously assessed competencies when chains of topics have been recognised. This meant a shifted re-assessment of their proven mathematical competencies when these were used in subsequent activities on topics belonging to those chains.
TAC cuboids systematically register data from the learning process. We will study whether there is any significant deviation between the grading provided by both methods in terms of final grades. Independently, the TAC method carries a wealth of information on the whole learning process that awaits deeper harvesting.
Since we are dealing with Aerospace Engineering students, who traditionally show good mathematical skills and good levels of competency achievement, the authors decided to cluster the eight mathematical competencies considered in [22] into just four mathematical competencies characterised by the following facets:
  • M1. Capacity of reasoning. This involves the "thinking mathematically" and "reasoning mathematically" competencies, for which students should show this capability in other contexts;
  • M2. Ability to solve problems. This involves "posing and solving mathematical problems" and "modelling" of SEFI MWG. Here, attention is paid to the students' capabilities in calculation procedures and their accuracy in results. Posing and modelling complex problems is addressed in more advanced subjects;
  • M3. Capacity of using formal language and communication. This clusters the "representing mathematical entities", "handling mathematical symbols and formalism" and "communicating in, with and about mathematics" competencies of SEFI MWG. Competency in this cluster means an adequate use of symbolic and written language and representation of mathematical objects. Oral language should also be observed when appropriate. Communication in and about mathematics must be understandable and contain the expected lexis;
  • M4. Use of tools and aids. This involves the correct use of scientific computing algebraic systems, calculators, measuring instruments, and their implemented help tools.
The subject covers 4 topics:
  • Calculus I (CI): Devoted to functions of one real variable, with antiderivatives, definite and improper integration and their applications, and an introduction to ordinary differential equations and initial value problems;
  • Linear Algebra (LA): Devoted to linear algebra with matrix calculus, vector spaces and matrix diagonalisation topics;
  • Calculus II (CII): Devoted to differential calculus of two or more variables, multiple, line, surface integrals and their applications;
  • Series: Devoted to numerical, power and Fourier series.
Their specific learning outcomes are standard and unnecessary to recall here, although they are available to the interested researcher [45]. Having fixed the competencies of interest and the topics, we need to display the set of activities through which these competencies will be analysed. This set must be split into the different topics, and each of them must be accompanied by its corresponding MoE so that assessment and feedforward are feasible. They are as follows:
  • MoE TP1 assesses CI topics via a test with 12 questions and a written exam containing 6 questions;
  • MoE TP2 assesses LA topics via a written exam containing 7 questions;
  • MoE TP3 assesses CII Topics via a written exam of 7 questions;
  • MoE TP4 assesses Series topics via 4 questions, re-assesses CI topics via 2 questions, and contains another 2 questions concerning CII. When students have performed poorly in TP2, some extra time is given and 3 additional questions on LA are assessed;
  • MoE LP (Lab practice) is split into two different types of activities: 27 laboratory sessions (one per week), which follow flipped teaching (FT) and are called "Week", plus two individual exams linked to the lab practices, called "Lex1" and "Lex2";
  • MoE T consists of some flipped activities controlled via multiple-choice tests in a controlled environment, here called Asg1 and Asg2.
To follow the course development, we consider an activities backlog where each question of each MoE will be displayed as an independent activity so that the register of data performance is more comprehensive and meaningful. Whenever there are some questions with different values according to their extension or complexity, we will create some subclass of activities. With this structure, a master TAC is built to facilitate the learning process assessment as depicted in Table 2.
The backlog consists of the 72 activities mentioned above. It enables the analysis of the results of any student and topic. For instance, the backlog corresponding to topic TP2 of one student, whom we will call Student X, is shown in Table 3.
The meaning of the columns and acronyms used in Table 3 are as follows:
  • NA: A number distinguishes the different activities performed by the student;
  • ID: The student to whom the pTAC belongs;
  • MoE: An acronym to easily recognise the moment of evaluation. It may refer to a topic, learning objective or a cluster of these;
  • Act: Type of activities to be performed, such as written exams, Ex practices, etc.;
  • W: Weight of the activity subclass within the topic (included in the Master TAC SubWeight column);
  • SC (Subclass): An activity subtype identifier. Used when adequate to assign different weights or relevance to each group of activities;
  • $M_i$: Binary evaluation of each competency considered. Value 1 is assigned if the cluster criterion is satisfied; otherwise, 0 is assigned;
  • $w_i$: Product of $M_i$ times the weight assigned to the pair Subclass Activity-Competency in the Master TAC;
  • Gr(A): The sum of the values $w_i$ provides the activity grading.
The pTAC of any student gathers all registered data concerning all his/her activities performed. For instance, once the 72 activities have been assessed, the pTAC of Student X may be downloaded. Its information is shown in Table 4.
The acronyms used in the columns of the pTAC are the same or similar to the ones used in Table 3. The new ones have the following meanings:
  • #Act: Number of activities performed in the given subclass activity;
  • D = $W_A$: Sum of the weights of subclass activities within each activity type. It is obtained by adding the corresponding weight column in the backlog table,
    $$D_t = \sum_{k=1}^{N_t} w_k^S,$$
    with $N_t$ being the number of activities performed and $w_k^S$ the weight of the activity subclass ($D_t$ is used for averaging purposes when new binary assessment vectors are pushed into the pTAC);
  • $SM_i$: Sum of the corresponding $M_i$ for the group of activities performed in topic t,
    $$SM_i^t = \sum_{k=1}^{N_t} M_{ik}^t,$$
    where $M_{ik}^t$ is the binary component of the competency $M_i$ in topic t for activity k;
  • $wSM_i$: Sum of all the products of the binary component of $M_i$ times the weight of each subclass activity, i.e.,
    $$wSM_i^t = \sum_{k=1}^{N_t} w_k^S \cdot w_k^{M_i,S} \cdot M_{ik}^t,$$
    where $w_k^{M_i,S}$ is the weight of the activity subclass S within the competency $M_i$ and $M_{ik}^t$ is the binary component of the competency $M_i$ in topic t for activity k.
The grade $g(M_i^t)$ obtained in competency $M_i$ for topic t is given in the $g(M_i)$ column by
$$g(M_i^t) = \begin{cases} \dfrac{wSM_i^t}{D_t}, & D_t \neq 0 \\ 0, & D_t = 0 \end{cases}$$
The grade $g(A_t)$ of the activities on a given topic t is the average of the four clustered competency grades,
$$g(A_t) = \frac{1}{4} \sum_{i=1}^{4} g(M_i^t)$$
For each topic t, its grade comes from evaluating all the activities belonging to the different subclasses S in topic t,
$$g_t = \frac{\sum_{k=1}^{N_S} w_k^S \cdot g(A_{S_k})}{\sum_{k=1}^{N_S} w_k^S}$$
The final grade $FG$ is the average of the topic grades, taking into account their corresponding weights,
$$FG = \frac{\sum_k w_k^t \cdot g(t_k)}{\sum_k w_k^t}$$
where $g(t_k)$ is the grade of topic $t_k$ and $w_k^t$ is its weight in the course programme.
Table 4 includes a contributive final grade (CFG), which represents the grade earned throughout all the activities up to a given moment and is given by
$$CFG = \frac{\sum_{k=1}^{N_t} w_k^t \cdot g(t_k)}{\sum_{k=1}^{N_t} w_k^t},$$
where $N_t$ here denotes the number of topics assessed so far.
In addition to the above, some other information may be obtained from each pTAC, which also appears summarised in the above table. We may thus be interested in obtaining the following data:
  • #Positive M evaluations: Sum of the S M i columns, i.e., the number of positive evaluations of M i ;
  • Effectiveness: Ratio of positive evaluations over the number of activities performed.
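As a compact illustration, the following sketch implements the grading formulas above for one student; the numbers, function names and data shapes are ours, chosen only to show the computation.

```python
import numpy as np

def competency_grades(wSM, D):
    """g(M_i^t) = wSM_i^t / D_t if D_t != 0, else 0."""
    return wSM / D if D != 0 else np.zeros_like(wSM)

def topic_activity_grade(g_M):
    """g(A_t): plain average of the four clustered competency grades."""
    return float(np.mean(g_M))

def weighted_grade(topic_grades, topic_weights):
    """FG (all topics) or CFG (topics covered so far): weighted average of g(t_k)."""
    w = np.asarray(topic_weights, dtype=float)
    return float(np.dot(w, topic_grades) / w.sum())

# Hypothetical numbers for one student and one topic:
g_M = competency_grades(wSM=np.array([0.70, 0.55, 0.60, 0.80]), D=1.0)
print(topic_activity_grade(g_M))             # grade of the activities on this topic
print(weighted_grade([0.72, 0.61], [40, 30]))  # CFG after two of the four topics
```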

5.2. Implementing the Dynamical Computational Assessment

As may be observed, the MoE TP4 covers topics already addressed in TP1 and TP3, which means that there is a logical chain of topics, TP1 → TP3 → TP4, which was exploited to apply the DCDA methodology in [8]. This has provided good results in engaging students to improve their level of competency on topics previously assessed, and in recognising when this positive evolution happens and all the mathematical competencies of the course have been achieved.
This dynamical assessment assumes that a good performance in an ulterior related activity showing a higher level of competency should be taken into account and lead to some reconsideration of previous MoE.
The TAC cuboid system streams the evolution of the competencies achieved and, thus, when an activity is correctly performed according to defined rules, the system may issue some reassessment of previous MoEs, which would have a direct impact improving grades in previous MoEs corresponding to predecessor topics within identified chains of topics.
In this subsection, we follow the dynamical continuous assessment approach and provide the simplest implementation by fixing some conditions that trigger re-assessments. These conditions are set to ensure that students have achieved some general basic competencies in all parts of the subject. Other instructors may elect to make them more or less demanding, or simply ignore them.
In our implementation, we have defined the next workflow within a dynamical continuous assessment:
  • Activity E42 may dynamically re-assess activities E12 and E13;
  • Activity E43 may dynamically re-assess activity E22;
  • Activity L32 may dynamically re-assess activity E32.
As triggering rules, we required that the grade of the activity be greater than 40% for $GIA|_{CAP} = 1$, and greater than 75% for $GIA|_{CAP} = 2$. Each dynamical assessment rule generates a binary vector with all components equal to 1 (grade 100%) when the retroactivity effect is applied.
The weights of the master TAC must not be changed, and they should be known from the beginning. Due to the additive nature of the pTAC, an impact amplifier of order p, with p a natural number, can be enforced, if agreed, by introducing p identical assessment vectors with each activity conducted. This increases its impact p times and accelerates the convergence of previous grading toward the last assessment.
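A sketch of this trigger logic follows; the activity identifiers and thresholds come from the text above, while the push callback and the four-competency vector are our assumptions.

```python
# Chains of topics: a later activity may dynamically re-assess earlier ones.
REASSESS = {"E42": ["E12", "E13"], "E43": ["E22"], "L32": ["E32"]}

def gia_cap(grade):
    """Grade impact amplifier: 2 if grade > 75%, 1 if grade > 40%, else no trigger."""
    if grade > 0.75:
        return 2
    if grade > 0.40:
        return 1
    return 0

def dynamical_reassess(activity, grade, push):
    """Push p identical all-ones assessment vectors (grade 100%) retroactively."""
    p = gia_cap(grade)
    if p == 0:
        return
    for target in REASSESS.get(activity, []):
        for _ in range(p):
            push(target, [1, 1, 1, 1])   # binary vector with all components equal to 1

# Usage: a grade of 80% on E42 re-assesses E12 and E13 twice each.
dynamical_reassess("E42", 0.80, push=lambda act, vec: print(act, vec))
```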
Our student X fulfills the triggering conditions, and the dynamical assessment applied to the mentioned MoE generates the assessments given in Table 5.
The pTAC of Student X evolves after computing retroactive vectors for a dynamical assessment of his/her improved level of competencies as shown in Table 6.

6. Results and Discussion

6.1. Outsight of Students’ Assessments under DCDA and TAC Cuboids

To compare the assessments produced under DCDA and under the TAC cuboid methodology, we applied both to the whole group of 118 students in order to verify whether any differences in results are due to the assessment procedure.
For this purpose, all activities done at every MoE were assessed following the TAC methodology, and in this way, for each student, we obtained data pairing these assessments with the ones originally done following DCDA. Since the sample size is greater than 40 items without outliers, and the Shapiro-Wilk test provides a p-value = 0.0786389, at a significance level of 0.05, we can assume that the sample follows a normal distribution.
The hypotheses concern a new variable d, based on the difference between the paired grades from the two data sets corresponding to each of the moments of evaluation TP1, TP2, TP3, TP4, LP, T and FG. We establish the hypotheses:
H0: The difference in the average scores is not significant. Both procedures generate comparable results.
H1: The difference in the average scores is significant. The procedures do not give comparable results.
Table 7 shows the hypothesis test results for the difference between paired means.
Since the p-value is greater than the significance level (0.05) at every MoE, we cannot reject the null hypothesis, and we conclude that both assessment procedures give comparable results. The values of the weights in the master TAC cuboid are chosen by the instructor so that they reflect the previously agreed-upon scoring strategy.
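For reference, here is a sketch of this paired comparison in Python, using a paired t-test as one standard choice for differences between paired means. The grades are simulated, since the per-student data are not reproduced here; the numbers printed are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
dcda = rng.normal(6.5, 1.5, size=118)          # simulated grades under DCDA
tac = dcda + rng.normal(0.0, 0.3, size=118)    # simulated grades under TAC cuboids

d = tac - dcda                                 # paired differences
print("Shapiro-Wilk p-value:", stats.shapiro(d).pvalue)  # normality of d
t, p = stats.ttest_rel(tac, dcda)              # H0: mean paired difference is zero
print(f"paired t = {t:.3f}, p = {p:.3f}")      # p > 0.05 -> cannot reject H0
```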

6.2. Insight into a Sample of Students with Different Performances

In this subsection, we take a closer look at 7 of the 118 students, who performed with different levels of success: 2 A-level students (St-A1 with A+ and St-A2 with a standard A), 2 C-level students (St-C1 with C+ and St-C2 with a standard C), 2 D-level students (St-D1 with D+ and St-D2 with a standard D), and 1 F-level student.
This analysis has been done in two stages. Firstly, Table 8 gathers the results of this sample without taking into account the dynamical assessment, i.e., without re-assessing competencies that students had proven to improve along chains of topics whose activities used previously assessed competencies.
If students receive acknowledgement of improving their level of achievement in competencies along chains of topics as considered in the DCDA approach developed in [8], Table 9 displays the results of that re-assessment.
The results obtained by both methodologies are quite alike. In Figure 4 we find an illustrative image of the differences between TAC cuboids compared with the seminal DCDA approach [8].
This confirms that the general results of the previous section seem to carry over to the different types of students. In general, the dynamical assessment with TAC seems to provide a slightly higher, though insignificant, increase in lower marks and a smaller increase in higher marks. Clearly, the higher or lower effect depends on the design of the rules for dynamical continuous assessment. In this implementation, the rules have mimicked those deployed at UPV.

6.3. Discussion

The TAC cuboid registers data on topics, activities and competencies to assess students' performance and provide feedforward. Its implementation is flexible: the dynamical re-assessment may be ignored, which makes the implementation much simpler, or considered with different levels of intensity. This is a decision to be made by instructors in advance and may be applied with different emphasis depending on purpose, intention and, obviously, possible local constraints.
The weight assigned to each activity within a topic, or set of activities that compose a given moment of evaluation, may be defined to highlight the importance of some activities over others. This way, the instructor may introduce a number of activities in the learning process where the keystone is their weight. In this line, in the case presented in this paper, the activities developed under FT have received little weight, so that, even though there might have been plenty of them, their input into the final grade is limited.
For convenience, the weight of the topics within the overall assessment process has received a 100 point distribution, but any other arrangement is feasible; a transformation might be required to get a final grade in the standard range of each local community. TAC-based assessment is additive but used with a formative goal; hence new activities may be added, and if dynamical assessment rules are triggered, students’ improvement in their level of achievement of competencies is duly acknowledged. The main difference between DCDA- and TAC-based methodology is the procedure to assess an activity through binary indicators. Rather than getting scalar grades, TAC cuboids generate vector grades related to some predefined competencies. TAC cuboids register a detailed record of the progress of students who may receive instant feedforward.
From the results, it follows that the implementation of TAC cuboids does not necessarily convey a great difference in the grading records. Its advantage lies in the systematic registering of data and a wealth of information that awaits exploitation. These data make information on topics, activities and competencies visible and accessible, with a clear image of its complexity, as described in this paper.

7. Conclusions

The objective of a competency assessment is to measure the progress in the achievement of skills associated with learning objectives, but in the current educational system, each student usually receives only a single score as the measure of the level of achievement of the course learning objectives.
TAC cuboids are structures that, through a binary evaluation, exude information in the context of the performance of activities designed to achieve the desired competencies. By placing adequate weights on each conducted activity in the competencies considered, the instructor and the students receive a detailed assessment of the outcome, as well as a grade for the activity.
Different activities may have different relevance, as TAC cuboids support the performance of daily activities as well as discrete relevant moments of evaluation. This allows students to be provided with feedforward and enables the use of dynamical continuous assessment, which softens the effect of some unsuccessful performance in the context of a formative assessment methodology.
The personal TAC cuboid enables the instructor to control the performance of activities. The more activities are introduced, the better the pTAC will report the students’ progress, and the evaluation will be closer to a continuous assessment in “real” time.
The competency assessment becomes a control method that directs the system of competencies achieved with appropriately selected inputs, so that the system output is applied as feedforward in the learning process, forcing the output to converge to the desired level of achievement of competencies. Simultaneously, the learners get involved in their learning process as they become co-producers of new learning activities that will eventually modify their level of achievement of competencies. This setting transforms continuous assessment into a meaningful, responsive dialogue between students and their instructors, providing continuous feedforward to the former, who will grow as scholars in a non-planar assessment environment.
Master TAC cuboids represent the metrics that enable the detailed binary assessment of the activities performed by each student to be quantified and placed into his/her personal cuboid. On the other hand, the different target TAC cuboids that might be fixed would mark different development milestones of the subject. Thus, personal TAC cuboids form a finite succession of cuboids that must be compared to one of these target TAC cuboids; this would mark the success of students in achieving the desired level of development of competencies.

8. Future Work

After implementing TAC cuboids, it might be interesting to investigate what activities and what type of activities related to different competencies are more meaningful in the learning process among different groups of students. This valuable feedback would enable future work to feedforward training needs within the implemented learning process and also adapt the activities for the different learning styles of students.
TAC cuboids may be used to register data whenever different topics, activities and competencies coexist within a course. The competencies considered in the cuboid might also include transversal competencies or soft skills. The weights on each competency might be reconsidered, if required, so that the instructor considers the competencies that are relevant for the learning objectives of the subject; all data would remain available in the cuboids so that research may be developed on the influence of and relationship between the different competencies and the specific competencies of the subject.

Author Contributions

Conceptualization, F.M.-A. and L.M.S.-R.; methodology and software, F.M.-A., S.M.-L. and L.M.S.-R.; validation and investigation, F.M.-A., S.M.-L. and N.L.-G.; data curation, F.M.-A. and S.M.-L.; writing—original draft preparation, F.M.-A. and S.M.-L.; writing—review and editing, S.M.-L., N.L.-G. and L.M.S.-R.; visualization, S.M.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was developed within a project funded by Instituto de Ciencias de la Educación (ICE) through the Educational Innovation and Improvement Project (PIME) A + D 2019, number 1699-B, at Universitat Politècnica de València (UPV), Spain.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This experience has been developed within the GRoup of Innovative Methodologies for Assessment in Engineering Education GRIM4E, of Universitat Politècnica de València (Valencia, Spain).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MoE	Moments of evaluation
FT	Flipped teaching
CA	Continuous assessment
DCDA	Dynamical continuous discrete assessment
TAC	Topics addressed through Activities and Competencies achieved
STEM	Science, technology, engineering and mathematics
SEFI	European Society for Engineering Education (in French, Société Européenne pour la Formation des Ingénieurs)
MWG	Mathematics working group
UPV	Universitat Politècnica de València

References

  1. American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing, 2nd ed.; American Educational Research Association: Washington, DC, USA, 2014. [Google Scholar]
  2. Walvoord, B.E. Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, 2nd ed.; Jossey Bass: San Francisco, CA, USA, 2010. [Google Scholar]
  3. Palomba, C.A.; Banta, T.W. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education, 1st ed.; Jossey Bass: San Francisco, CA, USA, 2015. [Google Scholar]
  4. Banta, T.W.; Palomba, C.A. Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education, 2nd ed.; Jossey Bass: San Francisco, CA, USA, 2015. [Google Scholar]
  5. Nilson, L. Teaching at Its Best: A Research-Based Resource for College Instructors, 4th ed.; Jossey Bass: San Francisco, CA, USA, 2016. [Google Scholar]
  6. Baker, D.J.; Zuvela, D. Feedforward strategies in the first-year experience of online and distributed learning environments. Assess. Eval. High. Educ. 2013, 38, 687–697. [Google Scholar] [CrossRef]
  7. Cathcart, A.; Greer, D.; Neale, L. Learner-focused evaluation cycles: Facilitating learning using feedforward, concurrent and feedback evaluation. Assess. Eval. High. Educ. 2014, 39, 790–802. [Google Scholar] [CrossRef] [Green Version]
  8. Sánchez-Ruiz, L.M.; Moll-López, S.; Moraño-Fernández, J.A.; Roselló, D. Dynamical Continuous Discrete Assessment of Competencies Achievement: An Approach to Continuous Assessment. Mathematics 2021, 9, 2082. [Google Scholar] [CrossRef]
  9. Education, Audiovisual and Culture Executive Agency. The European Higher Education Area in 2020. Bologna Process Implementation Report. Publications Office of the European Union, Luxembourg 2020. Available online: https://eacea.ec.europa.eu/national-policies/eurydice/sites/default/files/ehea_bologna_2020.pdf (accessed on 12 December 2021).
  10. Bologna Follow-Up Group. Advisory Group on the Revision of the Diploma Supplement. Available online: http://www.ehea.info/page-diploma-supplement (accessed on 12 December 2021).
  11. Kirschner, P.A.; Sweller, J.; Clark, R.E. Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educ. Psychol. 2006, 41, 75–86. [Google Scholar] [CrossRef]
  12. Knowles, M.S.; Holton, E.F., III; Swanson, R.A. The Adult Learner, 6th ed.; Elsevier: Burlington, MA, USA, 2005. [Google Scholar]
  13. Hill, J.; West, H. Improving the student learning experience through dialogic feed-forward assessment. Assess. Eval. High. Educ. 2020, 45, 82–97. [Google Scholar] [CrossRef]
  14. Wolstencroft, P.; de Main, L. ‘Why didn’t you tell me that before?’ Engaging undergraduate students in feedback and feedforward within UK higher education. J. Furth. High. Educ. 2021, 45, 312–323. [Google Scholar] [CrossRef]
  15. Latifi, S.; Noroozi, O.; Talaee, E. Peer feedback or peer feedforward? Enhancing students’ argumentative peer learning processes and outcomes. Br. J. Ed. Techn. 2021, 52, 768–784. [Google Scholar] [CrossRef]
  16. Tyler, R.W. Basic Principles of Curriculum and Instruction; The University of Chicago Press: Chicago, IL, USA, 1949. [Google Scholar]
  17. Tractenberg, R.E.; Lindvall, J.M.; Attwood, T.K.; Via, A. Guidelines for curriculum and course development in higher education and training. Open Arch. Soc. Sci. (SocArXiv) 2020, 3, 1–18. [Google Scholar] [CrossRef]
  18. Tractenberg, R.E. The Assessment Evaluation Rubric: Promoting Learning and Learner-Centered Teaching through Assessment in Face-to-Face or Distanced Higher Education. Educ. Sci. 2021, 11, 441. [Google Scholar] [CrossRef]
  19. Aina, J.K.; Adedo, G.A. Correlation between continuous assessment (CA) and Students’ performance in physics. J. Educ. Pract. 2013, 4, 6–9. [Google Scholar]
  20. Combrinck, M.; Hatch, M. Students’ Experiences of a Continuous Assessment Approach at a Higher Education Institution. J. Soc. Sci. 2012, 33, 81–89. [Google Scholar] [CrossRef]
  21. Niss, M.; Højgaard, T. Competencies and Mathematical Learning. Ideas and Inspiration for the Development of Mathematics Teaching and Learning in Denmark; English edition; Roskilde University: Roskilde, Denmark, October 2011; IMFUFA tekst nr. 485/2011. [Google Scholar]
  22. Mathematics Working Group SEFI. A Framework for Mathematics Curricula in Engineering Education, 3rd ed.; Alpers, B.A., Demlova, M., Fant, C.H., Gustafsson, T., Lawson, D., Mustoe, L., Velichova, D., Eds.; European Society for Engineering Education (SEFI): Brussels, Belgium, 2013; Available online: http://sefi.htw-aalen.de/Curriculum (accessed on 21 December 2021).
  23. García, A.; García, F.; Martín, A.; Rodríguez, G.; de la Villa, A. Learning and Assessing Competencies: New challenges for Mathematics in Engineering Degrees in Spain. In Proceedings of the 16th SEFI MWG Seminar Mathematical Education of Engineers, Salamanca, Spain, 28–30 June 2012; Available online: https://oa.upm.es/22922/ (accessed on 12 December 2021).
  24. Biggs, J.B. Aligning Teaching and Assessment to Curriculum Objectives; The Higher Education Academy: York, UK, 2003. [Google Scholar]
  25. Alpers, B.; Demlova, M. Competence acquisition in different learning arrangements. In Proceedings of the 16th SEFI MWG Seminar Mathematical Education of Engineers, Salamanca, Spain, 28–30 June 2012. [Google Scholar]
  26. Carr, J. Technical issues of grading methods. In Grading and Reporting Student Progress in an Age of Standards; Trumbull, E., Farr, B., Eds.; Christopher Gordon: Norwood, MA, USA, 2000; pp. 45–70. [Google Scholar]
  27. Brookhart, S.M. How to Create and Use Rubrics for Formative Assessment and Grading; ASCD Store, North Beauregard Street: Alexandria, VA, USA, 2013; ISBN 978-1-4166-1507-1. [Google Scholar]
  28. Andrade, H. Using rubrics to promote thinking and learning. Educ. Leadersh. 2000, 57, 13–18. [Google Scholar]
  29. Airasian, P.W.; Russell, M.K. Classroom Assessment: Concepts and Applications, 6th ed.; McGraw-Hill: New York, NY, USA, 2008. [Google Scholar]
  30. Arter, J.; McTighe, J. Scoring Rubrics in the Classroom: Using Performance Criteria for Assessing and Improving Student Performance; Corwin Press/Sage Publications: Thousand Oaks, CA, USA, 2001. [Google Scholar]
31. Wiggins, G. Rational Numbers: Toward Grading and Scoring That Help Rather Than Harm Learning. Am. Educ. 1988, 12, 20–25. [Google Scholar]
  32. Popham, W.J. Classroom Assessment: What Teachers Need to Know, 2nd ed.; Allyn & Bacon: Needham Heights, MA, USA, 1999. [Google Scholar]
  33. Stiggins, R.J. Student-Involved Classroom Assessment, 3rd ed.; Merrill/Prentice-Hall: Upper Saddle River, NJ, USA, 2001. [Google Scholar]
  34. Hafner, J.; Hafner, P. Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. Int. J. Sci. Educ. 2003, 25, 1509–1528. [Google Scholar] [CrossRef]
  35. Schafer, W.; Swanson, G.; Bené, N.; Newberry, G. Effects of teacher knowledge of rubrics on student achievement in four content areas. Appl. Meas. Educ. 2001, 14, 151–170. [Google Scholar] [CrossRef]
36. Montgomery, K. Classroom Rubrics: Systematizing What Teachers Do Naturally. Clear. House 2000, 73, 324–329. [Google Scholar]
37. Moskal, B.M. Scoring Rubrics: What, When and How? Pract. Assess. Res. Eval. 2000, 7, 3. [Google Scholar]
  38. Kohn, A. The trouble with rubrics. Engl. J. 2006, 95, 12–15. [Google Scholar] [CrossRef]
  39. Wilson, M. Why I won’t be using rubrics to respond to students’ writing. Engl. J. 2007, 96, 62–66. [Google Scholar] [CrossRef]
  40. Sadler, D.R. The futility of attempting to codify academic achievement standards. High. Educ. 2014, 67, 273–288. [Google Scholar] [CrossRef]
41. Torrance, H. Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assess. Educ. Princ. Policy Pract. 2007, 14, 281–294. [Google Scholar]
  42. Panadero, E.; Jonsson, A. A critical review of the arguments against the use of rubrics. Educ. Res. Rev. 2020, 30, 100329. [Google Scholar] [CrossRef]
  43. Gregori-Giralt, E.; Menendez-Varela, J.L. The content aspect of validity in a rubric-based assessment system for course syllabuses. Stud. Educ. Eval. 2021, 68, 100971. [Google Scholar] [CrossRef]
  44. Herbert, S.; Vale, C.; White, P.; Bragg, L.A. Engagement with a formative assessment rubric: A case of mathematical reasoning. Int. J. Educ. Res. 2022, 111, 101899. [Google Scholar] [CrossRef]
  45. Universitat Politècnica de València. Bachelor’s Degree in Aerospace Engineering, Description Programme for Mathematics I. Available online: https://www.upv.es/titulaciones/GIA/index-en.html (accessed on 12 December 2021).
  46. Llobregat-Gómez, N.; Sánchez-Ruiz, L.M. Defining the Engineering student of 2030. In Proceedings of the 43rd SEFI Conference, Orleans, France, 29 June–2 July 2015. [Google Scholar]
47. Carr, M.; Murphy, E.; Bowe, B.; Ni Fhloinn, E. Addressing Continuing Mathematical Deficiencies with Advanced Mathematical Diagnostic Testing. Teach. Math. Appl. 2013, 32, 66–75. [Google Scholar] [CrossRef]
48. Shorter, N.A.; Young, C.Y. Comparing assessment methods as predictors of student learning in an undergraduate mathematics course. Int. J. Math. Educ. Sci. Technol. 2012, 42, 1061–1067. [Google Scholar] [CrossRef]
49. Mawhinney, V.T.; Bostow, D.E.; Laws, D.R.; Blumenfeld, G.J.; Hopkins, B.L. A comparison of students' studying behavior produced by daily, weekly, and three-week testing schedules. J. Appl. Behav. Anal. 1971, 4, 257–264. [Google Scholar] [CrossRef]
Figure 1. Activity impacts on basic competencies for a given topic.
Figure 2. Activity impacts on basic competencies for two different topics.
Figure 3. A TAC cuboid exudes the relationship between activities, topics and competencies. The impact $a_{ijk}$ of each activity $A_{ik}$ on the competencies $C_j$ concurrent for each topic $T_k$ is represented at the different levels of the cuboid. The weight $w_{ik}$ of each type of activity $A_{ik}$ on each topic $T_k$ is displayed on the left side of the cuboid.
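For readers who prefer code to notation, a minimal sketch of how such a cuboid could be held in memory follows. This is our illustration only, since the paper does not prescribe an implementation; every variable name and sample value below is hypothetical.

```python
import numpy as np

# Illustrative dimensions: n activity types, 8 basic competencies, m topics.
n_act, n_comp, n_top = 4, 8, 3

# a[i, j, k]: impact of activity type A_ik on competency C_j within topic T_k.
a = np.zeros((n_act, n_comp, n_top))

# w[i, k]: weight of activity type A_ik within topic T_k (left face of the cuboid).
w = np.zeros((n_act, n_top))

# Hypothetical sample entries: lectures (i = 0) in topic T_1 (k = 0)
# mainly develop competencies C_1 and C_2.
a[0, 0, 0], a[0, 1, 0] = 0.4, 0.3
w[0, 0] = 0.25

# Aggregate weighted impact per competency and topic:
# impact[j, k] = sum_i w[i, k] * a[i, j, k]
impact = np.einsum('ik,ijk->jk', w, a)
print(impact.shape)  # (8, 3): one aggregated value per competency and topic
```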
Figure 4. Comparative grades between TAC method and DCDA.
Table 1. Activity/topic impact on basic competencies.

For a given topic $T$, the impact on competencies of each type of activity:

| Activities | $C_1$ | $C_2$ | $C_3$ | $C_4$ | $C_5$ | $C_6$ | $C_7$ | $C_8$ |
|---|---|---|---|---|---|---|---|---|
| $w_1$ Lectures | $a_{11}$ | $a_{12}$ | $a_{13}$ | $a_{14}$ | $a_{15}$ | $a_{16}$ | $a_{17}$ | $a_{18}$ |
| $w_2$ Exam | $a_{21}$ | $a_{22}$ | $a_{23}$ | $a_{24}$ | $a_{25}$ | $a_{26}$ | $a_{27}$ | $a_{28}$ |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
| $w_n$ Activity type $n$ | $a_{n1}$ | $a_{n2}$ | $a_{n3}$ | $a_{n4}$ | $a_{n5}$ | $a_{n6}$ | $a_{n7}$ | $a_{n8}$ |
Table 2. Master TAC to assess topics, activities and competencies.

| MoE (weight) | Weight | Activities | Subweight | Subclass | M1 | M2 | M3 | M4 | Total |
|---|---|---|---|---|---|---|---|---|---|
| TP1 (7.50%) | 10 | Test | 100 | P11 | 60 | 30 | 10 | 0 | 100 |
| | 90 | Exam | 30 | E11 | 40 | 30 | 30 | 0 | 100 |
| | | | 30 | E12 | 40 | 30 | 30 | 0 | 100 |
| | | | 40 | E13 | 40 | 30 | 30 | 0 | 100 |
| TP2 (14.25%) | 100 | Exam | 60 | E21 | 40 | 30 | 30 | 0 | 100 |
| | | | 40 | E22 | 40 | 30 | 30 | 0 | 100 |
| TP3 (15.75%) | 100 | Exam | 60 | E31 | 40 | 30 | 30 | 0 | 100 |
| | | | 40 | E32 | 40 | 30 | 30 | 0 | 100 |
| TP4 (30.00%) | 100 | Exam | 60 | E41 | 40 | 30 | 30 | 0 | 100 |
| | | | 40 | E42 | 40 | 30 | 30 | 0 | 100 |
| | | | 40 | E43 | 40 | 30 | 30 | 0 | 100 |
| LP (25.00%) | 30 | Week | 100 | L11 | 0 | 0 | 20 | 80 | 100 |
| | 28 | Lex1 | 100 | L21 | 20 | 10 | 10 | 60 | 100 |
| | 42 | Lex2 | 100 | L31 | 20 | 10 | 10 | 60 | 100 |
| | | | 100 | L32 | 20 | 10 | 10 | 60 | 100 |
| T (7.50%) | 50 | Asg1 | 100 | T11 | 30 | 20 | 20 | 30 | 100 |
| | 50 | Asg2 | 100 | T21 | 30 | 20 | 20 | 30 | 100 |
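As a quick illustration of how the MoE percentages of Table 2 roll the per-module grades up into a single mark, the sketch below reproduces the final grade FG = 75.3 that appears later in Table 4; the module grades are taken from that table, and the dictionary names are ours.

```python
# MoE percentages from Table 2 and module grades from Table 4 (Student X).
moe_pct = {'TP1': 7.50, 'TP2': 14.25, 'TP3': 15.75, 'TP4': 30.00, 'LP': 25.00, 'T': 7.50}
moe_grade = {'TP1': 70.6, 'TP2': 44.5, 'TP3': 88.5, 'TP4': 83.5, 'LP': 85.1, 'T': 45.0}

# FG is the percentage-weighted mean of the module grades.
fg = sum(moe_pct[m] * moe_grade[m] for m in moe_pct) / 100
print(round(fg, 1))  # 75.3, matching the FG row of Table 4
```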
Table 3. Activities backlog for topic TP2 and Student X.

| N | AID | MoE | Act | W | SC | M1 | M2 | M3 | M4 | wM1 | wM2 | wM3 | wM4 | Gr(A) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 19 | X | TP2 | Ex1 | 60 | E21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 20 | X | TP2 | Ex2 | 60 | E21 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 21 | X | TP2 | Ex3 | 60 | E21 | 0 | 1 | 0 | 0 | 0 | 30 | 0 | 0 | 30 |
| 22 | X | TP2 | Ex4 | 40 | E22 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 23 | X | TP2 | Ex5 | 60 | E21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 24 | X | TP2 | Ex6 | 60 | E21 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 25 | X | TP2 | Ex7 | 60 | E21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
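Each backlog row reads naturally as a small record in which the activity grade Gr(A) is simply the sum of the earned milestone weights (e.g., row 21: 0 + 30 + 0 + 0 = 30). A minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActivityRecord:
    """One row of the activities backlog (fields mirror Table 3)."""
    n: int          # running activity number (N)
    aid: str        # anonymized student ID (AID)
    moe: str        # module of evaluation, e.g. 'TP2'
    act: str        # activity label, e.g. 'Ex2'
    weight: int     # W: weight of the activity
    subclass: str   # SC, e.g. 'E21'
    m: List[int]    # milestone pass flags M1..M4 (0/1)
    wm: List[int]   # earned milestone weights wM1..wM4

    def grade(self) -> int:
        # Gr(A) is the sum of the earned milestone weights.
        return sum(self.wm)

# Row 20 of the backlog: milestones M1-M3 passed, M4 not assessed.
ex2 = ActivityRecord(20, 'X', 'TP2', 'Ex2', 60, 'E21', [1, 1, 1, 0], [40, 30, 30, 0])
print(ex2.grade())  # 100
```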
Table 4. pTAC of Student X after evaluating the 72 activities.

MoEs Run: 6; Start: 1; End: 72

| MoE | W | Act | SC | #Act | D | SM1 | SM2 | SM3 | SM4 |
|---|---|---|---|---|---|---|---|---|---|
| TP1 | 10 | Test | P1 | 12 | 1200 | 4 | 4 | 4 | 0 |
| TP1 | 90 | Exam | E1 | 6 | 190 | 5 | 3 | 5 | 0 |
| TP2 | 100 | Exam | E2 | 7 | 400 | 3 | 4 | 3 | 0 |
| TP3 | 100 | Exam | E3 | 7 | 400 | 6 | 6 | 6 | 0 |
| TP4 | 100 | Exam | E4 | 8 | 400 | 8 | 8 | 4 | 0 |
| LP | 30 | Lab | L1 | 27 | 2700 | 19 | 0 | 0 | 24 |
| LP | 28 | Exam | L2 | 1 | 100 | 0 | 0 | 1 | 1 |
| LP | 42 | Exam | L3 | 2 | 200 | 2 | 1 | 2 | 2 |
| T | 50 | Assig | T1 | 1 | 100 | 1 | 0 | 0 | 1 |
| T | 50 | Assig | T2 | 1 | 100 | 1 | 0 | 0 | 0 |
| #Positive M evaluations | | | | | | 49 | 26 | 25 | 28 |
| Effectiveness (%) | | | | | | 68.1 | 57.8 | 55.6 | 87.5 |

| Grades | TP1 | TP2 | TP3 | TP4 | LP | T | CFG | FG |
|---|---|---|---|---|---|---|---|---|
| | 70.6 | 44.5 | 88.5 | 83.5 | 85.1 | 45.0 | 75.3 | 75.3 |

| MoE | wSM1 | wSM2 | wSM3 | wSM4 | gM1 | gM2 | gM3 | gM4 | gAt |
|---|---|---|---|---|---|---|---|---|---|
| TP1 | 24,000 | 12,000 | 4000 | 0 | 20.0 | 10.0 | 3.3 | 0.0 | 33.3 |
| TP1 | 6400 | 3000 | 4800 | 0 | 33.7 | 15.8 | 25.3 | 0.0 | 74.7 |
| TP2 | 6400 | 6600 | 4800 | 0 | 16.0 | 16.5 | 12.0 | 0.0 | 44.5 |
| TP3 | 14,400 | 10,800 | 10,200 | 0 | 36.0 | 27.0 | 25.5 | 0.0 | 88.5 |
| TP4 | 16,000 | 12,000 | 5400 | 0 | 40.0 | 30.0 | 13.5 | 0.0 | 83.5 |
| LP | 38,000 | 0 | 0 | 192,000 | 14.1 | 0.0 | 0.0 | 71.1 | 85.2 |
| LP | 0 | 0 | 1000 | 6000 | 0.0 | 0.0 | 10.0 | 60.0 | 70.0 |
| LP | 4000 | 1000 | 2000 | 12,000 | 20.0 | 5.0 | 10.0 | 60.0 | 95.0 |
| T | 3000 | 0 | 0 | 3000 | 30.0 | 0.0 | 0.0 | 30.0 | 60.0 |
| T | 3000 | 0 | 0 | 0 | 30.0 | 0.0 | 0.0 | 0.0 | 30.0 |
| Sum | 115,200 | 45,400 | 32,200 | 213,000 | | | | | |
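Our reading of the pTAC arithmetic, which the sketch below reproduces for the TP2 row of Table 4: D accumulates the weight W of every logged activity; each wSMj accumulates W · wMj over the backlog; gMj = wSMj / D; and gAt = Σj gMj. The seven (W, [wM1..wM4]) pairs are rows 19–25 of Table 3.

```python
# The seven TP2 records of Table 3 as (W, [wM1..wM4]) pairs.
tp2_backlog = [
    (60, [0, 0, 0, 0]),     # Ex1
    (60, [40, 30, 30, 0]),  # Ex2
    (60, [0, 30, 0, 0]),    # Ex3
    (40, [40, 30, 30, 0]),  # Ex4
    (60, [0, 0, 0, 0]),     # Ex5
    (60, [40, 30, 30, 0]),  # Ex6
    (60, [0, 0, 0, 0]),     # Ex7
]

d = sum(w for w, _ in tp2_backlog)  # D = 400
wsm = [sum(w * wm[j] for w, wm in tp2_backlog) for j in range(4)]
g_m = [round(x / d, 1) for x in wsm]
g_at = round(sum(wsm) / d, 1)
print(d, wsm, g_m, g_at)
# 400 [6400, 6600, 4800, 0] [16.0, 16.5, 12.0, 0.0] 44.5 -- the TP2 row of Table 4
```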
Table 5. Vectors generated by retroactive rules.

| N | AID | Topic | MoE | W | SC | M1 | M2 | M3 | M4 | wM1 | wM2 | wM3 | wM4 | Gr(A) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 73 | X | Retro | TP1 | 30 | E12 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 74 | X | Retro | TP1 | 40 | E13 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 75 | X | Retro | TP1 | 30 | E12 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 76 | X | Retro | TP1 | 40 | E13 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 77 | X | Retro | TP2 | 40 | E22 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 78 | X | Retro | TP2 | 40 | E22 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 79 | X | Retro | TP2 | 40 | E22 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 80 | X | Retro | TP3 | 40 | E32 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
| 81 | X | Retro | TP3 | 40 | E32 | 1 | 1 | 1 | 0 | 40 | 30 | 30 | 0 | 100 |
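Appending the three TP2 retroactive vectors of Table 5 (rows 77–79) to the same backlog and recomputing the identical aggregates reproduces the TP2 row of the dynamical pTAC shown in Table 6 below; again, a sketch of our reading rather than the authors' code.

```python
# Original TP2 records (Table 3) plus the three retroactive vectors (Table 5).
original = [
    (60, [0, 0, 0, 0]), (60, [40, 30, 30, 0]), (60, [0, 30, 0, 0]),
    (40, [40, 30, 30, 0]), (60, [0, 0, 0, 0]), (60, [40, 30, 30, 0]),
    (60, [0, 0, 0, 0]),
]
retro = [(40, [40, 30, 30, 0])] * 3  # rows 77-79: milestones M1-M3 retroactively credited

backlog = original + retro
d = sum(w for w, _ in backlog)  # D = 400 + 3 * 40 = 520
wsm = [sum(w * wm[j] for w, wm in backlog) for j in range(4)]
print(d, wsm, round(sum(wsm) / d, 1))
# 520 [11200, 10200, 8400, 0] 57.3 -- the TP2 row and gAt of Table 6
```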
Table 6. pTAC of Student X after dynamical assessment.

MoEs Run: 6; Start: 1; End: 81

| MoE | W | Act | SC | #Act | D | SM1 | SM2 | SM3 | SM4 |
|---|---|---|---|---|---|---|---|---|---|
| TP1 | 10 | Test | P1 | 12 | 1200 | 4 | 4 | 4 | 0 |
| TP1 | 90 | Exam | E1 | 10 | 330 | 9 | 7 | 9 | 0 |
| TP2 | 100 | Exam | E2 | 10 | 520 | 6 | 7 | 6 | 0 |
| TP3 | 100 | Exam | E3 | 9 | 480 | 8 | 8 | 8 | 0 |
| TP4 | 100 | Exam | E4 | 8 | 400 | 8 | 8 | 4 | 0 |
| LP | 30 | Lab | L1 | 27 | 2700 | 19 | 0 | 0 | 24 |
| LP | 28 | Exam | L2 | 1 | 100 | 0 | 0 | 1 | 1 |
| LP | 42 | Exam | L3 | 2 | 200 | 2 | 1 | 2 | 2 |
| T | 50 | Assig | T1 | 1 | 100 | 1 | 0 | 0 | 1 |
| T | 50 | Assig | T2 | 1 | 100 | 1 | 0 | 0 | 0 |
| #Positive M evaluations | | | | | | 58 | 35 | 34 | 28 |
| Effectiveness (%) | | | | | | 71.6 | 64.8 | 63.0 | 87.5 |

| Grades | TP1 | TP2 | TP3 | TP4 | LP | T | CFG | FG |
|---|---|---|---|---|---|---|---|---|
| | 80.2 | 57.3 | 90.4 | 83.5 | 85.1 | 45.0 | 78.1 | 78.1 |

| MoE | wSM1 | wSM2 | wSM3 | wSM4 | gM1 | gM2 | gM3 | gM4 | gAt |
|---|---|---|---|---|---|---|---|---|---|
| TP1 | 24,000 | 12,000 | 4000 | 0 | 20.0 | 10.0 | 3.3 | 0.0 | 33.3 |
| TP1 | 12,000 | 7200 | 9000 | 0 | 36.4 | 21.8 | 27.3 | 0.0 | 85.5 |
| TP2 | 11,200 | 10,200 | 8400 | 0 | 21.5 | 19.6 | 16.2 | 0.0 | 57.3 |
| TP3 | 17,600 | 13,200 | 12,600 | 0 | 36.7 | 27.5 | 26.3 | 0.0 | 90.4 |
| TP4 | 16,000 | 12,000 | 5400 | 0 | 40.0 | 30.0 | 13.5 | 0.0 | 83.5 |
| LP | 38,000 | 0 | 0 | 192,000 | 14.1 | 0.0 | 0.0 | 71.1 | 85.2 |
| LP | 0 | 0 | 1000 | 6000 | 0.0 | 0.0 | 10.0 | 60.0 | 70.0 |
| LP | 4000 | 1000 | 2000 | 12,000 | 20.0 | 5.0 | 10.0 | 60.0 | 95.0 |
| T | 3000 | 0 | 0 | 3000 | 30.0 | 0.0 | 0.0 | 30.0 | 60.0 |
| T | 3000 | 0 | 0 | 0 | 30.0 | 0.0 | 0.0 | 0.0 | 30.0 |
| Sum | 128,800 | 55,600 | 42,400 | 213,000 | | | | | |
Table 7. Hypothesis test for the difference between paired means.

| | TP1 | TP2 | TP3 | TP4 | LP | T | FG |
|---|---|---|---|---|---|---|---|
| Mean | 0.0067 | 0.0156 | 0.0255 | 0.0206 | 0.0245 | −0.0282 | 0.0169 |
| Standard deviation | 0.4567 | 0.4443 | 0.4751 | 0.4158 | 0.4587 | 0.4770 | 0.2040 |
| Significance level α | 0.0500 | 0.0500 | 0.0500 | 0.0500 | 0.0500 | 0.0500 | 0.0500 |
| Standard error SE | 0.0420 | 0.0409 | 0.0437 | 0.0383 | 0.0422 | 0.0439 | 0.0188 |
| Critical value (normalized α) | 1.9600 | 1.9600 | 1.9600 | 1.9600 | 1.9600 | 1.9600 | 1.9600 |
| Margin of error ME | 0.0824 | 0.0802 | 0.0857 | 0.0750 | 0.0828 | 0.0861 | 0.0386 |
| Confidence interval (lower limit) | −0.0757 | −0.0646 | −0.0602 | −0.0544 | −0.0583 | −0.1142 | −0.0199 |
| Confidence interval (upper limit) | 0.0891 | 0.0958 | 0.1112 | 0.0956 | 0.1072 | 0.0579 | 0.0537 |
| Test statistic | 0.0147 | 0.0351 | 0.0537 | 0.0495 | 0.0533 | −0.0591 | 0.0830 |
| p-value | 0.7978 | 0.7974 | 0.7967 | 0.7969 | 0.7968 | 0.7965 | 0.7951 |
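The interval arithmetic in Table 7 matches a normal-approximation test on the paired grade differences; from the reported SD and SE one can back out a sample of roughly n ≈ (0.4567/0.0420)² ≈ 118 pairs, which is our inference rather than a figure stated in the table. A sketch of the computation for the TP1 column:

```python
import math

def paired_mean_ci(mean_diff, sd, n, z=1.96):
    """Normal-approximation confidence interval for a paired mean difference."""
    se = sd / math.sqrt(n)  # standard error
    me = z * se             # margin of error at alpha = 0.05
    return se, me, (mean_diff - me, mean_diff + me)

# TP1 column of Table 7: mean 0.0067, SD 0.4567; n is inferred, not reported.
se, me, ci = paired_mean_ci(0.0067, 0.4567, 118)
print(round(se, 4), round(me, 4), [round(x, 4) for x in ci])
# 0.042 0.0824 [-0.0757, 0.0891] -- matching the TP1 column of Table 7
```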
Table 8. Results in the sample of students: non-dynamical TAC cuboid vs. traditional CA.

Non-dynamical TAC method

| Student | TP1 | TP2 | TP3 | TP4 | LP | T | FG |
|---|---|---|---|---|---|---|---|
| A1 | 8.75 | 9.75 | 8.75 | 9.25 | 9.20 | 9.00 | 9.04 |
| A2 | 7.25 | 8.25 | 8.00 | 9.85 | 7.58 | 6.75 | 8.30 |
| C1 | 7.05 | 5.55 | 7.85 | 7.85 | 8.00 | 7.30 | 7.33 |
| C2 | 5.15 | 6.75 | 8.85 | 8.02 | 6.75 | 6.50 | 7.30 |
| D1 | 3.00 | 5.45 | 9.00 | 7.20 | 6.75 | 7.30 | 6.82 |
| D2 | 2.75 | 3.00 | 7.20 | 6.15 | 5.45 | 5.50 | 5.40 |
| F1 | 2.00 | 2.50 | 4.35 | 4.50 | 6.00 | 6.20 | 4.50 |

(Non-dynamical) traditional CA method

| Student | TP1 | TP2 | TP3 | TP4 | LP | T | FG |
|---|---|---|---|---|---|---|---|
| A1 | 9.00 | 10.00 | 8.25 | 9.00 | 9.00 | 9.25 | 9.00 |
| A2 | 7.50 | 9.80 | 8.50 | 8.76 | 9.25 | 9.25 | 9.00 |
| C1 | 7.20 | 7.25 | 8.75 | 9.00 | 7.25 | 6.25 | 7.92 |
| C2 | 4.75 | 6.05 | 8.00 | 8.75 | 7.75 | 6.75 | 7.53 |
| D1 | 3.25 | 7.00 | 7.60 | 7.25 | 7.00 | 7.00 | 6.90 |
| D2 | 2.00 | 3.50 | 7.75 | 7.25 | 5.00 | 2.45 | 5.50 |
| F1 | 0.50 | 2.50 | 5.20 | 4.75 | 5.25 | 3.00 | 4.18 |
Table 9. Results in the sample of students: (dynamical) TAC cuboid vs. DCDA.

(Dynamical) TAC method

| Student | TP1 | TP2 | TP3 | TP4 | LP | T | FG |
|---|---|---|---|---|---|---|---|
| A1 | 9.00 | 9.75 | 9.22 | 9.50 | 9.25 | 9.00 | 9.28 |
| A2 | 8.25 | 9.00 | 8.25 | 9.25 | 8.85 | 8.25 | 8.65 |
| C1 | 7.00 | 6.45 | 8.00 | 7.75 | 7.22 | 6.58 | 7.78 |
| C2 | 7.20 | 7.90 | 8.30 | 7.77 | 6.42 | 6.25 | 6.85 |
| D1 | 4.50 | 5.12 | 6.75 | 6.34 | 5.38 | 5.05 | 5.66 |
| D2 | 3.25 | 4.75 | 5.45 | 6.75 | 5.77 | 5.35 | 5.47 |
| F1 | 2.45 | 2.85 | 5.00 | 4.35 | 6.43 | 4.00 | 4.60 |

DCDA evaluation

| Student | TP1 | TP2 | TP3 | TP4 | LP | T | FG |
|---|---|---|---|---|---|---|---|
| A1 | 9.25 | 10.00 | 9.50 | 9.25 | 9.75 | 8.50 | 9.46 |
| A2 | 8.75 | 9.75 | 9.50 | 9.20 | 9.15 | 8.20 | 9.20 |
| C1 | 7.25 | 6.00 | 8.80 | 8.20 | 8.35 | 6.25 | 7.80 |
| C2 | 6.45 | 7.25 | 7.90 | 7.20 | 6.25 | 6.00 | 6.93 |
| D1 | 4.30 | 5.00 | 6.25 | 6.65 | 5.15 | 5.15 | 5.69 |
| D2 | 2.40 | 6.00 | 6.25 | 5.00 | 5.25 | 5.00 | 5.20 |
| F1 | 2.25 | 3.50 | 5.20 | 3.25 | 5.00 | 3.22 | 4.00 |
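As a sanity check on Table 9, the paired differences between the two final-grade columns can be summarized in a few lines; the near-zero mean difference is consistent with the close agreement between the two gradings for this sample.

```python
from statistics import mean, stdev

# Final grades (FG) from Table 9 for students A1, A2, C1, C2, D1, D2, F1.
tac = [9.28, 8.65, 7.78, 6.85, 5.66, 5.47, 4.60]   # (dynamical) TAC method
dcda = [9.46, 9.20, 7.80, 6.93, 5.69, 5.20, 4.00]  # DCDA evaluation

diffs = [a - b for a, b in zip(tac, dcda)]
print(round(mean(diffs), 3), round(stdev(diffs), 3))  # 0.001 0.36
```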
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
