Article

Reforming the Teaching and Learning of Foundational Mathematics Courses: An Investigation into the Status Quo of Teaching, Feedback Delivery, and Assessment in a First-Year Calculus Course

by Yusuf F. Zakariya 1,*, Øystein Midttun 2, Svein Olav Glesaaen Nyberg 2 and Thomas Gjesteland 2

1 Department of Mathematical Sciences, University of Agder, 4630 Kristiansand, Norway
2 Department of Engineering Science, University of Agder, 4879 Grimstad, Norway
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2164; https://doi.org/10.3390/math10132164
Submission received: 7 May 2022 / Revised: 17 June 2022 / Accepted: 20 June 2022 / Published: 21 June 2022

Abstract:
Several universities have witnessed an increase in student enrolment in mathematics-intensive programmes over the last decades. This increase has come at the price of high failure rates in foundational mathematics courses, which poses challenges to mathematics teaching and learning in higher education. It has therefore become necessary for some universities to transform the teaching and learning of mathematics towards more student-centred approaches that engage the students mathematically and enhance their success rates. We approach this transformative effort by investigating students' perceptions of teaching, feedback, and assessment as a first step in reforming the teaching of a first-year mathematics course at a Norwegian university. The results of both quantitative and qualitative analyses of questionnaire data from 107 (80 men) engineering students show that the status quo of teaching offers little support for learning. The teaching is dominated by teacher-led instruction, note-taking, and large pieces of proof, which make learning difficult for students during class activities. The results also show that the current structure of the course offers limited formative feedback to students and that the assessment tasks require restructuring to capture students' time and effort. We discuss the implications of these findings and make some recommendations for improvement.

1. Introduction

Our society is changing, and it is changing fast. In response to these changes over the last few decades, many higher education institutions across the world have been struggling to cope with two issues: (1) an unprecedentedly large enrolment of highly distracted students [1], and (2) high failure rates in foundational mathematics courses [2]. On the one hand, these students are highly distracted by the proliferation of fun-based technologies, social media platforms, and other social pressures in society [1,3]. On the other hand, evidence from the United States of America, for instance, shows that 25–75% of the roughly 2.25 million students enrolled yearly in foundational calculus courses either failed, received a D-grade, or withdrew from the courses [2,4]. Within the Norwegian borders, Gynnild, Tyssedal [5] mention failure rates ranging from 21.5% to 39.2% in a foundational mathematics course, while Zakariya [6] reports a 43% failure rate in a first-year calculus course among Norwegian engineering students. Among other things, factors such as approaches to learning, students' attitudes, emotional problems, and teaching methods have been implicated in the poor performance of students in first-year mathematics courses, with suggestions for improvement [7,8,9,10]. Regrettably, the unacceptably high failure rates in first-year calculus courses have become a catalyst for many students to leave mathematics-intensive programmes, to delay progression from one academic level to the next, and to drop out of the universities [11]. As such, it becomes incumbent on course coordinators, university administrators, and other higher education stakeholders to change the teaching and learning of foundational mathematics courses in response to changes in society.
Change in the teaching and learning of mathematics in higher education is inevitable but rather difficult. However, empirical evidence shows that—with the involvement of the right people who have the power to influence the structures within a shared cultural/symbolic heritage—changes can be enacted, implemented, and successfully sustained [12]. Reinholz and Apkarian [13] posited a theoretical framework consisting of four crucial frames (people, power, symbols, and structures) that should be taken into consideration in enacting and sustaining changes in higher education. According to Reinholz and Apkarian [13]:
[S]tructures are the roles, routines, and practices of a department; their enactment and meaning are dependent on symbols, which are the norms, values, and ways of thinking in a department; changes are ultimately enacted by people whose individuality impacts their intentions and perceptions; and the distribution of power determines who makes certain decisions and influences interactions (p. 5, italics in the original).
In this study, we draw on insights from this theoretical framework by involving people of varying power in an effort to transform the teaching and learning of a first-year calculus course within the existing structures and symbols of a Norwegian university. We identify teaching, feedback delivery, and assessment methods as prime areas of the first-year calculus course through which change efforts can be enacted. Teaching includes the learning outcomes, what is taught, and how it is taught. Feedback delivery includes the quality, quantity, accessibility, and utility of feedback for the students. Assessment methods include what is assessed, how it is assessed, and its flexibility. The interaction between and the alignment of these prime areas form another important focus of the change effort. We take the view that the first step in any change effort is to critically examine the status quo in the teaching and learning of the course. To this end, we present findings on students' perceptions of teaching, feedback, and assessment in a first-year calculus course.
The remaining parts of this article are arranged such that relevant literature is reviewed in the next section to conceptualise feedback, assessment, and their relationships. Then, we briefly describe the research context and present the research questions. The fourth section presents methodological issues such as the sample of the study, measuring instruments, procedures for data collection, and data analysis. We then present and discuss the results of both the qualitative and quantitative analyses of the generated data in the fifth section. Finally, the article concludes with highlights of major findings, implications, and recommendations for improving the teaching and learning of foundational mathematics courses within and outside the Norwegian borders.

2. Review of Relevant Literature

2.1. Conceptualising Feedback

Feedback is a crucial component of the teaching and learning process, and it encompasses all information made available by an agent (e.g., a teacher, software, a peer) in reaction to one's performance on and understanding of presented tasks [14]. This definition emphasises the agent (the provider of feedback), the receiver, and the fact that feedback comes as a consequence of the receiver's action. The nature of the agent, in some cases, determines the type of feedback in the mathematics education literature, as in teacher feedback, e.g., [15,16], and peer feedback, e.g., [17,18]. If either teacher or peer feedback is transmitted through software, the literature describes the feedback as computer-based, e.g., [19,20]. In this study, we focus on teacher feedback on students' mathematics tasks, regardless of whether it is delivered through software or otherwise.
Several theoretical arguments have been proposed to make sense of the relationship between the provider and the receiver of feedback. These theoretical perspectives include commognition theory, e.g., [16], cultural-historical activity theory, e.g., [21], and socio-cognitive theory, e.g., [22]. They suggest that feedback emerges from a dialogical social activity between the provider, the context, and the receiver, whose effects are mediated by social and personal factors [16,23]. Moreover, there is a possibility of a bidirectional relationship between the provider of feedback (e.g., the teacher) and the receiver of feedback (e.g., the student). The student receives feedback to shape the learning activity, while the teacher takes advantage of the student's reaction to shape subsequent teaching activity. This process view of feedback offers a valuable opportunity to investigate teachers' purposes of feedback [24,25] and students' engagement with and use of feedback [26,27]. Guo and Wei [24] argued that teachers' purposes of feedback are to verify students' responses through correct or incorrect judgements, to scaffold students' learning using hints and cues, to give directives on problem solutions, to criticise, and to praise students' performance and affective inclination towards learning. The question of whether the teachers' purposes of feedback are perceived and used by the students has been equally investigated.
Research shows that students' engagement with and use of feedback rest on factors that can either bolster engagement or impair usage [26,27]. In a review of related literature, Jonsson [26] draws on previous studies to argue that feedback is perceived as useful by students provided it is specific, personalised, and detailed enough to encourage their engagement. In contrast, authoritative teacher feedback and students' personal factors, such as lacking a strategy for using the feedback or not understanding some technical words in it, may hinder engagement with the feedback. Evidence, e.g., [28], shows that feedback is more effective in improving students' engagement with mathematics if it is interactive. Interactive feedback here means a step-by-step digital process that provides iterative guidance to students in a problem-solving session. Other researchers, e.g., [29,30], have identified the quality, quantity, timing, and complexity of feedback as crucial factors that influence its use. Feedback that is readily available to students, whether delayed or immediate, has the potential to attract students' engagement; delayed feedback is linked with conceptual knowledge, while immediate feedback is linked with the development of procedural mathematical skills [30]. In addition, adequate quantity and appropriate simplicity in feedback delivery facilitate feedback usage by the students, while undue complexity may disrupt feedback engagement for productive learning [26]. By productive learning, we mean learning activities that engage the students mathematically, both individually and with each other, and lead to an improved success rate in mathematics.

2.2. Conceptualising Assessment

Assessment is a crucial component of the teaching and learning activities with several conceptualisations in the literature. Our work is inspired by the conceptualisation of assessment proposed by Sangwin [31] who defined assessment as:
[T]he process by which a teacher forms a judgement about a student (by considering the student’s responses to mathematics tasks) and on the basis of that judgement assigns outcomes, such as feedback and a numerical mark/score (p. 21, italics in the original).
Assessment is therefore a process through which a teacher gathers both quantitative and qualitative evidence on how much of the learning outcomes has been achieved by the students. There are basically two theoretical perspectives on assessment: the measurement perspective and the standard perspective [32,33]. The measurement perspective conceives assessment as relative measurement, in which students are judged in comparison with each other and graded based on some predetermined norms of expected distribution curves [32]. The purposes of assessment in the measurement perspective are ranking, sorting, comparing, and evaluating students' general knowledge within a broadly conceived area of achievement [32,33]. The standard perspective, on the other hand, conceives assessment as criterion-referenced, in which students are judged against standards/criteria set in the course description. The purpose is basically to gauge students' performance against the pre-set criteria, i.e., the level of attainment of the pre-set learning outcomes by individual students [32,33].
Further, assessment has been used both for formative feedback delivery (i.e., formative assessment) and for summative grading (i.e., summative assessment), even though both are popularly identified as types of assessment [25,32]. Historically, the distinction between formative and summative assessment can be traced to Michael Scriven, who used these terms to characterise methods of evaluation as far back as 1967, as claimed by [34]. Over the last fifty years, however, these terms have been adapted into assessment terminology and used interchangeably with formative feedback, assessment for learning, and assessment for grading [25]. The rationale for using either summative or formative assessment is to gauge how well students have done, or are doing, in a teaching and learning activity. Central to formative assessment is the provision of feedback: using the assessment as communication between the teacher and learners geared towards modifying the students' thinking process. In return, the teacher can modify subsequent teaching within the timeframe of a course. Summative assessment, on the other hand, comes at the end of a course and communicates to the students their level of attainment in the course. In the present article, we use assessment broadly to cover both formative and summative purposes following the standard perspective, while assessment tasks will refer to the means of gathering assessment evidence, such as the course assignments and exams.

2.3. Research Context and the Research Questions

The focus of the present research is on a compulsory foundational course, Mathematics 1, for first-year engineering students in a Norwegian university. It is a 7.5 credit course that is offered every autumn to undergraduate students enrolled in the following study programmes: civil and structural engineering, computer engineering, electronics and electrical engineering, renewable energy, and mechatronics. The course content comprises basic skills in functions, differentiation and integration of functions and their applications, Taylor series, and complex numbers. Mathematics 1 contains lectures (physical and live streaming, twice a week, 1 h and 30 min each, and a break of 15 min) and problem-solving sessions (twice a week, 1 h and 30 min each, and a break of 15 min). The course is traditionally taught in the sense that lectures are teacher-led and mostly end with few or no questions from the students. In Mathematics 1, there is a distinction between examination criteria and assessment tasks. The examination criterion (i.e., a requirement before a student can be allowed to sit for the final exam) is a sufficient number (70–80%) of approved exercises in the three mandatory assignments during the course. The students’ scores in the mandatory assignments do not count toward their final grades in the course. As such, there is only one high-stake individual examination (eight to nine mathematical tasks) at the end of the term upon which the students are graded.
These modes of teaching, learning, and assessing in Mathematics 1 open the way for some questions that we attempt to address in the present study. The main research question is: what are the students' perceptions of teaching, feedback delivery, and assessment in Mathematics 1? This main research question prompts some follow-up questions: do the teaching activities offer opportunities for productive student learning? Do the students get sufficient, high-quality formative feedback? And how well do the assessment tasks capture students' time and effort?

3. Methods

3.1. Sample of the Study

The present research focuses on second-year university students who completed Mathematics 1 during the first year of their university education. This set of students was given preference over the third-year students on the premise that their experience of teaching, feedback delivery, and assessment in Mathematics 1 is fresher than that of the latter. Meanwhile, the first-year students did not qualify for this study because they were in the middle of the course at the time of data collection in autumn 2021. A total of 107 second-year engineering students (80 men) gave consent and anonymously participated in the study. Their average age is 23.07 years, with a standard deviation of 4.16. Consent was voluntary; some students received emails from the researchers, while others were invited in their physical classrooms to take part in the research. Thus, the resulting 107 engineering students form a convenience sample for the study.

3.2. Measuring Instrument

We used the Norwegian adaptation of the assessment experience questionnaire [35] to generate data on students’ perceptions of teaching, feedback delivery, and assessment in Mathematics 1. The assessment experience questionnaire was originally developed by Gibbs and Simpson [36] before being adapted and validated in the Norwegian context. The final version of the Norwegian validation of the assessment experience questionnaire (N-AEQ) has 17 items which are distributed into 6 subscales. Five of the six subscales have three items each and one subscale has only two items. A series of exploratory and confirmatory factor analyses show that N-AEQ exhibits construct validity and its subscales have reliability coefficients of 0.75, 0.70, 0.69, 0.77, 0.66, and 0.50 using Cronbach alpha [35]. For the present study, we excluded the two-item subscale of the N-AEQ because of its reported low reliability coefficient of 0.50 and we added some items that will be discussed in the subsequent paragraph. As such, the adapted version of N-AEQ used in the present study has fifteen closed-ended items which are distributed equally into five subscales. Table 1 shows each subscale of the adapted N-AEQ, short descriptions of each dimension, sample items, and the corresponding reported reliability coefficients. The full English and the Norwegian translations of the questionnaire are available in the Appendix A and Appendix B, respectively.
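As an aside for readers less familiar with the reliability coefficients reported above, Cronbach's alpha for a subscale can be computed from a respondents-by-items score matrix. The following sketch is illustrative only; it is not the validation code used in [35], and the toy data are invented:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # items in the subscale
    item_variances = scores.var(axis=0, ddof=1)      # sample variance per item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: four respondents answering a three-item subscale on a 1-6 scale.
toy = np.array([[1, 2, 1],
                [3, 3, 4],
                [5, 4, 5],
                [6, 6, 6]])
print(round(cronbach_alpha(toy), 2))  # → 0.97
```

As a sanity check, a subscale whose items are perfectly correlated yields an alpha of exactly 1, which follows directly from the formula above.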
On the questionnaire, the students rated their agreement with each item statement on a six-point Likert scale: strongly disagree, disagree, slightly disagree, slightly agree, agree, and strongly agree. Additionally, we added some items that requested the students to provide their gender, age, and an optional open-ended question: Are there any other comments you would like to make about the assessment and feedback in Mathematics 1? For the open-ended question, the students were asked to provide their answers in written form using the space provided on the questionnaire.

3.3. Data Collection and Analysis

We prepared both electronic and paper versions of the N-AEQ and administered the questionnaire in autumn 2021. Most of the students following the lectures physically completed the paper version of the questionnaire during class visitations, while only five students responded to the electronic version provided to the students following the lectures remotely. The data generated with the closed-ended items of the questionnaire were coded from 1 for strongly disagree to 6 for strongly agree. Initial screening showed that the data contained no outliers and only a few missing values, which posed no challenge to the subsequent analysis. We then computed the scores for the five dimensions of the N-AEQ (feedback quality, feedback quantity, use of feedback, quality of effort, and exam and learning) by taking averages of the corresponding item scores, and analysed the resulting data using basic descriptive statistics involving means, proportions, and standard deviations.
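The coding and two-stage averaging just described can be sketched in a few lines. The grouping of items `q1`–`q15` into dimensions below is hypothetical (the actual item wording and ordering are given in Appendix A and Appendix B); the sketch only illustrates the procedure:

```python
# Hypothetical item-to-dimension grouping; the real item order is in the appendices.
DIMENSIONS = {
    "feedback quality":  ("q1", "q2", "q3"),
    "feedback quantity": ("q4", "q5", "q6"),
    "use of feedback":   ("q7", "q8", "q9"),
    "quality of effort": ("q10", "q11", "q12"),
    "exam and learning": ("q13", "q14", "q15"),
}

# Six-point Likert coding used in the analysis: 1 = strongly disagree ... 6 = strongly agree.
LIKERT = {"strongly disagree": 1, "disagree": 2, "slightly disagree": 3,
          "slightly agree": 4, "agree": 5, "strongly agree": 6}

def dimension_means(responses):
    """responses: list of dicts mapping item id to a Likert label (missing items omitted).

    Each student's items are averaged per dimension, and those per-student
    averages are then averaged over all students, skipping missing values.
    """
    per_dimension = {dim: [] for dim in DIMENSIONS}
    for student in responses:
        for dim, items in DIMENSIONS.items():
            scores = [LIKERT[student[item]] for item in items if item in student]
            if scores:  # skip a dimension the student left entirely blank
                per_dimension[dim].append(sum(scores) / len(scores))
    return {dim: sum(avgs) / len(avgs) for dim, avgs in per_dimension.items()}
```

Averaging per student before averaging over students weights each respondent equally even when a few items are missing, which matches how the few missing values in our data were handled.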
To analyse the data generated from the open-ended item on the questionnaire, the researchers used thematic analysis as described by Braun et al. [37]. The purpose of using the thematic analysis is to critically examine each open comment the students made and make coherent arguments for cross-cutting meanings (themes) that appropriately describe their perceptions of teaching, feedback delivery, and assessment in Mathematics 1. Themes in the present research reflect “a pattern of shared meaning, organized around a core concept or idea” [37]. This conceptualisation of themes contrasts with another school of thought that views themes as a domain summary [38]. The view of themes as a pattern of shared meaning offers an opportunity to go beyond the surface summary of the contents being analysed and dig deep into the underlying meaning of the contents. The approach to thematic analysis in the present study is reflexive such that emphasis is on contextual meaning(s) and researchers’ subjectivity is not only valid but also used in the coding process [37]. Thus, the coding is free from any pre-designed codebook and follows non-linear processes of coding, reflecting, and recoding to achieve coherent outcomes (themes).

4. Results and Discussion

4.1. Quantitative Analysis Results

The first set of results concerns the quantitative analysis of the fifteen closed-ended items of the N-AEQ. From the students' responses to these items, we infer their general perceptions of teaching, feedback delivery, and assessment in Mathematics 1. For each dimension of the N-AEQ, a student's responses to the three corresponding items are averaged, and these per-student dimension scores are then averaged over all participating students. The final averages for each dimension of the N-AEQ are presented in Figure 1.
The results presented in Figure 1 show that the feedback quantity and feedback quality dimensions of the questionnaire have means of 3.57 and 3.89, respectively. That is, the students do not overwhelmingly agree that the feedback they received in Mathematics 1 is sufficient and timely, nor that it fosters their understanding and highlights specific areas of improvement in their work. Moreover, Figure 1 shows that the use of feedback and the exam and learning dimensions of the questionnaire have means of 4.17 and 4.58, respectively. These results show that the students agree more strongly that the limited feedback they received during the course is used to improve their learning, and they slightly agree that the exam is aligned with the course content materials and fosters learning. On a more positive note, Figure 1 shows that the quality of effort dimension of the questionnaire has a mean of 5.25. That is, there is general agreement among the students that Mathematics 1 and its assessment tasks necessitate consistent effort. In sum, the results of the quantitative analysis convey a general perception among the students that they receive limited and/or poor-quality feedback on their work, whereas the assessment tasks are relevant in fostering their learning.
It is crucial to remark that the dimensions of the N-AEQ do not operate in isolation. Previous studies, e.g., Refs. [39,40], show that both feedback quantity and quality have substantial relationships with the use of feedback. As such, the finding that the students in the present study make use of the limited feedback they receive suggests that increasing the quantity and quality of feedback is a plausible route to fostering their learning experience in Mathematics 1. It is also important to mention that, like many other quantitative analysis results, Figure 1 only provides evidence of students' perceptions at an average level; the findings may not be directly applicable to each student. To gain a sense of what individual students think of the teaching, feedback delivery, and assessment in Mathematics 1, the following section presents the qualitative analysis of students' comments on the open-ended item on the questionnaire.

4.2. Qualitative Analysis Results

To gain more insights into the quantitative data, we performed thematic analysis, as described in the data collection and analysis section, on the responses of students that answered the open-ended question:
Are there any other comments you would like to make about the assessment and feedback in Mathematics 1?
Of the 107 students who anonymously participated in the research and returned their questionnaires, 37 answered this open-ended question. Eight of these students either wrote only brief comments such as “Great, but difficult” and “The teachers were very good. Thank you:)”, or wrote in handwriting that was not legible enough to read; their comments were excluded from further analysis. Comments from the remaining 29 students were analysed, and we discuss the results of this thematic analysis under the following headings:
  • Students’ perceptions of teaching.
  • Feedback delivery and the assessment tasks.

4.2.1. Students’ Perceptions of Teaching

A theme that emerges as a pattern of shared meaning across the students’ comments is that the teaching in Mathematics 1 offers little support for learning. This disposition of students towards the teaching in the course may be justified from two perspectives. First, the students feel that there was too much content to cover within a limited time in the course. For instance, one of the students wrote:
The subject moved on very quickly to new topics, which made it difficult to get proper benefits from learning in class.
Another student wrote:
It was hectic and constant working. There are many topics “fighting” about study time. You end up in a situation where you try to keep up with everything, but some topics have to be sacrificed to perform in others.
The second perspective is their perception that the teachings are dominated by too much note-taking, formulae, and large pieces of proof. For instance, one of the students wrote:
The teaching was hectic and the lecturer often ‘rushes’ through large pieces of proof and calculations.
Another student wrote:
The lectures were used for a lot of unnecessary proof.
Another student wrote:
There was not so much at lectures. YouTube is better.
These excerpts of students' comments build an understanding that the teaching in Mathematics 1 offers little support for learning. From the students' perspectives, heavy course content and the proliferation of note-taking, formulae, and large pieces of proof in the lectures make it difficult to learn appropriately during the course delivery. This perception of teaching in the course fits the description of lectures and lecturing given by Greiffenhagen [41], who wrote that lectures “involve a great deal of writing. It is not untypical for a lecturer to fill several blackboards during one lecture. Furthermore, lecturers typically write the definitions, theorems, and proofs out in full” (p. 505). As the students rightly perceived, this method of teaching mathematics has been shown to be less effective than more student-centred, active learning approaches to teaching mathematics [42,43,44]. It thus becomes imperative for stakeholders in the teaching and learning of the course to devise innovative approaches to improving the content delivery of the course.

4.2.2. Feedback Delivery and the Assessment Tasks

More than half of the students (59%) commented on their perceptions of feedback delivery in the course. A theme that emerges as a pattern of shared meaning across these comments is dissatisfaction with the quality and quantity of feedback in the course. This theme provides further support for the students' views on the adequacy, timeliness, and relevance of feedback in fostering their understanding, as presented in the quantitative analysis. It appears that the only window through which they receive feedback on their work is the mandatory assignments. Admittedly, there are informal channels, such as the drop-in centres and occasional group discussions among the students, through which they get help with their work. However, such feedback is not sufficient. Many of the students remarked that the only feedback they received was an ‘approved’ or ‘not approved’ on their mandatory assignments, which did not have much value to them. Moreover, some of them mentioned that there was no feedback on the exam, although this is expected since there is only one exam in the course. One of the students wrote:
I wish it was possible to get feedback on the exam to learn from it.
Many of the students mentioned that the mandatory assignments are too bulky in content and suggested that the instructors consider breaking the assignments into smaller pieces that are well distributed across the course contents. They suppose that a greater number of assignments would trigger more feedback in the course. One of the students wrote:
With larger and fewer assignments, it was difficult to learn the material as it took longer each time I worked on the subject.
Another student wrote:
I did not get much feedback from the teacher. Had little compulsory and the obligatory was difficult (did not get much out of them). Better with small assignments.
Some of the students also suggested that the mandatory assignments should count towards the final grade in the course. For instance, one of the students wrote:
Have more obligations so you get feedback continuously. Should have graded scores on submissions that count toward the exam.
Further, a few students expressed reservations about the level of difficulty, the limited time, and the large chunk of content to be covered for the final exam. However, these reservations are expected considering the structure of the course as described in the research context and the research questions section. Meanwhile, some students politely suggested a reduction in the exam weight, a suggestion worthy of consideration, especially given some consequences of the COVID-19 pandemic. For instance, in addition to the previous student's comment, another student wrote:
Could also have had something to do that counts towards the exam during the semester (e.g., a 2-week project or something).
These findings point to a conclusion that the present structure of Mathematics 1 offers limited formative feedback to the students, and that the assessment tasks require restructuring to capture students’ time and effort. Considering the substantial influence of qualitative formative feedback on students’ success in mathematics [15,18,40], the findings of the present study pose a challenge to stakeholders in the teaching of mathematics to devise innovative techniques for enhancing feedback delivery in the course.

5. Conclusions

The teaching and learning of mathematics in higher education are challenged by the high enrolment of degree-seeking students and high failure rates in foundational mathematics courses. Admittedly, the challenge is tough and multi-faceted, with several calls for reform and for the adoption of more student-centred instruction that engages students mathematically, encourages peer-to-peer interaction, uses students' mathematical thinking to inform teaching, and makes a genuine effort to address equity in higher education [44,45]. In the present study, we investigated students' perceptions of teaching, feedback delivery, and assessment as a first step in reforming the teaching and learning of a first-year mathematics course in a Norwegian university. The findings are revealing, with several implications for the concerned stakeholders regarding the next line of action.
For instance, both the quantitative and qualitative analysis results indicate that the status quo of teaching offers little support for learning in the course. The teaching is dominated by teacher-centred instruction, extensive note-taking, and large pieces of proof, which make productive learning difficult for students during class activities. One suggestion to address this problem is to restructure the teaching such that half of the class time, especially in the problem-solving sessions, is used for cooperative learning in which students interact and engage mathematically with each other. This type of engagement could foster productive learning [43,46]. We suppose that such restructuring will not alter the structures and collective norms regarding teaching in the course [13]. Another solution could be to split the course into two or three components, with each component assessed separately. It is envisaged that with more than one exam in the course, there will be opportunities for formative assessment to shape students' knowledge and modify subsequent teaching [32,47]. Admittedly, both suggestions will require additional time and effort from the instructors, as well as the involvement of people with the power to influence such changes. However, the gain in students' success in the course will eventually be worth the sacrifice.
Another crucial observation from the findings of the present study is the low rate of students' satisfaction with the quality and quantity of feedback in the course. This finding exposes a flaw in the present structure of the course and necessitates a genuine effort toward improving students' success in it. A viable option to address this challenge is to restructure the assignments so that they are both individual, for practising skills and procedures, and team-based, for solving conceptual problems. These assignments may be given on a weekly basis. The weekly assignments and the formative feedback therein can be delivered through technological tools (e.g., the System for Teaching and Assessment using a Computer algebra Kernel, STACK). Following some empirical evidence in the literature, e.g., [20,31,48], we argue that the use of technological tools can support feedback delivery that facilitates students' engagement with and use of the feedback to shape their learning. More importantly, the technological tools should be designed based on theoretical frameworks such as the framework in [49], which emphasises reproduction, application, generation, and reflection in task development. Our next line of action could be to implement some principled changes in subsequent semesters with the support of people who have the power to influence change, such as the head of section, course instructors, and research leaders in the faculty. Following this, we will evaluate the effectiveness of these changes in improving students' performance in Mathematics 1.
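To illustrate the kind of automated formative feedback such tools can deliver, consider the following minimal sketch in Python. It is not STACK's actual implementation: the function, sample points, and feedback messages are invented for illustration, and a CAS-based system such as STACK would compare expressions symbolically rather than numerically.

```python
import math

def check_answer(student_expr, expected_expr, points=None, tol=1e-9):
    """Numerically compare two single-variable expressions (given as Python
    callables) at sample points and return a short formative feedback message.
    Illustrative only; a real CAS checker would compare symbolically."""
    if points is None:
        points = [0.1 * k for k in range(1, 11)]  # sample x in (0, 1]
    for x in points:
        if abs(student_expr(x) - expected_expr(x)) > tol:
            return ("Not quite: your expression disagrees with the expected one "
                    f"at x = {x:.1f}. Revisit the rule you applied.")
    return "Correct: your expression matches the expected one at all checked points."

# A student differentiates f(x) = x^2 sin(x); expected answer by the product rule:
expected = lambda x: 2 * x * math.sin(x) + x**2 * math.cos(x)
print(check_answer(lambda x: 2 * x * math.sin(x) + x**2 * math.cos(x), expected))
print(check_answer(lambda x: 2 * x * math.sin(x), expected))  # forgot one term
```

Delivered weekly through such a tool, this style of immediate, task-specific feedback could address both the quantity and timeliness concerns raised by the students.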
The implications of the present study are not restricted to the context of the research, even though the data used were locally generated and analysed. University students' poor performance in foundational mathematics is a challenge at many universities within and outside the Norwegian borders [2,6]. Our approach of taking a step backwards to critically examine the status quo in the teaching and learning of the course could be replicated at other struggling institutions with similar problems. Further, our mixed approach of complementing quantitative with qualitative methods in data collection, analysis, and interpretation of results may be adopted by other institutions within and outside Norway. This offers the opportunity to combine the strengths of both methods to make coherent arguments about the problem under investigation. Moreover, some of the potential solutions, e.g., the use of technological tools for feedback delivery, could be useful for addressing similar problems in teaching, feedback delivery, and assessment in foundational mathematics courses elsewhere.

Author Contributions

Y.F.Z. conceptualised the study, conducted formal analysis, and wrote and revised the original draft of the manuscript. Ø.M. and S.O.G.N. were involved in data collection, formal analysis, and writing of the original draft. T.G. supervised and wrote the original draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

The authors sought the individual consent of the anonymous respondents in accordance with the national data protection guidelines.

Data Availability Statement

The data are available upon request from the corresponding author.

Acknowledgments

The authors thank Chris Rasmussen and John David Monaghan for their comments and recommendations on the first draft of the manuscript. The authors also acknowledge the support of the University library in funding the APC of the manuscript.

Conflicts of Interest

The authors declare no financial or non-financial competing interests.

Appendix A. Your Experience of Assessment and Feedback

The purpose of this questionnaire is to find out how you feel about the assessment and feedback in the MA-178 (Mathematics 1) course. The results will be used to help your mathematics teachers improve the assessment and make the feedback more useful to you. The questionnaire is anonymous. Data will be used for research and evaluation purposes.
For each statement, show the extent of your agreement or disagreement by putting a cross in the box which best reflects your current view of the MA-178 course so far.
Response options: Strongly Disagree | Disagree | Slightly Disagree | Slightly Agree | Agree | Strongly Agree
1. The requirements of this course make it necessary to work consistently hard.
2. The feedback I receive makes me understand things better.
3. I read the feedback I receive from the teacher carefully and try to understand the teacher's assessments and comments.
4. On this course, it is necessary to work consistently and regularly.
5. I have hardly received any feedback on submitted assignments.
6. I learn new things while preparing for the exams.
7. As a rule, the feedback on assignments makes me go back over material we have covered earlier.
8. The feedback gives me a clear sense of what needs to be improved for next time.
9. The feedback makes me understand better why the teachers are assessing my work as they do.
10. Both exam preparations and the exam provide me with a greater overview and understanding of the material.
11. I use the feedback I received to go back over what I had done in my work.
12. Feedback comes quickly.
13. I learn new things better as a result of the exams.
14. Whatever feedback I received on my work came too late to be useful.
15. The way the assessment system works here, it is necessary to work regularly every week.
I. Gender: Male……. Female……. [Mark with a cross (X)]. II. Age: ……………. Years. Can you key in your letter grade in MA-178? (optional):………………….
Are there any other comments you would like to make about the assessment and feedback in Mathematics 1?

Appendix B. Din Opplevelse av Vurdering og Tilbakemeldinger

Formålet med dette spørreskjemaet er å finne ut hva du føler angående vurderingene og tilbakemeldingene på kurset MA-178 (Matematikk 1). Resultatet vil bli brukt til å hjelpe matematikklærerne dine til å forbedre vurderingen, og å gjøre tilbakemeldingene nyttigere for deg. Spørreskjemaet er anonymisert. Dataene vil bli brukt til forsknings- og evalueringsformål.
For hvert av utsagnene, indiker hvor enig eller uenig du er ved å sette et kryss i den boksen som best svarer til ditt nåværende syn på kurset MA-178 så langt.
Svaralternativer: Veldig Uenig | Uenig | Litt Uenig | Litt Enig | Enig | Veldig Enig
1. Kravene på studiet gjør det nødvendig å jobbe hardt hele tiden
2. Tilbakemeldingene til meg gjør at jeg forstår tingene mye bedre
3. Jeg leser nøye igjennom tilbakemeldingene jeg får og prøver å forstå lærerens vurderinger og kommentarer
4. For å gjøre det bra på dette studiet må vi jobbe jevnt og regelmessig
5. Jeg har nesten ikke fått tilbakemeldinger på innleverte oppgaver
6. Jeg lærer nye ting når jeg forbereder meg til eksamen
7. Som regel fører tilbakemeldinger på oppgaven(e) til at jeg repeterer lærestoff vi har arbeidet med tidligere
8. Tilbakemeldingene gir meg klar beskjed om hva som bør forbedres neste gang
9. Tilbakemeldingene gjør at jeg forstår bedre hvorfor lærerne vurderer arbeidet mitt (oppgavene) som de gjør
10. Både forberedelser til eksamen og selve eksamen gir meg oversikt og bedre forståelse av kunnskapsstoffet
11. Jeg bruker tilbakemeldingene til å gå igjennom oppgaven på nytt
12. Tilbakemeldinger (feedback) kommer raskt
13. Jeg lærer ting bedre som resultat av eksamen
14. Tilbakemeldingene kommer nesten alltid for sent til å være av noen nytte
15. Slik vurderingssystemet fungerer her er det nødvendig å jobbe jevnt hver uke
I. Kjønn: Mann....... Kvinne........ [Marker med et kryss (X)]. II. Alder: ……………. År. Kan du skrive inn karakteren din i MA-178? (frivillig ekstraspørsmål): …………………………….
Har du andre kommentarer du kunne tenke deg å ta med angående vurdering og tilbakemelding på Matematikk 1?

References

  1. Dontre, A.J. The influence of technology on academic distraction: A review. Hum. Behav. Emerg. 2020, 3, 379–390.
  2. Smith, W.M.; Rasmussen, C.; Tubbs, R. Introduction to the special issue: Insights and lessons learned from mathematics departments in the process of change. PRIMUS 2021, 31, 239–251.
  3. Chen, Q.; Yan, Z. Does multitasking with mobile phones affect learning? A review. Comput. Hum. Behav. 2016, 54, 34–42.
  4. Laursen, S. Levers for Change: An Assessment of Progress on Changing STEM Instruction, 1st ed.; American Association for the Advancement of Science: Washington, DC, USA, 2019.
  5. Gynnild, V.; Tyssedal, J.; Lorentzen, L. Approaches to study and the quality of learning. Some empirical evidence from engineering education. Int. J. Sci. Math. Educ. 2005, 3, 587–607.
  6. Zakariya, Y.F. Undergraduate Students’ Performance in Mathematics: Individual and Combined Effects of Approaches to Learning, Self-Efficacy, and Prior Mathematics Knowledge; Department of Mathematical Sciences, University of Agder: Kristiansand, Norway, 2021.
  7. Leomarich, C. Factors affecting the failure rate in mathematics: The case of Visayas State University (VSU). Rev. Socio-Econ. Res. Devel. Stud. 2019, 3, 1–18.
  8. Almeida, M.E.B.d.; Queiruga-Dios, A.; Cáceres, M.J. Differential and Integral Calculus in First-Year Engineering Students: A Diagnosis to Understand the Failure. Mathematics 2021, 9, 61.
  9. Zakariya, Y.F.; Bamidele, E.F. Investigation into causes of poor academic performance in mathematics among Obafemi Awolowo University undergraduate students, Nigeria. GYANODAYA—J. Progress. Educ. 2016, 9, 11.
  10. Zakariya, Y.F.; Nilsen, H.K.; Bjørkestøl, K.; Goodchild, S. Analysis of relationships between prior knowledge, approaches to learning, and mathematics performance among engineering students. Int. J. Math. Educ. Sci. Technol. 2021, 1–19.
  11. Ellis, J.; Fosdick, B.K.; Rasmussen, C. Women 1.5 times more likely to leave STEM pipeline after calculus compared to men: Lack of mathematical confidence a potential culprit. PLoS ONE 2016, 11, e0157447.
  12. Smith, W.M.; Voigt, M.; Ström, A.; Webb, D.C.; Martin, W.G. Transformational Change Efforts: Student Engagement in Mathematics through an Institutional Network for Active Learning; American Mathematical Society: Providence, RI, USA, 2021.
  13. Reinholz, D.L.; Apkarian, N. Four frames for systemic change in STEM departments. Int. J. STEM Educ. 2018, 5, 3.
  14. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112.
  15. Stovner, R.B.; Klette, K.; Nortvedt, G.A. The instructional situations in which mathematics teachers provide substantive feedback. Educ. Stud. Math. 2021, 108, 533–551.
  16. Kontorovich, I. Minding mathematicians’ discourses in investigations of their feedback on students’ proofs: A case study. Educ. Stud. Math. 2021, 107, 213–234.
  17. Reinholz, D.L.; Pilgrim, M.E. Student sensemaking of proofs at various distances: The role of epistemic, rhetorical, and ontological distance in the peer review process. Educ. Stud. Math. 2021, 106, 211–229.
  18. Reinholz, D.L. Peer-assisted reflection: A design-based intervention for improving success in calculus. Int. J. Res. Undergrad. Math. Educ. 2015, 1, 234–267.
  19. Fujita, T.; Jones, K.; Miyazaki, M. Learners’ use of domain-specific computer-based feedback to overcome logical circularity in deductive proving in geometry. ZDM—Math. Educ. 2018, 50, 699–713.
  20. Robinson, M.; Loch, B.; Croft, T. Student perceptions of screencast feedback on mathematics assessment. Int. J. Res. Undergrad. Math. Educ. 2015, 1, 363–385.
  21. Sezen-Barrie, A.; Marbach-Ad, G. Cultural-historical analysis of feedback from experts to novice science teachers on climate change lessons. Int. J. Sci. Educ. 2021, 43, 497–528.
  22. Han, Y.; Hyland, F. Learner engagement with written feedback: A sociocognitive perspective. In Feedback in Second Language Writing; Han, Y., Hyland, F., Eds.; Cambridge University Press: Cambridge, UK, 2019; pp. 247–264.
  23. Zhan, Y.; Wan, Z.H.; Sun, D. Online formative peer feedback in Chinese contexts at the tertiary level: A critical review on its design, impacts and influencing factors. Comput. Educ. 2022, 176, 104341.
  24. Guo, W.; Wei, J. Teacher feedback and students’ self-regulated learning in mathematics: A study of Chinese secondary students. Asia-Pac. Educ. Res. 2019, 28, 265–275.
  25. Black, P.; Wiliam, D. Assessment and classroom learning. Assess. Educ. Princ. Policy Pract. 1998, 5, 7–74.
  26. Jonsson, A. Facilitating productive use of feedback in higher education. Active Learn. High. Educ. 2012, 14, 63–76.
  27. Brown, G.T.; Peterson, E.R.; Yao, E.S. Student conceptions of feedback: Impact on self-regulation, self-efficacy, and academic achievement. Br. J. Educ. Psychol. 2016, 86, 606–629.
  28. Barana, A.; Marchisio, M.; Sacchet, M. Interactive feedback for learning mathematics in a digital learning environment. Educ. Sci. 2021, 11, 279.
  29. Gibbs, G. Using Assessment to Support Student Learning; Leeds Metropolitan University: West Yorkshire, UK, 2010.
  30. Shute, V.J. Focus on formative feedback. Rev. Educ. Res. 2008, 78, 153–189.
  31. Sangwin, C. Computer Aided Assessment of Mathematics; Oxford University Press: Oxford, UK, 2013.
  32. Biggs, J.B. What the student does: Teaching for enhanced learning. High. Educ. Res. Dev. 2012, 31, 39–55.
  33. Taylor, C. Assessment for measurement or standards: The peril and promise of large-scale assessment reform. Am. Educ. Res. J. 1994, 31, 231–262.
  34. Lipnevich, A.A.; Berg, D.A.G.; Smith, J.K. Toward a model of student response to feedback. In The Handbook of Human and Social Conditions in Assessment; Brown, G.T.L., Harris, L.R., Eds.; Routledge: New York, NY, USA, 2016; pp. 169–185.
  35. Pettersen, R.C.; Karlsen, K.H. Studenters Erfaringer med Tilbakemeldinger (Feedback): Norsk Versjon av Assessment Experience Questionnaire (AEQ). 2011. Available online: https://www.researchgate.net/publication/261572653_Studenters_erfaringer_med_tilbakemeldinger_feedback_Norsk_versjon_av_Assessment_Experience_Questionnaire_AEQ (accessed on 6 May 2022).
  36. Gibbs, G.; Simpson, C. Measuring the response of students to assessment: The assessment experience questionnaire. In Proceedings of the 11th International Improving Student Learning Symposium, Hinckley, UK, 1–3 September 2003.
  37. Braun, V.; Clarke, V.; Hayfield, N.; Terry, G. Thematic analysis. In Handbook of Research Methods in Health Social Sciences; Liamputtong, P., Ed.; Springer: Singapore, 2019; pp. 843–860.
  38. Guest, G.; MacQueen, K.M.; Namey, E.E. Applied Thematic Analysis; Sage: Thousand Oaks, CA, USA, 2012.
  39. Vattøy, K.-D.; Gamlem, S.M.; Rogne, W.M. Examining students’ feedback engagement and assessment experiences: A mixed study. Stud. High. Educ. 2021, 46, 2325–2337.
  40. Kyaruzi, F.; Strijbos, J.-W.; Ufer, S.; Brown, G.T.L. Students’ formative assessment perceptions, feedback use and mathematics performance in secondary schools in Tanzania. Assess. Educ. Princ. Policy Pract. 2019, 26, 278–302.
  41. Greiffenhagen, C. The materiality of mathematics: Presenting mathematics at the blackboard. Br. J. Sociol. 2014, 65, 502–528.
  42. Ginga, U.A.; Zakariya, Y.F. Impact of a social constructivist instructional strategy on performance in algebra with a focus on secondary school students. Educ. Res. Int. 2020, 2020, 3606490.
  43. Rasmussen, C.; Kwon, O.N. An inquiry-oriented approach to undergraduate mathematics. J. Math. Behav. 2007, 26, 189–194.
  44. Freeman, S.; Eddy, S.L.; McDonough, M.; Smith, M.K.; Okoroafor, N.; Jordt, H.; Wenderoth, M.P. Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. USA 2014, 111, 8410–8415.
  45. Theobald, E.J.; Hill, M.J.; Tran, E.; Agrawal, S.; Arroyo, E.N.; Behling, S.; Chambwe, N.; Cintron, D.L.; Cooper, J.D.; Dunster, G.; et al. Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math. Proc. Natl. Acad. Sci. USA 2020, 117, 6476–6483.
  46. Laursen, S.L.; Rasmussen, C. I on the prize: Inquiry approaches in undergraduate mathematics. Int. J. Res. Undergrad. Math. Educ. 2019, 5, 129–146.
  47. Black, P.; Wiliam, D. Developing the theory of formative assessment. Educ. Assess. Eval. Account. 2009, 21, 5–31.
  48. Van der Kleij, F.M.; Feskens, R.C.W.; Eggen, T.J.H.M. Effects of feedback in a computer-based learning environment on students’ learning outcomes. Rev. Educ. Res. 2015, 85, 475–511.
  49. Demosthenous, E.; Christou, C.; Pitta-Pantazi, D. Mathematics classroom assessment: A framework for designing assessment tasks and interpreting students’ responses. Eur. J. Investig. Health Psychol. Educ. 2021, 11, 1088–1106.
Figure 1. Students’ learning experience in Mathematics 1. Note. 1—strongly disagree, 2—disagree, 3—slightly disagree, 4—slightly agree, 5—agree, and 6—strongly agree.
Table 1. Sample items of each dimension of the N-AEQ and the corresponding reliability indices.
Dimension | Short Description | Item Number | Sample Item | α
Feedback quality | The feedback fosters students’ understanding and highlights specific areas of improvement in students’ work. | 2, 8, 9 | The feedback I receive makes me understand things better. | 0.75
Exam and learning | The exam is aligned with the course content materials and fosters learning. | 6, 10, 13 | I learn new things while preparing for the exams. | 0.70
Feedback quantity | The feedback is sufficient and timely. | 5 *, 12, 14 * | Feedback comes quickly. | 0.69
Quality of effort | The course and its assessment tasks necessitate consistent effort. | 1, 4, 15 | The requirements of this course make it necessary to work consistently hard. | 0.77
Use of feedback | The feedback is used by the students to improve learning. | 3, 7, 11 | I use the feedback I received to go back over what I had done in my work. | 0.66
* Items that are reverse coded before analysis because of their negative wording.
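For readers who wish to reproduce the reliability indices, the reverse coding and Cronbach’s α reported in Table 1 can be computed as in the following sketch (plain Python; the five response vectors are invented for illustration and are not the study data).

```python
def reverse_code(score, scale_max=6, scale_min=1):
    """Reverse a Likert score on a 1-6 scale (e.g., 6 -> 1, 5 -> 2)."""
    return scale_max + scale_min - score

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)            # number of items in the dimension
    n = len(items[0])         # number of respondents
    def var(xs):              # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(col) for col in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Invented responses from five students to the three "feedback quantity" items;
# items 5 and 14 are negatively worded, so they are reverse coded first.
item5 = [reverse_code(s) for s in [2, 3, 5, 4, 2]]
item12 = [5, 4, 2, 3, 5]
item14 = [reverse_code(s) for s in [2, 2, 5, 4, 1]]
alpha = cronbach_alpha([item5, item12, item14])
print(round(alpha, 2))
```

With real data, each column would hold all 107 responses to one item, and the computation would be repeated per dimension to obtain the α values in the final column of the table.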
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

