Article

Challenges and Trends in Student Evaluation of Teaching: Analysis of SET Data from the University of Peloponnese

by Ilias Papadogiannis 1,*, Costas Vassilakis 2, Manolis Wallace 1 and Athanassios Katsis 3

1 ΓAB LAB—Knowledge and Uncertainty Research Laboratory, University of the Peloponnese, 22131 Tripolis, Greece
2 Department of Informatics and Telecommunications, University of the Peloponnese, Akadimaikou G. K. Vlachou Str, 22131 Tripolis, Greece
3 Department of Social & Education Policy, University of the Peloponnese, 20131 Korinthos, Greece
* Author to whom correspondence should be addressed.
Information 2024, 15(9), 576; https://doi.org/10.3390/info15090576
Submission received: 6 August 2024 / Revised: 4 September 2024 / Accepted: 18 September 2024 / Published: 19 September 2024

Abstract

This study examines the effectiveness of Student Evaluations of Teaching (SETs) at the University of Peloponnese, which has systematically collected anonymous evaluations since 2015. The analysis focused on participation rates, average scores, and the correlation between student evaluations and their academic performance. Participation rates were notably low, averaging 14.63%, with postgraduate students showing higher rates (27.33%) than undergraduates (10.77%). The average SET scores were moderately high, with postgraduates rating their courses slightly better (M = 4.137) than undergraduates (M = 3.899). A weak positive correlation was found between course grades and evaluations among undergraduates, whereas no significant correlation was observed for postgraduates. These findings highlight challenges in using SETs as reliable measures of teaching effectiveness and suggest the need for improved participation and more comprehensive evaluation methods. The results provide insights into enhancing assessment practices and contribute to the broader discourse on the validity of student evaluations in higher education.

Graphical Abstract

1. Introduction

Measuring teaching quality is undoubtedly a useful feedback tool for academic institutions. Teaching quality in higher education has become an important issue as institutions seek to improve outcomes and enhance accountability [1]. Student Evaluation of Teaching (SET) is a tool widely adopted in academia to obtain information from students about their learning experiences. SET data collection tools are designed to capture various aspects of teaching, including course quality, faculty engagement, and quality of support [2].
However, the dominance of SET tools as an indicator of teaching quality is a controversial issue in the literature.
The issue of the reliability of SETs as a tool for measuring the quality of teaching in universities has been the subject of international debate in recent decades [3]. Studies have questioned the reliability of SET as an accurate measure of student learning, suggesting that factors such as leniency in grading and course difficulty can bias assessments [4,5]. Data analyses show trends where perceived teaching quality is correlated with factors such as teacher comfort and attractiveness rather than effectiveness [6,7]. Factors such as gender, popularity, lecturer personality, and leniency in grading can significantly influence student evaluations, often obscuring the true effectiveness of teaching [8,9,10,11,12,13]. Response rates to these evaluations are often low, further calling into question the reliability of the effectiveness measures [14].
On the other hand, SETs can provide useful feedback, and their validity increases when they are combined with qualitative techniques such as interviews. Qualitative feedback allows students to express detailed opinions and observations, which can be more informative for improving teaching practice [15]. Separating evaluations of course content from those of the teacher is also crucial to avoid confusing course quality with teaching effectiveness [16,17,18]. Although the demand for quality higher education has raised concerns about the quality of courses and teaching in institutions, highlighting the need for effective assessment systems that improve teaching practices and student learning outcomes, low participation rates in course evaluations remain a common issue in higher education, and, according to the literature, several factors contribute to this phenomenon [17].
Students’ motivations, perceptions, and attitudes toward student evaluation of teaching (SET) surveys play a key role in survey completion [19]. In addition, the perceived value of the assessments, ensuring anonymity, survey design, and timing of survey disclosure, influence student engagement in the assessment process [20]. It is also a matter of debate as to whether generic instruments are effective in all cases. In medical schools, concerns arise about the applicability of generic higher education assessment tools, such as SETs, to assess teaching effectiveness due to differences in curriculum structure and delivery, highlighting the importance of developing comprehensive assessment systems tailored to medical education [21].
Similar issues are addressed by the University of Peloponnese (UoP), which has systematically implemented the evaluation of teaching since 2015, collecting anonymous data through the voluntary participation of students. The present work is the first attempt to study these data as a whole, drawing conclusions on the success of the SET process at the University of Peloponnese and exploring possible modifications. We aim to highlight these challenges by undertaking a broad analysis of the SET data collected at UoP. By exploring participation rates, average scores, and the correlation between student evaluations and course grades, this study seeks to provide information on the effectiveness of SET as a tool for assessing teaching quality. In addition, this study contributes to the wider global debate on the role of SET in higher education by providing recommendations for improving the quality and usefulness of student evaluations. The analysis is structured around the following research questions:
RQ1: What are the participation rates of the students of the UoP in the assessment during the period under review?
RQ2: How do the students of the UoP evaluate the courses during the period under review?
RQ3: Is there a positive relationship between the grades students receive and the assessments they give?

2. Materials and Methods

The present study attempts to assess the current situation in course evaluation at the UoP. The study follows a quantitative approach using aggregated data. For reasons of privacy, students’ course evaluations are anonymous; anonymity is also mandated by the current legislation. The process is carried out using online questionnaires: students fill out an anonymous web form without providing any personal information. The course evaluation variables were given on a five-point Likert-type scale (1 = unacceptable, 2 = unsatisfactory, 3 = moderate, 4 = satisfactory, and 5 = very satisfactory). Table 1 lists the structure of the SET data that were used in this study; the content of the evaluation questionnaire is provided in [22].
This procedure does not allow linking the assessment data with the data from students’ records. The lowest level at which there can be a unique combination of course assessment and grade data was that of the course for each year. Therefore, assessment data were calculated at an aggregate level, by course and year.
Student grades were obtained from the university’s LMS and were anonymized, with any personal information removed. The dataset included the code and name of the university department, the code and name of the course, the student’s semester of study at the time of the examination, the examination period, and the grade received. Table 2 presents a detailed breakdown of the data made available to the authors.

Data Preparation

As it was not possible to link the data from the two datasets at the student level, the following procedure was used. In the grade dataset, the average grade for each course and year was calculated and stored in a new variable (“grade point average”). In the assessment dataset, the average rating was calculated and stored in a new variable (average evaluation). Then, the course code and academic year were used to create a new variable, ‘concat’, which was identical in both datasets; this allowed the two datasets to be linked at the level of course per year. The final dataset contained the department code (dept_code), the name of the department or graduate school (dept_text), the academic year (year), the course code (lesson_code), the name of the course (lesson), the average grade (grade_aver), and the average evaluation (eval). Neither dataset distinguished between undergraduate and postgraduate courses; the separation was made by the researcher using the dept_text field, which holds the department name for undergraduate courses and the postgraduate program name for postgraduate courses. Using this distinction as a criterion, a new variable (‘und_post’) was created and used in the subsequent analysis of differences between the two levels. Student participation was calculated as the percentage of students who evaluated the course relative to those who were enrolled in the course and thus eligible to do so. The final dataset contained twelve variables, which are presented in Table 3.
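For illustration, the linking procedure described above can be expressed in a few lines of pandas. The snippet below is a minimal sketch, not the authors’ actual pipeline (the study used JASP): the file names, the per-row column names (grade, rating), and the ‘MSC’ substring test for flagging postgraduate programs are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical input files; column layouts loosely follow Tables 1 and 2.
grades = pd.read_csv("grades.csv")       # one row per student grade
evals = pd.read_csv("evaluations.csv")   # one row per submitted questionnaire

# Aggregate both datasets to the course-per-year level.
grade_agg = (grades.groupby(["dept_text", "lesson_code", "year"], as_index=False)
                   .agg(grade_aver=("grade", "mean"),
                        count_graded=("grade", "size")))
eval_agg = (evals.groupby(["lesson_code", "year"], as_index=False)
                 .agg(eval=("rating", "mean"),
                      count_eval=("rating", "size")))

# Shared linking key 'concat': course code + academic year (unique per row).
for df in (grade_agg, eval_agg):
    df["concat"] = df["lesson_code"].astype(str) + "_" + df["year"].astype(str)

# An inner merge keeps only courses that have both grades and evaluations,
# which also drops the courses with missing evaluation values.
final = grade_agg.merge(eval_agg[["concat", "eval", "count_eval"]], on="concat")

# Participation: students who evaluated relative to those who were graded.
final["precent_eval"] = 100 * final["count_eval"] / final["count_graded"]

# Undergraduate (0) vs. postgraduate (1) flag derived from the programme
# name; the 'MSC' substring test is an illustrative assumption.
final["und_post"] = final["dept_text"].str.upper().str.contains("MSC").astype(int)
```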
Due to the voluntary nature of the evaluation, some courses have missing evaluation values; these courses are excluded from further analysis. Since the evaluation process is voluntary and anonymous, ethical issues such as consent, anonymity, and confidentiality of participants are avoided.
As mentioned above, participation rates were low, which raises questions about the reliability of generalizing conclusions to all students. For this reason, the current study relied mainly on descriptive statistical measures (mean, standard deviation, and frequencies) and limited the use of inferential statistics. The conclusions of the analyses nevertheless relate to the whole UoP for the specific time period. The data analysis was performed using the open-source software JASP (version 0.18), which is based on the R language.

3. Results

3.1. First Research Question: Participation Rates of the Students

Starting from the participation rates in the evaluation of teaching (Table 4), it can be seen that the overall mean participation rate was 14.63% (SD = 15.26), with a very wide range between a minimum of 0.29% and a maximum of 93.33%. For postgraduate students, the mean participation rate was higher at 27.33% (SD = 18.82), ranging from 0.93% to 90.00%. Undergraduate students had a mean participation rate of 10.77% (SD = 11.50), again with a wide range from 0.29% to 93.33%. The number of courses used in this analysis was 4386 overall for eight academic years, with 1024 courses being at the postgraduate level and 3362 being undergraduate ones.
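Assuming the final dataframe from the preparation sketch in Section 2 is available, the Table 4 descriptives reduce to a single grouped aggregation; a minimal sketch:

```python
# Participation-rate descriptives per academic level (cf. Table 4);
# 'final' and its columns come from the earlier illustrative sketch.
by_level = (final.groupby("und_post")["precent_eval"]
                 .agg(N="size", Mean="mean", SD="std", Min="min", Max="max"))
overall = final["precent_eval"].describe()
print(by_level.round(2))
print(overall.round(2))
```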
At the departmental level, there is a notable disparity in participation rates, which range from 0.34% to 55.52% (see Appendix A). Among undergraduate programs at the UoP, the departments with the highest rates of participation were Performing and Digital Arts (M = 21.65%, SD = 13.67%), Physiotherapy (M = 15.28%, SD = 12.99%), and Theatre Studies (M = 13.01%, SD = 9.56%). Conversely, the departments with the lowest participation rates were Management Science and Technology (M = 4.79%, SD = 6.69%), Economic Sciences (M = 6.68%, SD = 10.71%), and Information Technology and Telecommunications (M = 8.01%, SD = 11.13%). In general, the participation rates for undergraduate evaluations ranged from less than 5% to over 20%, with the majority of departments averaging between 9% and 13%.
Postgraduate participation rates also varied between M.Sc. programs. The programs with the highest participation rates were Modern Wireless Communications of the Information Technology and Telecommunications Department (M = 55.52%, SD = 15.90%), Computer Science and Technology, also of the Information Technology and Telecommunications Department (M = 37.73%, SD = 11.45%), and Public Administration and Digital Transformation of the Management Science and Technology Department (M = 32.00%, SD = 3.27%). The programs with the lowest participation rates were the Nursing Department’s program on care for children with special needs (M = 8.36%, SD = 2.48%), the Food Science and Technology Department’s MBA program (M = 9.74%, SD = 4.05%), and the Sport Organization and Management Department’s program on disabilities (M = 17.59%, SD = 11.99%). Postgraduate participation rates ranged from under 10% to 75%, with most programs falling between 15% and 50%, whereas undergraduate participation rates mostly fall in the range of 5.3% to 20%. As the boxplot diagrams show (Figure 1), the high participation rates can be considered outliers, since they fall above Q3 + 1.5*IQR, where Q3 is the third quartile, Q1 is the first quartile, and IQR = Q3 − Q1 is the interquartile range.
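The outlier rule used in the boxplots can be checked directly; a short sketch, again under the assumption that the illustrative final dataframe is available:

```python
# Tukey's upper fence used in Figure 1: rates above Q3 + 1.5*IQR are outliers.
q1, q3 = final["precent_eval"].quantile([0.25, 0.75])
upper_fence = q3 + 1.5 * (q3 - q1)
outliers = final[final["precent_eval"] > upper_fence]
print(f"Upper fence: {upper_fence:.2f}%; outlying course-years: {len(outliers)}")
```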
A longitudinal analysis of the dataset over the eight-year period (Table 5) revealed that the mean participation rates for undergraduate students ranged from 8.25% in 2015 to a peak of 13.24% in 2020, with standard deviations varying between 8.56% and 13.77%.
In contrast, postgraduate students demonstrated consistently higher mean participation rates than their undergraduate counterparts. The rates ranged from 18.56% in 2016 to a high of 35.87% in 2020, with standard deviations spanning from 11.09% to 21.00%. The longitudinal analysis shows the very high participation rates as outliers at both the undergraduate and postgraduate levels. One point to note is the absence of any substantial trend in participation rates, which indicates a stable situation with no significant signs of improvement (Figure 2 and Figure 3).
These low rates of student participation limit the potential for drawing general conclusions at the university level. Consequently, the results are a representation of the existing picture based on the data available through descriptive statistics.

3.2. Second Research Question: Evaluation of the Courses

The second research question sought to ascertain how students at the UoP evaluated their courses during the period under review (Table 6). The evaluation was conducted using a five-point Likert scale. The descriptive statistics of SET scores were calculated on SET ratings from both postgraduate (n = 1024) and undergraduate courses (n = 3362), for a total of 4386 courses. The mean SET score for undergraduate students is 3.899 (SD = 0.597), indicating a rating of “satisfactory” on the one-to-five scale (Table 6).
For postgraduate students, the average SET score fell between satisfactory and very satisfactory (M = 4.137, SD = 0.522), with a minimum of 1.566 and a maximum of 5.000 (Table 6). Overall, the average SET score was close to satisfactory (M = 3.955, SD = 0.589). These results indicate that, on average, postgraduate students at the UoP tended to give slightly higher SET ratings than undergraduate students.
When analyzing the data according to the dimensions proposed by the SET assessment tool, it can be seen that the teaching staff received the highest SET evaluation (M = 4.091, SD = 1.215). High satisfaction is also demonstrated by the assignment dimension, with a mean of 4.016 (SD = 1.235), followed by courses with a mean of 3.850 (SD = 1.146), and students’ self-assessment (M = 3.846, SD = 1.107). Laboratories have a lower SET score (M = 3.549, SD = 1.297), with the lowest mean recorded in the supportive teaching dimension (M = 2.857, SD = 1.616) (Table 7). Overall, students generally evaluate the lecturers and the help they receive with their assignments positively; by contrast, there is significant room for improvement in the contribution of laboratories and supportive teaching practices.
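Since Table 1 maps the 37 questionnaire items to these dimensions, per-dimension means can be computed by averaging the corresponding question columns. A sketch assuming per-response columns named q1...q37 (hypothetical names, not taken from the paper); note that Table 1 lists no separate question range for supportive teaching, so that dimension is omitted here:

```python
# Per-dimension mean SET scores (cf. Table 7), using the question ranges
# of Table 1; 'evals' holds one row per questionnaire with columns q1..q37.
dimensions = {
    "Course": range(1, 15),           # questions 1-14
    "Assignments": range(15, 22),     # questions 15-21
    "Teaching staff": range(22, 29),  # questions 22-28
    "Lab": range(29, 33),             # questions 29-32
    "Self-assessment": range(33, 38), # questions 33-37
}
for dim, qs in dimensions.items():
    cols = [f"q{i}" for i in qs]
    # Pool all answers to the dimension's questions and average them.
    print(dim, evals[cols].stack().mean().round(3))
```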
The longitudinal study of the data revealed that at the undergraduate level, the average SET rating increased consistently, although the change remained relatively small over the eight-year period (Table 8). The average ratings improved from the fair/satisfactory categories early on to consistently near the satisfactory level in more recent years. Specifically, ratings shifted from a range of 3.689 to 3.816 in 2015–2018 to a range of 3.944 to 4.015 from 2019 to 2022. This suggests a gradual, small positive shift in student evaluations at the undergraduate level. The steady increases in average SET scores from UoP undergraduate students point to improvements in teaching effectiveness, in the other dimensions evaluated, and in student learning experiences over the eight-year period.
Similarly, the SET scores of postgraduate students at the UoP exhibited a modest, gradual increase over the eight-year period (Table 9). In particular, the mean SET ratings remained consistently above four on the five-point scale, exhibiting slight fluctuations between 4.0 and 4.2. Using the assessment rubric, these average values suggest that postgraduates consistently rated their lessons at a more than satisfactory level. The variation in the mean SET scores across the period from 2015 to 2022 was minimal, and a stable pattern emerged. The small gains demonstrate a consistent trajectory of positive teaching evaluations from postgraduate students over the examined period.
Due to limited student participation, no evaluation data are available for some academic years and departments. Some notable differences in SET evaluation are presented below; only departments and graduate programs with at least four years of data are included (Supplementary Materials). The assessment scores of the Department of Literature increased steadily from 3.64 in 2015 to 4.18 in 2022, indicating a continuous improvement in the department’s performance according to the students. The Department of Political Science and International Relations also showed a steady increase in SET score, from 3.58 to 3.91, and the Department of Social and Educational Policy from 3.62 to 4.10. The Department of Management Science and Technology saw a slight increase in scores, from 4.12 in 2019 to 4.25 in 2022. Finally, the Inter-Institutional MSc in Space Science also showed a slight positive trend, from 3.94 to 4.11.
In contrast, the evaluations of some departments or postgraduate programs showed variability, such as the Department of Performing and Digital Arts, whose scores rose from 3.73 in 2019 to 4.04 in 2020, dipped slightly in 2021, and recovered to 4.06 in 2022. The Inter-Institutional MSc in Space Science experienced an alarming drop, falling from a high of 3.71 in 2015 to just 3.09 in the most recent assessment. The MSc in Dramatic and Performing Arts in Education and Lifelong Learning showed the most alarming decline, with scores falling from a high of 4.23 in 2019 to a low of 2.42 in 2022.
The variations indicate that while some academic units have maintained or improved their educational quality in the eyes of students, others may need additional investigation and targeted interventions to address the root causes of declining assessment metrics.
By examining the data per university department (Appendix B), it is observed that at the undergraduate level, mean SET scores range from 3.330 (corresponding to a neutral attitude) to 4.510, near the maximum of the Likert scale, with no remarkable variations. Most departments show mean SET scores close to 4, corresponding to agreement with the positively formulated questions and, therefore, satisfaction with the services provided by the department. At the postgraduate level, higher SET scores are found: the mean SET scores range from 3.610 to 4.700, with small standard deviations. The majority of graduate programs show SET scores higher than 4.
It is evident that students in postgraduate programs assess teaching more positively. By setting the rating of 4 as the threshold for “Satisfactory” teaching assessment, we can note that 12 undergraduate departments were rated lower than 4 and only 5 higher. On the contrary, out of the 28 postgraduate programs, 21 were rated higher than 4 and only 7 lower (Figure 4).
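The counts behind Figure 4 follow from per-program mean SET scores and the threshold of 4; a sketch under the same dataframe assumptions as above:

```python
# Count study programmes rated above/below the 'satisfactory' threshold of 4
# (cf. Figure 4), using per-programme mean SET scores.
prog_means = final.groupby(["und_post", "dept_text"])["eval"].mean()
above = (prog_means > 4).groupby(level="und_post").sum()
below = (prog_means < 4).groupby(level="und_post").sum()
print("rated above 4:", above.to_dict())  # keys: 0 = undergrad, 1 = postgrad
print("rated below 4:", below.to_dict())
```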

3.3. Third Research Question: Relationship between Grades and Assessments’ Scores

The third research question examined the extent to which students’ evaluations are related to the grades they receive in the corresponding courses. It must be stressed at this point that the assessment questionnaires are filled in by the students between the 8th and the 12th week of course instruction; hence, students do not know their final grades when answering the questionnaires. In some cases, they may have received some intermediate exam or course assignment results. Students also discuss among themselves the outcomes of previous examination periods, characterizing courses as easier or more difficult to succeed in. Therefore, the analysis presented in this section does not aim to assess the relationship between an individual student’s success or failure and the course evaluation ratings they provide, but rather addresses the following two perspectives:
  • Can SET results for a course be a predictor for student success or failure in the particular course?
  • To what extent are average grades from previous examination periods, which are known to the students, related to the ratings they provide in the evaluation?
To examine these relationships, the correlation coefficients between the students’ average course evaluations and the average grades they received in the corresponding courses were calculated. The analysis was performed on the overall dataset, as well as between academic years and academic levels. The main limitation is again related to student participation rates and the lack of generalizability.
In this analysis, Pearson’s r (Equation (1)), Spearman’s rho (Equation (2)), and Kendall’s tau (Equation (3)) were used. The formulas and a description of their calculation are presented below.
$$ r = \frac{n\sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^{2} - \left(\sum x\right)^{2}\right]\left[n\sum y^{2} - \left(\sum y\right)^{2}\right]}} \quad (1) $$
where n is the number of data points and x and y are the paired values of the variables “grade_aver” and “eval”.
$$ \rho = 1 - \frac{6\sum d_i^2}{n(n^2 - 1)} \quad (2) $$
where n is the number of data points and $d_i$ is the difference between the ranks of each pair of corresponding values.
$$ \tau = \frac{C - D}{\sqrt{(C + D + T)(C + D + U)}} \quad (3) $$
where C is the number of concordant pairs, D is the number of discordant pairs, T is the number of ties only in the “grade_aver” variable, and U is the number of ties only in the “eval” variable.
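The three coefficients of Equations (1)–(3) are also available in SciPy; a minimal sketch, again assuming the illustrative final dataframe from Section 2 (SciPy’s kendalltau computes the tau-b variant, which handles ties as in Equation (3)):

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

x, y = final["grade_aver"], final["eval"]

r, p_r = pearsonr(x, y)        # Equation (1)
rho, p_rho = spearmanr(x, y)   # Equation (2)
tau, p_tau = kendalltau(x, y)  # Equation (3); tau-b accounts for ties

# Per-year coefficients, as in Table 10.
for year, grp in final.groupby("year"):
    r, p = pearsonr(grp["grade_aver"], grp["eval"])
    print(year, round(r, 3), round(p, 3))
```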
The results revealed a markedly low positive correlation across the entire sample, as well as within individual academic years. The statistically significant coefficients clustered around or below the 0.30 threshold. This suggests a consistent, positive but very weak (or absent) correlation between students’ course evaluations and their grades in the corresponding courses across all years (Table 10).
For postgraduate students, no significant correlation was found between course evaluations and grades (Pearson’s r = 0.016, p = 0.609; Spearman’s rho = 0.004, p = 0.904; Kendall’s tau = 0.003, p = 0.859). In addition, when examining the extent to which average grades from previous examination periods are related to assessment scores, low or no correlation was found between the average student evaluations and the average grades received in the course in the previous period (Table 11).
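For the previous-period perspective (Table 11), each course-year’s evaluation can be paired with the same course’s average grade one year earlier; a sketch, assuming one row per course per year in the illustrative final dataframe:

```python
from scipy.stats import pearsonr

# Pair each course-year evaluation with the previous year's average grade
# for the same course (cf. Table 11).
final = final.sort_values(["lesson_code", "year"])
final["prev_grade"] = final.groupby("lesson_code")["grade_aver"].shift(1)

lagged = final.dropna(subset=["prev_grade"])  # first observed year has no lag
for year, grp in lagged.groupby("year"):
    r, p = pearsonr(grp["prev_grade"], grp["eval"])
    print(year, round(r, 3), round(p, 3))
```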

4. Discussion

The evaluation of teaching quality at the University of Peloponnese (UoP) through student evaluations of teaching (SET) reveals several challenges that align with the broader international discourse on the effectiveness and fairness of SETs in higher education.
Participation rates in SETs at UoP are notably low, with an overall mean of 14.63%, which is significantly lower than the participation rates reported in other studies [15,16,17,18]. Postgraduate students exhibit a higher participation rate, suggesting that they may be more interested in providing feedback or may perceive the evaluations as more impactful for their academic experience. Both of these aspects may reflect the higher commitment to their studies exhibited by postgraduate students compared to undergraduate ones.
The low participation rates raise serious concerns about the reliability, representativeness, and generalizability of the findings. With such limited data, it is difficult to draw comprehensive conclusions about the quality of teaching across the university. This issue is exacerbated by the variation in participation rates across departments; almost all of the high participation rates are identified as outliers. Such variations suggest that management staff are not receiving sufficient feedback to make informed decisions about teaching practices.
Taking into account the main limitation regarding student participation rates and the lack of generalizability, the analysis at the departmental level reveals significant variations in participation rates and evaluations, indicating that some departments may have different cultures or practices regarding SETs.
The study also analyzed SET scores from both postgraduate and undergraduate courses. The results indicate that postgraduate students tend to give slightly higher evaluations than undergraduate students. Additionally, undergraduate SET scores showed a consistent but small increase over the 8-year period, with average scores improving from the fair/satisfactory range in the early years to consistently near the satisfactory level in more recent years. This trend suggests a small gradual improvement in student evaluations at the undergraduate level.
The literature highlights several factors that can influence SETs; for example, students seem to rate the courses in which they receive higher grades more highly [4,5,6,7,8,9,10,11]. These factors can skew evaluations, making it difficult to discern true teaching effectiveness [6]. At UoP, the lack of correlation between student grades and their evaluations of courses provides indications that leniency in grading does not play a role in how students perceive and rate their courses and instructors. In contrast to the literature [6,7], this finding indicates that teachers’ grading practices or expectations are not related to course evaluation results, and therefore (a) low/high assessment ratings for a course do not signify that higher/lower failure percentages should be expected, and (b) past success percentages in a course, or even intermediate results in the same year, do not introduce any observable bias into the assessment ratings provided by students.
The anonymous and voluntary nature of SETs at UoP helps eliminate ethical concerns related to consent, confidentiality, and anonymity. However, this also decreases the participation rate and limits the ability to link evaluations directly with student performance data, which could provide deeper insights into the relationship between teaching practices and student outcomes.
To address the problem of low participation in SETs, various measures have been proposed in the literature [23,24,25,26,27,28]. UoP authorities should focus on several key areas. Increasing participation rates in SETs is essential, and this can be achieved by implementing strategies that boost student engagement, such as emphasizing the impact of student feedback on teaching quality and considering incentives for participation.
The impact of student expectations on assessments plays a pivotal role in SET evaluations [25,26]. When students have clear expectations, their engagement, motivation, and participation in SET assessments can improve significantly. To achieve this, the teaching staff could establish a psychological contract with their students that goes beyond a simple agreement: it would clearly outline not only the expectations but also the potential benefits that students can gain from the course evaluation. Such an agreement can help students see the value of the assessments in a broader context, connecting the knowledge and skills they acquire during the course to their future career opportunities. Emphasizing the relevance of course content to real-world practice and career growth is essential in fostering a more meaningful learning experience. By understanding how the skills developed in the course can directly impact their professional lives, students are more likely to approach assessments with a positive attitude.
Emphasizing the importance of feedback and its role in course improvement is equally essential. Reminding students of the assessment deadline, via email or social media, is vital. Extending the availability of the surveys can accommodate various timelines, ensuring more comprehensive participation [24,27]. Allowing class time for students to complete the survey on their personal devices can increase response rates. Highlighting the anonymity of the assessments reassures students that their contributions are confidential. In addition, using qualitative tools, such as interviews that encourage constructive criticism, allows students to engage with the procedure [27]. An additional aspect that needs to be taken into account is the extent to which evaluation results are disseminated, discussed, and lead to improvements, and the degree to which these developments are observable to the students and communicated to them.
Analyzing departmental differences in SET participation rates will allow for the development of tailored interventions that address the challenges. Finally, longitudinal analysis will allow us to examine changes in SET rates on teaching over time. By addressing these challenges, UoP can enhance the reliability and effectiveness of its teaching evaluations, contributing to the broader goal of improving teaching quality and student learning outcomes in higher education.

5. Conclusions

SET at the University of Peloponnese suffers from low participation by students, which affects the reliability and utility of the feedback offered. While some encouraging trends in evaluations are observable over time, it is evident that further efforts are required to improve participation across departments and programs.
The average course quality score corresponds to a moderate/satisfactory rating, with the score being slightly higher for postgraduate studies. Correlation analysis does not link better academic performance with higher student satisfaction. This suggests that grading practices or expectations do not influence course assessment results. It is crucial to note that the results of student evaluations suffer from a lack of generalizability due to low participation.
University authorities should focus on strategies to enhance student participation. Fortunately, measures to improve student participation in assessment have been suggested in the literature and are presented in the Discussion section. Emphasizing how feedback improves teaching, sending reminders, extending the availability of surveys, and allowing in-class completion can all help increase response rates. The parallel use of qualitative methods also provides the opportunity for constructive criticism. The measures outlined above represent a set of useful and easily implementable tactics for enhancing student engagement in this crucial process of improving the quality of teaching at the University of Peloponnese.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/info15090576/s1.

Author Contributions

Conceptualization, C.V. and I.P.; methodology, I.P.; software, I.P.; validation, C.V., M.W. and A.K.; formal analysis, M.W.; investigation, I.P.; data curation, I.P.; writing—original draft preparation, I.P.; writing—review and editing, M.W. and C.V.; supervision, A.K.; project administration, C.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study has been approved by the Board Research Ethics and Integrity Committee of the University of the Peloponnese (Decision number 2675/12 February 2024).

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from the University of the Peloponnese (permission of use decision: 11600/21 November 2023) and can be made available from the institution, subject to approval of a relevant request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Statistics Concerning the Participation of Students in the Teaching Assessment

Department | Study Programme/Academic Level | N | Participation Rate Mean | Std. Deviation | Minimum | Maximum
DEPARTMENT OF BUSINESS AND ORGANISATION MANAGEMENT—LOCAL GOVERNMENT | Undergraduate study programme | 1 | 0.34% | | 0.34% | 0.34%
DEPARTMENT OF BUSINESS AND ORGANISATION MANAGEMENT—MANAGEMENT OF HEALTH AND WELFARE INSTITUTIONS | Undergraduate study programme | 1 | 0.51% | | 0.51% | 0.51%
DEPARTMENT OF DIGITAL SYSTEMS | Undergraduate study programme | 87 | 11.47% | 11.76% | 1.01% | 63.48%
DEPARTMENT OF ECONOMIC SCIENCES | Undergraduate study programme | 92 | 6.68% | 10.71% | 0.31% | 70.00%
DEPARTMENT OF FOOD SCIENCE AND TECHNOLOGY | Undergraduate study programme | 10 | 9.69% | 6.05% | 3.33% | 22.62%
DEPARTMENT OF HISTORY | Undergraduate study programme | 357 | 10.33% | 12.22% | 0.34% | 88.89%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | Undergraduate study programme | 404 | 8.01% | 11.13% | 0.29% | 80.00%
DEPARTMENT OF LITERATURE | Undergraduate study programme | 524 | 9.43% | 12.46% | 0.65% | 93.33%
DEPARTMENT OF MANAGEMENT SCIENCE AND TECHNOLOGY | Undergraduate study programme | 82 | 4.79% | 6.69% | 0.79% | 37.50%
DEPARTMENT OF NURSING | Undergraduate study programme | 212 | 9.82% | 11.59% | 0.90% | 63.49%
DEPARTMENT OF PERFORMING AND DIGITAL ARTS | Undergraduate study programme | 167 | 21.65% | 13.67% | 2.50% | 83.33%
DEPARTMENT OF PHYSIOTHERAPY | Undergraduate study programme | 20 | 15.28% | 12.99% | 5.17% | 57.89%
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS | Undergraduate study programme | 328 | 9.80% | 7.30% | 0.72% | 50.00%
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS | Undergraduate study programme | 15 | 5.54% | 5.62% | 0.99% | 22.06%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | Undergraduate study programme | 548 | 11.97% | 9.33% | 0.93% | 57.14%
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | Undergraduate study programme | 439 | 12.34% | 12.36% | 0.89% | 85.71%
DEPARTMENT OF THEATRE STUDIES | Undergraduate study programme | 71 | 13.01% | 9.56% | 0.97% | 40.91%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | INTER-INSTITUTIONAL MSC IN ‘DATA SCIENCE’ | 51 | 34.80% | 20.41% | 5.26% | 73.91%
DEPARTMENT OF PHILOLOGY | INTER-INSTITUTIONAL MSC IN ETHICAL PHILOSOPHY | 10 | 17.27% | 9.45% | 4.76% | 35.71%
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS | INTER-INSTITUTIONAL MSC IN GLOBAL POLITICAL ECONOMY | 24 | 15.94% | 7.05% | 4.17% | 28.57%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | INTER-INSTITUTIONAL MSC IN SPACE SCIENCE | 29 | 21.86% | 11.76% | 4.76% | 59.26%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | MSC ‘MODERN WIRELESS COMMUNICATIONS’ | 26 | 55.52% | 15.90% | 25.00% | 83.33%
DEPARTMENT OF THEATRE STUDIES | MSC—THEATRE AND SOCIETY: THEORY | 38 | 43.34% | 16.91% | 12.50% | 83.33%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | MSC IN ‘COMPUTER SCIENCE AND TECHNOLOGY’ | 16 | 37.73% | 11.45% | 9.09% | 55.56%
DEPARTMENT OF FOOD SCIENCE AND TECHNOLOGY | MSC IN ‘ORGANISATION AND MANAGEMENT OF ENTERPRISES IN THE AGRI-FOOD SECTOR—MBA IN AGRI-FOOD SECTOR’ | 6 | 9.74% | 4.05% | 5.56% | 16.67%
DEPARTMENT OF MANAGEMENT SCIENCE AND TECHNOLOGY | MSC IN ‘PUBLIC ADMINISTRATION AND DIGITAL TRANSFORMATION’ | 4 | 32.00% | 3.27% | 28.00% | 36.00%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | MSC IN ADVANCED TELECOMMUNICATIONS SYSTEMS AND NETWORKS | 21 | 12.92% | 9.05% | 4.76% | 37.50%
DEPARTMENT OF PHILOLOGY | MSC IN ANCIENT AND MODERN GREEK LITERATURE | 86 | 29.09% | 22.67% | 2.70% | 90.00%
DEPARTMENT OF NURSING | MSC IN CARE AND SUPPORT FOR CHILDREN AND ADOLESCENTS WITH SPECIAL HEALTH CARE NEEDS IN THE COMMUNITY | 4 | 8.36% | 2.48% | 4.65% | 9.76%
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | MSC IN COMPUTER SCIENCE | 34 | 32.75% | 20.33% | 0.93% | 69.23%
DEPARTMENT OF THEATRE STUDIES | MSC IN DRAMA AND PERFORMING ARTS IN EDUCATION AND LIFELONG LEARNING | 66 | 14.61% | 12.01% | 3.03% | 43.48%
DEPARTMENT OF ECONOMIC SCIENCES | MSC IN ECONOMIC ANALYSIS | 8 | 31.56% | 19.29% | 4.76% | 62.50%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | MSC IN EDUCATION, HUMAN RESOURCES, EMPLOYMENT POLICIES | 1 | 5.76% | | 5.76% | 5.76%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | MSC IN GLOBAL CHALLENGES AND ANALYTICAL SYSTEMS | 61 | 31.24% | 15.69% | 8.33% | 75.00%
DEPARTMENT OF ECONOMIC SCIENCES | MSC IN GOVERNANCE AND PUBLIC POLICIES | 47 | 24.06% | 13.99% | 2.90% | 59.26%
DEPARTMENT OF NURSING | MSC IN HEALTH SERVICES MANAGEMENT AND CRISIS MANAGEMENT | 16 | 26.37% | 6.10% | 13.04% | 37.50%
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | MSC IN MANAGEMENT OF SPORTS ORGANISATIONS and ENTERPRISES | 81 | 21.79% | 15.79% | 2.27% | 75.00%
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | MSC IN OLYMPIC STUDIES | 16 | 18.78% | 8.52% | 3.13% | 33.33%
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | MSC IN ORGANISATION and MANAGEMENT OF PUBLIC SERVICES | 31 | 26.27% | 28.47% | 2.22% | 80.00%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | MSC IN SOCIAL AND EDUCATIONAL POLICY | 34 | 33.73% | 25.23% | 3.57% | 88.89%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | MSC IN SOCIAL POLICY | 80 | 30.62% | 19.50% | 3.70% | 80.00%
DEPARTMENT OF BUSINESS AND ORGANISATION MANAGEMENT—LOCAL GOVERNMENT | MSC IN LOCAL AND REGIONAL DEVELOPMENT AND GOVERNANCE | 151 | 30.75% | 15.48% | 4.55% | 85.71%
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | MSC IN ORGANISATION AND MANAGEMENT OF SPORTING ACTIVITIES FOR PEOPLE WITH DISABILITIES (OSA) | 74 | 17.59% | 11.99% | 2.13% | 51.92%
DEPARTMENT OF THEATRE STUDIES | MSC IN CREATIVE WRITING | 4 | 14.88% | 6.89% | 9.52% | 25.00%
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | PEDAGOGICAL AND TEACHING COMPETENCE PROGRAMME | 9 | 8.68% | 2.37% | 5.56% | 12.50%

Appendix B. Statistics Concerning the SET Evaluation

Undergraduate Studies | Mean | St. Dev. | Minimum | Maximum
DEPARTMENT OF BUSINESS AND ORGANISATION MANAGEMENT—LOCAL GOVERNMENT | 4.510 | NaN | 4.510 | 4.510
DEPARTMENT OF BUSINESS AND ORGANISATION MANAGEMENT—MANAGEMENT OF HEALTH AND WELFARE INSTITUTIONS | 4.380 | NaN | 4.380 | 4.380
DEPARTMENT OF MANAGEMENT SCIENCE AND TECHNOLOGY | 4.230 | 0.593 | 2.170 | 4.970
DEPARTMENT OF THEATRE STUDIES | 4.120 | 0.544 | 2.480 | 5.000
DEPARTMENT OF PHYSIOTHERAPY | 4.060 | 0.382 | 2.980 | 4.450
DEPARTMENT OF PERFORMING AND DIGITAL ARTS | 3.970 | 0.554 | 1.740 | 4.830
DEPARTMENT OF SPORT ORGANISATION AND MANAGEMENT | 3.970 | 0.556 | 1.390 | 5.000
DEPARTMENT OF HISTORY | 3.960 | 0.616 | 1.600 | 5.000
DEPARTMENT OF ECONOMIC SCIENCES | 3.950 | 0.701 | 2.000 | 5.000
DEPARTMENT OF LITERATURE | 3.920 | 0.661 | 1.000 | 5.000
DEPARTMENT OF SOCIAL AND EDUCATIONAL POLICY | 3.920 | 0.512 | 1.350 | 5.000
DEPARTMENT OF NURSING | 3.890 | 0.617 | 1.590 | 4.910
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS | 3.830 | 0.531 | 1.430 | 5.000
MSC IN GOVERNANCE AND PUBLIC POLICIES | 3.830 | 0.091 | 3.700 | 3.900
DEPARTMENT OF DIGITAL SYSTEMS | 3.800 | 0.423 | 2.830 | 4.740
DEPARTMENT OF INFORMATION TECHNOLOGY AND TELECOMMUNICATIONS | 3.660 | 0.638 | 1.000 | 5.000
DEPARTMENT OF FOOD SCIENCE AND TECHNOLOGY | 3.580 | 0.498 | 2.420 | 4.210
DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS | 3.330 | 0.661 | 2.190 | 4.600
Totals | 3.899 | 0.597 | 1.000 | 5.000

Postgraduate Studies | Mean | St. Dev. | Minimum | Maximum
MSC IN CARE AND SUPPORT FOR CHILDREN AND ADOLESCENTS WITH SPECIAL HEALTH CARE NEEDS IN THE COMMUNITY | 4.700 | 0.204 | 4.420 | 4.880
MSC ‘MODERN WIRELESS COMMUNICATIONS’ | 4.570 | 0.273 | 3.630 | 5.000
MSC IN ECONOMIC ANALYSIS | 4.530 | 0.221 | 4.140 | 4.840
PEDAGOGICAL AND TEACHING COMPETENCE PROGRAMME | 4.500 | 0.631 | 3.060 | 4.920
INTER-INSTITUTIONAL MSC IN ETHICAL PHILOSOPHY | 4.370 | 0.721 | 2.960 | 5.000
MSC IN ANCIENT AND MODERN GREEK LITERATURE | 4.310 | 0.375 | 3.030 | 4.820
MSC IN SOCIAL POLICY | 4.310 | 0.355 | 2.780 | 4.830
MSC IN SOCIAL AND EDUCATIONAL POLICY | 4.300 | 0.536 | 2.410 | 5.000
MSC IN ‘COMPUTER SCIENCE AND TECHNOLOGY’ | 4.280 | 0.476 | 3.170 | 5.000
MSC—THEATRE AND SOCIETY: THEORY | 4.270 | 0.405 | 3.010 | 4.890
MSC IN ‘PUBLIC ADMINISTRATION AND DIGITAL TRANSFORMATION’ | 4.250 | 0.076 | 4.180 | 4.350
MSC IN GOVERNANCE AND PUBLIC POLICIES | 4.210 | 0.408 | 3.190 | 4.850
MSC IN MANAGEMENT OF SPORTS ORGANISATIONS and ENTERPRISES | 4.200 | 0.403 | 2.770 | 4.770
MSC IN OLYMPIC STUDIES | 4.190 | 0.390 | 3.400 | 4.790
MSC IN ORGANISATION AND MANAGEMENT OF SPORTING ACTIVITIES FOR PEOPLE WITH DISABILITIES (OSA) | 4.170 | 0.538 | 2.550 | 4.830
MSC IN LOCAL AND REGIONAL DEVELOPMENT AND GOVERNANCE | 4.150 | 0.358 | 3.030 | 4.830
INTER-INSTITUTIONAL MSC IN GLOBAL POLITICAL ECONOMY | 4.120 | 0.604 | 3.000 | 4.940
MSC IN ADVANCED TELECOMMUNICATIONS SYSTEMS AND NETWORKS | 4.110 | 0.761 | 1.570 | 4.910
MSC IN HEALTH SERVICES MANAGEMENT AND CRISIS MANAGEMENT | 4.050 | 0.495 | 2.850 | 4.730
MSC IN GLOBAL CHALLENGES AND ANALYTICAL SYSTEMS | 4.030 | 0.409 | 2.550 | 4.750
INTER-INSTITUTIONAL MSC IN ‘DATA SCIENCE’ | 4.000 | 0.403 | 2.640 | 4.730
MSC IN ORGANISATION and MANAGEMENT OF PUBLIC SERVICES | 3.950 | 0.666 | 1.790 | 4.940
MSC IN EDUCATION | 3.880 | NaN | 3.880 | 3.880
MSC IN COMPUTER SCIENCE | 3.850 | 0.811 | 2.100 | 4.670
MSC IN ‘ORGANISATION AND MANAGEMENT OF ENTERPRISES IN THE AGRI-FOOD SECTOR—MBA IN AGRI-FOOD SECTOR’ | 3.720 | 0.717 | 2.720 | 4.600
MSC IN DRAMATIC AND PERFORMING ARTS IN EDUCATION AND LIFELONG LEARNING | 3.720 | 0.707 | 1.790 | 4.760
MSC IN CREATIVE WRITING | 3.640 | 1.060 | 2.060 | 4.310
INTER-INSTITUTIONAL MSC IN SPACE SCIENCE | 3.610 | 0.472 | 2.900 | 4.560
Totals | 4.137 | 0.522 | 1.566 | 5.000

References

  1. Jimma, T.T. Improving Quality in Higher Education through Cooperative Learning Pedagogies: An Ethiopian Example. Ph.D. Thesis, University of Queensland, Brisbane, Australia, 2014. [Google Scholar] [CrossRef]
  2. Hadad, Y.; Keren, B.; Naveh, G. The Relative Importance of Teaching Evaluation Criteria from the Points of View of Students and Faculty. Assess. Eval. High. Educ. 2020, 45, 447–459. [Google Scholar] [CrossRef]
  3. Palmer, S. The Performance of a Student Evaluation of Teaching System. Assess. Eval. High. Educ. 2012, 37, 975–985. [Google Scholar] [CrossRef]
  4. Beleche, T.; Fairris, D.; Marks, M. Do course evaluations truly reflect student learning? Evidence from an objectively graded post-test. Econ. Educ. Rev. 2012, 31, 709–719. [Google Scholar] [CrossRef]
  5. Chiu, Y.-L.; Chen, K.-H.; Hsu, Y.-T.; Wang, J.-N. Understanding the perceived quality of professors’ teaching effectiveness in various disciplines: The moderating effects of teaching at top colleges. Assess. Eval. High. Educ. 2018, 44, 449–462. [Google Scholar] [CrossRef]
  6. Rosen, A.S. Correlations, trends and potential biases among publicly accessible web-based student evaluations of teaching: A large-scale study of RateMyProfessors.com data. Assess. Eval. High. Educ. 2018, 43, 31–44. [Google Scholar] [CrossRef]
  7. Marinović, L. Exploring the Relationship Between Perceptions of Teaching Quality, Some Motivational Beliefs and Students’ Achievement and Satisfaction. Drustv. Istraživanja 2014, 23, 681–700. [Google Scholar] [CrossRef]
  8. Jin, C.J. Student Evaluation of Teaching in Higher Education: Evidence from Hong Kong. Int. J. High. Educ. 2019, 8, 95–109. [Google Scholar] [CrossRef]
  9. Yu, Y.; Lin, Y.-C.; Qi, J.; Yan, H. Biases in Student Evaluations of Teaching: An Exploratory Analysis to a Chinese Case Study. Innov. Educ. Teach. Int. 2022. [CrossRef]
  10. Zare-ee, A.; Don, Z.M.; Tohidian, I. Gender differences in students’ ratings of university teachers in the Iranian education system. Learn. Teach. High. Educ. Gulf Perspect. 2016, 13, 19–35. [Google Scholar] [CrossRef]
  11. Neely, K. How do Students and Educators Interpret Student Evaluations of Teaching? 2019. Available online: https://repository.tcu.edu/bitstream/handle/116099117/27047/Neely__Katherine-Honors_Project.pdf?sequence=1 (accessed on 5 August 2024).
  12. Staniec, I.; Jarczyński, J. Student evaluations of teaching at the university: Perceptions and questionnaires. In Eurasian Business Perspectives; Springer: Cham, Switzerland, 2020; pp. 199–215. [Google Scholar] [CrossRef]
  13. Wright, S.L.; Jenkins-Guarnieri, M.A. Student evaluations of teaching: Combining the meta-analyses and demonstrating further evidence for effective use. Assess. Eval. High. Educ. 2012, 37, 683–699. [Google Scholar] [CrossRef]
  14. Stupans, I.; McGuren, T.; Babey, A.M. Student evaluation of teaching: A study exploring student rating instrument free-form text comments. Innov. High. Educ. 2016, 41, 33–42. [Google Scholar] [CrossRef]
  15. Oon, P.-T.; Spencer, B.; Kam, C.C.S. Psychometric quality of a student evaluation of teaching survey in higher education. Assess. Eval. High. Educ. 2017, 42, 788–800. [Google Scholar] [CrossRef]
  16. Coelho, L.A.; de Oliveira Ribeiro, M.M.D.L. Student ratings to evaluate the teaching effectiveness: Factors should be considered. In Proceedings of the 5th International Conference on Higher Education Advances (HEAd’19), Valencia, Spain, 26 June 2019. [Google Scholar] [CrossRef]
  17. Brockx, B.; Spooren, P.; Mortelmans, D. Taking the Grading Leniency Story to the Edge. The Influence of Student, Teacher, and Course Characteristics on Student Evaluations of Teaching in Higher Education. Educ. Assess. Eval. Account. 2011, 23, 289–306. [Google Scholar] [CrossRef]
  18. Papadogiannis, I.; Vassilakis, C.; Wallace, M.; Katsis, A. On the Quality and Validity of Course Evaluation Questionnaires Used in Tertiary Education in Greece. Trends High. Educ. 2024, 3, 221–234. [Google Scholar] [CrossRef]
  19. Novák, J. Evaluation of student feedback as a tool for higher education quality enhancement. R&E-SOURCE 2023, s1, 117–127. [Google Scholar] [CrossRef]
  20. Sullivan, D.; Lakeman, R.; Massey, D.; Nasrawi, D.; Tower, M.; Lee, M. Student motivations, perceptions and opinions of participating in student evaluation of teaching surveys: A scoping review. Assess. Eval. High. Educ. 2024, 49, 178–189. [Google Scholar] [CrossRef]
  21. Constantinou, C.; Wijnen-Meijer, M. Student evaluations of teaching and the development of a comprehensive measure of teaching effectiveness for medical schools. BMC Med. Educ. 2022, 22, 113. [Google Scholar] [CrossRef]
  22. University of the Peloponnese. Course Evaluation Questionnaire. Available online: https://modip.uop.gr/images/stories/questionnaire-samples/course-eval-questionnaire-en.pdf (accessed on 5 August 2024).
  23. Bennett, L.; Nair, C.S. A recipe for effective participation rates for web-based surveys. Assess. Eval. High. Educ. 2010, 35, 357–365. [Google Scholar] [CrossRef]
  24. Adams, M.J.; Umbach, P.D. Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Res. High. Educ. 2012, 53, 576–591. [Google Scholar] [CrossRef]
  25. Knight, D.; Naidu, V.; Kinash, S. Achieving high student evaluation of teaching response rates through a culture of academic-student collaboration. Stud. Learn. Eval. Innov. Dev. 2012, 9, 126–144. [Google Scholar]
  26. Ching, G. A Literature Review on the Student Evaluation of Teaching. High. Educ. Eval. Dev. 2019, 12, 63–84. [Google Scholar] [CrossRef]
  27. Goodman, J.; Anson, R.; Belcheir, M. The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assess. Eval. High. Educ. 2014, 40, 958–970. [Google Scholar] [CrossRef]
  28. Nulty, D.D. The adequacy of response rates to online and paper surveys: What can be done? Assess. Eval. High. Educ. 2008, 33, 301–314. [Google Scholar] [CrossRef]
Figure 1. Boxplot diagrams of participation rates.
Figure 2. Boxplot diagrams of participation rates per year—undergraduate studies.
Figure 3. Boxplot diagrams of participation rates per year—postgraduate studies.
Figure 4. Study programs with more satisfied (>4) vs. less satisfied (<4) students, per academic level.
Table 1. Variable types, SET dataset.

Variable | Description | Type
Department_Id | Department code | Numeric
Department_Name | Department name | Text
Course_Title | Course name | Text
Course_Code | Course code | Alphanumeric
Course_Year | Academic year | Numeric
Evaluation questions (1–37) | Assessment questions | Numeric (1–5)
  Course evaluation | Questions 1–14 |
  Assignments’ evaluation | Questions 15–21 |
  Teacher’s evaluation | Questions 22–28 |
  Laboratory evaluation | Questions 29–32 |
  Students’ self-evaluation | Questions 33–37 |
Table 2. Variable types, grades dataset.

Variable | Description | Type
Code_Department | Department code | Alphanumeric
Department_name | Department name | Text
Course_code | The course code | Alphanumeric
Year | The academic year of the examination | Numeric
Student_semester | The student’s semester of study in the academic year of the examination | Text
Examination_period | The examination period in which the student participated | Text
Grade | The grade received by the student | Numeric
Table 3. Final dataset.

Variable | Description | Type
concat | Combination of course code and academic year (unique values) | Alphanumeric
precent_eval | Percentage of students who evaluated the course, relative to those who were graded | Numeric
count_graded | Number of students graded | Numeric
grade_aver | Average grade | Numeric
eval | Average evaluation score | Numeric
dept_text | Department name | Text
year | Academic year | Numeric
lesson_code | Course code | Alphanumeric
lesson | Course name | Text
und_post | Course characterization as undergraduate or postgraduate | Boolean (0 = undergraduate, 1 = postgraduate)
count_eval | Number of students who evaluated | Numeric
Table 4. Descriptive statistics of participation rates.

Participation Rates | N | Mean | Std. Deviation | Minimum | Maximum
Postgraduate | 1024 | 27.33% | 18.82 | 0.93% | 90.00%
Undergraduate | 3362 | 10.77% | 11.50 | 0.29% | 93.33%
Overall | 4386 | 14.63% | 15.26 | 0.29% | 93.33%
Table 5. Participation rates by year.

Year | Mean U * | Mean P * | Std. Dev. U | Std. Dev. P | Min U | Min P | Max U | Max P
2015 | 8.25% | 27.24% | 8.56% | 14.55% | 0.34% | 3.03% | 61.54% | 72.73%
2016 | 10.36% | 18.56% | 12.37% | 11.09% | 0.29% | 2.22% | 90.63% | 50.00%
2017 | 11.27% | 24.81% | 13.77% | 18.29% | 0.65% | 2.70% | 93.33% | 90.00%
2018 | 12.42% | 30.68% | 12.03% | 20.64% | 0.50% | 2.27% | 70.00% | 88.89%
2019 | 10.06% | 25.06% | 11.58% | 16.28% | 0.31% | 0.93% | 88.89% | 66.67%
2020 | 13.24% | 35.87% | 11.74% | 21.00% | 0.34% | 3.03% | 80.00% | 83.33%
2021 | 9.81% | 29.46% | 10.36% | 20.79% | 0.51% | 2.13% | 83.33% | 71.43%
2022 | 10.46% | 25.81% | 10.88% | 20.14% | 0.47% | 3.57% | 63.48% | 85.71%
* U = undergraduate, P = postgraduate.
Table 6. Descriptive statistics of SET scores.

Level | N | Mean | Std. Deviation | Minimum | Maximum
Undergraduate | 3362 | 3.899 | 0.597 | 1.000 | 5.000
Postgraduate | 1024 | 4.137 | 0.522 | 1.566 | 5.000
Total | 4386 | 3.955 | 0.589 | 1.000 | 5.000
Table 7. Descriptive statistics of SET scores per questionnaire category.

Dimension | Mean | Std. Deviation
Course | 3.850 | 1.146
Supporting/assistive teaching | 2.857 | 1.616
Assignments | 4.016 | 1.235
Teaching staff | 4.091 | 1.215
Lab | 3.549 | 1.297
Self-assessment | 3.846 | 1.107
Table 8. Descriptive statistics of SET scores per year, undergraduate level.

Year | N | Mean | Std. Deviation | Minimum | Maximum
2015 | 357 | 3.689 | 0.658 | 1.250 | 5.000
2016 | 320 | 3.754 | 0.609 | 1.000 | 5.000
2017 | 351 | 3.839 | 0.567 | 1.904 | 5.000
2018 | 359 | 3.816 | 0.603 | 1.394 | 5.000
2019 | 426 | 3.944 | 0.587 | 1.000 | 5.000
2020 | 488 | 3.985 | 0.512 | 1.355 | 5.000
2021 | 498 | 4.015 | 0.563 | 1.741 | 5.000
2022 | 563 | 3.994 | 0.611 | 1.000 | 5.000
Table 9. Descriptive statistics of SET scores per year, postgraduate level.

Year | N | Mean | Std. Deviation | Minimum | Maximum
2015 | 144 | 4.044 | 0.572 | 1.566 | 5.000
2016 | 127 | 4.009 | 0.528 | 1.788 | 4.839
2017 | 159 | 4.156 | 0.534 | 2.379 | 5.000
2018 | 126 | 4.142 | 0.488 | 2.414 | 5.000
2019 | 71 | 4.200 | 0.504 | 2.409 | 4.879
2020 | 145 | 4.160 | 0.456 | 1.790 | 4.743
2021 | 112 | 4.244 | 0.421 | 2.546 | 4.833
2022 | 140 | 4.185 | 0.598 | 1.935 | 4.919
Table 10. Correlation coefficients between average grades and average evaluations.

Sub-Dataset | Pearson’s r | p-Value | Spearman’s rho | p-Value | Kendall’s Tau | p-Value
2015 | 0.273 | <0.001 | 0.304 | <0.001 | 0.205 | <0.001
2016 | 0.200 | <0.001 | 0.212 | <0.001 | 0.144 | <0.001
2017 | 0.281 | <0.001 | 0.315 | <0.001 | 0.218 | <0.001
2018 | 0.293 | <0.001 | 0.299 | <0.001 | 0.206 | <0.001
2019 | 0.246 | <0.001 | 0.261 | <0.001 | 0.179 | <0.001
2020 | 0.176 | <0.001 | 0.234 | <0.001 | 0.161 | <0.001
2021 | 0.266 | <0.001 | 0.310 | <0.001 | 0.211 | <0.001
2022 | 0.247 | <0.001 | 0.302 | <0.001 | 0.207 | <0.001
Undergraduate | 0.222 | <0.001 | 0.254 | <0.002 | 0.174 | <0.002
Postgraduate | 0.016 | 0.609 | 0.004 | 0.904 | 0.003 | 0.859
Total | 0.246 | <0.001 | 0.279 | <0.001 | 0.191 | <0.001
Table 11. Correlation coefficients between average evaluations and average grades of the previous year.

Sub-Dataset | Pearson’s r | p-Value | Spearman’s rho | p-Value | Kendall’s Tau | p-Value
2016 | 0.093 | 0.188 | 0.161 | 0.022 | 0.110 | 0.021
2017 | 0.107 | 0.087 | 0.184 | 0.003 | 0.122 | 0.004
2018 | 0.228 | <0.001 | 0.237 | <0.001 | 0.159 | <0.001
2019 | 0.075 | 0.208 | 0.101 | 0.089 | 0.071 | 0.073
2020 | 0.241 | <0.001 | 0.244 | <0.001 | 0.164 | <0.001
2021 | 0.113 | 0.042 | 0.149 | 0.007 | 0.105 | 0.005
2022 | 0.165 | 0.002 | 0.219 | <0.001 | 0.152 | <0.001
