1. Introduction
Undoubtedly, the COVID-19 pandemic challenged public health and the continuity of the economy, industry, and people’s daily lives. Education and academia were also strongly impacted by the isolation measures taken at the height of this health crisis, forcing institutions to innovate or halt their training processes. While theoretical classes found excellent support in various video conferencing tools, practical subjects required innovative proposals, the Lab-Tec@Home kit being one example [1].
Although Lab-Tec@Home was designed as an educational innovation proposed as a response to the restrictions of the pandemic, it should not be discarded post-isolation because of its convenience as a distance learning practice and its value in strengthening and developing skills in control engineering courses. Thus, this article presents the results of a study conducted during the implementation of Lab-Tec@Home to complement a post-pandemic face-to-face class. The intent was to measure its impact on students’ perceived achievement of the complex thinking competency and sub-competencies. The objective was to argue the validity of Lab-Tec@Home as an educational innovation that facilitates engineering processes and improves students’ perception of additional competencies valuable for their professional future.
The hypothesis motivating this study is that Lab-Tec@Home is as valid an option for developing the complex thinking competency as an educational practice in a traditional laboratory.
To achieve this, the article is structured as follows. First, the educational practice is outlined theoretically, covering the elements that make up Lab-Tec@Home as well as what the students concretely did; the relevance of complex thinking as a lifelong skill is then explored. Second, the study presents the methodology used, the main results, and a discussion that relates them to previous studies.
3. Methodology
Two sample groups were considered for this study: experimental and control. The experimental population consisted of five groups (classes) totaling 138 students in which Lab-Tec@Home was implemented. The control population consisted of two classes totaling 32 students who did not use Lab-Tec@Home; they had traditional classes in a conventional laboratory. The implementation occurred during the August–December 2022 semester, with facilitators supporting both groups.
The assignment of groups to the experimental or control condition was carried out randomly to reduce selection bias. To prevent the facilitators from influencing the results or introducing bias, we selected facilitators with the same background and characteristics and distributed the control and experimental groups proportionally among them, i.e., each facilitator had one group of each type.
The disparity in student numbers between the experimental and control groups is due to enrollment per group as well as to limitations imposed by the ethics committee. Even so, the sample sizes are sufficient for the analyses proposed in this study.
The present study was regulated by an institutional ethics committee and the Writing Lab. As this was an exploratory study, we were asked to limit the demographic information collected from the students, since it was not relevant to the final results. It is hoped that, given the positive results presented in this article, a complementary study can be conducted to provide more information.
To assess the perceived proficiency level of the complex thinking competency, the validated eComplexity instrument [18] was utilized. Its purpose is to gauge participants’ perception of their mastery of the competency of reasoning for complexity and its sub-competencies. The instrument has undergone both theoretical and statistical validation as well as scrutiny by a team of experts in the field. The experts’ evaluations of the criteria averaged as follows: clarity (3.31), coherence (3.38), and relevance (3.54). Based on the theoretical and content validation through expert judgment, the eComplexity instrument was found to be highly valid and reliable [18]. The instrument consists of 25 items divided into four sub-competencies: systemic thinking, scientific thinking, critical thinking, and innovative thinking. It can be self-administered, and each item is assessed on a Likert scale with options ranging from strongly disagree to strongly agree; see Table 2.
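As a minimal sketch of how such Likert responses can be aggregated into sub-competency scores, consider the following. The 5-point coding and the item-to-sub-competency mapping are illustrative assumptions, not the instrument’s official scoring key.

```python
# Sketch: aggregating eComplexity-style Likert responses into sub-competency
# means. The 5-point coding and the item grouping below are illustrative
# assumptions, not the instrument's official key.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Hypothetical grouping of the 25 items (0-based indices) into the four
# sub-competencies reported for the instrument.
SUBCOMPETENCIES = {
    "systemic":   range(0, 7),
    "scientific": range(7, 13),
    "critical":   range(13, 19),
    "innovative": range(19, 25),
}

def score_respondent(answers):
    """Map one respondent's 25 Likert answers to sub-competency means."""
    values = [LIKERT[a] for a in answers]
    return {name: sum(values[i] for i in idx) / len(idx)
            for name, idx in SUBCOMPETENCIES.items()}

answers = ["agree"] * 19 + ["strongly agree"] * 6
scores = score_respondent(answers)
```

Each respondent thus receives one mean per sub-competency, which is the unit of analysis in the descriptive statistics that follow.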
The implementation process was carried out in three stages: a diagnostic application of the instrument, the educational practice itself using the Lab-Tec@Home kit, and a final application of the instrument to evaluate the development achieved. It is important to point out that, although the Lab-Tec@Home practice can be carried out individually or in teams, the diagnosis and evaluation of the perceived achievement of the competency and its sub-competencies were completed individually.
Regarding data processing, the statistical software R [19] and RStudio [20] were used to conduct a multivariate descriptive statistical analysis. This analysis involved central tendency measures such as means and standard deviations and was complemented with bar graphs, violin plots, principal component analysis (PCA), biplot analysis of shape (i.e., scale = 1), and t-tests to measure the significance of differences in mean values. The primary goal of the mean analysis was to determine a reference value for the dataset with respect to a particular variable, while the standard deviation indicated the degree of dispersion of the remaining values around the mean. The violin plot allowed observing and examining the data distribution: it combines a box-and-whisker plot (i.e., a boxplot) and an empirical kernel density estimate in a single visualization that shows the data structure [21].
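The descriptive step can be illustrated with a short sketch (in Python rather than R, purely for illustration; the item scores below are invented, not the study’s data):

```python
import statistics

# Invented Likert scores for one item, pre- and post-intervention
# (illustrative only; not the study's data).
pre  = [4, 4, 3, 5, 4, 3, 4, 5, 4, 3]
post = [4, 5, 4, 5, 4, 4, 4, 5, 5, 4]

pre_mean,  post_mean = statistics.mean(pre),  statistics.mean(post)
pre_sd,    post_sd   = statistics.stdev(pre), statistics.stdev(post)

# A lower post SD means responses concentrate more tightly around the mean,
# which is what the violin plots visualize together with the kernel density.
```

Here the mean rises while the standard deviation falls, the same pattern the study reads off its violin plots as a “better concentration of responses.”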
On the other hand, principal component analysis is a method that makes it possible to understand students’ performance with respect to the sub-competencies. It helps overcome collinearity issues and reduces the complexity of the data through a set of independent, uncorrelated variables known as principal components. These components are derived from the original variables so as to capture the maximum variability, and their number equals the number of variables, as per [11,22]. The principal component analysis was supported by a biplot analysis, which allowed observing the behavior of students with respect to the variables using the two principal components that captured the maximum variability in the data [23]. For this analysis, we used the biplot of shape (i.e., scale = 1) to facilitate visualization of the observations (students). Finally, to determine the significance of differences between the groups’ mean values, we performed a t-test analysis at a 90% confidence level (i.e., a significance threshold of p = 0.10).
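The PCA and t-test steps can be sketched compactly (again in Python for illustration; the sample sizes, scores, and random seed are invented assumptions, not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 100 students x 4 sub-competency scores (invented),
# built so the four variables are strongly correlated, as Likert
# sub-scores typically are.
base = rng.normal(4.0, 0.4, size=(100, 1))
scores = base + rng.normal(0.0, 0.15, size=(100, 4))

# PCA via SVD of the mean-centered data matrix.
centered = scores - scores.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
pc_scores = centered @ vt.T       # student coordinates for a biplot

# Welch's t statistic comparing two groups' overall means.
def welch_t(a, b):
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t = welch_t(scores[:50].mean(axis=1), scores[50:].mean(axis=1))
```

With strongly correlated sub-scores, the first component absorbs most of the variability, which is why the study can summarize four sub-competencies with two components.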
4. Results
We calculated the arithmetic mean of the students’ perceptions for both groups at the two measurement moments.
Table 3 shows the means and standard deviations for each item evaluated. In most cases, there is an improvement in the perceived achievement of the complex thinking competency and its sub-competencies; only the sub-competency of systems thinking in the control group showed a regression (from 4.13 to 4.11). The table also shows that, in the initial diagnosis, students in the experimental group perceived themselves better than those in the control group in the competency and its sub-competencies, which also translates into better results in the final evaluation. The experimental group achieved its highest mean in systems thinking (4.20) and its lowest in scientific thinking (3.89). The pattern repeats in the control group, with the highest mean in systems thinking (4.11) and the lowest in scientific thinking (3.85).
Figure 9 illustrates the perception of the complex thinking competency in the initial and final measurements of both groups (experimental and control). It shows that participants in both groups perceived themselves as improving after their training practices, whether with Lab-Tec@Home or in a conventional laboratory. However, the experimental group achieved a higher mean than the control group, primarily because of its better results in the initial diagnosis. The figure also shows that the standard deviation of the students’ perceived complex thinking competency was lower in the experimental group.
Table 4 analyzes the significance of the differences between the initial and final means of perceived complex thinking competency in both groups. It shows no significant differences between the mean values of perceived achievement.
Regarding the level of perception by sub-competency, the graphs in Figure 10 contrast the initial and final measurements of both groups (experimental and control). They corroborate Table 3 by showing that the experimental group presents better means and a better concentration of responses. Thus, there is a trend toward an increase in the participants’ perceived performance with the use of Lab-Tec@Home.
Table 5 displays the analysis of statistically significant differences in the means of the sub-competencies of complex thinking, contrasting both groups (experimental and control). The table shows no significant differences between the mean values of perceived achievement.
Table 6 presents the principal component analysis performed on the experimental and control groups for the sub-competencies of complex thinking. The table shows the first two components capturing the maximum variability in the data: together, Principal Component 1 (PC1) and Principal Component 2 (PC2) captured 84.7% of the variability, with PC1 accounting for 75.4% and PC2 for 9.2%. Additionally, the table shows that PC1 is strongly correlated with the sub-competency of systems thinking; this component would explain the students’ ability to analyze problematic situations from different perspectives. PC2, in turn, correlates highly with innovative thinking, explaining the students’ ability to generate new ideas and explore various options for proposing different and fresh solutions.
The biplot analysis in Figure 11 shows that the students in the experimental group perceived themselves best in the development of each of the sub-competencies, reaching the highest levels of the scale. Some students in the experimental group scored high in the sub-competencies of systemic, critical, and innovative thinking. On the other hand, six students in the control group scored high in critical, scientific, and systemic thinking. In this context, the students outside the confidence ellipses of each group (i.e., at the extreme right) are of concern: they had a very low perception of their development of the sub-competencies and, consequently, of the complex thinking competency.
5. Discussion of Results
The first results concern both groups’ means and standard deviations. Although the experimental group achieved a better final score (4.11) than the control group (4.02), the difference was not statistically significant, since the level of development in both cases was the same (+0.08, about 2%). The difference found in the overall result of the complex thinking competency therefore corresponds to the groups starting from different diagnosed levels, not to a statistically significant difference (Table 5).
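The development level quoted here is a simple percent change of the Likert mean between the diagnostic and final applications; the pre/post values below merely restate the reported +0.08 (~2%) magnitude as an illustration:

```python
# Percent improvement of a Likert-scale mean between the diagnostic and
# final applications. The pre/post values mirror the reported +0.08 (~2%)
# change (illustrative rounding of the reported means).

def improvement(pre_mean, post_mean):
    """Return the absolute and relative (percent) change of the mean."""
    delta = post_mean - pre_mean
    return delta, 100.0 * delta / pre_mean

delta, pct = improvement(4.03, 4.11)
```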
Figure 9 corroborates these data. Although there appears to be a difference, it stems from the starting point of the measurement. Notably, the experimental group had a lower standard deviation, i.e., its responses were more concentrated around the mean than those of the control group.
Table 4 complements this information, showing the significance level (t) of the variations in the complex thinking competency of both groups. As the table shows, neither group had a statistically significant variation, although the experimental group was closer to significance. Thus, even without statistical significance, an improvement in perceived complex thinking can be noted, especially in the group that carried out the Lab-Tec@Home practice.
Concerning the development of the sub-competencies, the experimental group’s highest level was in systemic thinking (4.20), although its greatest improvement was in scientific thinking, with a variation of more than 3%. In the control group, the highest level was in innovative thinking, although the most improved was likewise scientific thinking, surpassing the experimental group (+6%). This last result is relevant, as it points to an area of opportunity for Lab-Tec@Home, since participants perceived that, on the theoretical side, they learn more in traditional teaching.
Delving into the data, Figure 10 shows the results for each sub-competency, contrasting the initial and final levels of both groups. Something relevant in these violin plots is the distribution of the answers. Systems thinking is the only sub-competency whose perception decreased from diagnosis to final evaluation; this occurred in the control group, where students perceived themselves as less competent (−0.5%) in identifying the parts that make up a problem and understanding how they are interconnected. This was not the case with Lab-Tec@Home, whose participants improved their perception by 1.6%.
An identifiable problem, however, is that this sub-competency had a large standard deviation toward the lower part of the scale in the final evaluation, indicating that some students, even after the traditional or Lab-Tec@Home practice, continued to perceive themselves as unskilled in systems thinking. Although Lab-Tec@Home showed better results in systems thinking, the variation relative to the control group was not statistically significant (Table 5).
As for scientific thinking, as noted above, although the experimental group achieved a better level of perception (4.20), the greater development occurred in the control group (+6%). As shown in Figure 10, the violins of the experimental group sit at better levels on the scale but reflect little difference between the initial diagnosis and the final evaluation; in contrast, the control group’s improvement in the final evaluation is notable: its violin shows a better concentration of responses around the mean and in the upper part of the graph. However, Table 5 shows no statistically significant difference between the two groups.
In the case of critical thinking, the experimental group obtained better results (4.19) than the control group (4.02). However, the initial levels of the two groups were unequal, and analyzing the level of development shows that it was the same in both cases (+2%). So, although the final averages seem very different, this is due to the participants’ initial levels. Table 5 confirms this, reflecting no statistically significant difference. What is interesting in Figure 10 is that the experimental group considerably reduced the responses in the lower part of the violin below the mean, whereas the control group did not. Thus, unlike the control group, the experimental group participants generally perceived themselves as having better critical thinking skills.
Finally, concerning innovative thinking, the experimental group’s results were better than the control group’s; however, unlike the previous cases, there was a difference in their levels of development: the experimental group improved three times as much as the control group (1.5% vs. 0.5%). As with critical thinking, Figure 10 shows that one of the most notable changes was not in the mean level attained but in the reduction of the standard deviation, i.e., a marked decrease in the number of students who perceived themselves as incompetent. Although, as in the previous results, the difference between the two groups was not statistically significant (Table 5), this does not diminish the differences identifiable in the distribution of students’ perceptions.
For a broader view of these results, Table 6 and Figure 11 present the principal component analysis and the corresponding biplot. The latter corroborates what was previously pointed out: the better concentration of the experimental group’s responses around the mean. The biplot shows that the control group’s responses were dispersed across the scale compared to the experimental group’s. Thus, although there was no significant difference, the control group had more students who perceived themselves as incompetent.
In conclusion, these results indicate that Lab-Tec@Home students had a statistically similar level of perceived achievement of the complex thinking competency and its sub-competencies compared with students who performed hands-on practice in a traditional laboratory, which is in itself a valuable result. These results resemble those of previous studies such as [24,25,26], which demonstrate the relevance of developing complex thinking in control engineering processes, although, unlike them, this study achieves it with an educational practice at home. It should be considered that educational practices at home such as Lab-Tec@Home were already common before the pandemic; however, they used to be associated with the development of competencies complementary to academic training in classrooms or laboratories. We can see this in the studies of [27,28,29], which use home practices to develop communication and socialization skills as a tool to reduce educational disadvantages. Thus, this study not only yields valuable results regarding the relationship between this type of practice and the development of complex thinking but is also relevant for the originality of doing so through a home practice. It thus shows that Lab-Tec@Home has an impact on the development of the complex thinking competency and sub-competencies as perceived by the students, and that it achieves this more uniformly across participants than traditional laboratory practice.
The lack of statistically significant differences does not imply that the results are not relevant; rather, it indicates that the skill levels achieved were similar in the two groups. This similarity is in itself an achievement of Lab-Tec@Home, as it shows that, in terms of skills developed, it can have an impact comparable to that of a conventional laboratory. These results open future lines of research analyzing other competencies developed with Lab-Tec@Home, as well as how its use impacts technical and professional skills. Undoubtedly, the major finding of this study is the homogeneity of the experimental group’s results, which gives teachers greater certainty that the educational practice has a similar impact on all students. It will be relevant in future research to analyze how this translates into grades, in order to determine whether this homogeneity yields more balanced grades.
6. Conclusions
Undoubtedly, the COVID-19 pandemic challenged all educational institutions, especially training programs that, because of their characteristics or needs, required physical infrastructure that was impossible to access under the health restrictions. Thus Lab-Tec@Home was born: a laboratory kit designed to develop competencies and skills associated with control engineering that students could use from home, compensating for their lack of access to a laboratory. Although the restrictions imposed during the pandemic have passed, this does not mean that the pedagogical tools that gave good results should be discarded, considering that they could have an additional impact on developing complementary skills.
In this sense, the proposed hypothesis is supported: Lab-Tec@Home is as valid an option for developing the complex thinking competency as an educational practice in a traditional laboratory.
Thus, it can be concluded that Lab-Tec@Home promotes a more equitable level of perception among students, who generally perceive themselves as more competent, in contrast with the traditional group, where not everyone achieved this appreciation of their abilities. On a practical level, this study shows that, regardless of the impact Lab-Tec@Home may have on acquiring and developing the disciplinary competencies associated with control engineering, this educational innovation is valuable for improving the equity of students’ perceived competence in complex thinking and its sub-competencies.
In conclusion, this research fulfilled its objective indirectly since the impact identified was not necessarily related to the level attained or the improvement achieved but to the accessibility of knowledge and the equitable perception of students using Lab-Tec@Home.