1. Introduction
Exploring new approaches in higher vocational education is driven by a significant skill gap between labor market requirements and the educational readiness of graduates, which presents a major challenge for the sustainable development of China’s vocational education system [1,2]. Recent studies have highlighted that graduates often lack the competencies demanded by employers, a problem exacerbated by the rapid advancements in information technology and globalization [3,4]. Digital transformation and automation have fundamentally altered job requirements, making it imperative for educational content and curricula to evolve accordingly [5]. These changes necessitate not only a shift in educational approaches but also a fundamental restructuring of educational content to align with new job market demands.
In response to these changes, a growing number of educators advocate for the reverse design and customization of students’ curricula to align with their future career requirements [1]. Outcome-based education (OBE) theory has attracted the attention of academic researchers for its potential to produce graduates better prepared for the workforce, thereby fostering the sustainability of higher vocational education. OBE has been recognized globally for its innovative approach to learning, creating an environment where students are motivated by practical, real-world applications of their education [6]. However, it is crucial to note that the demand for OBE varies significantly across degree programs, with higher demand often seen in programs closely linked to industry needs, such as STEM (science, technology, engineering, and mathematics) education [7].
The OBE approach is characterized by its curriculum design, teaching and learning activities (TLAs), and assessment strategies (ASs), all driven by the exit learning outcomes that students are expected to have achieved upon completing a program [8,9]. The primary objective of OBE is to provide students with a focused and coherent academic program that enhances student engagement and develops the skills and competencies required by the market. OBE teaching is therefore inductive, which many academic researchers argue makes it more effective in motivating students to learn and in fostering the competencies required by the labor market. Currently, the OBE approach is highly regarded and widely accepted as a strategy for reforming and renewing education policy worldwide [6].
Despite strong advocacy and comprehensive implementation in global higher education, OBE has faced criticism from educators and academics. Challenges have arisen from a lack of understanding among educators of the operational principles of OBE, particularly the misalignment between course learning outcomes (CLOs) and program learning outcomes [10]. Large-scale changes in long-practiced teaching and learning methods are difficult to implement [11]. Moreover, assessments often prioritize course content and grades over the intended learning outcomes (ILOs) [12]. These knowledge gaps in OBE implementation highlight the need for a systematic and comprehensive evaluation of OBE implementation to identify the causes of the challenges it faces and explore opportunities for enhancement [13]. Moreover, it is necessary to consider the influence of students’ personal backgrounds and intellectual aptitudes on their success in achieving learning outcomes, as highlighted by Mark Payne’s work on educational outcomes among Slovak Roma children [14], which draws on Bronfenbrenner’s “Process-Person-Context-Time” model [15].
Constructive alignment (CA) is a crucial concept for effective OBE implementation, ensuring that TLAs and ASs are aligned with ILOs to enhance competency achievement [16,17]. This alignment helps students understand what is expected of them and how to achieve different levels of learning outcomes, fostering active involvement and engagement [18]. In defining ILOs for this study, we employed Bloom’s Taxonomy to ensure clarity and constructive alignment. Bloom’s Taxonomy categorizes educational goals into six hierarchical levels: remembering, understanding, applying, analyzing, evaluating, and creating. This framework helps in crafting learning outcomes that are specific, measurable, and aligned with the competencies expected of graduates [18]. For example, action verbs such as “state”, “explain”, and “solve” call for very different competencies at various levels; by clearly specifying these verbs from the outset, educators help students understand the expectations and guide them in achieving the learning outcomes.
Project-based learning (PjBL), as a systematic teaching method, enables students to achieve different levels of competencies by linking projects to the professions in which such projects would be undertaken [19]. It is a student-centered process that caters to students’ needs and encourages active involvement in their learning [20,21]. PjBL aligns with the principles of OBE, which emphasizes student-centeredness, competency development, and real-world relevance. As Tang [22] noted, implementing OBE with PjBL enhances student learning engagement. In addition, Alonzo et al. and Lui and Shum assert that the careful design of assessment strategies is crucial for implementing coherent and effective learning, thereby ensuring the learning process is more structured and goal-oriented [23,24].
Therefore, relying on the theory of CA, this study integrated PjBL as a TLA into OBE implementation and investigated how learning outcomes contribute to the attainment of graduate competence (AGC), as well as the mediating roles of PjBL and ASs in the relationships between ILOs and AGC. We specifically focused on students enrolled in Cross-border E-commerce (CBEC) programs at higher vocational colleges in Nanjing, Jiangsu province, China. The CBEC program was selected due to the rapid growth of e-commerce globally and its significant impact on international trade, especially during the outbreak of COVID-19, underscoring the urgency of equipping students with the necessary competencies. By investigating CBEC learners’ attitudes toward OBE implementation in achieving graduate competence, our study aims to bridge the skill gap by aligning educational outcomes with the specific demands of the e-commerce sector. Additionally, this approach aims to address the critical knowledge gap in OBE implementation by emphasizing the interconnectedness of outcomes, the teaching–learning process, and assessment strategies, providing a comprehensive framework for enhancing sustainable educational practices. Specifically, this study intends to address the following research questions:
Q1. What is the relationship between intended learning outcomes and the attainment of graduate competence among the CBEC learners?
Q2. What is the relationship between project-based learning and the attainment of graduate competence among the CBEC learners?
Q3. What is the relationship between assessment strategies and the attainment of graduate competence among the CBEC learners?
Q4. How does OBE implementation affect the attainment of graduate competence among the CBEC learners?
The subsequent sections of this study are structured as follows: Section 2 provides a comprehensive literature review and outlines the hypothesis development; Section 3 introduces the research methodology employed in this study; Section 4 details the results derived from the data analysis; Section 5 discusses the findings and explores the theoretical, practical, and managerial implications arising from them; and Section 6 ends with the conclusions and limitations of this study.
3. Research Methodology
3.1. Research Design
This study employed a quantitative approach using a cross-sectional research design to investigate how OBE implementation contributes to the attainment of graduate competence and to evaluate the mediating roles of project-based learning and assessment strategies in the relationship between learning outcomes and graduate competence. According to Mohajan [64], a cross-sectional design involves conducting a survey at a specific point in time or over a short period, measuring the outcomes and exposures of the subjects simultaneously [65]. This approach is particularly useful for assessing OBE implementation in educational programs, as it allows researchers to evaluate the current state of OBE implementation and its impact on achieving expected graduate competencies. Furthermore, the use of questionnaires is common in cross-sectional studies [66], as they are considered one of the simplest and most reliable methods for collecting standardized data from a large sample [67]. Therefore, we conducted a cross-sectional online questionnaire survey, adapted from previous studies, to gather data on learners’ attitudes towards the effects of different aspects of OBE implementation in relation to attaining various levels of graduate competencies in academic programs.
3.2. Instrument Design and Development
The questionnaire was designed based on previous instruments whose validity and reliability had already been established by the original researchers. The questionnaire had two primary sections. The first section comprised demographic inquiries, encompassing gender, affiliation, major, and the duration of exposure to OBE. The second section, constituting the core of the questionnaire, featured four latent variables along with 24 measurement items.
The selection of previous questionnaires was guided by two key factors: their proven validity and reliability, and their adherence to OBE guidelines, which made them suitable for this investigation. Additionally, since the targeted respondents primarily consisted of students, the chosen instruments were deemed relevant to this study. Authorization to adapt the questionnaire was obtained from the original researchers, ensuring ethical compliance.
Respondents were tasked with rating the measurement items on a 7-point Likert scale, ranging from 1 (strongly disagree) to 7 (strongly agree). For ILOs, seven items were adapted from Custodio et al. [68]; for PjBL, six items were sourced from Grossman et al. [69]; for ASs, seven items were drawn from Custodio et al. [68] and Baguio [70]; and for AGC, four items were adapted from Custodio et al. [68].
According to Bhattacherjee [71], content validity assesses whether the adapted measurement items align well with the latent variables in the questionnaire. Firstly, the validation process is usually confirmed by inviting experts from a related field. Three OBE experts and a deputy director were invited to validate the relevance and accuracy of the survey items related to OBE implementation in the questionnaire. Their tasks involved examining the instrument content to identify any irrelevant items, ambiguity, repetition, and language nuances.
To ensure content validity, two expert panels were assembled to scrutinize the adapted questionnaire items, identifying any potential issues such as irrelevance, ambiguity, or unclear wording. Following their recommendations, two additional demographic questions were incorporated into the initial section: respondents’ age and the duration of their involvement in PjBL. Subsequently, language translators were employed to assess the clarity and appropriateness of the items in the Chinese version, resulting in minor adjustments for linguistic accuracy.
As suggested by the language experts, a brief introduction to the OBE guidelines and the key concepts of ILOs, PjBL, and ASs was provided at the beginning of the questionnaire to enhance understanding of OBE and its effect on attaining graduate competence. Secondly, to assess the face validity of the adapted instrument [72], six students from the target demographic were asked to participate in a voluntary pretest of the initial questionnaire. Their feedback revealed no ambiguities or issues requiring further clarification. Additionally, a QR code was created using WeChat to allow the respondents to ask questions or provide feedback on survey items they found confusing during the survey. Further details regarding the measurement items can be found in Appendix A.
3.3. Sampling and Data Collection
The population for this study was purposively selected to be learners majoring in CBEC at higher vocational colleges in Nanjing, the capital of Jiangsu province in Eastern China. Specifically, the selected colleges were Jiangsu Maritime Institute, Jiangsu Vocational Institute of Commerce, Nanjing Vocational Institute of Railway Technology, and Nanjing Vocational College of Information Technology. These colleges were chosen as research sites because they had implemented OBE for more than two years during the research period. Additionally, Nanjing’s reputation for both commercial activity and educational excellence further enhances its suitability as a setting for studying international trade and electronic business. These colleges offer a range of business programs, such as international trade, electronic business, marketing, and e-commerce, aimed at developing business expertise in line with evolving market demands. Therefore, learners enrolled in these business programs at the selected colleges were well placed to perceive the effects of OBE implementation, allowing us to investigate the relationships between learning goals and graduate competence, mediated by PjBL and ASs, upon completion of their programs of study.
The sample size was determined according to the criterion for SEM, which recommends a minimum sample size of ten times the total number of observed variables [73]. With 30 observed variables in this survey, 350 respondents were randomly selected from the higher vocational colleges through a lottery system to participate in the survey. To facilitate data collection, the questionnaire’s link or QR code, generated automatically using the Sojump app, was shared with the class groups via WeChat by their institutional practitioners.
Before commencing the online survey, the researchers obtained permission from the Human Research Ethics Committee of Universiti Sains Malaysia (USM). Additionally, consent was obtained from the deputy deans of the involved schools at the four higher vocational colleges. Afterward, the survey was explained to the respondents to ensure their understanding of this study. Finally, participants were informed of the study’s topic, purpose, and significance and guided to anonymously and voluntarily complete the online questionnaire.
Ultimately, 301 valid responses, with a retrieval rate of 86%, were obtained and adopted for data analysis.
Table 1 shows that the gender distribution of the respondents was relatively balanced, with males accounting for 55.1% and females for 44.9%. Respondents aged 20–22 formed the largest group in the sample (82.4%), followed by the age groups of 23–25 and 26 and above, accounting for 16.6% and 1.0%, respectively. The distribution of respondents among the institutions was relatively even, with the four institutions representing approximately 22.3%, 26.2%, 27.2%, and 24.3%, respectively. Regarding their enrolled programs, most participants specialized in cross-border e-commerce (50.5%), while smaller percentages specialized in e-commerce, international trade, and international business, comprising 8.6%, 13.6%, and 13.3%, respectively. Among these respondents, 27.2% had two years of experience in OBE learning, and 72.8% had three years. Furthermore, respondents were asked to specify how long they had been engaged in PjBL: 77.7% had two years of engagement in PjBL, while 22.3% had three years; no respondents reported having only one year of experience in project-based engagement. These respondents were therefore eligible for the survey and possessed a certain level of representativeness.
3.4. Statistical Analysis
The data analysis was conducted using SPSS 23.0 and Amos 24.0. First, a test of normality was conducted to ensure the data distribution met the necessary assumptions for further analysis. Following this, the Harman single-factor test was implemented to assess common method bias. Furthermore, structural equation modeling (SEM) analysis was performed to investigate both the measurement and structural models. Specifically, confirmatory factor analysis (CFA) was conducted to evaluate reliability and validity, providing the values of factor loadings, composite reliability (CR), and average variance extracted (AVE). The goodness-of-fit index (GoF) and path coefficient were examined to assess the acceptability level of the structural model. Lastly, the bootstrapping method was employed to evaluate the statistical significance of the direct and mediating effects within the proposed hypotheses.
4. Results
4.1. Test of Normality
Normality can be assessed using various methods, including the Kolmogorov–Smirnov (KS) test, the Shapiro–Wilk (SW) test, skewness and kurtosis values, and graphical techniques such as histograms, box plots, or P–P plots [74]. In this study, numerical methods were employed to assess the normality of the data due to their objective nature [75], which is especially useful when experience is limited [76]. Therefore, the normality of the data was tested based on the skewness and kurtosis values of the 24 measurement items. Specifically, the absolute values of skewness were required to be less than 3 and the absolute values of kurtosis less than 7, in line with the basic requirements for univariate normality [77]. The results of the normality test are shown in Table 2. Notably, all absolute skewness and kurtosis values fall within the acceptable range, indicating adherence to normal data distribution requirements.
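The skewness–kurtosis screening described above can be sketched as follows. The item names and simulated Likert responses are purely illustrative, not the study’s actual dataset; only the |skewness| &lt; 3 and |kurtosis| &lt; 7 cut-offs come from the text.

```python
# Illustrative univariate normality check via skewness and kurtosis.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
# Simulated 7-point Likert responses for three hypothetical items
items = {f"ILO{i}": rng.integers(1, 8, size=301) for i in range(1, 4)}

def univariate_normality(values, skew_cut=3.0, kurt_cut=7.0):
    """Return (skewness, excess kurtosis, passes) for one item."""
    s = skew(values)
    k = kurtosis(values)  # Fisher definition: a normal distribution -> 0
    return s, k, abs(s) < skew_cut and abs(k) < kurt_cut

for name, vals in items.items():
    s, k, ok = univariate_normality(vals)
    print(f"{name}: skew={s:.3f}, kurtosis={k:.3f}, acceptable={ok}")
```

In practice, the same screening would be run over all 24 measurement items before proceeding to SEM.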
4.2. Common Method Bias (CMB)
All data were collected from a single source, namely, CBEC learners enrolled in public higher vocational colleges, through self-reported questionnaires. This uniform data collection method may lead to consistent responses, potentially resulting in common method bias (CMB) and affecting the study’s validity and reliability [78]. To address this issue, the Harman single-factor test was conducted using SPSS 23.0 to check for CMB [79]. Employing the exploratory factor analysis (EFA) method for the Harman single-factor test [80], the results indicated that four factors with eigenvalues exceeding 1 accounted for 79.800% of the total explained variance. Furthermore, the first factor, explaining 46.444% of the variance, fell below the critical threshold of 50% [81]. This suggests that CMB was insignificant in the collected data and was unlikely to impact the observed relationships among variables.
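A minimal sketch of Harman’s single-factor logic is shown below, using the share of total variance captured by the first unrotated factor of the item correlation matrix. The simulated item matrix is illustrative; only the 50% threshold follows the text, and the study itself used SPSS rather than this code.

```python
# Sketch of Harman's single-factor test: if the first unrotated factor
# explains more than 50% of total variance, CMB is a concern.
import numpy as np

def harman_first_factor_share(X):
    """Proportion of total variance explained by the first unrotated factor."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    return eigvals[0] / eigvals.sum()

rng = np.random.default_rng(0)
n, p = 301, 24                              # respondents x items
common = rng.normal(size=(n, 1))            # shared, method-like component
X = 0.5 * common + rng.normal(size=(n, p))  # items with modest overlap

share = harman_first_factor_share(X)
print(f"First factor explains {share:.1%} of total variance")
print("CMB concern" if share > 0.50 else "Below the 50% threshold")
```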
4.3. Measurement Model Verification
Anderson and Gerbing [82] advocate for using confirmatory factor analysis (CFA) to validate a measurement model. Cronbach’s alpha and McDonald’s omega are tests that measure the internal consistency of the items in a measurement tool [83]. A coefficient between 0.80 and 1.00 indicates that a tool is highly reliable [84]. Additionally, factor loadings, composite reliability (CR), and average variance extracted (AVE) are commonly employed to assess convergent validity [85,86,87]. Hair et al. [88] suggest that standardized factor loadings above 0.5 can be considered significant, while values below 0.45 may warrant item deletion [89]. Viladrich et al. [90] suggest a minimum threshold of 0.70 for composite reliability. Furthermore, Hair et al. [88] recommend using AVE as a measure of convergent validity in structural equation modeling (SEM), with values of 0.5 or higher considered acceptable. The reliability and validity values of the four latent variables are presented in Table 3.
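CR and AVE follow directly from the standardized loadings, as sketched below with the standard formulas; the loading values are hypothetical and are not the study’s reported estimates.

```python
# Illustrative computation of composite reliability (CR) and average
# variance extracted (AVE) from standardized factor loadings.

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = sum(loadings)
    err = sum(1 - l**2 for l in loadings)  # item error variances
    return lam**2 / (lam**2 + err)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

ilo_loadings = [0.85, 0.88, 0.82, 0.90, 0.87, 0.84, 0.86]  # hypothetical
cr = composite_reliability(ilo_loadings)
ave = average_variance_extracted(ilo_loadings)
print(f"CR = {cr:.3f} (cut-off 0.70), AVE = {ave:.3f} (cut-off 0.50)")
```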
As illustrated in Table 3, the obtained values of Cronbach’s alpha and McDonald’s omega for the four latent variables ranged between 0.914 and 0.967. All values lie between 0.80 and 1.00, indicating excellent reliability across the measured constructs. Furthermore, all standardized factor loadings were above 0.5 and significant at p &lt; 0.001, with t-values ranging from 15.442 to 22.017, surpassing the suggested value of 1.96; thus, the factor loadings of the individual items in the four-factor model were all significant, and all the constructs had acceptable reliability. Additionally, the CR scores ranged from 0.915 to 0.967, and the AVE scores ranged from 0.710 to 0.808, above the suggested cut-off values of 0.70 and 0.50, respectively. These results provide strong evidence of convergent validity.
Furthermore, discriminant validity was assessed to determine the extent to which each construct in the model differed from the others [88]. This was achieved by comparing the square root of the AVE for each construct with the correlations between latent variables. According to Fornell and Larcker [91], when the square root of the AVE is higher than the Pearson correlation coefficients with other constructs, this indicates discriminant validity between the constructs. Table 4 presents the discriminant validity test results. The values below the diagonal represent the Pearson correlation coefficients between constructs, all of which are smaller than the square roots of the AVE values on the diagonal. Conclusively, all research constructs met the requirements for discriminant validity.
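The Fornell–Larcker criterion just described can be expressed as a simple check: the square root of each construct’s AVE must exceed its correlations with every other construct. The AVE and correlation values below are hypothetical placeholders, not the entries of Table 4.

```python
# Sketch of the Fornell-Larcker discriminant validity criterion.
import math

ave = {"ILO": 0.72, "PjBL": 0.75, "AS": 0.71, "AGC": 0.80}  # hypothetical
corr = {                                                     # hypothetical
    ("ILO", "PjBL"): 0.47, ("ILO", "AS"): 0.42, ("ILO", "AGC"): 0.39,
    ("PjBL", "AS"): 0.44, ("PjBL", "AGC"): 0.48, ("AS", "AGC"): 0.62,
}

def fornell_larcker_ok(ave, corr):
    """True if sqrt(AVE) of both constructs exceeds each pairwise correlation."""
    return all(
        math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
        for (a, b), r in corr.items()
    )

print("Discriminant validity supported:", fornell_larcker_ok(ave, corr))
```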
4.4. Structural Model Analysis
This study evaluated the structural model through goodness-of-fit (GoF) indices and path coefficients. Firstly, a GoF test is designed to assess how well the empirical data fit the hypothesized model. Many researchers have explored using model fit indices to assess overall model quality [92,93,94]. These indices include the Tucker–Lewis Index (TLI), the Comparative Fit Index (CFI), the Incremental Fit Index (IFI), the Goodness-of-Fit Index (GFI), and the Adjusted Goodness-of-Fit Index (AGFI), with values above 0.9 indicating good model fit. Conversely, a Root Mean Square Error of Approximation (RMSEA) and a Standardized Root Mean Square Residual (SRMR) below 0.08 signify a good fit [95]. Furthermore, the chi-square (χ2) value, degrees of freedom (df), and the ratio χ2/df were used to assess overall model fit.
However, the chi-square value is sensitive to sample size and may not always produce precise results [96]. As Bollen et al. [97] noted, when the sample size exceeds 200 in SEM, the chi-square value is commonly excessively large, suggesting a poor model fit. To mitigate this, the GoF values can be adjusted through the Bollen–Stine bootstrap correction method, which provides an alternative way of obtaining better results [97]. For chi-square divided by the degrees of freedom (χ2/df), the ideal result should be less than 3.0 [77]. More importantly, the other fit indices provide a more rigorous standard for verifying structural model fit.
The recommended threshold values of these indices, along with the adjusted results obtained using bootstrapping techniques, are shown in Table 5. As illustrated in Table 5, the results indicate a good fit for the structural model, with χ2/df = 1.79, IFI = 0.97, CFI = 0.97, TLI = 0.97, GFI = 0.94, AGFI = 0.93, SRMR = 0.03, and RMSEA = 0.05. All the model fit indices exceeded the suggested threshold standards, confirming that the proposed structural model was acceptable. This indicates a strong alignment between the collected data and the proposed structural model.
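The threshold comparisons above can be captured in a small, hypothetical helper: incremental indices above 0.90, residual-based indices below 0.08, and χ2/df below 3. The index values reproduce those reported in the text; the function itself is an illustration, not part of the study’s AMOS workflow.

```python
# Hypothetical checker for SEM fit indices against conventional cut-offs.

THRESHOLDS = {
    "chi2_df": ("<", 3.0),
    "IFI": (">", 0.90), "CFI": (">", 0.90), "TLI": (">", 0.90),
    "GFI": (">", 0.90), "AGFI": (">", 0.90),
    "SRMR": ("<", 0.08), "RMSEA": ("<", 0.08),
}

def fit_ok(indices):
    """Map each supplied index to True/False against its cut-off."""
    results = {}
    for name, value in indices.items():
        op, cut = THRESHOLDS[name]
        results[name] = value < cut if op == "<" else value > cut
    return results

fit = {"chi2_df": 1.79, "IFI": 0.97, "CFI": 0.97, "TLI": 0.97,
       "GFI": 0.94, "AGFI": 0.93, "SRMR": 0.03, "RMSEA": 0.05}
checks = fit_ok(fit)
print("All indices acceptable:", all(checks.values()))
```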
In addition to assessing the GoF indices, SEM can be used to examine the significance and strength of the path relationships among variables. Figure 3 presents the explained variance and path coefficients with standardized parameter estimates.
In the hypothesized model, ILOs accounted for 22% of the variance in PjBL, with a standardized regression coefficient of 0.468, indicating a significant positive correlation between the two constructs. The combined influence of ILOs and PjBL explained 25% of the variance in ASs, with coefficients of 0.290 and 0.295, respectively, suggesting a positive association with ASs. Moreover, ILOs, PjBL, and ASs collectively explained 50% of the variance in AGC, with standardized regression coefficients of 0.026, 0.224, and 0.566, respectively, indicating satisfactory predictive strength of the hypothesized model. However, the standardized path coefficient of 0.026 for the direct path from ILOs to AGC falls below the recommended threshold value (0.20), suggesting a minimal direct contribution [98]. This weak direct relationship underscores the roles of PjBL and ASs in mediating the indirect effect of ILOs on AGC within the hypothesized framework.
4.5. Testing of the Hypotheses
Applying SEM in AMOS with maximum likelihood estimation provided robust results for evaluating the hypothesized constructs in this study. Bootstrapping methods, particularly bias-corrected bootstrapping, can generate confidence intervals (CIs) with statistical power for direct effects [99,100]. Byrne [101] further noted that bootstrapping techniques can mitigate the impact of non-normal kurtosis on CFA. As detailed in Table 6, the empirical results supported five of the six hypotheses. ILOs were found to have a positive and significant influence on PjBL (β = 0.468, p &lt; 0.001), with a 95% percentile CI [0.230, 0.560] that does not include 0, affirming Hypothesis 1 (H1). Similarly, PjBL and ASs were positively correlated (β = 0.295, p &lt; 0.001), with a 95% percentile CI [0.064, 0.585], supporting Hypothesis 2 (H2). ILOs also had a significant positive relationship with ASs (β = 0.290, p &lt; 0.001), with a 95% percentile CI [0.077, 0.523], confirming Hypothesis 3 (H3). PjBL was significantly linked to AGC (β = 0.224, p &lt; 0.001), with a 95% percentile CI [0.064, 0.512], validating Hypothesis 4 (H4), and ASs were strongly associated with graduate competence (β = 0.566, p &lt; 0.001), with a 95% percentile CI [0.311, 0.813], confirming Hypothesis 5 (H5). However, the direct effect of ILOs on AGC was not significant (β = 0.026, p &gt; 0.05), with a 95% percentile CI [−0.082, 0.152] that includes 0, leading to the rejection of Hypothesis 6 (H6). This suggests that ILOs may exert an indirect influence on AGC, one potentially mediated by PjBL and ASs.
4.6. Mediating Effects of Project-Based Learning and Assessment Strategies
In order to analyze the mediating effects within the hypothesized model, a bootstrap method with 1000 repeated sampling procedures at a 95% confidence interval, as suggested by [102], was employed to verify the mediating roles of PjBL and ASs. A mediating effect is recognized as significant when the Z value exceeds 1.96 and the 95% bias-corrected CI does not include 0 [103]. Table 7 illustrates the direct, indirect, and total effects of the hypothesized model. At the 95% confidence level, the confidence intervals of both the bias-corrected method and the percentile method were statistically significant (Z = 3.359, 95% bias-corrected CI [0.156, 0.511], 95% percentile CI [0.155, 0.510]), indicating a significant total effect. Similarly, the total indirect effect was significant (Z = 4.235, 95% bias-corrected CI [0.171, 0.442], 95% percentile CI [0.174, 0.448]). However, the direct effect of ILOs on AGC was not significant, with both the bias-corrected and percentile method confidence intervals including 0 (Z = 0.375), indicating that PjBL and ASs fully mediate the relationship between ILOs and AGC.
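The bootstrap procedure for an indirect effect can be sketched with a simple single-mediator, product-of-coefficients example: resample cases, re-estimate a (predictor to mediator) and b (mediator to outcome), and form a percentile CI for a×b. The simulated data and OLS estimation are illustrative only; the study itself used latent-variable estimates in AMOS.

```python
# Minimal percentile-bootstrap sketch of an indirect (mediation) effect.
import numpy as np

rng = np.random.default_rng(7)
n = 301
x = rng.normal(size=n)                        # e.g., ILOs (simulated)
m = 0.5 * x + rng.normal(size=n)              # mediator, e.g., PjBL
y = 0.6 * m + 0.02 * x + rng.normal(size=n)   # outcome, e.g., AGC

def indirect_effect(x, m, y):
    """OLS slopes: a = x->m; b = m->y controlling for x; return a*b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

# 1000 bootstrap resamples, 95% percentile confidence interval
boot = [
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(1000))
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("Significant mediation" if lo > 0 or hi < 0 else "Not significant")
```

Bias-corrected intervals, as used in the study, adjust these percentile bounds for the skew of the bootstrap distribution.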
To specifically examine the mediating effects of PjBL and ASs, three alternative structural models were developed and tested.
Model 1: Project-based learning as a mediator.
This model assessed the indirect effects of ILOs on AGC through PjBL. The fit indices demonstrated a satisfactory fit (χ2/df = 2.400, IFI = 0.970, CFI = 0.970, TLI = 0.965, GFI = 0.908, AGFI = 0.879, SRMR = 0.031, RMSEA = 0.068), with confidence interval values for indirect effects not including 0, supporting hypothesis 5a.
Model 2: Assessment strategies as a mediator.
This model assessed the indirect effects of ILOs on AGC through ASs, also exhibiting satisfactory fit indices (χ2/df = 2.619, IFI = 0.962, CFI = 0.961, TLI = 0.955, GFI = 0.893, AGFI = 0.861, SRMR = 0.036, RMSEA = 0.073). The confidence intervals for indirect effects did not include 0, indicating that the indirect effect of ILOs on AGC through ASs was significant, thereby verifying hypothesis 6a.
Model 3: Project-based learning and assessment strategies as chain mediators.
This integral model examined the sequential mediating effect of PjBL and ASs in the relationship between ILOs and AGC, yielding satisfactory fit indices. The results confirmed there was a positive and significant sequential mediating effect (Z = 2.031, 95% bias-corrected CI [0.02, 0.157], 95% percentile CI [0.019, 0.148]), supporting hypothesis 6b.
Finally, it may be of interest to explore the differences in mediating effects among the three models. We examined the percentage of the total indirect effect accounted for by each mediator. The mediating effect of ASs (ILO-AS-AGC) was the greatest, accounting for 44.2% of the total indirect effect. This was followed by PjBL (ILO-PjBL-AGC), which accounted for 30.2%, and the sequential mediation of PjBL and ASs (ILO-PjBL-AS-AGC), accounting for 22.6%. These results highlight the critical mediating roles of PjBL and ASs in the relationship between ILOs and AGC, demonstrating their importance in enhancing graduate competence through structured educational strategies.
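A simplified product-of-coefficients decomposition of the three indirect paths, using the standardized estimates reported earlier (0.468, 0.290, 0.295, 0.224, 0.566), is sketched below. Because the study’s percentages come from bootstrap estimates, these analytic products may differ slightly from the figures quoted above.

```python
# Decomposition of the indirect paths from ILOs to AGC using the
# standardized path coefficients reported in the structural model.

a_pjbl = 0.468   # ILO -> PjBL
a_as   = 0.290   # ILO -> AS
d      = 0.295   # PjBL -> AS
b_pjbl = 0.224   # PjBL -> AGC
b_as   = 0.566   # AS -> AGC

paths = {
    "ILO-PjBL-AGC":    a_pjbl * b_pjbl,          # via PjBL only
    "ILO-AS-AGC":      a_as * b_as,              # via ASs only
    "ILO-PjBL-AS-AGC": a_pjbl * d * b_as,        # sequential chain
}
total_indirect = sum(paths.values())
for name, effect in paths.items():
    print(f"{name}: {effect:.3f} ({effect / total_indirect:.1%} of indirect effect)")
```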
6. Conclusions
This study highlights the critical need for implementing OBE in higher vocational education to better align graduates’ skills with employment requirements. Focusing on CBEC learners, we utilized the theory of CA to construct and empirically validate a framework, examining the impact of ILOs, PjBL, and ASs on the AGC and the mediating roles played in the relationship between learning outcomes and graduate competence attainment.
While ILOs did not show a direct impact on AGC, they serve as essential foundations for guiding both teaching methods and assessments. Explicit ILOs provide a clear roadmap for educators and students, ensuring that educational activities are purpose-driven and aligned with desired competencies. This alignment is crucial for sustainable education as it promotes consistency and clarity in regard to educational objectives.
This study underscores the mediating roles of PjBL and ASs in the relationship between ILOs and AGC. This highlights the necessity of a holistic approach in OBE implementation, where teaching activities and assessments are seamlessly aligned with learning outcomes to optimize student competencies. Additionally, this study identifies a significant chain-mediating effect of PjBL and ASs in the relationship between ILOs and AGC. This underscores the importance of integrating multiple educational strategies to achieve comprehensive learning outcomes.
Effective implementation of the OBE approach requires constructive alignment, ensuring that measurable ILOs align with industry needs. This alignment should be reflected in all aspects of the educational process, ranging from teaching methods to assessments. By aligning educational practices with the demands of the job market, our study promotes sustainable educational practices that contribute to the long-term viability of both educational institutions and the industries they serve.
Several limitations of this study should be acknowledged, along with directions for future research. Firstly, the survey was limited to candidates from public vocational colleges in Nanjing, Jiangsu province, China, potentially limiting the generalizability of the findings. Future research should encompass a broader range of institutions across China to provide a more comprehensive evaluation of OBE’s impact. Secondly, this study focused on a relatively small sample of students, excluding key stakeholders such as instructors, program designers, program evaluators, and administrators. This may have restricted the depth and breadth of the analysis. Future research should expand the participant pool to include a diverse range of student groups from various disciplines as well as educators’ perspectives; this broader approach will offer a more comprehensive understanding of the impact of OBE implementation on educational outcomes. Thirdly, this study primarily explored the OBE process concerning ILOs, TLAs, ASs, and AGC, overlooking other factors such as curriculum design, motivation, and institutional support. Future research should incorporate these variables to offer more robust recommendations for effective OBE implementation and competency enhancement in higher vocational education. Fourthly, assessing the attainment of set goals through uniform assessment rubrics and criteria often leads to formalism. Future studies should explore more flexible and adaptive assessment methods that better capture the complexities of achieving different levels of graduate competence. Finally, this study employed a cross-sectional design to measure the attainment of graduate competence rather than assessing continuous development through tangible products such as portfolios, exams, or other culminating products.
Recognizing this limitation, it is recommended that future research adopt a longitudinal approach to track graduates’ actual workplace outcomes over time, thereby providing a more objective measure of AGC.