Article

Learning Self-Regulation Questionnaire (SRQ-L): Psychometric and Measurement Invariance Evidence in Peruvian Undergraduate Students

by
César Merino-Soto
1,
Gina Chávez-Ventura
2,
Verónica López-Fernández
3,
Guillermo M. Chans
4 and
Filiberto Toledano-Toledano
5,6,7,*
1
Facultad de Ciencias de la Salud, Escuela Profesional de Medicina, Universidad César Vallejo, Av. Larco 1770, Trujillo 13009, Peru
2
Instituto de Investigación en Ciencia y Tecnología, Universidad César Vallejo, Av. Larco 1770, Urb. Las Flores, Distrito Víctor Larco Herrera, Trujillo 13009, Peru
3
Department of Education, Universidad Internacional de La Rioja (UNIR), Avenida de la Paz 101, 26006 Logroño, Spain
4
Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City 01389, Mexico
5
Unidad de Investigación en Medicina Basada en Evidencias, Hospital Infantil de México Federico Gómez, National Institute of Health, Márquez 162, Doctores, Cuauhtémoc, Mexico City 06720, Mexico
6
Unidad de Investigación Sociomédica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Calzada México-Xochimilco 289, Arenal de Guadalupe, Tlalpan, Mexico City 14389, Mexico
7
Dirección de Investigación y Diseminación del Conocimiento, Instituto Nacional de Ciencias e Innovación para la Formación de Comunidad Científica, INDEHUS, Periférico Sur 4860, Arenal de Guadalupe, Tlalpan, Mexico City 14389, Mexico
*
Author to whom correspondence should be addressed.
Sustainability 2022, 14(18), 11239; https://doi.org/10.3390/su141811239
Submission received: 20 July 2022 / Revised: 22 August 2022 / Accepted: 31 August 2022 / Published: 8 September 2022

Abstract:
Given the theoretical and applied importance of self-regulation in learning, our study aimed to examine the internal structure and psychometric properties of the Learning Self-Regulation Questionnaire (SRQ-L). Five hundred and ninety-six Peruvian university students, from the first to the tenth semester, participated on campuses in Lima, Trujillo, and Cajamarca. Nonparametric scalability, dimensionality, reliability (at the score and item levels), and latent invariance were analyzed. The results showed that reducing the number of response options was necessary, and reducing the number of items also produced better scaling. The two weakly related dimensions showed strong internal validity and acceptable item-level reliability; score reliability was also adequate. Age and gender showed trivial correlations with item variability. Finally, differences between semesters were obtained in the latent means, variances, and correlations. In conclusion, we propose a better definition of the constructs of autonomy and control measured by the SRQ-L. This article also discusses the limitations and implications of the study.

1. Introduction

Learning is omnipresent throughout life, and self-regulation is particularly relevant to learning and performance across age groups, for example, adults [1], university students [2,3], adolescents in secondary education [4,5], and children in elementary education [6,7]. This relevance arises because self-regulation modulates cognitive, affective, and behavioral facets to achieve the desired level of success [8]. Therefore, self-regulation is one of the key variables for understanding students' academic performance and adaptation. People who self-regulate their learning have a greater capacity to select and structure the content they must learn [2]. They can adapt their strategies to perform better [9,10,11]; reflect and participate with initiative and engagement [12,13], perseverance, and proactivity in their learning [11,14,15]; hold firmer self-efficacy beliefs [11,16,17]; become goal-oriented in a particular domain [18]; adjust more easily to university life [14]; and persist in their careers [19,20,21], especially during the COVID-19 pandemic's confinement and health restrictions [22].
This study is based on the cognitive perspective of self-regulated learning within self-determination theory [23,24]. According to this theory, the regulation of behavior depends on the extent to which individuals perceive themselves as controlled (that is, pressured by external demands or situations) or autonomous (internal control and internalization of self-regulation processes). This perception affects intrinsic motivation, the expression of positive affect, flexibility, and the capacity for autonomous choices [11,12,13,14]. Its relevance and transversal quality [25] make it an essential competency in all academic semesters [26]. In the university context, particularly in Peru, instruments with adequate evidence of validity and reliability are therefore needed.
One instrument that measures learning self-regulation is the Learning Self-Regulation Questionnaire (SRQ-L) [24], an adaptation to the field of learning of an instrument from the clinical area. The Spanish version of the instrument came from a Peruvian study in which it was rigorously translated from English into the Spanish spoken in Peru [27]; its phrasing, however, is arguably generalizable across Hispanic countries. The resulting scale integrates the content of two versions, which are also available on the authors' website (http://selfdeterminationtheory.org/self-regulation-questionnaires/ (accessed on 10 September 2021)). According to that website, these versions differ slightly and are malleable for specific situations, suggesting transferability to other learning content. The adaptations were developed for studies with medical students learning interviewing [24] and organic chemistry [28].
In Peru, structural validity has been tested twice [27,29]. In these two studies, with samples of 237 and 369 university students, respectively, the structure of the dimensions (autonomy and control) was empirically differentiated with satisfactory fit indices, and the item parameters adequately represented their dimensions. The relationship between control and autonomy was equal to zero [29], suggesting that they do not operate linearly to achieve learning. This result is in line with the study by Matos [27], which found no correlation between the two dimensions (0.08; p > 0.05).
Other problems replicated in these studies were the factorial complexity of two items, 5 and 13, and their low factor loadings. Although these items were removed during those analyses, other types of psychometric problems may need to be explored to explain their lack of fit (the distribution of the items, correlated errors, or the ordering of the response options). These problems still require investigation because their theoretical or methodological explanations were not previously addressed in sufficient detail, which may limit the interpretation of the scores. Exploring potential problems also involves appropriately defining the conceptual content of the factors to achieve valid interpretations of the scores.
The practical implication of this psychometric clarity (regarding accuracy and content) is the correct identification of the attribute in students, given that control and autonomy are sources of influence on students' adaptive behavior. Other relevant properties of the SRQ-L also need to be addressed to build complete knowledge of it. One of these is measurement invariance, a necessary condition for valid comparisons between groups [30], for example, between men and women or between completed academic years. Findings on group differences in self-regulation are not uniform, and the invariance of the item parameters has generally not been corroborated. Some studies have shown that women self-regulate their learning more intensely than men [24], while others found no differences between the sexes [31]; likewise, women showed greater autonomy than men in one study [24], whereas no differences in control were observed in another [2]. Differential functioning of the instrument may underlie these discrepancies. Therefore, the interpretation of variability between these groups may contain undetected bias or parameters that violate the measurement invariance of the SRQ-L. Since this property of the internal structure has not been verified before, it needs to be resolved.
Another aspect is the reliability of the differences between scales and the identification of abnormal differences, both psychometric parameters that help describe discrepancies between scores while controlling for measurement error [32,33]. This type of information is practical because it is expressed in the metric of the observed score [32] and directly influences the interpretation of the score, both to describe the student's position on the measured attribute and to study the differences between the two measured constructs (control and autonomy). Compared to reliability coefficients, information on error variability in the observed score metric has practical, nonacademic use.
Regarding reliability, studies usually report internal consistency coefficients, specifically the alpha coefficient; its magnitude varies, typically between 0.60 and 0.90 on the two subscales [19,24,27,28,34,35,36,37]. More appropriate internal consistency estimates may be required because the alpha coefficient requires several assumptions to be met, such as the absence of correlated errors and a tau-equivalence relationship between the items and their construct [38]. Meeting these conditions avoids overestimating or underestimating reliability through the alpha coefficient.
The objective of the present study is to contribute to a more rigorous evaluation of the construct validity of the Learning Self-Regulation Questionnaire (SRQ-L) by considering two aspects, the internal structure and measurement invariance, thereby overcoming the gaps in previous studies [29] and obtaining higher metric quality for its application in university learning contexts. This metric quality is defined by the validity of the items, the dimensional structure, the invariance of its parameters, and the functioning of the current response scaling. These properties were either not previously investigated or were examined with outdated analysis strategies, a situation that suggests potential inconsistency with future studies adopting an updated view of the internal structure of the SRQ-L. Given the importance of adapting and using tests in education, demonstrating that the Spanish version of the SRQ-L has solid psychometric properties would provide the methodological strength necessary to interpret its results as intended and to offer a functional, valid, reliable, and culturally relevant scale for higher education students, with these findings replicable in other contexts and populations.

2. Materials and Methods

2.1. Participants

Five hundred and ninety-six university students from the cities of Trujillo (320, 54%), Lima (175, 29%), and Cajamarca (101, 17%), Peru, were selected through nonprobabilistic convenience sampling [39], based on the researchers' access to professional career directors and the facilities provided by teachers for access to classrooms. Participants were excluded from the study if they had studied a previous professional career, were not present at the time of survey administration, did not sign the informed consent, or chose not to complete the survey.
Of the participants, 51% were male, 88% were single, and 72% did not work. Ages ranged between 16 and 40 years (M = 19.85; SD = 2.82), and socioeconomic level was medium to medium-high. Seventy-seven percent were between the first and third semesters of study, 22% between the fourth and sixth semesters, and 1% between the seventh and tenth semesters. They were enrolled in the professional careers of Engineering (39%), Administration (24%), Accounting (20%), Law (14%), Psychology (2%), and Communications (1%). To compare participants by study semester, we formed three groups: 1st to 2nd semester (141, 24%), 3rd semester (320, 54%), and 4th to 10th semester (134, 22%). All students signed the informed consent as a condition for completing the survey.

2.2. Instrument

Learning Self-Regulation Questionnaire (SRQ-L) [24]. It is a self-report test consisting of 14 ordinal, 5-point items (from 1 = Not true at all to 5 = Completely true for me). It was translated and adapted to the Peruvian context by Matos [27]; that version was used in this study. Its structure comprises two factors: Autonomy (6 items) and Control (8 items). The first refers to the importance assigned to learning due to internal regulation and is based on intrinsic motivation, in which the basic psychological needs of competence, autonomy, and relatedness are satisfied without seeking external stimuli. The second is oriented toward seeking rewards or external approval or avoiding punishment [23,24].

2.3. Procedure

Design. The study was cross-sectional and instrumental [40,41] and employed a quantitative methodology.
Data Collection. Authorization was obtained from the directors of each university, and we solicited the support of the Psychological Orientation office so that students could be evaluated, in approximately 15 min, by psychologists and psychology interns trained for this purpose. The test was administered in the middle of the academic semester, in coordination with the teachers to facilitate classroom access. The instruments were administered in groups of approximately 35 students, to whom the evaluation objectives were explained; participation was voluntary, following the ethical principles of psychologists [42,43]. The instructions also communicated the anonymity of responses, the availability of support for questions about the survey content, the absence of participation incentives, and the freedom to stop filling out the survey. The material consisted of the informed consent and the study instrument.
Analysis. The quantitative analysis focused on carefully examining (a) the univariate characteristics of the items, (b) the internal structure, and (c) the reliability. Differences according to variables that could explain variability in self-regulation were identified using latent variables within the methodological framework of structural invariance analysis. The general strategy was to apply several approaches to reduce the dependence of the conclusions on a single analytical procedure [44].
Item analysis. As the content of the items refers to two regulation attributes expressed individually [45], responses could show patterns associated with the functioning of the response categories [45,46]. To examine this, we estimated the item thresholds, which are points of intersection between adjacent response categories associated with the probability of choosing one or the other category conditional on the level of the latent attribute (or construct). The ordering (or disordering) of these thresholds indicates the good (or poor) functioning of the response categories [46]. Thresholds were estimated within the partial credit model [47,48], derived from Rasch modeling for polytomous items [49]. For this estimation, the eRm package [50] was used within the IANA graphical interface [51] in the R program [52]. Additionally, the correlations of the items with sex and age were examined as a partial expression of the content validity of the items [53,54].
Nonparametric analysis. A nonparametric approach [55] was applied to the ordinal items of the SRQ-L [56] to verify several fundamental, precursor properties [57] of the instrument scores, independently of the strong assumptions of latent variable models. Three essential characteristics [58] required by the monotone homogeneity model (MHM) were explored: scalability (using the H coefficient), local independence (responses to the items do not influence one another; examined through three conditional association indices, W(1), W(2), and W(3) [59]), and monotonicity (an increasing function between the item and the latent attribute; evaluated by comparing the number of observed and expected violations of the monotonic model [55]). The procedure was performed using the mokken package [52,60].
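The scalability analysis above was run with the mokken R package on polytomous items. As an illustration of the logic behind Loevinger's H, the following Python sketch computes the pairwise coefficient for the simpler dichotomous case (a simplified, assumption-laden version, not the package's polytomous implementation):

```python
import numpy as np

def pairwise_H(x, y):
    """Loevinger's scalability coefficient H_ij for two dichotomous items.

    H_ij = cov(X_i, X_j) / cov_max, where cov_max is the covariance the
    pair would attain under a perfect Guttman pattern given the marginals.
    H >= 0.30 is the conventional lower bound for acceptable scalability.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    p_x, p_y, p_xy = x.mean(), y.mean(), (x * y).mean()
    cov = p_xy - p_x * p_y
    # maximum covariance given the marginals (perfect Guttman ordering)
    p_lo, p_hi = min(p_x, p_y), max(p_x, p_y)
    cov_max = p_lo * (1.0 - p_hi)
    return cov / cov_max

# Perfect Guttman pattern: everyone who endorses the harder item
# also endorses the easier one, so H = 1.0
hard = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
easy = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
print(round(pairwise_H(hard, easy), 3))  # -> 1.0
```

Guttman errors (endorsing the harder item without the easier one) pull the covariance below its maximum, which is what drives H below 1.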
SEM analysis. The evaluation of dimensionality was complemented by confirmatory factor analysis for categorical data, using the weighted least squares mean- and variance-adjusted estimator (WLSMV) [61]; fit was assessed using approximate fit indices: CFI (≥0.95), TLI (≥0.95), RMSEA (≤0.05), Gamma-hat (G-h) [62], and SRMR (≤0.05). Regarding measurement invariance, the sequence of steps appropriate for categorical variables was implemented [63], progressively imposing restrictions on the item parameters: we started with configural invariance and then introduced cumulative equality constraints on thresholds, factor loadings, intercepts, and, finally, residuals. Since how to assess fit in measurement invariance is still a matter of debate [64], McDonald's noncentrality index (Mc) [65] was used together with the CFI, given its statistical robustness [64]. After corroborating measurement invariance, latent mean differences were evaluated and estimated using d [66]; differences in variances were assessed through the standardized variance heterogeneity index, SVH [67], and differences in correlations using the standardized index q [66]. All SEM analyses were performed with the lavaan package [52,68].
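The approximate fit indices named above can be derived from the model and baseline chi-square statistics. The sketch below shows the conventional formulas for CFI, TLI, and RMSEA; the baseline chi-square value is hypothetical, since the article does not report it:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Approximate fit indices from model and baseline chi-square values.

    chi2_m/df_m: tested model; chi2_b/df_b: baseline (independence) model;
    n: sample size. Conventional cutoffs used in the text: CFI/TLI >= 0.95,
    RMSEA <= 0.05 (good), with values up to ~0.08 often called acceptable.
    """
    d_m = max(chi2_m - df_m, 0.0)  # model noncentrality
    d_b = max(chi2_b - df_b, 0.0)  # baseline noncentrality
    cfi = 1.0 - d_m / max(d_b, d_m) if max(d_b, d_m) > 0 else 1.0
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return round(cfi, 3), round(tli, 3), round(rmsea, 3)

# Model chi-square from the reduced model in the Results; the baseline
# value 2800.0 is a hypothetical illustration only:
print(fit_indices(chi2_m=118.391, df_m=26, chi2_b=2800.0, df_b=36, n=596))
```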
Reliability. Reliability was estimated at the item and score levels for each subscale. At the item level, it was calculated using the attenuation-corrected coefficient [69], given its lower bias and computational ease [70]; the minimum acceptable value is around 0.30 [71]. At the score level, (a) consistent with the nonparametric model, the MS coefficient was estimated [56], and (b) consistent with linear SEM modeling, the ω coefficient was estimated [72]. For comparison purposes, the α coefficient was also obtained.
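The article uses a specific attenuation-corrected item reliability coefficient [69]. As a simpler stand-in that illustrates item-level evaluation against the ~0.30 floor, here is a sketch of the familiar corrected item-total (item-rest) correlation; the simulated data and all names are illustrative:

```python
import numpy as np

def item_rest_correlations(data):
    """Corrected item-total (item-rest) correlation for each item.

    `data` is an (n_subjects x n_items) array of item scores. Each item is
    correlated with the sum of the remaining items, removing the spurious
    overlap of the item with itself; values >= ~0.30 are usually acceptable.
    """
    data = np.asarray(data, float)
    total = data.sum(axis=1)
    out = []
    for j in range(data.shape[1]):
        rest = total - data[:, j]  # total score without item j
        out.append(np.corrcoef(data[:, j], rest)[0, 1])
    return np.round(out, 3)

# Simulated example: three items loading on one latent variable plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
items = np.column_stack(
    [latent + rng.normal(scale=1.0, size=300) for _ in range(3)]
)
print(item_rest_correlations(items))
```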
Practical indices of the measurement error of both scores were also estimated, using the standard error of measurement (SEM) and the standard error of measurement of the difference (SEMD) [33]. For the latter, the formula SEMD = √(SEM_F1² + SEM_F2²) was used, where SEM_F1 and SEM_F2 correspond to the SEM of each compared score. To obtain critical values, the SEMD was multiplied by z values from the standardized normal curve (1.43, 1.64, 1.96, and 2.57), corresponding to the 0.15, 0.10, 0.05, and 0.01 levels, two-tailed, respectively. Finally, to estimate the statistical abnormality of the difference between the F1 and F2 scores obtained by a subject from a clinicometric approach, the standard deviation of the difference was calculated [73] as SD_D = SD·√(2 − 2r_XY), where SD is the standard deviation in the standardized score metric (for this study, t-scores, i.e., M = 50, SD = 10) and r_XY is the correlation between the two compared scores (usually Pearson's linear correlation).
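These formulas can be sketched directly. The example below applies them in the t-score metric (M = 50, SD = 10), plugging in the reliability and correlation values reported later in the Results (ω: F1 = 0.715, F2 = 0.803; r = 0.108); the combination is an illustration, not a reproduction of Table 3:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r_xx)."""
    return sd * math.sqrt(1.0 - reliability)

def sem_diff(sem_f1, sem_f2):
    """Standard error of measurement of the difference between two scores."""
    return math.sqrt(sem_f1 ** 2 + sem_f2 ** 2)

def sd_diff(sd, r_xy):
    """Standard deviation of the difference between two standardized
    scores: SD_D = SD * sqrt(2 - 2 * r_XY)."""
    return sd * math.sqrt(2.0 - 2.0 * r_xy)

sem_f1 = sem(10, 0.715)
sem_f2 = sem(10, 0.803)
semd = sem_diff(sem_f1, sem_f2)
print(round(semd, 2))                # SEMD of the F1-F2 difference
print(round(1.96 * semd, 2))         # critical difference at the 0.05 level
print(round(sd_diff(10, 0.108), 2))  # SD of the difference (abnormality)
```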

3. Results

3.1. Item Analysis

Results are presented in Table 1. The means of the F1 (Autonomy) items were generally higher than those of F2 (Control), indicating a degree of independence of the behaviors in the two factors, while apparently greater variability of behaviors was found in F2 (as seen in the standard deviations). Distributional non-normality was moderately variable (skewness and kurtosis). The correlations between items of scale F1 and items of F2 tended to be lower than the inter-item correlations within each scale, suggesting clear divergent and convergent relationships, respectively. For the correlations of the items with sex and age, Type I error was controlled by adjusting the nominal alpha with the Bonferroni method (in F1: 0.05/6 items = 0.008; in F2: 0.05/8 items = 0.006); none of the correlations with participants' sex or age was statistically significant. Although no cut-off points were established for the magnitude of the item–criterion correlations, the correlations obtained could be considered between trivial and low. Examination of the ordering of the response options (see Figure 1) showed a pattern of disorder, strongly concentrated in options 1 and 2 and options 4 and 5; a tendency toward less differentiation between thresholds 3 and 4 was also observed. Because these observations (disordering and weak differentiation) undermine the appropriate interpretation of the scale scores [46], category 1 was merged with 2, and category 3 merged with 4 and 5. The new scaling with three response options effectively maintained the ordering of the thresholds, except in Items 7 and 14 of the F2 (Control) scale. Without further modifications, these items were kept for the following analyses to evaluate their effect on the parameterization with factor analysis.
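The Bonferroni adjustment and the category-collapsing step above are simple to express in code. In the sketch below, the 5-to-3 mapping is only one plausible reading of the merge described in the text and should be treated as illustrative:

```python
def bonferroni_alpha(nominal_alpha, n_tests):
    """Bonferroni-adjusted per-test significance level."""
    return nominal_alpha / n_tests

def recode(responses, mapping):
    """Collapse response categories using an explicit old->new mapping."""
    return [mapping[r] for r in responses]

# Per-test alpha for the item-sex/age correlations of each subscale:
print(round(bonferroni_alpha(0.05, 6), 3))  # F1, 6 items -> 0.008
print(round(bonferroni_alpha(0.05, 8), 3))  # F2, 8 items -> 0.006

# Illustrative 5-to-3 collapse of the response scale:
collapse = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}
print(recode([1, 2, 3, 4, 5, 5], collapse))  # -> [1, 1, 2, 3, 3, 3]
```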

3.2. Nonparametric Dimensional Analysis

Scalability. The results of the Mokken modeling appear in Table 2. The scalability coefficients (H) were obtained in the total sample and the three semester groups. The decision to remove items was based on (a) a pattern of comparatively low H coefficients (<0.30) [74], (b) comparatively high standard errors, and (c) the moderate invariance of the two previous patterns across the compared groups (three levels of academic semester). Items 11 and 12 of F1 (Autonomy) and Items 4, 5, and 13 of F2 (Control) showed the lowest scalability, considering that the lower limits of their 95% confidence intervals were below 0.30. The final evaluation of the items appears in the last column of the scalability analysis in Table 2.
Monotonicity. With the reduced version of the SRQ-L, no item showed a violation count (#vi) greater than two, a statistically significant violation (#zsig), or a CRIT statistic above 40 [75]. Overall, the results indicate that the monotone homogeneity model is satisfactory.
Local independence. Finally, the local independence of the reduced version, evaluated with the indices W(1), W(2), and W(3) [59], indicated that the items of F1 (Autonomy) do not contain inter-item associations of significant magnitude (W(1) between 0.111 and 0.937; W(2) between 2.426 and 3.605; W(3) between 0.015 and 1.893). In F2 (Control), there was substantial inconsistency between the indices (concerning Item 2): W(1) between 0.009 and 0.618; W(2) between 3.322 and 6.714; W(3) between 0.004 and 2.355. Therefore, the final assessment of local independence was made with linear SEM modeling as a confirmatory option [76].

3.3. Structural Equations Modeling (SEM)-Based Parametric Analysis

Dimensionality. Modeling the dimensionality with all items showed poor results: WLSMV-χ2 = 1926.135 (df = 76, p < 0.001), CFI = 0.775, TLI = 0.730, RMSEA = 0.202 (90% CI = 0.195, 0.210), SRMR = 0.168. However, the factor loadings were acceptable; for example, in F1 they were between 0.456 (h2 = 0.208; Item 1) and 0.784 (h2 = 0.615; Item 12), and in F2 between 0.474 (h2 = 0.225; Item 4) and 0.770 (h2 = 0.593; Item 7); the covariation between the factors was high (r = 0.621, cov = 0.165, p < 0.01). This result was considered questionable, a conclusion strengthened by examination of the modification indices (MI), in which several strong potential factor–item respecifications not defined by the original structure were detected (MI > 200.0, ZEPC = 0.500), along with strong covariation of residuals between several items (e.g., MI > 100.0, ZEPC > 0.80). These possible respecifications were consistently corroborated by a method based on the statistical power and size of the MI [77]. These problems occurred most frequently in the items flagged as problematic by the Mokken nonparametric analysis (Items 11, 12, 5, and 13).
Taking into account the items selected in the Mokken scaling, the adjusted model showed better fit: WLSMV-χ2 = 118.391 (df = 26, p < 0.001), CFI = 0.966, TLI = 0.953, RMSEA = 0.077 (90% CI = 0.063, 0.092), SRMR = 0.090. A review of the MIs found only trivial factor–item coefficients toward nonhypothesized items; however, strongly correlated residuals were found between Items 1 and 3 (ZEPC = 0.563) and between Items 6 and 9 (ZEPC = 0.498). The procedure of Saris et al. [77] detected an inconsistency between the power-based criterion and the magnitude of the respecification for Items 1 and 3, so this respecification was discarded. There was consistency for the association of Items 6 and 9, but the content analysis did not reveal an explicit conceptual link, so it was also discarded. The final model appears in Table 3: the items maintained strong, statistically significant loadings on their respective factors (in F1, λ > 0.65, z > 9.0; in F2, λ > 0.62, z > 11.0). The linear covariation between the factors was 0.09, 95% CI = 0.01, 0.169 (cov = 0.040, p > 0.10), showing an essentially orthogonal association. The correlation at the direct score level was 0.108 (95% CI = 0.028, 0.187).
Tau-equivalent model. To verify whether the items were similar in their validity with respect to their construct, the tau-equivalent model (equal factor loadings) was tested, showing poor fit: WLSMV-χ2 = 1445.526 (df = 36, p < 0.001), CFI = 0.892, TLI = 0.882, RMSEA = 0.088 (90% CI = 0.063, 0.092), SRMR = 0.090. The scaled difference [78] between this model and the previously accepted congeneric model was statistically significant, Δχ2 = 48.65 (df = 7).

3.4. Reliability

The MS coefficient was 0.684 for F1 (Autonomy) and 0.772 for F2 (Control). Reliability estimated by the ω coefficient for categorical variables [72] was 0.715 (s.e. = 0.031, 95% CI = 0.621, 0.776, BCa bootstrap) for F1 and 0.803 (s.e. = 0.015, 95% CI = 0.767, 0.830, BCa bootstrap) for F2. In comparison, the alpha coefficient was substantially lower in F1 (α = 0.634, s.e. = 0.029, 95% CI = 0.572, 0.688, BCa bootstrap) and F2 (α = 0.774, s.e. = 0.016, 95% CI = 0.740, 0.804, BCa bootstrap). At the item level, reliability in F1 was acceptable but slightly low (close to 0.30) [58,59], while in F2 it was satisfactory (>0.33) but moderately variable. The clinicometric application parameters appear in Table 3.
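The gap between ω and α above follows from their formulas. The sketch below computes a linear-model ω from hypothetical standardized loadings in the range reported for F2 (the categorical ω used in the article is more involved), plus a sample-based Cronbach's α on a tiny illustrative score matrix:

```python
import numpy as np

def omega_from_loadings(loadings):
    """McDonald's omega for a unidimensional linear model with standardized
    loadings: omega = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    l = np.asarray(loadings, float)
    common = l.sum() ** 2
    return common / (common + (1.0 - l ** 2).sum())

def cronbach_alpha(data):
    """Cronbach's alpha from an (n_subjects x n_items) score matrix."""
    data = np.asarray(data, float)
    k = data.shape[1]
    cov = np.cov(data, rowvar=False)
    return (k / (k - 1.0)) * (1.0 - np.trace(cov) / cov.sum())

# Hypothetical standardized loadings in the range reported for F2 (> 0.62):
print(round(omega_from_loadings([0.65, 0.70, 0.62, 0.68]), 3))  # -> 0.758

# Alpha on a tiny illustrative score matrix (5 subjects x 3 items):
scores = [[1, 2, 2], [2, 3, 3], [3, 3, 4], [1, 1, 2], [2, 2, 3]]
print(round(cronbach_alpha(scores), 3))
```

Under tau-equivalence and uncorrelated errors the two coincide; when loadings are unequal, α underestimates ω, which is the pattern reported above.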

3.5. Invariance

Measurement invariance. After verifying the internally most acceptable model, we evaluated measurement invariance; Table 4 presents the estimates and fit differences. The groups chosen were the study semesters completed by the participants, grouped into three categories: Group 1 or initial (first two semesters), Group 2 or intermediate (3rd semester), and Group 3 or advanced (4th to 10th semesters). Age and gender were not included in this analysis because they showed correlations around zero with the SRQ-L items. The first stage was verifying the factorial structure (configural invariance), where the results were acceptable, although with slight variation in some indices (Mc and RMSEA). Although the standardized factor loadings of the model in each group are not reported, the magnitudes of their differences [79] were in the trivial (<0.10) or small (≥0.10) range, with none of moderate (≥0.20) or large (≥0.30) magnitude. The invariance of the thresholds was also satisfactory, and its difference from the configural model was slight. Minor differences were maintained in the comparisons of the remaining models, up to and including the invariance of the residuals. At this level of strict invariance, the fit indices were uniformly satisfactory, except for Mc.
Structural invariance. After ensuring measurement invariance, Table 5 shows the results of the structural comparisons between groups for the latent parameters (means, variances, and correlations). For the comparisons of g1 vs. g2 and g1 vs. g3, the reference group was g1; for the comparison between g2 and g3, the reference group was g3. Regarding the latent means, the standardized differences in the Autonomy construct (F1) were close to zero and statistically insignificant, except that the students of group g3 (fourth to tenth semesters) scored slightly higher (between 0.30 and 0.50) [66] than the third-semester students (group g2). In the Control construct (F2), the third-semester group (g2) showed lower scores than the students of the first two semesters (g1), but the size of the difference was negligible; the advanced group (g3) scored higher than group g2 (3rd semester), with a small effect size.
Regarding the degree of individual differences (latent variances) in the compared semesters, students from more advanced semesters (g3, 4th to 10th semester) showed more variability in Autonomy (F1), and third-semester students (g2) showed more variability in Control (F2). However, the comparisons with the SVH index were small (<0.56) [67], both between groups within each factor and between factors. Finally, the evaluation of the heterogeneity of the correlations [80] found that the relationship between the Autonomy and Control factors differed between the three groups (Q = 6.880, df = 2, p < 0.05); the magnitude of this difference, I2 [81], can be considered strong (I2 = 90.92%). As shown in Table 5, the covariation was higher in group g3 than in the rest.
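The I2 magnitude above is conventionally obtained from Cochran's Q as I2 = max(0, (Q − df)/Q) × 100. The sketch below uses hypothetical Q values rather than the article's, purely to show the mechanics:

```python
def i_squared(q, df):
    """Higgins' I^2 heterogeneity index (as a percentage) from Cochran's Q.

    q: Cochran's Q statistic; df: its degrees of freedom (number of
    compared groups minus one, here df = 2 for three groups).
    """
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical Q values, df = 2 (three groups compared):
print(round(i_squared(22.0, 2), 1))  # -> 90.9 (strong heterogeneity)
print(round(i_squared(1.5, 2), 1))   # -> 0.0 (no excess heterogeneity)
```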

4. Discussion

The purpose of the study was to evaluate the internal structure and psychometric properties of the SRQ-L [24] in a group of Peruvian university students. The motivation was to explore its psychometric functioning in greater depth, given that previous studies in moderately similar groups [29] did not seem to identify some of its properties and failed to evaluate other important ones, specifically measurement invariance, response scaling performance, divergent item–factor relationships, and correlated errors.
The analyses related to dimensionality yielded results that seem to optimize the validity of the short version, given that the content relationships of the SRQ-L were maximized by selecting the items with better scaling and discriminative power, that is, those that better differentiated the subject's position on the score continuum. The selected items, as a whole, helped to better define the score derived from them through more discriminative and representative items with better psychometric evidence. In the reduced version, the Autonomy construct now includes behaviors of following suggestions to optimize learning and actively participating in class to understand and improve skills. Control is represented by behaviors aimed at avoiding the disapproval of others, actively participating in class, and following suggestions to achieve a good grade and project a good image to others. Two of the items in this shorter version (Items 7 and 14 of the Control scale) showed notably different scaling compared to the rest of the items because the distance between their thresholds was minimal, and one of them still showed threshold disorder. However, these items maintained sufficiently high inter-item correlations, high scalability, and a pattern of statistical indicators indistinguishable from the rest of the items. Therefore, they were retained for the final version of the SRQ-L. Because previous studies did not examine this, we are unsure whether this result reflects sampling variation or represents a consistent feature of these two items; one implication is that further examination on a new sample is required to characterize it.
The instrument content derived from our analyses reduced the behaviors sampled from the original version (six items in the Autonomy factor and eight in the Control factor), which may limit the validity of the construct as initially designed. However, the statistical fit of the full version under nonparametric scaling was not satisfactory, a result usually related to multidimensionality or to excess variance irrelevant to the measured construct. The correlated errors between items detected by both procedures (nonparametric scaling and SEM) confirmed that the structure of the original SRQ-L contained inter-item relationships irrelevant to the measured construct, and even that some items could be better related to the other factor. Furthermore, these problems strongly influenced the correlation between the constructs: the high correlation (>0.50) initially estimated between them was drastically reduced, to about zero, in the shortened version.
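How correlated errors surface in practice can be sketched as residual correlations, i.e., observed correlations minus those implied by the factor model; unusually large residuals flag candidate correlated errors. The loadings, factor assignment, factor correlation, and observed matrix below are hypothetical illustrations, not SRQ-L estimates:

```python
def implied_corr(loadings, factor, phi):
    """Model-implied correlation matrix of a standardized two-factor model:
    off-diagonal entries are lambda_i * lambda_j * r(F_i, F_j)."""
    k = len(loadings)
    R = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i == j:
                R[i][j] = 1.0
            else:
                r_f = 1.0 if factor[i] == factor[j] else phi
                R[i][j] = loadings[i] * loadings[j] * r_f
    return R

loadings = [0.7, 0.6, 0.8, 0.7]   # hypothetical standardized loadings
factor   = [0, 0, 1, 1]           # items 1-2 on F1, items 3-4 on F2
phi      = 0.2                    # hypothetical factor correlation
observed = [[1.00, 0.42, 0.11, 0.35],
            [0.42, 1.00, 0.10, 0.08],
            [0.11, 0.10, 1.00, 0.56],
            [0.35, 0.08, 0.56, 1.00]]

R = implied_corr(loadings, factor, phi)
residuals = [[round(observed[i][j] - R[i][j], 3) for j in range(4)]
             for i in range(4)]
print(residuals)  # the item 1-item 4 pair stands out, hinting at a correlated error
```

In a fitted SEM, the same diagnostic role is played by modification indices and standardized residuals.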
Regarding reliability, the estimates showed magnitudes acceptable for describing the construct in a group of participants. They are, however, likely less precise if the abbreviated SRQ-L is used to describe an intervention's individual effects, for which higher levels of reliability (e.g., >0.89) are generally required. Interestingly, brevity does not appear to have seriously affected the reliability of the scores, and the small number of items did not bias reliability upward. This suggests that the content is not redundant or repetitive and still samples moderately different behaviors linked to its constructs. On the other hand, the clinical value indicators reported in Table 3 are helpful for research and especially for professional practice [32,33]. When multiplied by the standardized values of the normal curve, the statistically infrequent (abnormal) differences show an extreme range, especially at the 95% and 99% levels. Applied use may require the 90% level to determine abnormal differences reasonably, or the 85% level to reduce Type II error.
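The clinical value indicators of Table 3 rest on standard formulas for the standard error of measurement and the reliable difference [32,33]. A minimal sketch with hypothetical inputs (not the Table 3 values) follows:

```python
from statistics import NormalDist

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * (1 - reliability) ** 0.5

def reliable_difference(sd1, r1, sd2, r2, confidence=0.90):
    """Minimum difference between two scores that is reliable at the given
    two-tailed confidence level (a Payne-Jones-type formula)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    sem_d = (sem(sd1, r1) ** 2 + sem(sd2, r2) ** 2) ** 0.5
    return z * sem_d

# Hypothetical scales: SD = 3.0 with r = 0.85, and SD = 4.0 with r = 0.80
print(round(sem(3.0, 0.85), 3))                               # -> 1.162
print(round(reliable_difference(3.0, 0.85, 4.0, 0.80, 0.95), 2))  # -> 4.18
```

As the text notes, lowering the confidence level (e.g., to 85% or 90%) shrinks the required difference and thus reduces Type II error at the cost of more false positives.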
Regarding the study's implications, the reduced version can easily be incorporated into screening evaluations at certain points in the semester, leaving room for other instruments. It can also be very efficient for measuring change in self-regulatory skills, since these are sensitive to interventions in the educational context; indeed, some experimental studies have shown that self-regulation skills can be strengthened [4,82]. A reduced version has known advantages for evaluation practice. For example, it lessens examinee fatigue when a set of measures is applied at a fixed time or in a longitudinal design; it reduces the number of parameters to estimate when a short measure is used in structural equation modeling; the sampled behaviors maintain stronger statistical covariation; and replicability is more attainable. Although the original version is not excessively long, a parsimonious measure with good psychometric properties is likely to be better accepted. A final implication is that, since the instrument is used in various regions worldwide, our results provide working hypotheses on internal structure and group differences.
The reduced version also achieved invariance of its main psychometric parameters: the compared groups of students from different academic semesters showed homogeneity in autonomy and control for learning, with a slightly greater tendency among students in the more advanced semesters. This finding suggests that autonomy in learning supports learning achievement, well-being, and classroom adjustment during professional training [14]. In other contexts, promoting self-regulated learning among first-year university students is necessary to help them stay enrolled and avoid dropping out [83]. The difference could be explained by the period in which data collection was carried out (the middle of the academic semester); the results might have differed had the measurement been taken at the beginning.
One index that was inconsistent with the others (CFI, TLI, G-h) was Mc [65], a fact possibly associated with differences in the validity of the items with respect to their construct (factor loadings) or with the imbalance of the groups compared in a three-group context [84], as occurred in the present study. However, since the remaining fit indices and the comparisons between them were satisfactory, the invariance evaluations can also be considered valid, and it can be concluded that the construct is measured invariantly across the three semesters compared. The minor differences in structural invariance (e.g., means, variances, and latent correlations) are not tests of measurement invariance: once differences in measurement have been controlled for (i.e., measurement invariance holds), even slight structural differences reflect the latent attribute itself.
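The fit-difference comparisons behind these conclusions can be summarized as a decision rule. The sketch below applies the common heuristics of |ΔCFI| ≤ 0.01 and ΔRMSEA ≤ 0.015 (conventional cutoffs, not necessarily the exact criteria applied in this study) to the configural-versus-thresholds step, using fit values reported in Table 4:

```python
def invariance_step_ok(fit_constrained, fit_free,
                       max_dcfi=0.01, max_drmsea=0.015):
    """Heuristic check of one invariance step: the more constrained model
    should not fit meaningfully worse than the less constrained one.
    Cutoffs are common conventions, not the article's exact criteria."""
    dcfi = fit_free["cfi"] - fit_constrained["cfi"]        # drop in CFI
    drmsea = fit_constrained["rmsea"] - fit_free["rmsea"]  # rise in RMSEA
    return dcfi <= max_dcfi and drmsea <= max_drmsea

# Fit values from Table 4 of the article
configural = {"cfi": 0.973, "rmsea": 0.073}
thresholds = {"cfi": 0.974, "rmsea": 0.066}
print(invariance_step_ok(thresholds, configural))  # True: thresholds step holds
```

The same comparison is repeated at each step (loadings, intercepts, residuals), always against the preceding, less constrained model.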
Regarding the study's limitations, we did not evaluate the replicability of the results, since a similar-sized sample was not available at this phase of the study. Although replicability can be partially inferred from the invariance study [85], a complete study with a larger sample is required. Another limitation is the possible lack of representativeness of the participants, which prevents generalization of the results; the study did not establish the population representativeness of the sample, so its degree of representativeness remains inconclusive. Finally, it is unknown how scores from this new version relate to behaviors and constructs relevant to practical academic life, such as academic performance, student well-being, and other adaptive behaviors. These limitations suggest that, although this version of the SRQ-L achieves a better psychometric definition in the sample studied, additional studies are required to verify the results.

5. Conclusions

The Hispanic version of the SRQ-L applied to Peruvian students is a measure with satisfactory evidence of validity in its internal structure. The results showed that the two constructs were maintained, but without two items from the Autonomy scale and two from the Control scale. Because disordered and trivially spaced response thresholds were detected for some items, the item scaling had to be modified from the original five response options to three.
The items of this new version show moderate scalability (measured through Mokken scale analysis) and a satisfactory fit to the monotone homogeneity model. In structural equation modeling, the items were moderately or strongly associated with their constructs. The reliability of the scores was appropriate for interpreting the construct in groups, and the reliability of the items was acceptable. Measurement invariance was maintained across the semester groups examined. Structural invariance, in contrast, was not fulfilled with respect to the latent means and variances between some semesters; this result was interpreted as possibly reflecting natural differences among the groups, and the size of these differences was generally small. Finally, the association between the two constructs measured by the SRQ-L (autonomy and control) varied between the compared semester groups, indicating a possible evolution in the relationship between these constructs. Although these results point to a more complete and optimal psychometric characterization of the SRQ-L, possible intercultural variability should be evaluated, avoiding the induction of its validity from only one or a few studies [86,87,88].
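The scalability coefficient H underlying these conclusions can be illustrated, for the simplest (dichotomous) case, as the ratio of summed inter-item covariances to their maxima given the item marginals. This is a didactic sketch with hypothetical data; the study used the polytomous Mokken procedures implemented in R:

```python
def mokken_h(data):
    """Loevinger/Mokken scalability H for dichotomous (0/1) items:
    H = sum of inter-item covariances / sum of their maximum values
    given the item proportions-correct. H = 1 for a perfect Guttman
    scale; values around 0.3-0.5 indicate weak-to-moderate scales."""
    n = len(data)        # respondents
    k = len(data[0])     # items
    p = [sum(row[j] for row in data) / n for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            p_ij = sum(1 for row in data if row[i] and row[j]) / n
            num += p_ij - p[i] * p[j]             # observed covariance
            den += min(p[i], p[j]) - p[i] * p[j]  # maximum covariance
    return num / den

# Hypothetical responses of 10 examinees to 3 items, one Guttman error
data = ([(1, 1, 1)] * 3 + [(1, 1, 0)] * 3 + [(1, 0, 0)] * 2
        + [(0, 0, 0)] + [(0, 0, 1)])
print(round(mokken_h(data), 3))  # -> 0.444, a moderate scale
```

The polytomous H used in the article follows the same covariance-ratio logic, computed over the items' category thresholds.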

Author Contributions

Conceptualization, C.M.-S. and G.C.-V.; methodology, C.M.-S.; software, C.M.-S.; validation, C.M.-S., G.C.-V. and V.L.-F.; formal analysis, C.M.-S.; investigation, C.M.-S., G.C.-V. and V.L.-F.; resources, F.T.-T. and G.M.C.; data curation, C.M.-S.; writing—original draft preparation, C.M.-S., G.C.-V. and V.L.-F.; writing—review and editing, F.T.-T., G.C.-V., V.L.-F. and G.M.C.; visualization, G.C.-V., V.L.-F., F.T.-T. and G.M.C.; supervision, F.T.-T. and G.M.C.; project administration, F.T.-T. and G.M.C.; funding acquisition, F.T.-T. All authors have read and agreed to the published version of the manuscript.

Funding

This work presents some results of the HIM/2015/017/SSA.1207 research project “Effects of mindfulness training on psychological distress and quality of life of the family caregiver”, Main researcher: Filiberto Toledano-Toledano. The present research was funded with federal funds for health research. It was approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez, National Institute of Health. The funding agency had no control over the study’s design, the collection, analysis, and interpretation of the data, or the writing of the manuscript.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Commissions of Research, Ethics and Biosafety (Comisiones de Investigación, Ética y Bioseguridad), Hospital Infantil de México Federico Gómez National Institute of Health.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets used during the current study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors acknowledge the financial and technical support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in producing this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hardy, J.H.; Day, E.A.; Steele, L.M. Interrelationships Among Self-Regulated Learning Processes: Toward a Dynamic Process-Based Model of Self-Regulated Learning. J. Manag. 2019, 45, 3146–3177. [Google Scholar] [CrossRef]
  2. Arias Gallegos, W.L.; Galdos Rodríguez, D.; Ceballos Canaza, K.D. Estilos de enseñanza y autorregulación del aprendizaje en estudiantes de Educación de la Universidad Católica San Pablo. Rev. Estilos Aprendiz. 2018, 11, 83–107. [Google Scholar] [CrossRef]
  3. Alonso-Tapia, J.; Abello, D.M.; Panadero, E. Regulating emotions and learning motivation in higher education students. Int. J. Emot. Educ. 2020, 12, 73–89. [Google Scholar]
  4. Muñoz Jaramillo, L.C.; Palacios Bejarano, H.; Ramírez Velasquez, I.M. La autorregulación del aprendizaje mediante la estrategia de trabajo experimental con énfasis investigativo. In Desarrollo y Transformación Social Desde Escenarios Educativos; Giraldo Gutiérrez, F.L., Molina García, J.C., Córdoba Gómez, F.J., Eds.; Instituto Tecnológico Metropolitano: Medellín, Colombia, 2018; pp. 33–35. [Google Scholar]
  5. Sáez-Delgado, F.; Mella-Norambuena, J.; López-Angulo, Y.; Olea-González, C.; García-Vásquez, H.; Porter, B. Association Between Self-Regulation of Learning, Forced Labor Insertion, Technological Barriers, and Dropout Intention in Chile. Front. Educ. 2021, 6, 801865. [Google Scholar] [CrossRef]
  6. Stephanou, G.; Mpiontini, M.-H. Metacognitive knowledge and metacognitive regulation in self-regulatory learning style, and in its effects on performance expectation and subsequent performance across diverse school subjects. Psychology 2017, 8, 1941–1975. [Google Scholar] [CrossRef]
  7. Beekman, K.; Joosten-Ten Brinke, D.; Boshuizen, E. Sustainability of Developed Self-Regulation by Means of Formative Assessment among Young Adolescents: A Longitudinal Study. Front. Educ. 2021, 6, 746819. [Google Scholar] [CrossRef]
  8. Sitzmann, T.; Ely, K. A meta-analysis of self-regulated learning in work-related training and educational attainment: What we know and where we need to go. Psychol. Bull. 2011, 137, 421–442. [Google Scholar] [CrossRef]
  9. De la Fuente, J.; Zapata, L.; Martínez-Vicente, J.M.; Sander, P.; Cardelle-Elawar, M. The role of personal self-regulation and regulatory teaching to predict motivational-affective variables, achievement, and satisfaction: A structural model. Front. Psychol. 2015, 6, 399. [Google Scholar] [CrossRef]
  10. Robles, O.F.J.; Galicia, M.I.X.; Sánchez, V.A. Orientación temporal, autorregulación y aproximación al aprendizaje en el rendimiento académico en estudiantes universitarios. Rev. Elec. Psic. Izt. 2017, 20, 502–518. [Google Scholar]
  11. Zimmerman, B.J. Self-Regulated Learning: Theories, Measures, and Outcomes. In International Encyclopedia of the Social & Behavioral Sciences, 2nd ed.; Wright, J.D., Ed.; Elsevier: Oxford, UK, 2015; pp. 541–546. [Google Scholar]
  12. Doo, M.Y.; Bonk, C.J.; Shin, C.H.; Woo, B.-D. Structural relationships among self-regulation, transactional distance, and learning engagement in a large university class using flipped learning. Asia Pac. J. Educ. 2021, 41, 609–625. [Google Scholar] [CrossRef]
  13. Doo, M.Y.; Bonk, C.J. The effects of self-efficacy, self-regulation and social presence on learning engagement in a large university class using flipped Learning. J. Comput. Assist. Learn. 2020, 36, 997–1010. [Google Scholar] [CrossRef]
  14. Deci, E.L.; Ryan, R.M. Optimizing Students’ Motivation in the Era of Testing and Pressure: A Self-Determination Theory Perspective. In Building Autonomous Learners: Perspectives from Research and Practice Using Self-Determination Theory; Liu, W.C., Wang, J.C.K., Ryan, R.M., Eds.; Springer: Singapore, 2016; pp. 9–29. [Google Scholar]
  15. Chen, P.-Y.; Hwang, G.-J. An empirical examination of the effect of self-regulation and the Unified Theory of Acceptance and Use of Technology (UTAUT) factors on the online learning behavioural intention of college students. Asia Pac. J. Educ. 2019, 39, 79–95. [Google Scholar] [CrossRef]
  16. Duchatelet, D.; Donche, V. Fostering self-efficacy and self-regulation in higher education: A matter of autonomy support or academic motivation? High. Educ. Res. Dev. 2019, 38, 733–747. [Google Scholar] [CrossRef]
  17. Koh, J.; Farruggia, S.P.; Back, L.T.; Han, C.-w. Self-efficacy and academic success among diverse first-generation college students: The mediating role of self-regulation. Soc. Psychol. Educ. 2022. [Google Scholar] [CrossRef]
  18. von Keyserlingk, L.; Rubach, C.; Lee, H.R.; Eccles, J.S.; Heckhausen, J. College Students’ motivational beliefs and use of goal-oriented control strategies: Integrating two theories of motivated behavior. Motiv. Emot. 2022. [Google Scholar] [CrossRef]
  19. Jeno, L.M.; Danielsen, A.G.; Raaheim, A. A prospective investigation of students’ academic achievement and dropout in higher education: A Self-Determination Theory approach. Educ. Psychol. 2018, 38, 1163–1184. [Google Scholar] [CrossRef]
  20. Mujica, A.D.; Villalobos, M.V.P.; Gutierrez, A.B.B.; Fernandez-Castanon, A.C.; Gonzalez-Pienda, J.A. Affective and cognitive variables involved in structural prediction of university dropout. Psicothema 2019, 31, 429–436. [Google Scholar] [CrossRef]
  21. Bernardo, A.; Esteban, M.; Cervero, A.; Cerezo, R.; Herrero, F.J. The Influence of Self-Regulation Behaviors on University Students’ Intentions of Persistence. Front. Psychol. 2019, 10, 2284. [Google Scholar] [CrossRef]
  22. Xu, W.; Shen, Z.-Y.; Lin, S.-J.; Chen, J.-C. Improving the Behavioral Intention of Continuous Online Learning Among Learners in Higher Education During COVID-19. Front. Psychol. 2022, 13, 857709. [Google Scholar] [CrossRef]
  23. Deci, E.L. Intrinsic Motivation and Self-Determination. In Reference Module in Neuroscience and Biobehavioral Psychology; Stein, J., Ed.; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
  24. Williams, G.C.; Deci, E.L. Internalization of biopsychosocial values by medical students: A test of self-determination theory. J. Pers. Soc. Psychol. 1996, 70, 767–779. [Google Scholar] [CrossRef]
  25. Vives-Varela, T.; Durán-Cárdenas, C.; Varela-Ruíz, M.; Fortoul van der Goes, T. La autorregulación en el aprendizaje, la luz de un faro en el mar. Investig. Educ. Med. 2014, 3, 34–39. [Google Scholar] [CrossRef]
  26. Panadero, E.; Alonso-Tapia, J. Teorías de autorregulación educativa: Una comparación y reflexión teórica. Psicol. Educ. 2014, 20, 11–22. [Google Scholar] [CrossRef]
  27. Matos Fernández, L. Adaptación de dos cuestionarios de motivación: Autorregulación del Aprendizaje y Clima de Aprendizaje. Persona 2009, 12, 167–185. [Google Scholar] [CrossRef]
  28. Black, A.E.; Deci, E.L. The effects of instructors’ autonomy support and students’ autonomous motivation on learning organic chemistry: A self-determination theory perspective. Sci. Educ. 2000, 84, 740–756. [Google Scholar] [CrossRef]
  29. Chávez Ventura, G.M.; Merino Soto, C. Validez estructural de la escala de autorregulación del aprendizaje para estudiantes universitarios. Rev. Digit. Investig. Doc. Univ. 2016, 9, 65–76. [Google Scholar] [CrossRef]
  30. Elosua, P. Evaluación progresiva de la invarianza factorial entre las versiones original y adaptada de una escala de autoconcepto. Psicothema 2005, 17, 356–362. [Google Scholar]
  31. Banarjee, P.; Kumar, K. A Study on Self-Regulated Learning and Academic Achievement among the Science Graduate Students. Int. J. Multidisc. Approach Stud. 2014, 1, 329–342. [Google Scholar]
  32. Charter, R.A. Formulas for Reliable and Abnormal Differences in Raw Test Scores. Percept. Mot. Ski. 1996, 83, 1017–1018. [Google Scholar] [CrossRef]
  33. Dominguez Lara, S.A.; Merino Soto, C.; Navarro Loli, J.S. Estimación paramétrica de la confiabilidad y diferencias confiables. Rev. Med. Chile 2016, 144, 406–407. [Google Scholar] [CrossRef]
  34. Chung-chien Chang, K. Examining Learners’ Self-regulatory Behaviors and Their Task Engagement in Writing Revision. Bull. Educ. Psychol. 2017, 48, 449–467. [Google Scholar]
  35. Ho, F.L. Self-Determination Theory: The Roles of Emotion and Trait Mindfulness in Motivation; Linnaeus University: Växjö, Sweden, 2016. [Google Scholar]
  36. Jeno, L.M.; Grytnes, J.-A.; Vandvik, V. The effect of a mobile-application tool on biology students’ motivation and achievement in species identification: A Self-Determination Theory perspective. Comput. Educ. 2017, 107, 1–12. [Google Scholar] [CrossRef]
  37. Hall, N.R. Autonomy and the Student Experience in Introductory Physics; University of California: Los Angeles, CA, USA, 2013. [Google Scholar]
  38. Elosua, P.; Zumbo, B. Coeficientes de fiabilidad para escalas de respuesta ordenada. Psicothema 2008, 20, 896–901. [Google Scholar]
  39. Aquiahuatl Torres, E.C. Metodología de la Investigación Interdisciplinaria. Tomo I Investigación Monodisciplinaria; Self Published Ink.: Mexico City, Mexico, 2015. [Google Scholar]
  40. Ato, M.; López-García, J.J.; Benavente, A. Un sistema de clasificación de los diseños de investigación en psicología. An. Psicol. 2013, 29, 1038–1059. [Google Scholar] [CrossRef]
  41. León, O.G.; Montero, I. Sistema de clasificación del método en los informes de investigación en Psicología. Int. J. Clin. Health Psychol. 2005, 5, 115–127. [Google Scholar]
  42. American Psychological Association. Ethical Principles of Psychologists and Code of Conduct. Available online: https://www.apa.org/ethics/code (accessed on 18 May 2019).
  43. Chávez Ventura, G.; Santa Cruz Espinoza, H.; Grimaldo Muchotrigo, M.P. El consentimiento informado en las publicaciones latinoamericanas de Psicología. Av. Psicol. Latinoam. 2014, 32, 345–359. [Google Scholar] [CrossRef]
  44. Yu, Y.; Shiu, C.-S.; Yang, J.P.; Wang, M.; Simoni, J.M.; Chen, W.-t.; Cheng, J.; Zhao, H. Factor analyses of a social support scale using two methods. Qual. Life Res. 2015, 24, 787–794. [Google Scholar] [CrossRef] [PubMed]
  45. Böckenholt, U.; Meiser, T. Response style analysis with threshold and multi-process IRT models: A review and tutorial. Br. J. Math. Stat. Psychol. 2017, 70, 159–181. [Google Scholar] [CrossRef]
  46. Tennant, A. Disordered Thresholds: An example from the Functional Independence Measure. Rasch Meas. Trans. 2004, 2004, 945–948. [Google Scholar]
  47. Masters, G.N. A Rasch model for partial credit scoring. Psychometrika 1982, 47, 149–174. [Google Scholar] [CrossRef]
  48. Masters, G.N. The Analysis of Partial Credit Scoring. Appl. Meas. Educ. 1988, 1, 279–297. [Google Scholar] [CrossRef]
  49. Luo, G. The relationship between the Rating Scale and Partial Credit Models and the implication of disordered thresholds of the Rasch models for polytomous responses. J. Appl. Meas. 2005, 6, 443–455. [Google Scholar] [PubMed]
  50. Mair, P.; Hatzinger, R. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. J. Stat. Softw. 2007, 20, 1–20. [Google Scholar] [CrossRef]
  51. Hock, M. iana: GUI for Item Analysis. In R Package (Version 0.1); R Core Team: Vienna, Austria, 2017. [Google Scholar]
  52. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing; R Core Team: Vienna, Austria, 2021. [Google Scholar]
  53. Salas-Blas, E.; Merino-Soto, C.; Pérez-Amezcua, B.; Toledano-Toledano, F. Social Networks Addiction (SNA-6)—Short: Validity of Measurement in Mexican Youths. Front. Psychol. 2022, 12, 774847. [Google Scholar] [CrossRef] [PubMed]
  54. Merino-Soto, C.; Juárez-García, A.; Salinas-Escudero, G.; Toledano-Toledano, F. Item-Level Psychometric Analysis of the Psychosocial Processes at Work Scale (PROPSIT) in Workers. Int. J. Environ. Res. Public Health 2022, 19, 7972. [Google Scholar] [CrossRef] [PubMed]
  55. Mokken, R.J. A Theory and Procedure of Scale Analysis: With Applications in Political Research; De Gruyter Mouton: Berlin, Germany; New York, NY, USA, 2011. [Google Scholar]
  56. Molenaar, I.W.; Sijtsma, K. Mokken’s approach to reliability estimation extended to multicategory items. Kwant. Methoden 1988, 9, 115–126. [Google Scholar]
  57. Brodin, U.B. A ‘3 Step’ IRT Strategy for Evaluation of the Use of Sum Scores in Small Studies with Questionnaires Using Items with Ordered Response Levels; Karolinska Institutet: Stockholm, Sweden, 2014. [Google Scholar]
  58. Sijtsma, K.; van der Ark, L.A. A tutorial on how to do a Mokken scale analysis on your test and questionnaire data. Br. J. Math. Stat. Psychol. 2017, 70, 137–158. [Google Scholar] [CrossRef]
  59. Straat, J.H.; van der Ark, L.A.; Sijtsma, K. Using Conditional Association to Identify Locally Independent Item Sets. Methodology 2016, 12, 117–123. [Google Scholar] [CrossRef]
  60. van der Ark, L.A. New Developments in Mokken Scale Analysis in R. J. Stat. Softw. 2012, 48, 1–27. [Google Scholar] [CrossRef]
  61. Muthén, B. Goodness of Fit with Categorical and Other Non-Normal Variables. In Testing Structural Equation Models; Bollen, K.A., Long, J.S., Eds.; Sage Publications: Newbury Park, CA, USA, 1993; pp. 205–243. [Google Scholar]
  62. West, S.G.; Taylor, A.B.; Wu, W. Model fit and model selection in structural equation modeling. In The Handbook of Structural Equation Modeling; Hoyle, R.H., Ed.; The Guilford Press: New York, NY, USA, 2012; pp. 209–231. [Google Scholar]
  63. Wu, H.; Estabrook, R. Identification of Confirmatory Factor Analysis Models of Different Levels of Invariance for Ordered Categorical Outcomes. Psychometrika 2016, 81, 1014–1045. [Google Scholar] [CrossRef]
  64. Kang, Y.; McNeish, D.M.; Hancock, G.R. The role of measurement quality on practical guidelines for assessing measurement and structural invariance. Educ. Psychol. Meas. 2016, 76, 533–561. [Google Scholar] [CrossRef]
  65. McDonald, R.P. An index of goodness-of-fit based on noncentrality. J. Classif. 1989, 6, 97–103. [Google Scholar] [CrossRef]
  66. Cohen, J. A power primer. Psychol. Bull. 1992, 112, 155–159. [Google Scholar] [CrossRef] [PubMed]
  67. Ruscio, J.; Roche, B. Variance Heterogeneity in Published Psychological Research. Methodology 2012, 8, 1. [Google Scholar] [CrossRef]
  68. Rosseel, Y. lavaan: An R Package for Structural Equation Modeling. J. Stat. Softw. 2012, 48, 1–36. [Google Scholar] [CrossRef]
  69. Wanous, J.P.; Reichers, A.E. Estimating the Reliability of a Single-Item Measure. Psychol. Rep. 1996, 78, 631–634. [Google Scholar] [CrossRef]
  70. Zijlmans, E.A.O.; van der Ark, L.A.; Tijmstra, J.; Sijtsma, K. Methods for Estimating Item-Score Reliability. Appl. Psychol. Meas. 2018, 42, 553–570. [Google Scholar] [CrossRef] [PubMed]
  71. Zijlmans, E.A.O.; Tijmstra, J.; van der Ark, L.A.; Sijtsma, K. Item-Score Reliability in Empirical-Data Sets and Its Relationship With Other Item Indices. Educ. Psychol. Meas. 2018, 78, 998–1020. [Google Scholar] [CrossRef]
  72. Green, S.B.; Yang, Y. Reliability of Summed Item Scores Using Structural Equation Modeling: An Alternative to Coefficient Alpha. Psychometrika 2009, 74, 155–167. [Google Scholar] [CrossRef]
  73. Payne, R.W.; Jones, H.G. Statistics for the investigation of individual cases. J. Clin. Psychol. 1957, 13, 115–121. [Google Scholar] [CrossRef]
  74. Hemker, B.T.; Sijtsma, K.; Molenaar, I.W. Selection of Unidimensional Scales From a Multidimensional Item Bank in the Polytomous Mokken IRT Model. Appl. Psychol. Meas. 1995, 19, 337–352. [Google Scholar] [CrossRef]
  75. van Schuur, W.H. Mokken Scale Analysis: Between the Guttman Scale and Parametric Item Response Theory. Polit. Anal. 2003, 11, 139–163. [Google Scholar] [CrossRef]
  76. Douglas, J.; Kim, H.R.; Habing, B.; Gao, F. Investigating Local Dependence with Conditional Covariance Functions. J. Educ. Behav. Stat. 1998, 23, 129–151. [Google Scholar] [CrossRef]
  77. Saris, W.E.; Satorra, A.; van der Veld, W.M. Testing Structural Equation Models or Detection of Misspecifications? Struct. Equ. Modeling 2009, 16, 561–582. [Google Scholar] [CrossRef]
  78. Satorra, A.; Bentler, P.M. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic. Psychometrika 2010, 75, 243–248. [Google Scholar] [CrossRef] [PubMed]
  79. Yoon, M.; Millsap, R.E. Detecting Violations of Factorial Invariance Using Data-Based Specification Searches: A Monte Carlo Study. Struct. Equ. Modeling 2007, 14, 435–463. [Google Scholar] [CrossRef]
  80. Cochran, W.G. The Combination of Estimates from Different Experiments. Biometrics 1954, 10, 101–129. [Google Scholar] [CrossRef]
  81. Higgins, J.P.; Thompson, S.G. Quantifying heterogeneity in a meta-analysis. Stat. Med. 2002, 21, 1539–1558. [Google Scholar] [CrossRef] [PubMed]
  82. Díaz Mujica, A.; Pérez Villalobos, M.V.; González-Pienda, J.A.; Núñez Pérez, J.C. Impacto de un entrenamiento en aprendizaje autorregulado en estudiantes universitarios. Perf. Educ. 2017, 39, 87–104. [Google Scholar] [CrossRef]
  83. Cambridge-Williams, T.; Winsler, A.; Kitsantas, A.; Bernard, E. University 100 Orientation Courses and Living-Learning Communities Boost Academic Retention and Graduation via Enhanced Self-Efficacy and Self-Regulated Learning. J. Coll. Stud. Retent. 2013, 15, 243–268. [Google Scholar] [CrossRef]
  84. Kim, K.H.; Cramond, B.; Bandalos, D.L. The Latent Structure and Measurement Invariance of Scores on the Torrance Tests of Creative Thinking-Figural. Educ. Psychol. Meas. 2006, 66, 459–477. [Google Scholar] [CrossRef]
  85. Byrne, B.M. Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming, 1st ed.; Routledge: New York, NY, USA, 2011. [Google Scholar]
  86. Merino-Soto, C.; Calderón-De la Cruz, G.A. Validez de estudios peruanos sobre estrés y burnout. Rev. Peru. Med. Exp. Salud Publica 2018, 35, 353–354. [Google Scholar] [CrossRef] [PubMed]
  87. Merino-Soto, C.; Angulo-Ramos, M. Metric Studies of the Compliance Questionnaire on Rheumatology (CQR): A Case of Validity Induction? Reumatol. Clin. 2021. [Google Scholar] [CrossRef] [PubMed]
  88. Merino-Soto, C.; Angulo-Ramos, M. Validity induction: Comments on the study of Compliance Questionnaire for Rheumatology. Rev. Colomb. Reumatol. 2021, 28, 312–313. [Google Scholar] [CrossRef]
Figure 1. Distribution of the thresholds of the items in their dimensions.
Table 1. Descriptive and correlational characteristics of the SRQ-L items (n = 596).
| Item | M | SD | Skew. | Kurt. | au1 | au3 | au6 | au9 | au11 | au12 | au2 | au4 | au7 | au8 | au10 | au14 | au5 | au13 | Gender | Age |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Autonomy (F1) |
| au1 | 3.790 | 0.993 | −0.138 | −0.806 | 1.000 |  |  |  |  |  |  |  |  |  |  |  |  |  | −0.130 | 0.029 |
| au3 | 3.948 | 1.062 | −0.664 | −0.199 | 0.514 | 1.000 |  |  |  |  |  |  |  |  |  |  |  |  | −0.024 | −0.043 |
| au6 | 4.310 | 0.898 | −1.14 | 0.774 | 0.228 | 0.257 | 1.000 |  |  |  |  |  |  |  |  |  |  |  | −0.006 | −0.070 |
| au9 | 4.227 | 0.942 | −0.973 | 0.223 | 0.209 | 0.316 | 0.459 | 1.000 |  |  |  |  |  |  |  |  |  |  | 0.070 | −0.139 |
| au11 | 4.413 | 0.900 | −1.57 | 2.186 | 0.238 | 0.260 | 0.330 | 0.315 | 1.000 |  |  |  |  |  |  |  |  |  | 0.050 | 0.090 |
| au12 | 4.153 | 0.981 | −0.898 | 0.078 | 0.176 | 0.161 | 0.198 | 0.239 | 0.280 | 1.000 |  |  |  |  |  |  |  |  | −0.015 | 0.056 |
| Control (F2) |
| au2 | 1.621 | 0.977 | 1.447 | 1.314 | 0.022 | 0.017 | 0.031 | 0.055 | −0.053 | 0.081 | 1.000 |  |  |  |  |  |  |  | −0.109 | −0.071 |
| au4 | 2.894 | 1.371 | 0.034 | −1.05 | 0.144 | 0.284 | 0.152 | 0.168 | 0.103 | 0.169 | 0.266 | 1.000 |  |  |  |  |  |  | −0.008 | −0.056 |
| au7 | 2.148 | 1.296 | 0.779 | −0.551 | 0.020 | −0.001 | 0.125 | 0.063 | 0.002 | 0.081 | 0.453 | 0.299 | 1.000 |  |  |  |  |  | −0.044 | −0.098 |
| au8 | 2.384 | 1.300 | 0.541 | −0.691 | −0.075 | −0.073 | 0.170 | 0.152 | 0.011 | 0.149 | 0.362 | 0.139 | 0.479 | 1.000 |  |  |  |  | −0.069 | −0.131 |
| au10 | 2.122 | 1.227 | 0.751 | −0.436 | −0.041 | 0.044 | 0.173 | 0.156 | −0.079 | 0.099 | 0.409 | 0.386 | 0.477 | 0.414 | 1.000 |  |  |  | −0.026 | −0.080 |
| au14 | 2.297 | 1.406 | 0.670 | −0.854 | 0.060 | 0.056 | 0.136 | 0.086 | 0.040 | 0.131 | 0.377 | 0.269 | 0.632 | 0.361 | 0.462 | 1.000 |  |  | −0.075 | −0.075 |
| au5 | 3.930 | 1.085 | −0.682 | −0.216 | 0.117 | 0.128 | 0.512 | 0.403 | 0.252 | 0.177 | 0.146 | 0.185 | 0.208 | 0.246 | 0.232 | 0.159 | 1.000 |  | 0.011 | −0.128 |
| au13 | 3.968 | 1.096 | −0.852 | 0.054 | 0.148 | 0.140 | 0.229 | 0.276 | 0.322 | 0.633 | 0.105 | 0.136 | 0.112 | 0.219 | 0.154 | 0.181 | 0.200 | 1.000 | 0.013 | −0.002 |

Note. Skew.: skewness coefficient. Kurt.: kurtosis coefficient. au1 … au14: SRQ-L items. Inter-item values: Pearson correlation coefficients. Correlations with gender and age (point-biserial and Pearson correlation coefficients, respectively) are not statistically significant (see text).
Table 2. Results of the Mokken analysis (scalability and monotonicity) of the SRQ-L.
| Item | Initial, total (n = 596) H (s.e.) | Level 1 semester (n = 136) H (s.e.) | Level 2 semester (n = 309) H (s.e.) | Level 3 semester (n = 131) H (s.e.) | Final, total (n = 596) H (s.e.) | #vi | #zsig | CRIT |
|---|---|---|---|---|---|---|---|---|
| Autonomy (F1) |
| au1 | 0.339 (0.036) | 0.320 (0.068) | 0.365 (0.055) | 0.320 (0.066) | 0.413 (0.039) | 0 | 0 | 0 |
| au3 | 0.314 (0.034) | 0.339 (0.072) | 0.325 (0.049) | 0.273 (0.065) | 0.399 (0.037) | 0 | 0 | 0 |
| au6 | 0.345 (0.035) | 0.281 (0.078) | 0.370 (0.051) | 0.332 (0.055) | 0.395 (0.041) | 1 | 0 | 19 |
| au9 | 0.315 (0.033) | 0.332 (0.064) | 0.345 (0.044) | 0.239 (0.069) | 0.363 (0.040) | 0 | 0 | 0 |
| au11 | 0.288 (0.043) | 0.235 (0.084) | 0.300 (0.067) | 0.336 (0.061) |  |  |  |  |
| au12 | 0.187 (0.037) | 0.105 (0.069) | 0.205 (0.053) | 0.187 (0.073) |  |  |  |  |
| Total | 0.296 (0.029) | 0.269 (0.055) | 0.316 (0.044) | 0.279 (0.048) | 0.392 (0.033) |  |  |  |
| Control (F2) |
| au2 | 0.393 (0.035) | 0.322 (0.084) | 0.486 (0.042) | 0.262 (0.077) | 0.414 (0.040) | 2 | 0 | 17 |
| au4 | 0.258 (0.032) | 0.183 (0.067) | 0.254 (0.046) | 0.354 (0.057) |  |  |  |  |
| au7 | 0.451 (0.026) | 0.413 (0.056) | 0.478 (0.037) | 0.421 (0.053) | 0.522 (0.027) | 0 | 0 | 0 |
| au8 | 0.374 (0.027) | 0.359 (0.054) | 0.407 (0.038) | 0.315 (0.059) | 0.445 (0.032) | 1 | 0 | 9 |
| au10 | 0.439 (0.025) | 0.389 (0.057) | 0.474 (0.033) | 0.407 (0.058) | 0.465 (0.030) | 0 | 0 | 0 |
| au14 | 0.424 (0.026) | 0.340 (0.063) | 0.474 (0.034) | 0.400 (0.053) | 0.490 (0.028) | 1 | 0 | 5 |
| au5 | 0.290 (0.036) | 0.323 (0.059) | 0.327 (0.048) | 0.154 (0.089) |  |  |  |  |
| au13 | 0.257 (0.039) | 0.271 (0.067) | 0.280 (0.055) | 0.200 (0.084) |  |  |  |  |
| Total | 0.367 (0.022) | 0.326 (0.046) | 0.401 (0.029) | 0.329 (0.047) | 0.472 (0.026) |  |  |  |

Note. s.e.: standard error of H. #vi: number of monotonicity violations. #zsig: number of statistically significant violations. CRIT: combined and weighted count of #vi and #zsig. Monotonicity results refer to the final analysis in the total sample (n = 596).
Table 3. Confirmatory factor analysis and item reliability of the final version of the SRQ-L.
| Item | λF1 | λF2 | h2 | ritc | ritem |
|---|---|---|---|---|---|
| au1 | 0.674 |  | 0.454 | 0.436 | 0.300 |
| au3 | 0.689 |  | 0.474 | 0.503 | 0.400 |
| au6 | 0.739 |  | 0.546 | 0.412 | 0.268 |
| au9 | 0.704 |  | 0.496 | 0.431 | 0.294 |
| au2 |  | 0.635 | 0.403 | 0.514 | 0.341 |
| au7 |  | 0.872 | 0.760 | 0.696 | 0.626 |
| au8 |  | 0.666 | 0.444 | 0.519 | 0.348 |
| au10 |  | 0.723 | 0.522 | 0.576 | 0.429 |
| au14 |  | 0.819 | 0.671 | 0.609 | 0.479 |

Descriptive statistics

|  | F1 | F2 |
|---|---|---|
| M | 16.275 | 10.572 |
| SD | 2.754 | 4.642 |
| Skew. | −0.583 | 0.765 |
| Kurt. | 0.212 | −0.118 |

Clinicometric indicators

|  | SEM F1 | SEM F2 | SEMD | DED |
|---|---|---|---|---|
| Value | 1.470 | 2.060 | 6.943 | 13 |
| Z 85% | 2.116 | 2.966 | 10 | 19 |
| Z 90% | 2.418 | 3.389 | 11 | 22 |
| Z 95% | 2.882 | 4.038 | 14 | 26 |
| Z 99% | 3.787 | 5.307 | 18 | 34 |

Note. λ: factor loadings. h2: communality. ritem: reliability of the item. Skew.: skewness coefficient. Kurt.: kurtosis coefficient. ritc: corrected item–test correlation. F1: Autonomy. F2: Control. SEM: standard error of measurement of F1 and F2. SEMD: standard error of measurement of the difference (reliable difference). DED: abnormal deviation (abnormal difference).
Table 4. Measurement invariance parameters (semester groups: initial, intermediate, and advanced).
| Invariance Model | WLSMV-χ² (df) | CFI | TLI | G-h | Mc | RMSEA (90% CI) | ΔWLSMV-χ² (Δdf) | ΔCFI | ΔTLI | ΔG-h | ΔMc | ΔRMSEA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Configural | 159.314 (78) | 0.973 | 0.963 | 0.970 | 0.933 | 0.073 (0.056, 0.089) | | | | | | |
| Thresholds | 171.101 (92) | 0.974 | 0.969 | 0.971 | 0.935 | 0.066 (0.050, 0.081) | 17.79 (14) | 0.001 | 0.006 | 0.001 | 0.002 | −0.007 |
| Loadings, thresholds | 183.521 (106) | 0.975 | 0.974 | 0.971 | 0.936 | 0.061 (0.046, 0.075) | 13.21 (14) | 0.001 | 0.005 | 0.000 | 0.001 | −0.005 |
| Loadings, thresholds, intercepts | 206.634 (110) | 0.968 | 0.969 | 0.965 | 0.922 | 0.067 (0.053, 0.081) | 10.07 * (14) | −0.007 | −0.005 | −0.006 | −0.014 | 0.006 |
| Residuals | 200.111 (124) | 0.975 | 0.978 | 0.971 | 0.938 | 0.056 (0.041, 0.070) | 1.40 (14) | 0.007 | 0.009 | 0.006 | 0.016 | −0.011 |
Note. G-h: Gamma-hat index. Mc: McDonald index. CFI: Comparative Fit Index. TLI: Tucker-Lewis Index. RMSEA: Root Mean Square Error of Approximation. Δ: difference. * p < 0.05.
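The Δ columns in Table 4 are simple differences between each model and the previous, less constrained one. Invariance is commonly retained when |ΔCFI| ≤ 0.01 (the Cheung–Rensvold rule; whether the authors applied exactly this cutoff is an assumption here). A sketch using the CFI values from the table:

```python
# CFI values for the nested invariance models reported in Table 4.
models = [
    ("configural", 0.973),
    ("thresholds", 0.974),
    ("loadings, thresholds", 0.975),
    ("loadings, thresholds, intercepts", 0.968),
    ("residuals", 0.975),
]

def delta_cfi(seq):
    """CFI change of each model relative to the previous (less constrained) one."""
    return [(b[0], round(b[1] - a[1], 3)) for a, b in zip(seq, seq[1:])]

for name, d in delta_cfi(models):
    verdict = "invariance retained" if abs(d) <= 0.01 else "invariance rejected"
    print(f"{name}: dCFI = {d:+.3f} -> {verdict}")
```

All four steps stay within the |ΔCFI| ≤ 0.01 band, matching the Δ column in the table, with the intercepts step borderline at −0.007.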
Table 5. Structural invariance (means, variances, and latent covariances).
Comparison of latent means (row group minus column group; g1: 1st to 2nd semester, g2: 3rd semester, g3: 4th to 10th semester)

| | F1 (Autonomy) vs. g1 | F1 vs. g2 | F2 (Control) vs. g1 | F2 vs. g2 |
|---|---|---|---|---|
| g2, Δno-Z | −0.075 | | −0.323 * | |
| g2, ΔZ | −0.085 | | −0.317 * | |
| g3, Δno-Z | 0.029 | 0.116 | −0.011 | 0.322 * |
| g3, ΔZ | 0.028 | 0.322 | −0.017 | 0.459 * |

Variances and latent correlations

| | g1 | g2 | g3 | SVH |
|---|---|---|---|---|
| Variance F1 | 0.746 | 0.779 | 1.062 | 0.116 |
| Variance F2 | 0.401 | 1.041 | 0.462 | 0.321 |
| SVH | 0.300 | 0.143 | 0.393 | |
| r(F1, F2) | −0.075 | 0.097 | 0.244 | |
Note. Δno-Z: nonstandardized difference; ΔZ: standardized difference. g1, g2, g3: semester groups compared (initial, intermediate, and advanced, respectively). F1: Autonomy factor. F2: Control factor. SVH: standardized variance heterogeneity. Ng1 = 136. Ng2 = 309. Ng3 = 131. * p < 0.05.
Merino-Soto, C.; Chávez-Ventura, G.; López-Fernández, V.; Chans, G.M.; Toledano-Toledano, F. Learning Self-Regulation Questionnaire (SRQ-L): Psychometric and Measurement Invariance Evidence in Peruvian Undergraduate Students. Sustainability 2022, 14, 11239. https://doi.org/10.3390/su141811239
