1. Introduction
Advances in the knowledge society, derived from the use of technologies in every area of our lives, whether work, personal, or social, have brought about a transformation in the learning acquired at university. These teaching–learning processes are being modified by the need to incorporate the management of informational competence (IC) into the broad range of competences acquired by higher education students.
IC is defined as a complex competence that develops on the basis of four key dimensions: the search for information; the effective analysis, selection, and evaluation of information; the proper organisation and processing of information; and the strategies used to communicate information to society [
1,
2].
Along these lines, Gómez [
3] and Rubio [
4] explained that IC represents the set of knowledge, abilities, and skills that allow a person to discern when information is needed, select where to find it, identify mechanisms to evaluate selected information, and decide how to use and communicate it ethically and truthfully to society. Specifically, Pinto [
2] defined the elements that make up IC: the search for information, the evaluation of information, the processing of information, and the communication and use of information. The search for information is understood as the knowledge and use of documentary sources, analysing the terms used in the area of study in which information is being sought, including the strategies necessary to plan and carry out searches. The evaluation of information encompasses the strategies needed to recognise the truthfulness and validity of the resources found, identifying the author’s purpose in the text, the type of sources selected, how up to date the information is, and its authorship. Information processing refers to the tools used to organise the selected information, using database managers to map it out, recognising the structure of the text, and using bibliographic reference managers. Finally, the communication and use of information consists of the skills required to convey information to a specialised or lay audience, to do so in other languages, to edit texts so that they are properly displayed through presentation programmes, and to have knowledge of professional ethics and the legislation on the use of information for dissemination via the internet.
The definition provided by Pinto [
2] regarding the elements that make up IC is in line with the terminology presented by other authors [
5], who consider informational competences to be psychological configurations that must be implemented in an integrated manner: in a specific context and with specific content, all the necessary resources within the person’s grasp (knowledge, skills, etc.) are coordinated to successfully overcome difficulties or problems with a high degree of quality and effectiveness, learning through appropriate interaction with information regardless of its type, format, or medium. This entails being able to discern the truthfulness of the information, evaluate it, apply it, transform it, and re-transmit it.
Accordingly, the starting point for this study is to treat IC as a plural term, Informational Competences (IC), because they are configured on the basis of the four dimensions mentioned above.
The implicit learning associated with the management of IC in the university setting requires students to achieve certain skills and autonomy in the use of these competences in order to maintain the permanent communication with others in which they are immersed thanks to the benefits of using technology [
6]. In addition, university is considered the most favourable setting for dealing with IC because it is the space for conscientiously training future professionals who, in any sector or field, are going to transmit, apply, and create new knowledge in order to generate critical and analytical thinking among the collectives to which they transfer their knowledge [
7,
8].
Some of the research carried out in the field of IC, with regard to the self-perception of university students, highlights the need for specific training in the dimensions that make up these competences so that they are integrated into current curricula [
9]. The study carried out with students from the Universidad Internacional de Valencia on its degrees in Early Years and Primary Education shows that students at this university perceive themselves to be competent in the elements that make up IC: searching, browsing, and filtering information. They also consider that the training received has prepared them to obtain an advanced level of knowledge to discern the validity and truthfulness of the selected information [
10].
In short, the development of IC in the university setting will make it possible for students to build good knowledge [
11]. However, for this to happen with every guarantee of success, university teachers must be involved: they are key to mediating and stimulating this learning, providing the knowledge needed to use IC so that university students can analyse and evaluate the information they receive or seek, generate their own knowledge, and transmit it to others [
7].
In light of the above, the aim of this study is to ascertain the opinions of undergraduates studying Education Degrees at the Universidad de Córdoba regarding the knowledge they possess about informational competences for their professional development, extracting the defining elements of each of them through the practical application of exploratory factor analysis.
2. Materials and Methods
The research design used is non-experimental, descriptive, and correlational [
12], based on a survey that used an adaptation of the ALFIN-HUMASS questionnaire [
2] to measure students’ assessments of their knowledge of informational competences for academic progress. The instrument comprises a total of 69 assessment elements evaluated by means of a nine-point scale. These are grouped into the four established IC (see
Appendix A): search for information (19), evaluation of information (13), processing of information (11), and communication of knowledge (26). Since changes had been made to the original tool before it was administered to the reporting group, the team considered it advisable to confirm the internal consistency of the measures obtained; the overall Cronbach’s alpha value was 0.982, indicating a high reliability index for the tool [
13]. Content validity was sought by means of an item discrimination test to determine the ability of each element to distinguish high-, medium-, and low-scoring subjects on the construct measured by the instrument. Following [14], respondents were distributed into three groups (high, medium, and low) based on the sum total of the items, and Student’s t-test for independent samples was applied between the high and low groups. The results indicated that 100% of the elements present acceptable discriminatory power (p < 0.001 in all cases), which shows that the instrument has acceptable validity.
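As an illustrative sketch (not the authors’ actual analysis), the reliability and item-discrimination checks described above can be reproduced in Python with NumPy and SciPy. The response matrix here is simulated (537 respondents, 10 items on a nine-point scale) as a hypothetical stand-in for the real 69-item ALFIN-HUMASS data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in data: 537 respondents x 10 items on a nine-point scale
# (hypothetical; the real instrument has 69 items).
base = rng.integers(1, 10, size=(537, 1))
items = np.clip(base + rng.integers(-2, 3, size=(537, 10)), 1, 9).astype(float)

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

alpha = cronbach_alpha(items)

# Item discrimination: split respondents into thirds by total score and
# compare the high and low groups item by item with an independent t-test.
totals = items.sum(axis=1)
lo_cut, hi_cut = np.quantile(totals, [1 / 3, 2 / 3])
high, low = items[totals >= hi_cut], items[totals <= lo_cut]
pvals = [stats.ttest_ind(high[:, j], low[:, j]).pvalue for j in range(items.shape[1])]
```

On strongly intercorrelated simulated items of this kind, alpha comes out high and the high-versus-low differences are significant for every item, mirroring the pattern reported for the instrument.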
The sample was made up of a total of 537 undergraduates studying Education Degrees at the Universidad de Córdoba (Spain), 73.5% women and 26.5% men, with an average age of 20.73 years (SD = 2.499) and roughly equal participation across the four academic years that make up the different university degrees (1st = 23.9%; 2nd = 27.1%; 3rd = 24.7%; 4th = 24.4%).
The analytical technique used is exploratory factor analysis, a strategy that allows us to explore the dimensions, constructs, or latent variables underlying the observed variables; that is, it ascertains the extent to which a measurement tool adequately represents the latent constructs of interest, or different dimensions within the same construct [
15].
Technically, it is used to reduce a large number of phenomena, concepts, or variables to a smaller number of components or factors so that they are representative of those concepts. Blalock [
16] pointed out that, “if we have a large number of indexes or variables related to each other, these reciprocal relationships may be due to the presence of one or more underlying variables or factors related to those to varying degrees” (p. 417, [
16]). It is thus assumed that high intercorrelations within a group of variables are due to one or more general factors or variables [
17]. This technique is therefore aimed at identifying these factors and giving meaning to these sets of correlations.
Kerlinger and Lee [
18] argued that this procedure serves the cause of scientific parsimony, reducing the multiplicity of tests and measures to achieve greater simplicity. Hence, it indicates which tests or measures go together and the extent to which they do, reducing the number of variables scientists must deal with, helping them to locate and identify fundamental units or properties that underlie tests and measures. Ultimately, the method is designed to find what variables have in common.
Amérigo and Pescador [
19], citing Yela [
20], distinguished four phases in the development of this analytical strategy, as follows:
Preparation: this involves calculating the correlations between the variables by forming the correlation matrix.
Factorisation: this consists of extracting the number of factors.
Rotation: this involves determining the relationships between each factor and the study variables in order to know the content of each factor and promote its interpretation.
Interpretation: in this phase, we study the specific variables that saturate each factor, trying to determine why certain variables saturate in one factor and others in other factors, culminating in the labelling of each factor.
In the application of Exploratory Factor Analysis, we must take into account that the size of the reporting group should never be less than 50 cases, preferably larger than 100 and ideally between 300 and 400 cases [
21]. In this study, the group about which information has been compiled encompasses 537 people, ensuring the fulfilment of this condition.
3. Results
The first phase of this analysis aims to verify the suitability of the technique for the data collected. One of the requirements that must be met is that the variables are interrelated. In this regard, the matrix of correlations between all the items of the instrument should be studied in order to decide whether or not a factorisation process is appropriate. The existence of high correlations in this matrix allows us to infer interdependence between the variables, recommending the use of this technique. This is examined by means of various statistical procedures:
- 1.
Identifying the Determinant of the Correlation Matrix: this is an indicator of the degree of correlation between variables. As Bisquerra [22] and García, Gil, and Rodríguez [23] pointed out, a very low determinant implies the existence of variables with very high correlations with one another, indicating that the data may be suitable for factor analysis. In this study, the determinant obtained an extremely low value of 2.279 × 10−29, indicating the existence of high correlations between the variables, which makes it possible to apply this technique.
- 2.
Bartlett’s Test of Sphericity: this test is used to verify the hypothesis that the correlation matrix is an identity matrix, i.e., a matrix whose main diagonal is made up of ones (the correlation of each item with itself) with zeros elsewhere (null correlations). It consists of a chi-square estimate based on a transformation of the correlation matrix. The value obtained is 38,546.299, which, with p < 0.001, is significant at the 0.01 level, indicating that the correlation matrix is not an identity matrix and that the correlations are significant and probably high, since the test value is statistically large. This indicates that the data matrix is suitable for factor analysis.
- 3.
Anti-image Correlations: these indicate the strength of the relationship between two variables once the influence of the others has been removed. For factor analysis to be applicable, the coefficients of the anti-image correlation matrix must be low outside the main diagonal. A study of this matrix shows that the correlation coefficients are mostly less than 0.07, which means that factor analysis can be applied and the 69 items summarised in factors.
- 4.
Kaiser–Meyer–Olkin (KMO) Sample-Fit Measure: this test compares the magnitudes of the correlation coefficients observed in the correlation matrix with those observed in the anti-image correlation matrix. The value obtained was 0.970, a meritorious value [22] that supports the application of factor analysis, since the correlations between pairs of variables cannot be explained by the other variables.
- 5.
Measure of Sampling Adequacy (MSA): this index is reflected in the main diagonal of the anti-image correlation matrix. Low values in this diagonal discourage the use of factor analysis. In this case, the adequacy measures are high (all values greater than 0.938), which supports the use of this technique.
As we have seen in this first phase of the analysis, with tests carried out based on the correlation matrix, the data collected are acceptable for the application of this multivariate technique.
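The suitability diagnostics above (determinant, Bartlett’s test, overall KMO, and per-item MSA) can be sketched with NumPy and SciPy alone; the chi-square approximation for Bartlett’s test and the KMO/MSA formulas follow their standard textbook definitions, and the data are again simulated stand-ins rather than the study’s own matrix:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated stand-in data: 537 cases x 12 correlated items (hypothetical;
# the real matrix has 69 items).
n, p = 537, 12
x = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p)) + 0.5 * rng.normal(size=(n, p))
R = np.corrcoef(x, rowvar=False)

# 1. Determinant of the correlation matrix: a very low value signals
#    strong intercorrelation between the variables.
det_R = np.linalg.det(R)

# 2. Bartlett's test of sphericity: chi-square approximation for
#    H0 "R is an identity matrix".
chi2_stat = -(n - 1 - (2 * p + 5) / 6) * np.log(det_R)
df = p * (p - 1) / 2
p_value = stats.chi2.sf(chi2_stat, df)

# 3. KMO and per-item MSA from the anti-image (partial) correlations.
R_inv = np.linalg.inv(R)
d = np.sqrt(np.diag(R_inv))
partial = -R_inv / np.outer(d, d)          # partial correlation matrix
np.fill_diagonal(partial, 0.0)
R_off = R - np.eye(p)                      # raw correlations, diagonal zeroed
r2 = (R_off ** 2).sum(axis=0)
q2 = (partial ** 2).sum(axis=0)
msa = r2 / (r2 + q2)                       # per-item sampling adequacy
kmo = r2.sum() / (r2.sum() + q2.sum())     # overall KMO
```

A very small determinant, a near-zero Bartlett p-value, and KMO/MSA values well above the usual cut-offs would, as in the text, recommend proceeding with factorisation.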
The second phase aims to determine the minimum number of common factors capable of successfully replicating the observed correlations between the variables. DeCoster [
24] suggested that the principal component extraction method is best suited to reduce the initial dimensionality of the data to a smaller set of components that maximises the explanation of the total observed variance.
Since the main objective is to explain the common variance between the variables (communality) with the smallest number of factors (parsimony), we must first confirm, through the study of communalities, that the total variability of our matrix is explained by the extracted components. All communalities in this study are higher than 0.53, indicating that every variable is well explained by the extracted components; extracted values close to zero would indicate that a variable’s variability is not being explained.
Next, the explanation of variance must be maximised with the smallest number of factors, which determines the total number of factors to extract. Based on the rule of preserving those components with eigenvalues greater than unity, a total of 9 factors were obtained, with a total explained variance of 69.653%. According to García, Gil, and Rodríguez [
23], each factor should be made up of at least three variables since, with a smaller number, the factor may merely be a mathematical artefact of the correlations between variables. Until a good factor model is achieved, the set of variables that best represents the domain of the study must be gradually refined by eliminating minor factors (those that explain the least variance or have less general content). The percentage of total explained variance is a decisive criterion in deciding the number of factors to retain. For the social sciences, Hair et al. [
21] indicated a minimum of 60% as a satisfactory threshold for the extraction of factors, a criterion that is fulfilled in this work. Up to factor number eight, at least three variables make up each factor, presenting a high level of correlation, with an explained variance percentage of 66.446% (see
Table 1).
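A minimal sketch of this extraction step, assuming principal-component extraction on simulated stand-in data: eigendecomposition of the correlation matrix, the Kaiser eigenvalue-greater-than-one rule, the percentage of explained variance, and the communalities:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated stand-in data: 537 cases x 12 items generated from 3 latent factors.
n, p, k_true = 537, 12, 3
x = rng.normal(size=(n, k_true)) @ rng.normal(size=(k_true, p)) \
    + 0.6 * rng.normal(size=(n, p))
R = np.corrcoef(x, rowvar=False)

# Principal-component extraction: eigendecomposition of the correlation matrix,
# with eigenvalues sorted in descending order.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser rule: retain the components whose eigenvalue exceeds unity.
k = int((eigvals > 1).sum())
explained_pct = 100 * eigvals[:k].sum() / p

# Loadings of the retained components and the resulting communalities
# (share of each item's variance reproduced by the retained components).
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
communalities = (loadings ** 2).sum(axis=1)
```

With clean simulated structure the Kaiser rule recovers roughly the number of generating factors; in real data, as the text notes, it is combined with the explained-variance threshold and the variables-per-factor criterion.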
Looking at the correlation matrix between the different extracted components (see
Table 2), we see that the correlation is high between all the factors (greater than 0.5), with the exception of the correlations between factor 8 and factors 1, 2, 3, and 7, which are medium (between 0.4 and 0.5) [
25,
26]. These data indicate linear associations between the different factors, giving meaning to the application of factor analysis.
In a third stage, in order to simplify the interpretation of each factor, the extracted factors were rotated using the Varimax method, recommended by Kim and Mueller [
27], which performs an orthogonal rotation that maximises the variance of the squared loadings within each factor while keeping the factors uncorrelated (zero correlation between the factors). By determining the relationships between each factor and the study variables, we can establish the content of each factor and facilitate its interpretation.
Because the analysis was carried out on the basis of considering each item in the questionnaire as a variable, the rotated component matrix (see
Table 3) shows the variables ordered by their loading on each factor, together with the high internal consistency value for each one. We have taken into account the criterion, expressed by Stevens (2002) as cited in Field [
28], for the magnitude of factor loadings to be greater than 0.40 to obtain a satisfactory result.
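The Varimax rotation and the 0.40 loading threshold can likewise be sketched. The `varimax` function below implements the standard SVD-based algorithm, applied here to hypothetical loadings with a simple structure that has been hidden by a random orthogonal mixing:

```python
import numpy as np

def varimax(L, tol=1e-8, max_iter=200):
    """Orthogonal Varimax rotation of a loading matrix (standard SVD algorithm)."""
    p, k = L.shape
    rot = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ rot
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        rot = u @ vt
        if s.sum() - var_old < tol:   # stop once the criterion stops improving
            break
        var_old = s.sum()
    return L @ rot

rng = np.random.default_rng(3)
# Hypothetical unrotated loadings: 12 items, 3 components, each item loading
# ~0.8 on one component, mixed by a random orthogonal matrix so the simple
# structure is hidden before rotation.
simple = np.zeros((12, 3))
simple[np.arange(12), np.arange(12) % 3] = 0.8
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
L = (simple + 0.05 * rng.normal(size=(12, 3))) @ q

L_rot = varimax(L)
# Suppress loadings below |0.40| for display, as in the rotated component table.
display = np.where(np.abs(L_rot) >= 0.40, np.round(L_rot, 2), 0.0)
```

Because the rotation is orthogonal, each item’s communality is unchanged; only the distribution of loadings across factors is simplified, which is what makes the rotated table interpretable.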
The last phase of this technique is to label each of the factors based on common explanatory criteria for the elements that saturate in each of them and to explain their contents. The factors obtained, together with their names and their contribution to the explanation of the model, are given in
Table 4.
The first factor, called Application of rules to preserve copyright, explains 18.8% of the variance and consists of 17 elements that refer to the use of the guidelines that must be followed to accredit the words or ideas of other authors cited in the discourse, taking into account responsibility and ethics in the processing of information. The second factor, called Assessment of the information found and integration into discourse, consists of 14 elements relating to the exhaustive evaluation of the information selected for later use in the preparation of a paper and explains 10.8% of the total variance. The third factor, called Checking the reliability of sources, explains 8.3% of the variance of the questionnaire and is made up of six elements related to checking the quality of the sources consulted. The fourth factor, called Selection of sources, is made up of six elements that explain 7.4% of the variance and alludes to the knowledge that must be taken into account to select sources, both primary and secondary, when searching for information. The fifth factor, called Analysis of information, explains 6.2% of the variance and is made up of six elements related to the identification and verification that must be carried out when searching for information, with a view to documenting the subject under study. The sixth factor, called Preparation for information search, explains 5.8% of the variance and integrates the strategies used prior to searching for information on a topic. The seventh factor, designated Organisation and dissemination of information, explains 4.8% of the variance and refers to planning the information to be disseminated and the ways of doing this. Finally, the eighth factor, called Use of Advanced Search, explains 4% of the total variance and includes the filters and operators that help locate information efficiently, thereby saving time and effort for the author of the paper.
4. Discussion
By analysing data in order to ascertain the IC knowledge possessed by undergraduates studying Education Degrees at the Universidad de Córdoba (Spain), we were able to bring together and regroup the four starting dimensions that composed IC—the search for information, the evaluation of information, the processing of information, and the communication and use of information [
2]—into eight explanatory factors: application of rules to preserve copyright, assessment of information found and integration into discourse, checking the reliability of sources, selection of sources, analysis of information, preparation for information search, organisation and dissemination of information, and use of advanced search. This new classification may be due to the fact that, as Carvajal et al. [
25] indicated, the evaluation of information is considered an essential condition for the access to or search for information, as well as for its processing and communication, this evaluation being understood as the necessary judgement made of the information contained in the sources accessed, alluding to the truthfulness, reliability, validity, relevance, currency, pertinence, authenticity, and authorship of the information consulted.
The factors obtained here correspond directly to the definition of IC developed by CRUE-TIC & REBIUM [
1] and assumed by Pinto [
2]. The knowledge and skills necessary for an individual to be able to recognise when he or she needs information correspond to factor 6 (preparation for information search), factor 1 (application of rules to preserve copyright), factor 2 (assessment of information and integration into discourse), and factor 3 (checking the reliability of sources); the location of information corresponds to factor 4 (selection of sources) and factor 8 (use of advanced search); assessment of the adequacy of information corresponds to factor 5 (analysis of information); and the appropriate use of information corresponds to factor 7 (organisation and dissemination of information).
Similarly, Gómez [
3], Rubio [
4], and De Pablos [
26] agreed with the previous classification, stressing that the essential skills that a university student must deploy in the management of IC pertain to how to find the information needed, analyse and select information efficiently, organise information appropriately, and know how to use and communicate information based on ethical and legal aspects to build truthful knowledge to be transmitted to others.
5. Conclusions
The usefulness of exploratory factor analysis has been indisputable, not only because of its psychometric value in estimating the construct validity of different measurement instruments, but also as a means of estimating models of underlying variables with robust theoretical grounding that explain different objects of measurement in the social sciences.
In this case, and for subsequent applications of this analytical strategy, it is important to take into consideration a number of suggestions to ensure its success: firstly, the nature of the variables studied and the scalar response format of the items that measure these variables must be considered. Aspects related to perceptions, attitudes, or personality do not present problems in terms of interpretation [
29]. Similarly, the use of few response categories can pose a problem of attenuation that leads to distorted estimates of factor loadings [
30]. Furthermore, ensuring an acceptable sample size to run the analysis is important. In addition, it is necessary to have acceptable values for suitability indices that recommend the use of this analysis strategy, such as KMO and MSA [
23]. It is also essential to justify the factor extraction method in light of the nature of the data and the topic studied, the criteria used to determine the number of factors retained (number of variables per factor, percentage of variance explained by the resulting model, and the saturation levels of variables on their reference factors), and the rotation method.
In short, this has been the ideal strategy for the goal of this study: to set up a theoretical explanatory model of the IC that students consider relevant to their academic progress during their university studies.