Article
Peer-Review Record

Effects of Faking on the Predictive Validity of a Quasi-Ipsative Forced-Choice Personality Inventory: Implications for Sustainable Personnel Selection

Sustainability 2021, 13(8), 4398; https://doi.org/10.3390/su13084398
by Alexandra Martínez *, Silvia Moscoso and Mario Lado
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 17 March 2021 / Revised: 2 April 2021 / Accepted: 7 April 2021 / Published: 15 April 2021

Round 1

Reviewer 1 Report

See attached Word document

Comments for author File: Comments.pdf

Author Response

Sustainability – MS. 1167154

 

Point-by-Point Response to Reviewer 1

 

We thank Reviewer 1 very much for the suggestions. His/her input helped us not only to address pending issues, but also to set our manuscript in the right direction. Below, we present our point-by-point response to your comments, suggestions, and concerns.

 

  1. This manuscript provides useful information about the effects of faking on validity of quasi-ipsative forced-choice personality measures of personality traits for prediction of academic performance. It reflects the trend to change measurement of such traits from single-stimulus to forced-choice instruments.  Because I believe that most practitioners and perhaps most researchers are more familiar with the single stimulus instruments than they are with forced-choice instruments, my main criticism of this paper is that it does not provide enough information about aspects of the measurement of Big Five traits using quasi-ipsative techniques. If more information were provided, I would be happy to endorse these changes.

 

Thank you for this suggestion. We have included the following information to extend the description and clarify the characteristics of FC personality measures in Section 1.3 (Forced-choice inventories and control of the effects of faking) of the manuscript.

 

For the normative FC questionnaires, we included this paragraph (lines 147-152):

 

“Therefore, the normative scores allow inter-individual comparisons on each personality factor assessed; that is, the scores of an individual are statistically dependent on those of other individuals in the population and independent of the other scores of the assessed individual. An example of a normative FC item would be: Please check the answer that best indicates how you behave: In social meetings, usually: (a) other people introduce you; (b) you introduce yourself to others. The Myers-Briggs Type Indicator (MBTI) [67] is an example of a widely used normative FC personality test.”

 

With respect to the ipsative scores, we included the following description (lines 156-168):

 

“…the score for each dimension depends on the individual's scores on the other rated dimensions. Consequently, the sum of the scores obtained for each individual is a constant. Ipsative scores allow us to compare an individual across different personality factors (intra-individual comparisons); that is, the results are dependent at the individual level but independent of the scores of other subjects in the test. Therefore, it should be noted that they only show the relative importance of each factor. It is precisely for this reason that the use of this type of measure is not recommended in contexts in which it is necessary to make a comparison or ranking of all participants, such as selection processes, because the information provided by this measure would be biased [68]. In this category, we can find several personality tests widely applied in professional practice, such as the Occupational Personality Questionnaire (OPQ) [69], the Edwards Personal Preferences Schedule (EPPS) [70], or the Description en Cinq Dimensions (D5D) [71].”

 

Finally, for the quasi-ipsative FC questionnaires, we made the following change (lines 176-203):

 

“Likewise, Hicks [65] and Meade [66] indicate that quasi-ipsative scores are defined by the following conditions: (a) the results for each factor vary between individuals over a certain range of scores, (b) the scores do not add up to the same constant for all people, and (c) increasing the score on one factor does not necessarily produce a decrease in the score on other factors. An example of an item from this type of FC inventory would be the following: In each item, mark the phrase that best describes you and the phrase that least describes you. “I am a person (a) who is open-minded; (b) who is a perfectionist; (c) who does not usually lose their temper.” Tests such as the Gordon Personal Profile-Inventory (GPPI-I) [73], the IPIP-MFC [68], or the more recent QI5F-Tri by Salgado [74] are examples of quasi-ipsative FC inventories.

Furthermore, two types of quasi-ipsative FC scores can be distinguished: (a) algebraically dependent quasi-ipsative FC, when a metric dependence exists between the scores and, therefore, there is some degree of ipsativization of the scores; and (b) non-algebraically dependent quasi-ipsative FC, when the score for each personality factor is not influenced by the scores on other personality variables [72].

In summary, quasi-ipsative questionnaires share properties with normative and pure ipsative measures. This uniqueness means that the scores obtained with this forced-choice format can be analyzed at the intra- and inter-individual levels [62]. In other words, the scores allow us to know the individual differences of each subject and, at the same time, provide information about his/her differences with respect to a reference group, which makes these measures more appropriate than the ipsative ones for research from a statistical point of view.”
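The defining scoring properties described above (purely ipsative scores sum to the same constant for every respondent; quasi-ipsative scores do not) can be checked mechanically. Below is a minimal illustrative sketch with entirely hypothetical score matrices — it does not reproduce the QI5F-Tri's actual scoring rules:

```python
def row_sums_constant(score_matrix, tol=1e-9):
    """True if every individual's scores sum to the same constant,
    the defining property of purely ipsative scores.
    Rows = individuals, columns = personality factors."""
    sums = [sum(row) for row in score_matrix]
    return max(sums) - min(sums) < tol

# Purely ipsative: each respondent distributes the same fixed total (here 6),
# so raising one factor necessarily lowers another.
ipsative = [[3, 1, 2],
            [2, 2, 2],
            [1, 3, 2]]

# Quasi-ipsative: totals differ between people and scores vary over a range,
# matching conditions (a)-(c) of Hicks [65] and Meade [66] quoted above.
quasi = [[3, 1, 2],
         [4, 2, 2],
         [1, 3, 1]]

print(row_sums_constant(ipsative))  # True
print(row_sums_constant(quasi))     # False
```

This is also why purely ipsative scores cannot rank applicants against each other: the constant row total removes inter-individual information, while quasi-ipsative totals retain some of it.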

 

  2. The main information missing from the presentation are correlation matrices involving all measures, predictors and criteria. I believe that three tables are required: 1) a table of correlations from the within-subjects sample under both honest response conditions and under faking response conditions, including correlations between paired honest and faking condition measures of the predictors. This table will be huge, perhaps dedicated to a supplementary document, but predictor-predictor correlations, criterion-criterion correlations and predictor-criterion correlations in the same condition and between honest and faking conditions will be valuable information for those contemplating use of forced-choice instruments. 2) a table of predictor-predictor, criterion-criterion, and predictor-criterion correlations from the between-subjects sample, and, if space allows, 3) a table of correlations for the total sample. I believe that researchers and practitioners moving toward forced-choice measures need this information to guide their transition to these newer ways of measuring traits.

 

We really appreciate this important suggestion and have proceeded to include the intercorrelations of all variables (predictors and criteria). We have included this information as supplemental data (appendices). Specifically, we present three tables, one for each sample examined in the study. Appendix A includes the correlation matrix for the total sample (line 751); Appendix B presents the correlation matrix for the between-subject design sample (line 754); and the correlation matrix for the within-subject sample appears in Appendix C (line 757).
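For readers who want to assemble such appendix tables themselves, the full predictor-criterion correlation matrix is a single call once the data are in a respondents-by-variables array. A minimal sketch with simulated placeholder data (the variable layout is our assumption, not the study's actual dataset):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
# Hypothetical data: 100 respondents x 6 variables
# (e.g., five personality factors plus one criterion such as GPA).
data = rng.normal(size=(100, 6))

# rowvar=False tells corrcoef that columns are variables, rows observations.
corr = np.corrcoef(data, rowvar=False)

print(corr.shape)  # (6, 6)
```

The resulting symmetric matrix (unit diagonal, predictor-predictor, criterion-criterion, and predictor-criterion blocks) is exactly the structure the reviewer requests for each sample.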

 

  3. The honest with faking condition trait correlations will address the issue of the effects of faking on measurement of traits. Everyone knows that faking affects both the central tendency and variability of single-stimulus measures of traits. It is also known that the faking condition measurements of traits do not correlate as highly as possible with honest condition measures when summated score measures are used. The data of this MS suggest that criterion-related validity of forced-choice measures under faking conditions is not the same as criterion-related validity under honest conditions. This suggests that forced-choice faking condition measures of traits do not correlate with honest condition measures as highly as they could, but we need the correlations of honest condition measures with faking condition measures to be sure (new Table 1). Arguments for the use of forced-choice measures have been that such measures are less susceptible to the effects of faking than are single-stimulus measures. Ideally, measures of traits should be the same regardless of whether they're obtained in faking conditions or in honest conditions. I am aware of only one study (Hendy, Krammer, Schermer & Birderman, 2020) that has claimed to have found this.

 

Thank you again for this comment. As we indicated in the previous question, we have incorporated in Appendix C (line 757) the correlation matrix for the within-subject sample, which includes the correlations between the predictors (Big Five) in the honest and faking conditions. Therefore, this information will now be available to readers who wish to consult it. In relation to the findings, the results show a positive correlation between all the personality factors across both conditions, which suggests that the trait measures are equivalent. These results are supported by those obtained by Martínez, Moscoso and Lado (2021), who analyzed the effect of faking on the measurement invariance of the same quasi-ipsative FC personality inventory used in this study. Their results indicated that faking does not affect measurement equivalence; there is equivalence between the same personality factors when they are measured under different response conditions.

 

Reference:

Martínez, A., Moscoso, S., & Lado, M. (2021). Faking Effects on the Factor Structure of a Quasi-Ipsative Forced-Choice Personality Inventory. Journal of Work and Organizational Psychology, 37(1), 1-10. https://doi.org/10.5093/jwop2021a7

 

  4. Finally, although these data do not seem to be available, it would have been great to have seen how highly the forced-choice measures of the Big Five correlated with single-stimulus measures in the samples used here.

 

We appreciate this suggestion; however, a single-stimulus measure was not applied in this study, so it was not possible to obtain the correlations between both types of personality measures. Nevertheless, we will consider your recommendation for future research on this topic.

 

  5. The comparison of honest condition validities with faking condition validities is mentioned by the authors. Yet, I can find no statistical tests comparing those validities. Such tests exist. I think they should be used as part of the comparison of validities.

 

Thank you very much for this important suggestion. Following your recommendation, we have calculated the z statistic comparing the validity results between the honest and faking conditions and have included the results in the text. The modified text follows (lines 469-483):

“Specifically, conscientiousness is the only factor that predicts almost all performance criteria evaluated under faking conditions. It shows a correlation of r = .11 (z = 1.105, p > .10) with GPA and of r = .16 (z = 4.389, p < .01) with the CDTE scale. Moreover, a correlation of r = -.15 (z = -3.017, p < .01) with the CDAN scale was obtained; therefore, conscientiousness also predicts people's propensity to commit negative academic behaviors under faking conditions. However, the results showed statistically significant differences in the validity of conscientiousness for predicting task performance and contextual performance between both types of response. The only criterion with which the correlation obtained is weak is contextual performance (CDCE). It is true, however, that even in honest conditions, the correlations found between conscientiousness and this criterion were lower than for the other criteria examined. For emotional stability, the results show that it is a robust predictor of task performance (r = -.11; z = -0.421, p > .10) and contextual performance (r = -.15; z = -0.421, p > .10) under faking response conditions. In this case, we found no statistically significant differences between the predictive validities under both types of response conditions.”
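The response does not quote the formula behind the reported z statistics. A standard choice for comparing two validity coefficients obtained in independent samples (e.g., a between-subjects honest vs. faking comparison) is the Fisher r-to-z test; a minimal sketch follows. This is our assumption about the test family — if the same participants produced both correlations, a dependent-correlations test (e.g., Steiger's) would be needed instead:

```python
import math

def independent_r_z_test(r1, n1, r2, n2):
    """Compare two correlations from independent samples via
    Fisher's r-to-z transform; returns (z, two-tailed p)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference z1 - z2
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))         # two-tailed standard-normal p
    return z, p

# Hypothetical example: r = .30 (n = 200) vs. r = .10 (n = 200)
z, p = independent_r_z_test(0.30, 200, 0.10, 200)
print(round(z, 2), round(p, 3))
```

The sample sizes shown are illustrative only; the study's actual honest/faking subsample sizes would go in `n1` and `n2`.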

 

  6. The authors compared validities for predicting the GPA criterion with those predicting the rating scale criteria. I wonder how much emphasis should be put on the rating scale measures. Would a rating scale measure of performance stand up in a court proceeding as well as an objective GPA performance measure? How many practitioners would go to the trouble of creating or administering a rating scale measure of performance when the GPA measure is practically automatically available? It is interesting that persons' self-ratings of performance are correlated with […] sure why so much space was taken comparing the two.

 

Thank you for this point of view. Regarding the question posed, although it is true that the GPA measure can be obtained more quickly, we consider that, unfortunately, it is often difficult to obtain (at least in the case of Spain). Moreover, both measures are widely used, and both types of measures are considered to adequately evaluate the performance of individuals (Cuadrado, Salgado & Moscoso, 2021; Kuncel, Credé & Thomas, 2005; Somers et al., 2020). Therefore, since both are validated in professional practice, we consider it pertinent to examine both types of measurement in this study and to analyze whether faking affects the prediction of both types of criteria in the same way.

Likewise, the results of this study have shown that personality predicts performance better when both measures are used together. So, from the perspective of sustainable organizations, these results suggest that both measures should be used together in order to assess fit more accurately.

 

References:

Cuadrado, D., Salgado, J. F., & Moscoso, S. (2021). Personality, intelligence, and counterproductive academic behaviors: A meta-analysis. Journal of Personality and Social Psychology, 120(2), 504–537. https://doi.org/10.1037/pspp0000285

Kuncel, N. R., Credé, M., & Thomas, L. L. (2005). The validity of self-reported grade point averages, class ranks, and test scores: A meta-analysis and review of the literature. Review of Educational Research, 75(1), 63–82.

Somers, C. L., Hillman, S. B., Townsend, A., Baranowski, A., & Robtoy, E. (2020). Correspondence between self-reported and actual high school grades. Educational Research Quarterly, 43(3), 24–51.

 

 

  7. I got the impression that the authors treated the significance of conscientiousness as if it were a new finding. But the C -> P relationship is quite well known among those in selection (e.g., McAbee and Oswald, 2013). It is probably the closest we have to a “law” of selection: “Conscientiousness is the best personality predictor of performance.” On the other hand, the ES -> P relationship found here is unusual. This leads me to wonder about the construct validity of the forced-choice ES measure used here.

 

We appreciate this important concern. When we focus on conscientiousness in this manuscript, we do not intend to present the predictive validity of this factor as a previously uninvestigated topic. In fact, in several sections of the manuscript, we note that previous empirical evidence has shown that conscientiousness demonstrates the strongest predictive validity (for example, Section 1.1, lines 62-65; Section 1.4, lines 211-212; or Section 1.5, lines 234-237). Rather, we rely on the results of these previous studies to propose our hypotheses and develop this research, since our study focuses on a topic that, to the best of our knowledge, has not been investigated to date: the predictive validity of the Big Five measured with a FC inventory under faking response conditions. In this sense, the results presented under faking conditions in this study can be considered a new finding.

 

Consequently, the results obtained are a unique contribution that had not been studied to date and, hence, some of the results had not been anticipated in the hypotheses, as is the case of the results for ES (as we noted in lines 536-537: "Similar results were found in the case of emotional stability, although such results had not been anticipated in our hypotheses."). In this regard, and in response to your concern, the recent study by Martínez, Moscoso, and Lado (2021) analyzed the factor structure of the quasi-ipsative FC personality inventory used in this study under honest and faking conditions, and the results showed that the data fit a five-factor structure in both conditions. Therefore, their results showed that faking does not affect the construct validity of this measure.

To clarify this point, we added the following sentence to the description of the instrument (lines 309-310): “Exploratory factor analyses confirmed the five-factor structure of the QI5F [25].”

 

Reference:

Martínez, A., Moscoso, S., & Lado, M. (2021). Faking Effects on the Factor Structure of a Quasi-Ipsative Forced-Choice Personality Inventory. Journal of Work and Organizational Psychology, 37 (1), 1-10. https://doi.org/10.5093/jwop2021a7

 

  8. The faking condition employed here was what some call an “instructed faking” condition, in which respondents are instructed to distort their responses in a fashion that will increase the likelihood of their being hired. Obviously, the study needs to be replicated using a more realistic faking manipulation, one mimicking real-life selection in which respondents are told to respond honestly but are given an incentive to fake good. Our experience has been that the faking effect size is smaller when the “incentive faking” manipulation is used than when “instructed faking” is employed. The respondents' behavior may differ in other ways between these two types of faking condition. That certainly needs to be investigated, as the authors note in the last section of the MS.

Thank you for this comment; we fully agree. In this study, instructions were used that forced participants to modify their scores, whereas in a real selection context the incentive to commit faking is not explicit. We are aware of the possible effects that the type of faking manipulation may have on the results and, in this sense, we included the following observation about the effects of the type of instructions in the section "Limitations of the study and future research" (lines 715-722):

 

“It should also be noted that the faking condition of this study is a condition of maximum distortion in which the participants are induced to commit a high degree of faking. Therefore, it is expected that the results of faking would be less intense under normal performance contexts. In the present case, it was not possible to control for this variable (faking in maximum performance vs. faking in typical performance), which is a third limitation of the study. It would be advisable to carry out studies that analyze the criterion validity of the quasi-ipsative FC personality inventory in real selection contexts to examine whether the results obtained in this study are reproduced.”

 

           

Author Response File: Author Response.pdf

Reviewer 2 Report

Dear Authors,

Thank you for the opportunity to review “Effects of Faking on the Predictive Validity of a Quasi-Ipsative Forced-Choice Personality Inventory”.

 

I have read the manuscript, and the topic is up-to-date and interesting. However, I have a few concerns pertaining to issues that need to be improved.

  1. In the introduction there is a justification for this topic in the scope of Sustainability. However, I suggest that this is also highlighted in the title. Therefore, an analysis of the relationship of the Big Five measurements in the context of sustainability should be included.
  2. I think it is essential to point out the research gap and to back this up with appropriate citation. Now this paragraph (lines 37-43) has no citation to the current literature.
  3. Paragraph 1.1 Personality Variables needs elaboration. It should be indicated which specific relationships between the five factor model and specific outcomes are indicated by the research. The authors have presented this in a nutshell. I think it should be specifically indicated which consequences are statistically linked to which factor.
  4. Development of hypotheses - I recommend that each hypothesis be preceded by a brief justification.
  5. Sample - there is no information about the date and form of the survey. Also about the method of sampling. This should be completed in order to meet the criteria of scientific quality.
  6. The methods for verifying the reliability of instruments are basic and should ideally be further strengthened.
  7. The conclusions need to be reinforced and specific implications need to be pointed out.

Kind Regards,

Author Response

Sustainability – MS. 1167154

Point-by-Point Response to Reviewer 2

 

Thank you very much for your comments. We revised the manuscript in accordance with the aspects mentioned. We present our point-by-point responses below.

 

 

  1. In the introduction there is a justification for this topic in the scope of Sustainability. However, I suggest that this is also highlighted in the title. Therefore, an analysis of the relationship of the Big Five measurements in the context of sustainability should be included.

 

Thank you very much for this suggestion. We have made some modifications to the manuscript based on your comments to better reflect the relation of this research to sustainability in organizations. In this sense, we modified the title of the article (lines 2 to 4):

 

“Effects of Faking on the Predictive Validity of a Quasi-Ipsative Forced-Choice Personality Inventory: Implications for Sustainable Personnel Selection.”

 

Likewise, following your recommendation, we also added the following paragraph to explain the usefulness of Big Five measures for achieving sustainable organizations (lines 80-86):

 

“Therefore, if the first step to achieving an efficient and sustainable organization is to design personnel selection processes that allow the incorporation of potential high-performance employees, the inclusion of personality measures based on the Big Five in the selection process seems an essential requirement. Organizations need highly innovative and productive workers to survive and be sustainable [6] and Big Five measurements can help identify those workers since, as we just noted, these measures predict organizational criteria related to individual performance and behavior at work.”

 

 

  2. I think it is essential to point out the research gap and to back this up with appropriate citation. Now this paragraph (lines 37-43) has no citation to the current literature.

 

We thank Reviewer 2 for this comment; we fully agree with the suggestion and have included the following references (paragraph, lines 37-41):

  1. Bartram, D. The Great Eight competencies: A criterion-centric approach to validation. J. Appl. Psychol. 2005, 90, 1185–1203. https://doi.org/10.1037/0021-9010.90.6.1185
  2. Berry, C.M.; Ones, D.S.; Sackett, P.R. Interpersonal deviance, organizational deviance, and their common correlates: A review and meta-analysis. J. Appl. Psychol. 2007, 92, 410–424. https://doi.org/10.1037/0021-9010.92.2.410
  3. Borman, W.C.; Penner, L.A.; Allen, T.D.; Motowidlo, S.J. Personality predictors of citizenship performance. Int. J. Sel. Assess. 2001, 9, 52–69. https://doi.org/10.1111/1468-2389.00163
  4. Cuadrado, D.; Salgado, J.F.; Moscoso, S. Individual differences and counterproductive academic behaviors in high school. PLoS ONE 2020, 15, e0238892. https://doi.org/10.1371/journal.pone.0238892
  5. Cuadrado, D.; Salgado, J.F.; Moscoso, S. Personality, intelligence, and counterproductive academic behaviors: A meta-analysis. J. Pers. Soc. Psychol. 2021, 120, 504–537. https://doi.org/10.1037/pspp0000285
  6. Hurtz, G.M.; Donovan, J.J. Personality and job performance: The Big Five revisited. J. Appl. Psychol. 2000, 85, 869–879. https://doi.org/10.1037/0021-9010.85.6.869
  7. Judge, T.A.; Rodell, J.B.; Klinger, R.L.; Simon, L.S.; Crawford, E.R. Hierarchical representations of the five-factor model of personality in predicting job performance: Integrating three organizing frameworks with two theoretical perspectives. J. Appl. Psychol. 2013, 98, 875–925. https://doi.org/10.1037/a0033901
  8. Lado, M.; Alonso, P. The Five-Factor model and job performance in low complexity jobs: A quantitative synthesis. J. Work Organ. Psychol. 2017, 33, 175–182. https://doi.org/10.1016/j.rpto.2017.07.004
  9. Salgado, J.F. The Five Factor Model of personality and job performance in the European Community. J. Appl. Psychol. 1997, 82, 30–43. https://doi.org/10.1037/0021-9010.83.4.634
  10. Salgado, J.F. Big Five personality dimensions and job performance in army and civil occupations: A European perspective. Hum. Perform. 1998, 11, 271–288. https://doi.org/10.1080/08959285.1998.9668034
  11. Salgado, J.F. The Big Five personality dimensions and counterproductive behaviors. Int. J. Sel. Assess. 2002, 10, 117–125. https://doi.org/10.1111/1468-2389.00198
  12. Salgado, J.F. Predicting job performance using FFM and non‐FFM personality measures. J. Occup. Organ. Psychol. 2003, 76, 323–346. https://doi.org/10.1348/096317903769647201
  13. Salgado, J.F.; Anderson, N.; Tauriz, G. The validity of ipsative and quasi‐ipsative forced‐choice personality inventories for different occupational groups: A comprehensive meta‐analysis. J. Occup. Organ. Psychol. 2015, 88, 797–834. https://doi.org/10.1111/joop.12098
  14. Salgado, J.F.; Moscoso, S.; Anderson, N. Personality and counterproductive work behavior. In Handbook of Personality at Work; Christiansen, N.D., Tett, R.P., Eds.; Routledge: London, United Kingdom, 2013; pp. 606–632.
  15. Salgado, J.F.; Anderson, N.; Moscoso, S. Personality at work. In The Cambridge Handbook of Personality Psychology; Corr, P.J., Matthews, G., Eds.; Cambridge University Press: Cambridge, United Kingdom, 2020; pp. 427–438.
  16. Salgado, J.F.; Tauriz, G. The Five-Factor Model, forced-choice personality inventories and performance: A comprehensive meta-analysis of academic and occupational validity studies. Eur. J. Work Organ. Psychol. 2014, 23, 3–30. https://doi.org/10.1080/1359432X.2012.716198
  17. Birkeland, S.A.; Manson, T.M.; Kisamore, J.L.; Brannick, M.T.; Smith, M.A. A meta‐analytic investigation of job applicant faking on personality measures. Int. J. Sel. Assess. 2006, 14, 317–335. https://doi.org/10.1111/j.1468-2389.2006.00354.x
  18. Otero, I.; Cuadrado, D.; Martínez, A. Convergent and predictive validity of the Big Five Factors assessed with single stimulus and quasi-ipsative questionnaires. J. Work Organ. Psychol. 2020, 36, 215–222. http://dx.doi.org/10.5093/jwop2020a17
  19. Martínez, A.; Moscoso, S.; Lado, M. Faking effects on the factor structure of a quasi-ipsative forced-choice personality inventory. J. Work Organ. Psychol. 2021, 37, 1–10. https://doi.org/10.5093/jwop2021a7
  20. Viswesvaran, C.; Ones, D.S. Meta-analyses of fakability estimates: Implications for personality measurement. Educ. Psychol. Meas. 1999, 59, 197–210. https://doi.org/10.1177/00131649921969802

 

  3. Paragraph 1.1 Personality Variables needs elaboration. It should be indicated which specific relationships between the five factor model and specific outcomes are indicated by the research. The authors have presented this in a nutshell. I think it should be specifically indicated which consequences are statistically linked to which factor.

 

We really appreciate this important suggestion and have modified the paper accordingly. We extended the paragraph with more specific information about the predictive validity of each personality factor. On lines 62-79 we have written:

 

“Specifically, meta-analytic research has shown that conscientiousness is the best predictor of performance criteria, including general job performance, job satisfaction, self-defeating behavior, contextual performance, and training success, and that its validity generalizes across occupations and criteria. Emotional stability also showed generalized validity across criteria and occupations, but its validity is smaller in size than that of conscientiousness [e.g., 13,15,16,18-21,31]. The remaining three factors have been successfully related to performance criteria in specific occupations. Extraversion is a predictor of performance in jobs that require social interaction, such as managerial, commercial, and police occupations, and of training and teamwork performance. Agreeableness showed validity generalization for occupations oriented to cooperation and helping others, that is, occupations related to customer service and health, and for teamwork performance. Finally, openness to experience is a relevant predictor of training performance and of performance in jobs requiring high levels of creativity [see, for example, 12,13,15,18,32]. Regarding the academic context, conscientiousness has proven to be the strongest predictor of relevant criteria such as grade point average or academic dishonesty, among others. Likewise, academic performance (GPA) also correlated significantly with openness to experience and agreeableness [1,10,11,17,33,34].”

 

  4. Development of hypotheses - I recommend that each hypothesis be preceded by a brief justification.

 

Thank you for the recommendation. We have included the following information in order to clarify the proposed hypotheses.

In relation to Hypothesis 3, we include the following paragraph (lines 260-262):

 

“Regarding the effects of faking, the findings mentioned above show that this phenomenon has a direct impact on the validity of personality measures. Hence, we posit the following hypothesis:”

 

For Hypothesis 4, we include the following phrase in the paragraph (lines 268-270):

 

“This theory maintains that faking produces a decrease in the reliability of personality scores. Therefore, we propose Hypothesis 4:”

 

Finally, the following justification was included for Hypothesis 5 (lines 273-275):

 

“Likewise, this theory proposes that faking causes an increase in the mean scores and a reduction in variability (lower standard deviations) producing range restriction of the scores. In this sense, the last hypothesis of this study is:”

 

  5. Sample - there is no information about the date and form of the survey. Also about the method of sampling. This should be completed in order to meet the criteria of scientific quality.

 

Thank you very much for this point. Following this recommendation, we included a paragraph with this information (lines 284-291):

 

“To carry out this experimental study, the voluntary participation of university students was requested by posting notices in faculties and other public spaces of the University of Santiago de Compostela (e.g., libraries, academic management units, or university residences). Sample collection was carried out between January and June of 2017 and the same months of 2018. In order to attract students, they were offered economic compensation (10€) in exchange for their participation. The study was conducted face-to-face in small groups of 10 to 15 persons, and all subjects provided informed consent to participate in the study.”

 

 

  4. The methods for verifying the reliability of instruments are basic and should ideally be further strengthened.

 

Thank you very much for this suggestion. In Subsection 2.2 (Measures), we had already included, in the description of each measure, the reliability coefficients obtained and the method used to calculate them. In all cases, the reliability coefficient used was Cronbach's alpha, that is, the internal consistency coefficient. Even so, we made some modifications to the text in order to clarify this information.

Personality measure (lines 303-307):

“With respect to the reliability of this measure, the internal consistency coefficients (Cronbach’s alpha) were .71, .73, .80, .66, and .80 for emotional stability (ES), extraversion (EX), openness to experience (OE), agreeableness (A), and conscientiousness (C), respectively. The test-retest reliabilities (for a four-week interval) were .91, .90, .79, .65, and .72 for ES, EX, OE, A, and C, respectively.”

 

GPA measure (lines 313-316):

“Salgado and Táuriz [22] developed an empirical distribution of GPA reliability and found an average reliability coefficient of .83; therefore, the reliability coefficient used was α = .83.”

 

CDTE measure (line 321):

 “The internal consistency coefficient (Cronbach’s alpha) for this measure was α = .89 (n = 803).”

 

CDCE measure (line 325):

“The internal consistency reliability coefficient was α = .86 (n = 794).”

 

CDAN measure (line 331):

“The Cronbach's alpha was .89 (n = 799).”
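As a brief aside for readers less familiar with this coefficient, the sketch below shows how an internal-consistency coefficient (Cronbach's alpha), like the values quoted above, is computed. The item scores are invented example data for illustration only, not values from the study.

```python
# Illustrative sketch: Cronbach's alpha from raw item scores.
# The data below are made up; only the formula reflects standard practice.

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (one score per person)."""
    k = len(items)  # number of items

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 3, 5, 4, 2],
    [3, 5, 2, 4, 5, 3],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.89 for this toy data
```

In practice such coefficients are obtained from statistical packages rather than computed by hand; the point is simply that alpha rises as the items' shared variance grows relative to their individual variances.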

 

  5. The conclusions need to be reinforced and specific implications need to be pointed out.

We really appreciate this important suggestion and have modified the paper accordingly. To clarify the contributions of this study, we modified and expanded Sections 4 and 4.1 (lines 579 to 704):

 

4. Discussion

“This study had four main objectives. The first was to determine whether a quasi-ipsative FC (algebraically independent) personality measure predicts academic performance ratings and academic grades under honest and faking response-instructions and, particularly, to examine the predictive validity of conscientiousness (Hypothesis 1). The second was to check whether the performance measurement method affects predictive validity (Hypothesis 2). The third was to check whether the effects of faking occur independently of the performance measurement method used (Hypothesis 3). The last was to test the effects of faking on the reliability and range restriction of the personality scores, as proposed by Salgado [42] in his theoretical model of the effects of faking (Hypotheses 4 and 5).

In relation to the first objective, the results showed that conscientiousness is the best predictor of performance when quasi-ipsative FC measures are used, with significant r values found in all cases under honest conditions. These results are similar to those obtained by Salgado and Táuriz [22] and Salgado et al. [19], who found that the conscientiousness factor was the best predictor of academic performance. It was also observed that emotional stability and extraversion were predictors of various academic performance criteria. However, the significant correlations obtained between these factors and the performance measures vary across the samples analyzed. A similar situation occurred with openness to experience and agreeableness: although they did not stand out as predictors of academic performance, they obtained significant correlations with some of the performance criteria analyzed. These results reveal the considerable variability that occurs between the experimental designs of the honest condition. This is, therefore, the first unique contribution of this study.

Regarding the faking condition, the results show the robustness of conscientiousness as a predictor of academic performance, with significant correlations with almost all the performance variables analyzed. Likewise, significant correlations were found between emotional stability and the task and contextual performance criteria, which shows that this factor is a valid predictor of performance even under faking conditions. These results are in line with those obtained in the honest condition of this study and with previous meta-analyses of the predictive validity of quasi-ipsative FC inventories (in honest conditions), which indicated that emotional stability is an adequate predictor of performance (Salgado et al., 2015). This is the second unique contribution of this study.

Therefore, these findings show that personality evaluated with an algebraically independent quasi-ipsative FC inventory predicts performance even under faking response-conditions. However, there is a reduction in the effect sizes of the correlations, although in the present study it cannot be totally attributed to faking: when the correlations corrected for measurement error and range restriction are examined, they do not approximate the values of the correlations under honest response-conditions. As indicated, the results of the honest samples reflect significant variability across the experimental designs, which shows that variables other than faking could be affecting the results. This is the third unique contribution of this study.

With respect to the second goal, this study showed that conscientiousness is a better predictor of performance when performance is assessed with rating scales than with academic grades. This is the fourth unique contribution of this study. Moreover, the results add to the growing empirical evidence indicating that the performance measurement method is a powerful moderator of the validity of predictive instruments, for example, of cognitive ability tests [78] and the selection interview [79]. The evidence provided indicates that, also in the case of personality measurement, the performance measurement method can have important effects on validity.

In relation to the third objective, this study has contributed by showing that the faking effect occurs independently of the performance measures used, although the reduction in the validity coefficient was greater when performance was measured with performance rating scales. This is the fifth unique contribution of this study.

Finally, the sixth contribution of this study has to do with the psychometric theory of the effects of faking (Hypotheses 4 and 5) [42]. According to this theory, if subjects distort their answers, the reliability and validity of the questionnaires will be attenuated, due to an increase in measurement error and a reduction in the range of scores. This effect was verified in the present study: the internal consistency (alpha) coefficient obtained under faking instructions was lower than that obtained under honest response-conditions, and a certain degree of range restriction was observed in four of the personality factors (the exception was conscientiousness). Therefore, the study has contributed in a unique way by testing the predictions of the psychometric theory of the effects of faking under experimental conditions and with a type of personality inventory not examined until now. The results provide empirical evidence supporting the theory's predictions.

In conclusion, this study represents a unique empirical contribution, since it is the first research to simultaneously examine the criterion validity of a quasi-ipsative FC inventory under honest and faking conditions for academic criteria, with the results compared across three samples.

The results obtained allow us to conclude that personality evaluated with a quasi-ipsative FC inventory (without algebraic dependence) predicts performance even under faking conditions. Specifically, conscientiousness was found to be the best predictor of academic performance, regardless of the response-condition (honest or faking) or the experimental design in which it is evaluated (between- or within-subject design).

The current study has also made it possible to analyze the moderating effect of the type of performance measure on the predictive validity of the quasi-ipsative FC inventory under honest and faking conditions, a topic that had not been analyzed in the field of academic performance. We found that conscientiousness better predicts performance when it is evaluated with self-report rating scales than with academic grades, in both response-conditions. Therefore, our results show that the type of performance measure is a powerful moderator of the validity of predictive instruments. It has also been shown that faking reduces the validity coefficients for both types of criterion measures, although a greater reduction in validity was observed when using self-report rating data.

Finally, this research has provided empirical evidence that supports the predictions of the psychometric theory of the effects of faking [42]. The results show that the reliability coefficients are smaller and that range restriction occurs in the faking condition compared to the honest one. This is the first study to analyze this effect in a quasi-ipsative FC inventory.

 

4.1. Theoretical and Practical Implications

 

The results of this study have implications for both the theory and practice of personality assessment in applied contexts. From a theoretical point of view, this is the first study that provides empirical evidence of the effects of faking on the predictive validity of a quasi-ipsative FC inventory that provides scores that are not algebraically dependent. The results obtained suggest that this type of FC questionnaire is a robust instrument that controls the effects of faking on predictive validity.

Moreover, in relation to the theory of personality assessment, a relevant implication of the results is that the psychometric effects of faking do not seem to be the only factors reducing the predictive validity of personality measures. When the effects of faking on reliability (internal consistency) and range restriction were controlled for and the validity coefficients were corrected, a notable difference between the validity coefficients obtained under honest and faking response-conditions could still be observed; this difference should not have been so large once the effects had been psychometrically corrected. This implies that variables other than faking affect the predictive validity of these measures.

We speculate that the change (reduction) in the predictive validity coefficients may also be due to other idiosyncratic factors (e.g., changes in individuals' response mode), to fatigue (responding to a long questionnaire on two consecutive occasions, which required more than an hour of work), or to practice (less involvement on the second occasion, with less elaborated answers). Future studies should examine the potential contribution of these (and other) factors to the reduction in predictive validity.

In relation to the practice of personality assessment, an important implication is that predictive validity is considerable when a quasi-ipsative FC personality inventory is used, even under faking response-conditions, and that validity is even better when broad academic performance criteria are examined. For this reason, a recommendation for evaluation professionals in applied contexts (e.g., student admission processes, selection processes for internships and training) is to use quasi-ipsative FC inventories without algebraic dependence since, in addition to being good predictors, they are robust against faking.

Furthermore, based on the findings obtained, we recommend that the observed validity coefficients be corrected for range restriction to establish a less biased estimate of validity.
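To sketch what such corrections involve: the formulas below are the standard disattenuation for criterion unreliability and the Thorndike Case II correction for direct range restriction, which are common choices but are assumed here rather than taken from the manuscript. The numeric inputs are hypothetical, except the GPA reliability of .83 quoted earlier.

```python
import math

def disattenuate(r, ryy):
    """Correct an observed validity for criterion measurement error."""
    return r / math.sqrt(ryy)

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction.
    u = restricted SD / unrestricted SD of the predictor."""
    return (r / u) / math.sqrt(1 - r**2 + (r / u) ** 2)

r_obs = 0.30  # hypothetical observed validity under faking instructions
ryy = 0.83    # GPA reliability used in the study [22]
u = 0.80      # hypothetical SD ratio (faking scores vs. honest scores)

r_dis = disattenuate(r_obs, ryy)                   # criterion unreliability removed
r_corrected = correct_range_restriction(r_dis, u)  # then range restriction removed
print(round(r_dis, 2), round(r_corrected, 2))
```

With these invented inputs the observed .30 rises to roughly .40 after both corrections, which illustrates why uncorrected coefficients understate validity when faking restricts the range of scores.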

Finally, the results of this study also suggest that the performance measurement method is a powerful moderator of the validity of predictive instruments. Professionals must therefore be aware that the validity of personality instruments (in our case, quasi-ipsative FC inventories) is not identical for all modes of measuring academic performance and must use the coefficient appropriate to the type of criterion measure to be used in each case, remembering that the validity coefficients are lower for academic grades than for self-reported performance ratings.”

 

As a result of these modifications and substantial revisions, we feel that this new version of our manuscript provides a far sharper focus and presentation of our key findings. We thank Reviewer 1 for his/her time and valuable suggestions.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Dear Authors,
Thank you for making the amendments. In my opinion the paper is now ready for publication.
Congratulations.
